
OpenClaw with Mistral: Local AI for Your CRM

OpenClaw + Mistral AI: run Mistral models locally or via API for a fast, privacy-friendly CRM. Setup guide for DenchClaw with Mistral's local and cloud options.

Mark Rachapoom
·7 min read

OpenClaw works with Mistral models both locally (via Ollama or LM Studio) and through Mistral's cloud API. If you're building a private AI CRM with DenchClaw, Mistral offers a compelling balance: capable models, open weights, European data residency, and some of the fastest inference speeds in the category.

Why Mistral for DenchClaw?#

Mistral AI has built a strong reputation for punching above its weight class:

  • Open weights: Mistral 7B, Mixtral 8x7B, and others are freely available for local deployment
  • Fast inference: Mistral models are architecturally efficient — faster than comparably-sized models
  • Function calling: Mistral Instruct models support reliable tool use, which DenchClaw Skills depend on
  • European jurisdiction: For EU teams with GDPR concerns, Mistral's API runs in European infrastructure
  • Mistral Large for complex tasks: When you need cloud-level reasoning, Mistral Large competes with top-tier models
  • Cost-efficient: Mistral API pricing is competitive, and the 7B model runs well on consumer hardware

For local deployment specifically, Mistral 7B is the go-to recommendation when you want a model that runs on everyday hardware without sacrificing too much capability.

Two Ways to Use Mistral with OpenClaw#

You have two main options:

  1. Mistral API — Use Mistral's cloud service. Faster, easier setup, no hardware requirements
  2. Local Mistral — Run Mistral models on your own hardware via Ollama or LM Studio

Both integrate with OpenClaw the same way — you just point at different endpoints.

Option A: Mistral Cloud API#

Step 1: Get an API Key#

  1. Go to console.mistral.ai
  2. Sign up and navigate to API Keys
  3. Generate a new key and copy it

Step 2: Configure OpenClaw#

openclaw config set apiKeys.mistral YOUR_MISTRAL_API_KEY

Or in ~/.openclaw/config.json:

{
  "apiKeys": {
    "mistral": "your-key-here"
  },
  "model": {
    "provider": "mistral",
    "model": "mistral-large-latest"
  }
}

Step 3: Available Cloud Models#

Model                   Best For                      Speed
mistral-large-latest    Complex reasoning, analysis   Moderate
mistral-medium-latest   Balanced daily use            Fast
mistral-small-latest    Simple tasks, high volume     Very fast
codestral-latest        Code generation, data tasks   Fast

For DenchClaw daily use, start with mistral-small-latest for routine operations and switch to mistral-large-latest for complex analysis.
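That routing rule of thumb can be sketched as a small helper. This is purely illustrative — the function name and task categories are assumptions, not part of OpenClaw's API; only the model names come from the table above.

```python
# Hypothetical helper: pick a Mistral cloud model by task type.
# Task categories are illustrative assumptions; model names match
# Mistral's published cloud model identifiers.
COMPLEX = {"analysis", "multi-doc-synthesis", "forecasting"}

def pick_model(task: str) -> str:
    """Route routine work to mistral-small, heavy reasoning to mistral-large."""
    if task in COMPLEX:
        return "mistral-large-latest"
    if task == "code":
        return "codestral-latest"
    return "mistral-small-latest"

print(pick_model("lookup"))    # routine → mistral-small-latest
print(pick_model("analysis"))  # complex → mistral-large-latest
```

You could wire a rule like this into a wrapper script that passes `--model` per command, rather than editing the config each time.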

Option B: Local Mistral via Ollama#

If you want full data privacy with no API calls, run Mistral locally.

Step 1: Pull Mistral in Ollama#

# Standard Mistral 7B — best all-around local option
ollama pull mistral
 
# Mixtral 8x7B — bigger, more capable, needs 32GB+ RAM
ollama pull mixtral
 
# Mistral NeMo — updated architecture, strong performance
ollama pull mistral-nemo

Step 2: Configure OpenClaw for Local Mistral#

{
  "model": {
    "provider": "ollama",
    "baseUrl": "http://localhost:11434/v1",
    "model": "mistral",
    "apiKey": "ollama"
  }
}

See the Ollama setup guide for full configuration details.

Why Mistral 7B Is the Best Starter Local Model#

For teams new to running local LLMs with DenchClaw, Mistral 7B is the default recommendation for three reasons:

1. Hardware requirements are realistic: At ~4GB with Q4 quantization, it runs on any modern laptop with 8GB+ RAM. No GPU required (though GPU acceleration helps).

2. Instruction following is reliable: Mistral was fine-tuned specifically for instruction tasks. It follows the structured prompts that DenchClaw Skills use more reliably than many comparably-sized models.

3. Speed is good: On Apple Silicon (M1/M2/M3), Mistral 7B generates tokens fast enough for interactive use — 20-40 tokens/second is typical on M-series chips.
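The "~4GB at Q4" figure above falls out of simple arithmetic: parameter count × bits per weight ÷ 8, plus runtime overhead for the KV cache and buffers. A rough sketch, where the 20% overhead factor is an assumption:

```python
def approx_ram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM footprint: weights at the quantized width plus ~20% runtime overhead (assumed)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(round(approx_ram_gb(7, 4), 1))   # Mistral 7B at Q4 → ~4.2 GB
print(round(approx_ram_gb(47, 4), 1))  # Mixtral 8x7B at Q4 → ~28.2 GB
```

The same arithmetic explains why Mixtral needs a 32GB+ machine: the weights alone are over 23GB at Q4 before any overhead.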

Step-by-Step: Local Mistral for Maximum Privacy#

Here's the full zero-cloud setup for DenchClaw + Mistral:

1. Install DenchClaw:

npx denchclaw

2. Install Ollama and pull Mistral:

brew install ollama
ollama serve &
ollama pull mistral

3. Configure OpenClaw:

openclaw config set model ollama/mistral
openclaw config set model.baseUrl http://localhost:11434/v1

4. Run DenchClaw:

openclaw start

Your entire stack — CRM data (DuckDB), agent framework (OpenClaw), model (Mistral via Ollama) — now runs locally. No network required after initial setup.

This is what DenchClaw's local-first architecture is designed for.

Mixtral 8x7B for Advanced Tasks#

When Mistral 7B isn't cutting it for complex reasoning or long-form analysis, Mixtral 8x7B is the next step up. It's a Mixture-of-Experts model that activates ~13B parameters per forward pass from a total 47B parameter space — making it more capable than 7B while being faster than a dense 47B model would be.

Requirements: 32-48GB RAM for Q4 quantization.

ollama pull mixtral
openclaw config set model ollama/mixtral

Mixtral 8x7B handles complex CRM analysis, multi-document synthesis, and nuanced writing tasks significantly better than Mistral 7B, while still running entirely locally on a MacBook Pro with 48GB+ RAM.
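The Mixture-of-Experts numbers above are easy to sanity-check: Mixtral routes each token to 2 of its 8 experts, so only a fraction of the 47B total is active per forward pass. The component split below is a rough assumption for illustration, not exact weight counts:

```python
# Back-of-envelope check of Mixtral 8x7B's active parameter count.
# The shared-parameter figure is an approximation, not an exact weight count.
total_params_b = 47    # total parameters (billions)
shared_params_b = 1.3  # attention + embeddings shared across experts (approx.)
expert_params_b = (total_params_b - shared_params_b) / 8  # per-expert FFN weights

active_b = shared_params_b + 2 * expert_params_b  # 2 experts active per token
print(round(active_b, 1))  # ≈ 12.7B active — close to the ~13B figure above
```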

Using Codestral for Data Tasks#

If you're building custom Skills or working heavily with DuckDB queries, Mistral's Codestral model is worth trying. It's specialized for code and data tasks:

# Via Mistral API
openclaw config set model mistral/codestral-latest
 
# Or locally if you have it
ollama pull codestral

Codestral generates clean SQL, writes DenchClaw Skill scripts, and handles data transformation tasks particularly well.

Configuring Function Calling with Mistral#

DenchClaw Skills use function/tool calling to interact with your CRM data. Mistral's instruct models support this, but there are some configuration details to get right:

For the cloud API, function calling works automatically with mistral-large and mistral-small. For local models, make sure you're using an instruct variant:

# Use instruct variants for tool calling
ollama pull mistral:7b-instruct

The base (non-instruct) variants don't reliably follow tool calling schemas.
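DenchClaw handles this wiring for you, but it helps to see what a tool definition looks like on the wire. Below is a minimal OpenAI-compatible tool schema of the kind sent to Ollama's `/v1/chat/completions` endpoint; the `search_contacts` tool is hypothetical, not a real DenchClaw Skill:

```python
# Hypothetical CRM tool expressed as an OpenAI-compatible tool schema —
# the request shape Ollama's /v1 endpoint accepts for instruct models.
tool = {
    "type": "function",
    "function": {
        "name": "search_contacts",
        "description": "Search CRM contacts by name or company.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Name or company to match"},
            },
            "required": ["query"],
        },
    },
}

payload = {
    "model": "mistral:7b-instruct",
    "messages": [{"role": "user", "content": "Find everyone at Acme"}],
    "tools": [tool],
}
print(payload["tools"][0]["function"]["name"])  # search_contacts
```

An instruct model responds with a `tool_calls` entry naming the function and its JSON arguments; a base model will typically just emit free text, which is why the instruct variant matters.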

Monitoring Mistral API Costs#

Mistral's pricing is straightforward and competitive. For a small team running DenchClaw with mistral-small:

  • Simple queries: under $0.01 each
  • Complex analysis: a few cents
  • Bulk enrichment (100 records): under $1

Track usage in your Mistral console under Usage. Set an alert if you're automating bulk operations.
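If you want a rough budget before enabling bulk automation, the math is simple. The per-token prices and token counts below are placeholder assumptions — check console.mistral.ai for current rates:

```python
# Rough cost sketch for mistral-small. Prices and token counts below are
# placeholder assumptions, not Mistral's published rates.
PRICE_IN_PER_M = 0.10   # USD per 1M input tokens (assumed)
PRICE_OUT_PER_M = 0.30  # USD per 1M output tokens (assumed)

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * PRICE_IN_PER_M + output_tokens / 1e6 * PRICE_OUT_PER_M

# Bulk enrichment of 100 records at ~1,500 tokens in / ~500 out each (assumed)
print(round(cost_usd(100 * 1500, 100 * 500), 2))  # a few cents — well under $1
```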

Troubleshooting#

Local Mistral gives incoherent responses

You likely pulled the base model, not the instruct variant. Pull mistral:7b-instruct instead of mistral:

ollama pull mistral:7b-instruct

Mistral API returns 429 (rate limited)

Mistral's free tier has strict rate limits. Add billing information to your account to get higher limits.

Mixtral is too slow on my hardware

You may not have enough RAM and it's paging to disk. Check with ollama ps. For machines under 32GB, stick with Mistral 7B or use the Mistral cloud API.

Function calling isn't working with local Mistral

Ensure you're using version mistral:7b-instruct-v0.3 or later — earlier versions have weaker function calling. If still failing, consider switching to the cloud API for tool-heavy Skills.

FAQ#

Is Mistral open source?

Mistral 7B and Mixtral 8x7B are released under the Apache 2.0 license — you can use them commercially, modify them, and deploy them freely. Mistral's newer models (Large, Medium) are proprietary and API-only.

Can I self-host the Mistral API?

You can self-host the open-weight models (7B, 8x7B) using llama.cpp, vLLM, or Ollama. You cannot self-host the proprietary API models. For DenchClaw, Ollama is the simplest path to self-hosting.

How does Mistral compare to Llama for DenchClaw?

They're comparable in capability at the 7B size. Mistral tends to be faster and slightly better at instruction following; Llama 3 has a wider selection of fine-tunes and community models. Both work well — the choice often comes down to which runs better on your specific hardware.

Can I switch between Mistral and other models mid-session?

Yes. You can override the model per-command in OpenClaw with --model. Your conversation history and CRM data persist regardless of which model you use.

Does Mistral support embedding generation for semantic search?

Yes. Mistral offers embedding models (mistral-embed) that work with RAG (Retrieval-Augmented Generation) setups. If you want DenchClaw to do semantic search over your CRM data, this is the relevant model to use.
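At its core, semantic search over embeddings is just nearest-neighbor lookup by cosine similarity. A minimal sketch with toy 3-dimensional vectors standing in for real mistral-embed output (which is much higher-dimensional):

```python
import math

# Toy semantic search over pre-computed embeddings. The 3-d vectors are
# illustrative stand-ins for real mistral-embed output.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

contacts = {
    "Acme renewal call notes": [0.9, 0.1, 0.0],
    "Lunch order thread":      [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # embedding of "when does the Acme contract renew?"

best = max(contacts, key=lambda k: cosine(query_vec, contacts[k]))
print(best)  # Acme renewal call notes
```

A real RAG setup would store the vectors in a database (DuckDB can hold them as arrays) and embed the query with mistral-embed at search time, but the ranking step is exactly this.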

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →

Written by

Mark Rachapoom

Building the future of AI CRM software.

© 2026 DenchHQ · San Francisco, CA