
The EU AI Act and Local Software: What It Means

The EU AI Act creates compliance obligations for AI systems. Local-first AI software has a smaller regulatory footprint. Here's what you need to know.

Mark Rachapoom · 7 min read

The EU Artificial Intelligence Act — the world's first comprehensive AI regulation — entered into force in August 2024, with provisions phasing in through 2027. For businesses using or deploying AI systems, it creates a new compliance layer. For businesses using local-first AI software, the compliance picture looks considerably simpler.

Here's what the EU AI Act actually requires, and how local-first architecture affects your obligations.

The EU AI Act Framework#

The AI Act takes a risk-based approach, categorizing AI systems by the risk they pose:

Prohibited AI (banned entirely): Social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), AI that exploits vulnerabilities to manipulate behavior, and systems that predict criminal risk based solely on profiling or personality traits.

High-risk AI: Systems in areas like critical infrastructure, education, employment, access to essential services, law enforcement, migration, and administration of justice. High-risk systems require conformity assessments, technical documentation, human oversight, accuracy and robustness testing, and registration in an EU database.

Limited-risk AI: Systems with specific transparency obligations — chatbots must disclose they're AI, deepfakes must be labeled, emotion recognition systems must disclose their use.

Minimal-risk AI: Most commercial AI — recommendation systems, spam filters, AI-powered productivity tools — falls here and faces no specific obligations beyond general law.

General Purpose AI Models (GPAI): General-purpose models, including large language models, face transparency and documentation requirements; models designated as posing systemic risk (GPT-4-class models trained above the Act's compute threshold) face additional safety and incident-reporting obligations.

How This Affects CRM and Sales AI#

For most AI features in a CRM context — lead scoring, email drafting suggestions, contact enrichment, sales forecasting, conversation summaries — the AI Act classification is straightforward: minimal risk.

These use cases don't fall into high-risk categories. They're not making consequential decisions about individuals' access to education, employment, or essential services. They're productivity tools for sales and relationship management.

However, several caveats apply:

AI-powered HR screening: If you use your CRM with AI to screen job candidates or evaluate employees, this is high-risk under Annex III of the AI Act. Specific documentation and human oversight requirements apply.

AI for credit or insurance decisions: Using AI in your CRM to assess creditworthiness or insurance risk is high-risk. This affects fintech and insurance sales workflows.

Emotion AI: If your AI analyzes emotions from video calls or voice recordings (some sales intelligence tools claim to do this), transparency obligations apply and certain uses are restricted.

Biometric categorization: AI that categorizes people based on biometrics — even indirect biometrics like keystroke patterns — has specific restrictions.

For a standard B2B sales CRM using AI for pipeline management, email drafting, and lead prioritization: the AI Act creates minimal direct obligations. You're using minimal-risk AI.

The GPAI Provider Obligation#

Here's where the EU AI Act matters even for minimal-risk AI users: if you're using a cloud AI provider whose models qualify as GPAI — Claude, GPT-4, Gemini — those providers have obligations under the AI Act, and models designated as posing systemic risk carry additional ones.

Providers of GPAI models with systemic risk must:

  • Perform adversarial testing
  • Report serious incidents to the EU AI Office
  • Provide technical documentation
  • Maintain copyright compliance policies

This doesn't directly create obligations for you as a user. But it does affect which AI providers you can use — they must comply with the Act or face significant fines.

The Transparency Obligation: AI Disclosure#

The AI Act requires that AI systems interacting directly with people disclose that they're AI. If you use DenchClaw's AI to draft customer-facing emails that are sent without human review, and the recipients are in the EU, there's an argument that transparency obligations apply.

This is an area where the Act is still being interpreted. Practically speaking:

  • AI drafting emails that humans review and send: probably no disclosure obligation (human in the loop)
  • Fully automated AI sending emails without human review: transparency obligation likely applies
  • Chatbots on your website: clear transparency obligation (must disclose it's AI)

How Local-First Reduces AI Regulatory Surface#

Using local AI models — running inference on your own hardware with open-source models like Llama, Mistral, or Phi — has meaningful implications for AI Act compliance:

No GPAI obligations: Running an open-source model locally makes you a deployer, not a provider, so the GPAI provider requirements don't fall on you — and open-source models below the compute threshold for systemic risk are largely exempt from those requirements anyway. Using Llama 3 locally doesn't require compliance with the GPAI tier of the AI Act.

No external data exposure: With local models, your CRM data doesn't leave your infrastructure for AI processing. This simplifies GDPR interaction with the AI Act — you're not creating new data transfers for AI processing.

Auditability: With an open-source model running locally, you can inspect the model's weights, test its outputs, and document its characteristics — supporting any technical documentation requirements that do apply.

No provider dependency: You're not dependent on a GPAI provider's compliance with the AI Act. If a cloud AI provider faces enforcement action, it doesn't affect your operations.

DenchClaw supports local AI models via Ollama and LM Studio. Configuring DenchClaw to use a local model gives you full AI functionality with the minimal possible regulatory surface under the EU AI Act.
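As a sketch of what that setup looks like — the environment variable names here are illustrative assumptions, so check DenchClaw's setup guide for the exact configuration keys:

```shell
# Pull an open-source model with Ollama (runs entirely on your hardware)
ollama pull llama3

# Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1.
# Point DenchClaw at it; these variable names are examples, not the
# documented DenchClaw configuration.
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # Ollama accepts any non-empty key

npx denchclaw
```

With this arrangement, prompts and CRM data never leave your machine: the "provider" in the regulatory sense is the model publisher, and you remain a deployer of minimal-risk AI.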

Compliance Checklist for AI in Your CRM#

If you're using AI in your CRM for EU-touching operations, here's a practical assessment:

  1. What does your AI do? List all AI-powered features in your CRM stack. What decisions do they influence?

  2. Risk category: Does any AI feature touch hiring, credit, insurance, biometrics, critical infrastructure, law enforcement, or education? If yes, high-risk obligations apply.

  3. Customer-facing AI: Do any AI features interact directly with customers or generate customer-facing communications without human review? Transparency obligations may apply.

  4. GPAI providers: If using cloud AI APIs (Claude, GPT-4, Gemini), confirm those providers' EU AI Act compliance status. Anthropic, OpenAI, and Google are all working on compliance.

  5. Documentation: For any AI system that might be high-risk, prepare technical documentation describing the system, its intended purpose, its training data, and accuracy metrics.

  6. Human oversight: For high-risk AI, document the human oversight mechanisms. Who reviews AI outputs before consequential decisions are made?

For most B2B CRM users with standard sales AI features: steps 1-4 lead to "minimal risk, no specific obligations." Document that assessment and move on.
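The triage in steps 1–3 is simple enough to encode. Here's an illustrative TypeScript sketch — the type and field names are invented for this example, the categories are simplified from Annex III, and none of this is legal advice:

```typescript
// Simplified risk triage for AI features in a CRM stack.
// "high" ≈ Annex III domains; "limited" ≈ transparency obligations;
// "minimal" ≈ no specific obligations beyond general law.
type RiskCategory = "high" | "limited" | "minimal";

interface AiFeature {
  name: string;
  // Touches hiring, credit, insurance, biometrics, critical
  // infrastructure, law enforcement, or education?
  touchesHighRiskDomain: boolean;
  // Interacts with customers with no human review step?
  customerFacingWithoutReview: boolean;
}

function classify(feature: AiFeature): RiskCategory {
  if (feature.touchesHighRiskDomain) return "high";
  if (feature.customerFacingWithoutReview) return "limited";
  return "minimal";
}

// A standard lead-scoring feature: no high-risk domain, human-reviewed output.
console.log(classify({
  name: "lead scoring",
  touchesHighRiskDomain: false,
  customerFacingWithoutReview: false,
})); // prints: minimal
```

Running every AI feature through a pass like this gives you the documented assessment mentioned above, even when the answer is "minimal risk."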

What's Changing Over Time#

The AI Act's requirements phase in:

  • February 2025: Prohibitions on unacceptable-risk AI take effect
  • August 2025: Rules for GPAI models apply
  • August 2026: High-risk AI system obligations take full effect
  • August 2027: Some pre-existing AI systems must comply

The most significant provisions for most businesses — high-risk AI obligations — don't take full effect until 2026. This gives time for compliance preparation.

The EU AI Office will issue guidance on specific interpretations as the Act is implemented. Keep an eye on AI Office publications if you have any use cases in gray areas.

Frequently Asked Questions#

Does the EU AI Act apply to US companies?#

Yes. Like GDPR, the AI Act has extraterritorial reach. US companies whose AI systems are used in the EU or affect EU residents are subject to its provisions.

Is AI-powered lead scoring high-risk under the AI Act?#

Lead scoring for B2B sales purposes — prioritizing which companies to contact — is not high-risk. The high-risk category for employment covers AI used in hiring and employee management, not in sales prospecting.

Do I need to disclose AI in every sales email?#

The transparency obligation applies to AI systems interacting directly with natural persons in a way they might not know they're interacting with AI. A human reviewing and editing an AI-drafted email before sending is not an automated interaction subject to mandatory disclosure.

What open-source models can I use locally with DenchClaw?#

DenchClaw supports any model accessible via an OpenAI-compatible API endpoint. Ollama supports Llama 3, Mistral, Phi, Gemma, and many others. These run locally with no external data transmission.
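For illustration, here's what talking to a local Ollama model through that OpenAI-compatible endpoint looks like in TypeScript. The helper names and prompt are examples; the request and response shapes follow the standard OpenAI chat-completions format, which Ollama implements:

```typescript
// Ollama's OpenAI-compatible endpoint; nothing leaves localhost.
const OLLAMA_URL = "http://localhost:11434/v1/chat/completions";

// Build a standard OpenAI-style chat-completions payload.
function buildChatRequest(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

// Example helper: send a prompt to the local model and return the reply.
async function draftEmail(prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("llama3", prompt)),
  });
  const data = await res.json();
  // Response shape matches OpenAI: choices[0].message.content
  return data.choices[0].message.content;
}

// Usage (requires a running Ollama instance):
// draftEmail("Draft a follow-up email to a prospect").then(console.log);
```

Any tool that speaks this API — DenchClaw included — can be pointed at the local endpoint instead of a cloud provider.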

Ready to try DenchClaw? Install in one command: npx denchclaw. Full setup guide →

Written by Mark Rachapoom

Building the future of AI CRM software.


© 2026 DenchHQ · San Francisco, CA