Mistral AI Provider
Provider Key: mistral
Overview
The Mistral AI provider enables HPD-Agent to use Mistral's powerful language models, including open-source and commercial variants. Mistral AI offers high-performance models optimized for efficiency and multilingual capabilities.
Key Features:
- Multiple model families (Mistral Large, Medium, Small, Mixtral)
- Streaming support for real-time responses
- Function/tool calling capabilities
- JSON output mode
- Deterministic generation with seed
- Safety prompts and guardrails
- Parallel tool calling
- API key authentication

Note: Vision/image input is not currently supported by this provider.
For detailed API documentation, see:
- MistralProviderConfig API Reference - Complete property listing
Quick Start
Minimal Example
using HPD.Agent;
using HPD.Agent.Providers.Mistral;
// Set API key via environment variable
Environment.SetEnvironmentVariable("MISTRAL_API_KEY", "your-api-key");
var agent = await new AgentBuilder()
.WithMistral(model: "mistral-large-latest")
.Build();
var response = await agent.RunAsync("What is the capital of France?");
Console.WriteLine(response);
Installation
dotnet add package HPD-Agent.Providers.Mistral
Dependencies:
- Mistral.SDK 2.3.0 - Community-maintained Mistral SDK
- Microsoft.Extensions.AI - AI abstractions
Configuration
Configuration Patterns
The Mistral provider supports all three configuration patterns. Choose the one that best fits your needs.
1. Builder Pattern (Fluent API)
Best for: Simple configurations and quick prototyping.
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.MaxTokens = 4096;
opts.Temperature = 0.7m;
opts.TopP = 0.9m;
})
.Build();
2. Config Pattern (Data-Driven)
Best for: Serialization, persistence, and configuration files.
C# Config Object:
var config = new AgentConfig
{
Name = "MistralAgent",
Provider = new ProviderConfig
{
ProviderKey = "mistral",
ModelName = "mistral-large-latest",
ApiKey = "your-api-key"
}
};
var mistralOpts = new MistralProviderConfig
{
MaxTokens = 4096,
Temperature = 0.7m,
TopP = 0.9m
};
config.Provider.SetTypedProviderConfig(mistralOpts);
var agent = await config.BuildAsync();
JSON Config File:
{
"Name": "MistralAgent",
"Provider": {
"ProviderKey": "mistral",
"ModelName": "mistral-large-latest",
"ApiKey": "your-api-key",
"ProviderOptionsJson": "{\"maxTokens\":4096,\"temperature\":0.7,\"topP\":0.9}"
}
}
var agent = await AgentConfig
.BuildFromFileAsync("mistral-config.json");
3. Builder + Config Pattern (Recommended)
Best for: Production deployments with reusable configuration and runtime customization.
// Define base config once
var config = new AgentConfig
{
Name = "MistralAgent",
Provider = new ProviderConfig
{
ProviderKey = "mistral",
ModelName = "mistral-large-latest",
ApiKey = "your-api-key"
}
};
var mistralOpts = new MistralProviderConfig
{
MaxTokens = 4096,
Temperature = 0.7m
};
config.Provider.SetTypedProviderConfig(mistralOpts);
// Reuse with different runtime customizations
var agent1 = await new AgentBuilder(config)
.WithServiceProvider(services)
.WithToolkit<MathToolkit>()
.Build();
var agent2 = await new AgentBuilder(config)
.WithServiceProvider(services)
.WithToolkit<FileToolkit>()
.Build();
Provider-Specific Options
The MistralProviderConfig class provides comprehensive configuration options organized by category:
Core Parameters
configure: opts =>
{
// Maximum tokens to generate (model-specific limits)
opts.MaxTokens = 4096;
}
Sampling Parameters
configure: opts =>
{
// Sampling temperature (0.0-1.0, default: 0.7)
// Higher = more creative, Lower = more focused
opts.Temperature = 0.7m;
// Top-P nucleus sampling (0.0-1.0, default: 1.0)
opts.TopP = 0.9m;
}
Determinism
configure: opts =>
{
// Seed for deterministic generation
// Same seed + same input = same output
opts.RandomSeed = 12345;
opts.Temperature = 0m; // Use with seed for max determinism
}
Response Format
configure: opts =>
{
// Response format: "text" (default) or "json_object"
opts.ResponseFormat = "json_object";
// Note: Instruct the model to produce JSON in your prompt when using json_object
}
Safety
configure: opts =>
{
// Inject Mistral's safety guardrails
opts.SafePrompt = true;
}
Tool/Function Calling
configure: opts =>
{
// Tool choice behavior: "auto" (default), "any", "none"
opts.ToolChoice = "auto";
// Enable parallel function calling
opts.ParallelToolCalls = true;
}
Advanced Options
configure: opts =>
{
// Additional model-specific parameters
opts.AdditionalProperties = new Dictionary<string, object>
{
["custom_parameter"] = "value"
};
}
Authentication
Mistral AI uses API key authentication. The provider supports multiple authentication methods with priority ordering.
Authentication Priority Order
1. Explicit API key in the WithMistral() method
2. Environment variable: MISTRAL_API_KEY
3. Configuration file: "mistral:ApiKey" or "Mistral:ApiKey" in appsettings.json
Method 1: Environment Variable (Recommended for Development)
export MISTRAL_API_KEY="your-api-key"
// Automatically uses environment variable
var agent = await new AgentBuilder()
.WithMistral(model: "mistral-large-latest")
.Build();
Method 2: Explicit API Key
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key")
.Build();
Security Warning: Never hardcode API keys in source code. Use environment variables or secure configuration management instead.
Method 3: Configuration File
appsettings.json:
{
"Mistral": {
"ApiKey": "your-api-key",
"ModelName": "mistral-large-latest"
}
}
var agent = await new AgentBuilder()
.WithMistral(model: "mistral-large-latest")
.Build();
Getting Your API Key
- Sign up at Mistral AI Console
- Navigate to API Keys section
- Create a new API key
- Store securely using environment variables or secrets management
Supported Models
Mistral AI provides access to multiple model families optimized for different use cases.
Commercial Models
| Model ID | Context | Best For |
|---|---|---|
| mistral-large-latest | 128k | Complex reasoning, coding, analysis |
| mistral-medium-latest | 32k | Balanced performance and cost |
| mistral-small-latest | 32k | Fast, cost-effective for simple tasks |
Open-Source Models
| Model ID | Context | Best For |
|---|---|---|
| open-mistral-7b | 32k | Open-source, general purpose |
| open-mixtral-8x7b | 32k | Mixture of Experts, high performance |
Embeddings
| Model ID | Dimensions | Best For |
|---|---|---|
| mistral-embed | 1024 | Text embeddings, semantic search |
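Embedding vectors (1024 dimensions for mistral-embed, per the table above) are typically compared with cosine similarity for semantic search. A minimal, self-contained sketch of that comparison follows; this helper is illustrative and is not part of the provider's API:

```csharp
using System;

static class Embeddings
{
    // Cosine similarity between two embedding vectors of equal length
    // (e.g. the 1024-dimensional vectors produced by mistral-embed).
    // Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
    public static double CosineSimilarity(double[] a, double[] b)
    {
        if (a.Length != b.Length)
            throw new ArgumentException("Vectors must have the same length.");
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }
}
```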
For the latest models, see the Mistral AI Models page linked under Additional Resources.
Advanced Features
JSON Output Mode
Force the model to return valid JSON:
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.ResponseFormat = "json_object";
opts.Temperature = 0.3m; // Lower for structured output
})
.Build();
var response = await agent.RunAsync(
"Return user data as JSON: Name: John Doe, Age: 30, City: Paris");
Note: Always instruct the model to produce JSON in your prompt when using json_object mode.
Deterministic Generation
Produce identical outputs for the same inputs:
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-small-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.RandomSeed = 12345; // Same seed = same output
opts.Temperature = 0m; // No randomness
})
.Build();
// Multiple calls with same input will produce identical outputs
var response1 = await agent.RunAsync("Generate a story.");
var response2 = await agent.RunAsync("Generate a story.");
// response1 == response2
Safety Prompts
Inject Mistral's safety guardrails:
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.SafePrompt = true; // Add safety guardrails
})
.Build();
Benefits:
- Prevents harmful outputs
- Reduces inappropriate responses
- Enforces ethical guidelines
Parallel Tool Calling
Enable the model to call multiple tools simultaneously:
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.ToolChoice = "auto";
opts.ParallelToolCalls = true; // Call multiple tools at once
})
.WithToolkit<WeatherToolkit>()
.WithToolkit<CalculatorToolkit>()
.Build();
Client Middleware
Add logging, caching, or custom processing:
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
clientFactory: client =>
new LoggingChatClient(
new CachingChatClient(client)))
.Build();
Error Handling
The Mistral provider includes intelligent error classification and automatic retry logic.
Error Categories
| Category | HTTP Status | Retry Behavior | Examples |
|---|---|---|---|
| AuthError | 401, 403 | No retry | Invalid API key, insufficient permissions |
| RateLimitRetryable | 429 | Exponential backoff | Rate limit exceeded, quota exceeded |
| ClientError | 400, 404 | No retry | Invalid parameters, model not found |
| Transient | 503 | Retry | Service unavailable, timeout |
| ServerError | 500-599 | Retry | Internal server error |
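The table above can be summarized as a status-code classifier paired with a doubling delay schedule. This is an illustrative sketch of the documented behavior, not the provider's actual retry code:

```csharp
using System;

static class MistralRetryPolicy
{
    // Maps HTTP status codes to the retry behavior in the table above.
    public static bool IsRetryable(int statusCode) => statusCode switch
    {
        401 or 403 => false,        // AuthError: never retried
        429 => true,                // RateLimitRetryable: exponential backoff
        400 or 404 => false,        // ClientError: never retried
        >= 500 and < 600 => true,   // Transient / ServerError: retried
        _ => false
    };

    // Exponential backoff: 1s, 2s, 4s, 8s for attempts 0 through 3.
    public static TimeSpan DelayFor(int attempt) =>
        TimeSpan.FromSeconds(Math.Pow(2, attempt));
}
```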
Automatic Retry Logic
The provider automatically retries transient errors with exponential backoff:
Attempt 0: 1 second delay
Attempt 1: 2 seconds delay
Attempt 2: 4 seconds delay
Attempt 3: 8 seconds delay
Common Exceptions
401 Unauthorized
Mistral rejected your authorization, invalid API key
Solution: Verify API key is correct and active
429 Rate Limit Exceeded
Rate limit exceeded
Solution: Automatically retried with backoff. For persistent issues, upgrade your plan
400 Bad Request
Invalid request parameters
Solution: Check temperature range (0.0-1.0), responseFormat values, etc.
503 Service Unavailable
Service temporarily unavailable
Solution: Automatically retried. If persistent, check Mistral AI status
404 Model Not Found
Model not found
Solution: Verify model ID and ensure you have access
Examples
Example 1: Basic Chat
using HPD.Agent;
using HPD.Agent.Providers.Mistral;
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key")
.Build();
var response = await agent.RunAsync("Explain quantum computing in simple terms.");
Console.WriteLine(response);
Example 2: Function Calling with Tools
public class WeatherToolkit
{
[Function("Get current weather for a location")]
public string GetWeather(string location)
{
return $"The weather in {location} is sunny, 72°F";
}
}
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts => opts.ToolChoice = "auto")
.WithToolkit<WeatherToolkit>()
.Build();
var response = await agent.RunAsync("What's the weather in Seattle?");
Example 3: Streaming Responses
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key")
.Build();
await foreach (var chunk in agent.RunAsync("Write a short story about AI."))
{
Console.Write(chunk);
}
Example 4: JSON Output Mode
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.ResponseFormat = "json_object";
opts.Temperature = 0.3m;
})
.Build();
var response = await agent.RunAsync(@"
Extract the following person data as JSON:
Name: Alice Smith
Age: 28
City: New York
Occupation: Engineer
");
Console.WriteLine(response);
// Output: {"name":"Alice Smith","age":28,"city":"New York","occupation":"Engineer"}
Example 5: Deterministic Generation for Testing
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-small-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.RandomSeed = 42;
opts.Temperature = 0m;
})
.Build();
// Use in tests to ensure consistent behavior
var response1 = await agent.RunAsync("Generate a test case");
var response2 = await agent.RunAsync("Generate a test case");
Assert.Equal(response1, response2); // Should pass with a fixed seed and zero temperature
Example 6: Creative Writing with High Temperature
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.Temperature = 1.0m; // Maximum creativity
opts.TopP = 0.95m;
opts.MaxTokens = 2048;
})
.Build();
var response = await agent.RunAsync("Write a creative poem about the stars.");
Example 7: Precise Code Generation
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.Temperature = 0m; // No randomness
opts.MaxTokens = 4096;
})
.Build();
var response = await agent.RunAsync("Write a Python function to sort a list.");
Example 8: Safe Content Generation
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.SafePrompt = true; // Enable safety guardrails
opts.Temperature = 0.7m;
})
.Build();
var response = await agent.RunAsync("Generate a story for children.");
Example 9: Parallel Tool Execution
public class MathToolkit
{
[Function("Add two numbers")]
public int Add(int a, int b) => a + b;
[Function("Multiply two numbers")]
public int Multiply(int a, int b) => a * b;
}
var agent = await new AgentBuilder()
.WithMistral(
model: "mistral-large-latest",
apiKey: "your-api-key",
configure: opts =>
{
opts.ParallelToolCalls = true; // Execute tools in parallel
})
.WithToolkit<MathToolkit>()
.Build();
var response = await agent.RunAsync("What is 5+3 and 4*7?");
// Model can call both Add and Multiply simultaneously
Example 10: Multi-Model Strategy
// Use different models for different tasks
var config = new AgentConfig
{
Provider = new ProviderConfig
{
ProviderKey = "mistral",
ApiKey = "your-api-key"
}
};
// Fast model for simple tasks
var simpleAgent = await new AgentBuilder(config)
.WithMistral(model: "mistral-small-latest")
.Build();
// Powerful model for complex tasks
var complexAgent = await new AgentBuilder(config)
.WithMistral(
model: "mistral-large-latest",
configure: opts => opts.MaxTokens = 8192)
.Build();
// Route based on task complexity
var response = taskIsSimple
? await simpleAgent.RunAsync(prompt)
: await complexAgent.RunAsync(prompt);
Troubleshooting
"API key is required for Mistral"
Problem: Missing API key configuration.
Solution:
// Option 1: Environment variable
Environment.SetEnvironmentVariable("MISTRAL_API_KEY", "your-api-key");
// Option 2: Explicit parameter
.WithMistral(model: "...", apiKey: "your-api-key")
// Option 3: Config file
// Add to appsettings.json: "Mistral": {"ApiKey": "your-api-key"}
"401 Unauthorized"
Problem: Invalid or expired API key.
Solution:
- Verify API key is correct
- Check key is active in Mistral AI Console
- Ensure no extra spaces or characters
- Generate new API key if necessary
"Temperature must be between 0.0 and 1.0"
Problem: Invalid temperature value.
Solution:
configure: opts => opts.Temperature = 0.7m // Valid (0.0-1.0)
// NOT: opts.Temperature = 1.5m // Invalid for Mistral
"ResponseFormat must be one of: text, json_object"
Problem: Invalid response format.
Solution:
configure: opts => opts.ResponseFormat = "json_object" // Valid
// NOT: opts.ResponseFormat = "json_schema" // Not supported by Mistral
"ToolChoice must be one of: auto, any, none"
Problem: Invalid tool choice value.
Solution:
configure: opts => opts.ToolChoice = "auto" // Valid
// NOT: opts.ToolChoice = "required" // Use "any" instead
Rate Limiting (429)
Problem: Too many requests.
Solution:
- Provider automatically retries with exponential backoff
- For persistent issues:
- Upgrade your Mistral AI plan
- Implement request throttling
- Reduce request frequency
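One simple way to implement the request throttling suggested above is to gate calls behind a semaphore. This sketch is illustrative and independent of HPD-Agent; RequestThrottle is a hypothetical helper, not part of the library:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Limits the number of requests in flight at any one time.
class RequestThrottle
{
    private readonly SemaphoreSlim _gate;

    public RequestThrottle(int maxConcurrentRequests) =>
        _gate = new SemaphoreSlim(maxConcurrentRequests);

    public async Task<T> RunAsync<T>(Func<Task<T>> request)
    {
        await _gate.WaitAsync();   // wait for a free slot
        try
        {
            return await request();
        }
        finally
        {
            _gate.Release();       // free the slot for the next caller
        }
    }
}
```

Usage might look like `var reply = await throttle.RunAsync(() => agent.RunAsync(prompt));`, keeping at most N calls to the API in flight and reducing the chance of 429 responses.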
Connection Timeout
Problem: Requests timing out.
Solution:
configure: opts =>
{
opts.MaxTokens = 2048; // Reduce output size
}
// Or increase HttpClient timeout in your app
Model Not Found (404)
Problem: Invalid model ID or no access.
Solution:
- Verify model ID matches exactly (case-sensitive)
- Check available models
- Ensure you have access to the model
MistralProviderConfig API Reference
Core Parameters
| Property | Type | Range | Default | Description |
|---|---|---|---|---|
| MaxTokens | int? | ≥ 1 | Model-specific | Maximum tokens to generate |
Sampling Parameters
| Property | Type | Range | Default | Description |
|---|---|---|---|---|
| Temperature | decimal? | 0.0-1.0 | 0.7 | Sampling temperature (creativity) |
| TopP | decimal? | 0.0-1.0 | 1.0 | Nucleus sampling threshold |
Determinism
| Property | Type | Default | Description |
|---|---|---|---|
| RandomSeed | int? | - | Seed for deterministic generation |
Response Format
| Property | Type | Values | Default | Description |
|---|---|---|---|---|
| ResponseFormat | string? | "text", "json_object" | "text" | Output format |
Safety
| Property | Type | Default | Description |
|---|---|---|---|
| SafePrompt | bool? | false | Inject Mistral's safety guardrails |
Tool/Function Calling
| Property | Type | Values | Default | Description |
|---|---|---|---|---|
| ToolChoice | string? | "auto", "any", "none" | "auto" | Tool selection behavior |
| ParallelToolCalls | bool? | - | true | Enable parallel function calling |
Advanced Options
| Property | Type | Default | Description |
|---|---|---|---|
| AdditionalProperties | Dictionary<string, object>? | - | Custom model parameters |
Additional Resources
- Mistral AI Documentation
- Mistral AI Models
- Mistral AI Pricing
- Mistral AI Console
- Mistral.SDK on GitHub
- API Reference
Note: This provider uses the community-maintained Mistral.SDK, which is not an official Mistral AI SDK but provides comprehensive integration with Microsoft.Extensions.AI.