Docker AI Setup for Askimo App
Docker AI Configuration

Connect Askimo App to AI models running in Docker containers for portable and reproducible AI deployments.
Server Configuration

- Server URL: Docker AI container endpoint
  - Default: http://localhost:12434 (Docker AI default port)
  - For remote containers: http://your-server:12434
- API Key: (Optional) If your container requires authentication
- Timeout: Connection timeout (default: 120s)
- Available Models: Detected from your running Docker AI containers
Setting Up Docker AI

- Install Docker Desktop from docker.com
- Enable the OpenAI-compatible API server:

  docker desktop enable model-runner --tcp 12434

- Pull an AI model from Docker Hub (it will be automatically served):

  docker model pull ai/<model>

- In Askimo, configure the provider:
  - Provider Type: OpenAI (OpenAI-compatible)
  - Base URL: http://localhost:12434/v1
  - API Key: (leave empty; not required for local Docker AI)
  - Model: The model name (e.g., ai/gemma3:4B-F16)
- Click “Test Connection” to verify
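Once model-runner is enabled and a model is pulled, you can sanity-check the endpoint from a terminal before configuring Askimo. This is a minimal sketch assuming the default port 12434; the `BASE_URL` variable is just for illustration.

```shell
# Sanity-check the OpenAI-compatible endpoint before configuring Askimo.
# Assumes the default Docker Model Runner port (12434); adjust if you
# enabled model-runner on a different port.
BASE_URL="http://localhost:12434/v1"

# A working setup returns a JSON object listing your pulled models;
# a connection error means model-runner is not enabled or not listening.
curl -s --max-time 5 "$BASE_URL/models" \
  || echo "model-runner is not reachable on $BASE_URL"
```

If the response lists the model you pulled, Askimo’s “Test Connection” should succeed with the same base URL.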
Accessing Provider Settings

Since Docker AI models run as OpenAI-compatible servers, you configure them through the OpenAI provider settings:

- Click on the menu bar
- Select “Settings”
- Navigate to the “AI Providers” tab
- Select “OpenAI” from the provider list
- Configure the settings:
  - Base URL: http://localhost:12434/v1 (or your custom port)
  - API Key: Leave empty (not required for local Docker AI)
  - Model: The name of your Docker AI model

Keyboard Shortcut: ⌘ + , (macOS) or Ctrl + , (Windows/Linux), then click “AI Providers”
Docker Hub AI Models

Visit hub.docker.com/u/ai to browse available AI models.

Prerequisites:

  # Enable OpenAI-compatible API server (one-time setup)
  docker desktop enable model-runner --tcp 12434

Example: Running the gemma3:4B-F16 model

  # Pull the model (it will be automatically served)
  docker model pull ai/gemma3:4B-F16

  # Verify it's running
  docker model ls

Configure in Askimo:

- Provider: OpenAI
- Base URL: http://localhost:12434/v1
- API Key: (leave empty)
- Model: ai/gemma3:4B-F16
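Beyond listing models, you can exercise the model itself before pointing Askimo at it. The sketch below assumes the gemma3 example above is pulled and model-runner is on the default port; it uses the standard OpenAI-compatible /v1/chat/completions route, and the `MODEL` and `PAYLOAD` variables are just for illustration.

```shell
# Send a minimal chat-completion request to the local Docker AI endpoint.
# MODEL matches the example pulled above; substitute any name shown by
# `docker model ls`.
MODEL="ai/gemma3:4B-F16"

# Build an OpenAI-style JSON payload with a single user message.
PAYLOAD='{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "Say hello in one word."}]}'

curl -s --max-time 60 http://localhost:12434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  || echo "request failed: is model-runner enabled and the model pulled?"
```

A JSON response with a `choices` array indicates the model is being served correctly; the same model name then goes into Askimo’s Model field.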
Model Management

List pulled models:

  docker model ls

Remove a model:

  docker model rm <model-name>

Troubleshooting

Cannot Connect?

- Ensure model-runner is enabled:

  docker desktop enable model-runner --tcp 12434

- Verify models are pulled:

  docker model ls

- Ensure the base URL is correct: http://localhost:12434/v1
- Test the endpoint:

  curl http://localhost:12434/v1/models
Model Not Available?

- Check that Docker Desktop is running
- Verify model-runner is enabled (see above)
- Verify the model was pulled successfully:

  docker model ls

- Try pulling the model again:

  docker model pull ai/<model-name>

- Restart Docker Desktop if needed
Slow Performance?
- Docker AI uses the GPU automatically when one is available
- Use smaller models for faster inference
- Close other resource-intensive applications
- Check system resources (CPU/RAM usage)
Need Different Port?

If port 12434 is already in use, you can specify a different port when enabling model-runner:

  docker desktop enable model-runner --tcp 12435

Then update Askimo’s base URL to http://localhost:12435/v1.