
Docker AI Setup for Askimo App

Connect Askimo App to AI models running in Docker containers for portable and reproducible AI deployments.

Configuration Settings:

  • Server URL: Docker AI container endpoint
    • Default: http://localhost:12434 (Docker AI default port)
    • For remote containers: http://your-server:12434
  • API Key: (Optional) If your container requires authentication
  • Timeout: Connection timeout (default: 120s)
  • Available Models: Detected from your running Docker AI containers
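If your container does require a key, it is typically sent as a standard OpenAI-style Bearer token. A sketch of what such a request would look like (the key value here is hypothetical; the header is only needed when authentication is enabled):

```shell
#!/bin/sh
# Sketch: listing models with an optional API key (hypothetical key value).
BASE_URL="http://localhost:12434"
API_KEY="example-key"
echo "Authorization: Bearer $API_KEY"

# With authentication enabled on the container:
#   curl -s -H "Authorization: Bearer $API_KEY" "$BASE_URL/v1/models"
# Without authentication (the local Docker AI default):
#   curl -s "$BASE_URL/v1/models"
```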
Quick Setup:

  1. Install Docker Desktop from docker.com
  2. Enable the OpenAI-compatible API server:
     docker desktop enable model-runner --tcp 12434
  3. Pull an AI model from Docker Hub (it will be automatically served):
     docker model pull ai/<model>
  4. In Askimo, configure the provider:
    • Provider Type: OpenAI (OpenAI-compatible)
    • Base URL: http://localhost:12434/v1
    • API Key: (leave empty - not required for local Docker AI)
    • Model: The model name (e.g., ai/gemma3:4B-F16)
  5. Click “Test Connection” to verify
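Before clicking “Test Connection”, you can sanity-check the endpoint from a terminal. A minimal sketch, assuming the default port 12434 (adjust the host and port to your setup):

```shell
#!/bin/sh
# Build the base URL Askimo should use.
HOST="localhost"
PORT="12434"
BASE_URL="http://${HOST}:${PORT}/v1"
echo "$BASE_URL"

# To confirm the server answers (requires Docker Desktop with model-runner enabled):
#   curl -s "$BASE_URL/models"
```

The value printed is exactly what goes into Askimo’s Base URL field.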

Since Docker AI models run as OpenAI-compatible servers, you configure them through the OpenAI provider settings:

  1. Click the Askimo icon in the menu bar
  2. Select “Settings”
  3. Navigate to the “AI Providers” tab
  4. Select “OpenAI” from the provider list
  5. Configure the settings:
    • Base URL: http://localhost:12434/v1 (or your custom port)
    • API Key: Leave empty (not required for local Docker AI)
    • Model: The name of your Docker AI model

Keyboard Shortcut: ⌘ + , (macOS) or Ctrl + , (Windows/Linux) then click “AI Providers”
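Because these settings point Askimo at standard OpenAI-style endpoints, you can also exercise them by hand. A hedged sketch of a chat request body (the model name is just an example; substitute any name shown by `docker model ls`):

```shell
#!/bin/sh
# JSON body for a minimal OpenAI-style chat completion against the local runner.
BODY='{"model":"ai/gemma3:4B-F16","messages":[{"role":"user","content":"Hello"}]}'
echo "$BODY"

# Send it to the local runner (requires model-runner enabled on port 12434):
#   curl -s http://localhost:12434/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d "$BODY"
```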

Visit hub.docker.com/u/ai to browse available AI models.

Prerequisites:

# Enable OpenAI-compatible API server (one-time setup)
docker desktop enable model-runner --tcp 12434

Example: Running the ai/gemma3:4B-F16 model

# Pull the model (it will be automatically served)
docker model pull ai/gemma3:4B-F16
# Verify it's running
docker model ls

Configure in Askimo:

  • Provider: OpenAI
  • Base URL: http://localhost:12434/v1
  • API Key: (leave empty)
  • Model: ai/gemma3:4B-F16

List Pulled Models:

docker model ls

Remove a Model:

docker model rm <model-name>

Cannot Connect?

  • Ensure model-runner is enabled: docker desktop enable model-runner --tcp 12434
  • Verify models are pulled: docker model ls
  • Ensure correct base URL: http://localhost:12434/v1
  • Test endpoint: curl http://localhost:12434/v1/models
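The checks above can be rolled into one small diagnostic script. A sketch, assuming the default port 12434:

```shell
#!/bin/sh
# Quick connectivity diagnostic for the local Docker AI endpoint.
URL="http://localhost:12434/v1/models"
if curl -sf --max-time 5 "$URL" >/dev/null 2>&1; then
  echo "Endpoint reachable: $URL"
else
  echo "Cannot reach $URL - check that model-runner is enabled and Docker Desktop is running"
fi
```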

Model Not Available?

  • Check if Docker Desktop is running
  • Verify model-runner is enabled (see above)
  • Verify the model was pulled successfully: docker model ls
  • Try pulling the model again: docker model pull ai/<model-name>
  • Restart Docker Desktop if needed

Slow Performance?

  • Docker AI uses the GPU automatically when one is available
  • Use smaller models for faster inference
  • Close other resource-intensive applications
  • Check system resources (CPU/RAM usage)

Need Different Port? If port 12434 is already in use, you can specify a different port when enabling model-runner:

docker desktop enable model-runner --tcp 12435

Then update Askimo’s base URL to http://localhost:12435/v1.
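For example, switching to port 12435 and deriving the matching base URL (a sketch; any free port works the same way):

```shell
#!/bin/sh
# Derive Askimo's base URL from a custom model-runner port.
PORT=12435
BASE_URL="http://localhost:${PORT}/v1"
echo "$BASE_URL"

# One-time setup on the custom port:
#   docker desktop enable model-runner --tcp "$PORT"
```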