Getting Started with Askimo CLI
After installing Askimo, choose a provider and a model, then start chatting. Askimo saves your settings locally, so you won’t need to repeat these steps next time.
👉 If you don’t choose a model, Askimo uses that provider’s default (Ollama has no default, so you must set one).
Quick start (works the same for any provider)
askimo> :set-provider <ollama|openai|gemini|xai|anthropic|docker|localai|lmstudio>
askimo> :models                       # see models available for that provider
askimo> :set-param model <model-id>   # optional if a default exists
askimo> "Hello! Summarize this text."

Using Ollama (local models)
- Install Ollama (see ollama.com)
- Pull a model, for example gpt-oss:20b:
ollama pull gpt-oss:20b

- In Askimo:
askimo> :set-provider ollama
askimo> :models                       # shows local models (e.g., llama3)
askimo> :set-param model gpt-oss:20b
askimo> "Explain Redis caching in simple terms."

If :models is empty, pull one with ollama pull <name> and try again.
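You can also double-check what Ollama has pulled from outside Askimo: the local Ollama server exposes GET /api/tags on its default port 11434. A minimal sketch, assuming a default local install (`model_names` is an illustrative helper, not part of Askimo; the sample follows Ollama's documented response shape):

```python
import json
from urllib.request import urlopen

def model_names(tags: dict) -> list[str]:
    """Pull the model names out of an Ollama GET /api/tags response."""
    return [m["name"] for m in tags.get("models", [])]

# Against a running Ollama server (default port 11434):
# with urlopen("http://localhost:11434/api/tags") as resp:
#     print(model_names(json.load(resp)))

# Offline, with a sample in Ollama's documented response shape:
sample = {"models": [{"name": "gpt-oss:20b"}, {"name": "llama3:latest"}]}
print(model_names(sample))  # ['gpt-oss:20b', 'llama3:latest']
```

Whatever this returns is exactly what `:models` should show for the ollama provider.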
Using OpenAI
- Get an API key → https://platform.openai.com/api-keys
- Configure Askimo and chat:
askimo> :set-provider openai
askimo> :set-param api_key sk-...
askimo> :models                       # e.g., gpt-4o, gpt-4o-mini
askimo> "Explain Redis caching in simple terms."

📌 Default model: gpt-4o
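The api_key you set here is sent to OpenAI as a standard Bearer token. If you ever need to sanity-check a key outside Askimo, OpenAI's GET /v1/models endpoint is the usual probe; this sketch only builds the request without sending it (`openai_models_request` is an illustrative name):

```python
import os
from urllib.request import Request, urlopen

def openai_models_request(api_key: str) -> Request:
    """Build (but do not send) the GET /v1/models request OpenAI expects."""
    return Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = openai_models_request(os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
print(req.full_url)   # https://api.openai.com/v1/models
# To actually send it (requires a valid key): urlopen(req)
```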
Using Gemini (Google)
- Get an API key → https://aistudio.google.com
- Configure and chat:
askimo> :set-provider gemini
askimo> :set-param api_key <your-gemini-key>
askimo> :models                       # e.g., gemini-1.5-pro, gemini-1.5-flash
askimo> "Give me five CLI productivity tips."

📌 Default model: gemini-2.5-flash
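Gemini's REST API differs slightly from the Bearer-token providers: it accepts the key as a `key` query parameter (or an `x-goog-api-key` header). A sketch of the models-list URL, for orientation only (`gemini_models_url` is an illustrative helper):

```python
from urllib.parse import urlencode

def gemini_models_url(api_key: str) -> str:
    """Build the Gemini REST URL that lists available models (key in query string)."""
    base = "https://generativelanguage.googleapis.com/v1beta/models"
    return f"{base}?{urlencode({'key': api_key})}"

print(gemini_models_url("your-gemini-key"))
```

Askimo builds and sends these requests for you; this just shows where the api_key you set ends up.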
Using xAI (Grok)
- Get an API key → https://x.ai
- Configure and chat:
askimo> :set-provider xai
askimo> :set-param api_key <your-xai-key>
askimo> :models                       # e.g., grok-2, grok-2-mini (examples)
askimo> :set-param model grok-3-mini
askimo> "What's new in Java 21?"

📌 Default model: grok-4
Using Anthropic Claude
- Get an API key → https://console.anthropic.com/
- Configure and chat:
askimo> :set-provider anthropic
askimo> :set-param api_key <your-anthropic-key>
askimo> :models                       # e.g., claude-3-5-sonnet, claude-3-opus
askimo> "Analyze this code for potential improvements."

📌 Default model: claude-3-5-sonnet-20241022
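Anthropic's Messages API has a different shape from the OpenAI-style providers: the key goes in an `x-api-key` header alongside a required `anthropic-version` header, and `max_tokens` is mandatory in the body. Askimo handles all of this for you; the sketch below (`anthropic_request` is an illustrative helper) just shows the shape:

```python
import json

def anthropic_request(api_key: str, model: str, prompt: str):
    """Assemble (but do not send) headers and body for Anthropic's Messages API."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",  # required version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,                 # required field in this API
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(body)

headers, body = anthropic_request("<your-anthropic-key>",
                                  "claude-3-5-sonnet-20241022",
                                  "Analyze this code for potential improvements.")
print(headers["anthropic-version"])  # 2023-06-01
```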
Using Docker AI
- Enable the Docker AI model runner:
docker desktop enable model-runner --tcp 12434

- Pull a model:
docker model pull ai/gemma3:4B-F16

- Configure Askimo and chat:
askimo> :set-provider docker
askimo> :models
askimo> :set-param model ai/gemma3:4B-F16
askimo> "Explain containerization concepts."

📌 Default endpoint: http://localhost:12434
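Before pointing Askimo at the model runner, you can confirm something is actually listening on the TCP port you enabled above. A generic port probe in plain Python (`port_open` is an illustrative helper, not part of Askimo or Docker):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 12434))  # prints True only if the model runner is listening
```

If this prints False, re-run the `docker desktop enable model-runner --tcp 12434` step before configuring Askimo.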
Using LocalAI
- Install LocalAI (see localai.io)
- Configure and chat:
askimo> :set-provider localai
askimo> :set-param base_url http://localhost:8080   # your LocalAI endpoint
askimo> :models                       # shows available LocalAI models
askimo> :set-param model <model-name>
askimo> "Help me debug this function."

Using LM Studio
- Install LM Studio (see lmstudio.ai)
- Start the local server in LM Studio
- Configure Askimo and chat:
askimo> :set-provider lmstudio
askimo> :set-param base_url http://localhost:1234   # default LM Studio port
askimo> :models                       # shows loaded models
askimo> :set-param model <model-name>
askimo> "Generate a regex pattern for email validation."

Switch any time
You can switch providers/models on the fly; Askimo remembers your last choices.
askimo> :set-provider ollama
askimo> :set-param model mistral
askimo> :set-provider openai
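Switching is painless in part because most of these backends (OpenAI, xAI, Ollama, LocalAI, LM Studio, Docker's model runner) speak the same OpenAI-style chat-completions API, so largely only the endpoint, key, and model name change between providers. A sketch of the request body that style shares (`chat_body` is an illustrative helper, not part of Askimo):

```python
import json

def chat_body(model: str, prompt: str) -> str:
    """Build a minimal OpenAI-style chat-completions request body."""
    return json.dumps({
        "model": model,   # e.g. "gpt-4o", "mistral", "grok-3-mini"
        "messages": [{"role": "user", "content": prompt}],
    })

print(chat_body("mistral", "Hello! Summarize this text."))
```

Gemini and Anthropic use their own request shapes, but Askimo hides that difference behind the same `:set-provider` / `:set-param` commands.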