
Getting Started with Askimo CLI


After installing Askimo, choose a provider and a model, then start chatting. Askimo saves your settings locally, so you won’t need to repeat these steps next time.

👉 If you don’t choose a model, Askimo uses that provider’s default. Ollama is the exception: it has no default, so you must set a model you’ve pulled locally.

Quick start (works the same for any provider)

Terminal window
askimo> :set-provider <ollama|openai|gemini|xai|anthropic|docker|localai|lmstudio>
askimo> :models # see models available for that provider
askimo> :set-param model <model-id> # optional if a default exists
askimo> "Hello! Summarize this text."

Ollama

  1. Install Ollama (see ollama.com)
  2. Pull a model, for example gpt-oss:20b:
Terminal window
ollama pull gpt-oss:20b
  3. In Askimo:
Terminal window
askimo> :set-provider ollama
askimo> :models # shows locally pulled models (e.g., gpt-oss:20b)
askimo> :set-param model gpt-oss:20b
askimo> "Explain Redis caching in simple terms."

If :models is empty, pull one with ollama pull <name> and try again.
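If you prefer to check from a regular shell first, the standard Ollama CLI can confirm what’s available locally (the model name below is just an example):

```shell
# List the models already pulled locally — if this is empty,
# Askimo's :models will be empty too
ollama list

# Pull an example model, then list again to confirm it appears
ollama pull llama3
ollama list
```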


OpenAI

  1. Get an API key → https://platform.openai.com/api-keys
  2. Configure Askimo and chat:
Terminal window
askimo> :set-provider openai
askimo> :set-param api_key sk-...
askimo> :models # e.g., gpt-4o, gpt-4o-mini
askimo> "Explain Redis caching in simple terms."

📌 Default model: gpt-4o
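If :models fails with an authentication error, the key can be verified outside Askimo against the standard OpenAI REST API (the environment variable name here is just a convention, not something Askimo reads):

```shell
# A valid key returns a JSON list of available models;
# an invalid key returns a 401 error body
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```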


Google Gemini

  1. Get an API key → https://aistudio.google.com
  2. Configure and chat:
Terminal window
askimo> :set-provider gemini
askimo> :set-param api_key <your-gemini-key>
askimo> :models # e.g., gemini-2.5-pro, gemini-2.5-flash
askimo> "Give me five CLI productivity tips."

📌 Default model: gemini-2.5-flash


xAI (Grok)

  1. Get an API key → https://x.ai
  2. Configure and chat:
Terminal window
askimo> :set-provider xai
askimo> :set-param api_key <your-xai-key>
askimo> :models # e.g., grok-3-mini, grok-4 (examples)
askimo> :set-param model grok-3-mini
askimo> "What's new in Java 21?"

📌 Default model: grok-4


Anthropic (Claude)

  1. Get an API key → https://console.anthropic.com/
  2. Configure and chat:
Terminal window
askimo> :set-provider anthropic
askimo> :set-param api_key <your-anthropic-key>
askimo> :models # e.g., claude-3-5-sonnet, claude-3-opus
askimo> "Analyze this code for potential improvements."

📌 Default model: claude-3-5-sonnet-20241022


Docker Model Runner

  1. Enable the Docker AI model runner:
Terminal window
docker desktop enable model-runner --tcp 12434
  2. Pull a model:
Terminal window
docker model pull ai/gemma3:4B-F16
  3. Configure Askimo and chat:
Terminal window
askimo> :set-provider docker
askimo> :models
askimo> :set-param model ai/gemma3:4B-F16
askimo> "Explain containerization concepts."

📌 Default endpoint: http://localhost:12434
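If :models comes back empty, the Model Runner can be checked directly from a shell; `docker model list` is part of the Docker Model Runner CLI, and the /engines path below is how the TCP endpoint is commonly exposed (verify against your Docker Desktop version):

```shell
# Confirm the model was pulled and is known to the Model Runner
docker model list

# The runner speaks an OpenAI-compatible API over the TCP port
# enabled above; this should list the available models
curl http://localhost:12434/engines/v1/models
```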


LocalAI

  1. Install LocalAI (see localai.io)
  2. Configure and chat:
Terminal window
askimo> :set-provider localai
askimo> :set-param base_url http://localhost:8080 # your LocalAI endpoint
askimo> :models # shows available LocalAI models
askimo> :set-param model <model-name>
askimo> "Help me debug this function."
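Because LocalAI exposes an OpenAI-compatible API, the endpoint you give base_url can be sanity-checked before pointing Askimo at it:

```shell
# Lists the models LocalAI can serve; if this fails, Askimo's
# :models will fail too — check that LocalAI is running on :8080
curl http://localhost:8080/v1/models
```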

LM Studio

  1. Install LM Studio (see lmstudio.ai)
  2. Start the local server in LM Studio
  3. Configure Askimo and chat:
Terminal window
askimo> :set-provider lmstudio
askimo> :set-param base_url http://localhost:1234 # default LM Studio port
askimo> :models # shows loaded models
askimo> :set-param model <model-name>
askimo> "Generate a regex pattern for email validation."
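LM Studio’s local server is also OpenAI-compatible, so the same kind of check works here (make sure the server is started and a model is loaded in LM Studio first):

```shell
# Returns the models currently available from LM Studio's server
curl http://localhost:1234/v1/models
```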

Switching providers

You can switch providers and models on the fly; Askimo remembers your last choices.

Terminal window
askimo> :set-provider ollama
askimo> :set-param model mistral
askimo> :set-provider openai