
Getting Started with Askimo CLI


After installing Askimo, choose a provider and a model, then start chatting. Askimo saves your settings locally, so you won’t need to repeat these steps next time.

👉 If you don’t choose a model, Askimo uses the provider’s default (except Ollama, which has no default, so you must set a model explicitly).

Quick start (works the same for any provider)

askimo> :set-provider <ollama|openai|gemini|xai|anthropic|docker|localai|lmstudio>
askimo> :models # list models available for that provider
askimo> :set-param model <model-id> # optional if the provider has a default
askimo> "Hello! Summarize this text."

Ollama

  1. Install Ollama (see ollama.com)
  2. Pull a model, for example:

ollama pull <model-name>

  3. In Askimo:
askimo> :set-provider ollama
askimo> :models
askimo> :set-param model <model-name>
askimo> "Explain Redis caching in simple terms."

If :models is empty, pull one with ollama pull <name> and try again.
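To double-check what :models should see, Ollama exposes its local model list over HTTP at GET /api/tags on its default port. A small sketch of fetching and reading that response (assuming a running Ollama daemon; Askimo itself does not need this):

```python
import json
import urllib.request

def parse_model_names(tags_payload: dict) -> list[str]:
    """Extract model names from Ollama's /api/tags response payload."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch the locally pulled models from a running Ollama daemon."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))
```

If the returned list is empty, pull a model with ollama pull <model-name> and query again.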


OpenAI

  1. Get an API key → https://platform.openai.com/api-keys
  2. Configure Askimo and chat:
askimo> :set-provider openai
askimo> :set-param api_key sk-...
askimo> :models
askimo> "Explain Redis caching in simple terms."

Gemini

  1. Get an API key → https://aistudio.google.com
  2. Configure and chat:
askimo> :set-provider gemini
askimo> :set-param api_key <your-gemini-key>
askimo> :models
askimo> "Give me five CLI productivity tips."

xAI

  1. Get an API key → https://x.ai
  2. Configure and chat:
askimo> :set-provider xai
askimo> :set-param api_key <your-xai-key>
askimo> :models
askimo> :set-param model <model-id> # optional
askimo> "What's new in Java 21?"

Anthropic

  1. Get an API key → https://console.anthropic.com/
  2. Configure and chat:
askimo> :set-provider anthropic
askimo> :set-param api_key <your-anthropic-key>
askimo> :models
askimo> "Analyze this code for potential improvements."

Docker Model Runner

  1. Enable the Docker Model Runner:

docker desktop enable model-runner --tcp 12434

  2. Pull a model:

docker model pull <model-name>

  3. Configure Askimo and chat:
askimo> :set-provider docker
askimo> :models
askimo> :set-param model <model-name>
askimo> "Explain containerization concepts."

📌 Default endpoint: http://localhost:12434
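Before pointing Askimo at the runner, you can verify that something is actually listening on that port. A generic TCP reachability check (not part of Askimo, just a diagnostic sketch):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 12434) should return True once the runner is enabled
```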


LocalAI

  1. Install LocalAI (see localai.io)
  2. Configure and chat:
askimo> :set-provider localai
askimo> :set-param base_url http://localhost:8080 # your LocalAI endpoint
askimo> :models
askimo> :set-param model <model-name>
askimo> "Help me debug this function."

LM Studio

  1. Install LM Studio (see lmstudio.ai)
  2. Start the local server in LM Studio
  3. Configure Askimo and chat:
askimo> :set-provider lmstudio
askimo> :set-param base_url http://localhost:1234 # default LM Studio port
askimo> :models
askimo> :set-param model <model-name>
askimo> "Generate a regex pattern for email validation."

Switching providers

You can switch providers/models on the fly; Askimo remembers your last choices.

askimo> :set-provider ollama
askimo> :set-param model <model-name>
askimo> :set-provider openai # your per-provider settings are remembered
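The “remembers your last choices” behavior can be pictured as a small per-provider settings store. This is purely illustrative: the file location and schema below are made up, not Askimo’s actual storage format.

```python
import json
from pathlib import Path

# Hypothetical settings file; Askimo's real location and format may differ.
SETTINGS_FILE = Path.home() / ".askimo" / "settings.json"

def load_settings(path: Path = SETTINGS_FILE) -> dict:
    """Load saved settings, or start fresh if none exist yet."""
    if path.exists():
        return json.loads(path.read_text())
    return {"provider": None, "models": {}}

def set_provider(settings: dict, provider: str) -> dict:
    settings["provider"] = provider
    return settings

def set_model(settings: dict, model: str) -> dict:
    # Remember the model per provider, so switching back restores it.
    settings["models"][settings["provider"]] = model
    return settings

def save_settings(settings: dict, path: Path = SETTINGS_FILE) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
```

With a store like this, switching from ollama to openai and back would restore the Ollama model chosen earlier, matching the behavior described above.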