Ollama

PapertLab can connect seamlessly to local Ollama models, providing AI-driven coding assistance directly on your machine.

Setting the API Base via PapertLab Settings

Instead of setting the API base through an environment variable, you can configure your Ollama API base directly on PapertLab’s settings page:

  1. Open PapertLab and Access Settings: Start PapertLab and navigate to the settings page.

  2. Locate the API Section: Find the section where API keys and endpoints are managed.

  3. Input Your API Base: Enter your Ollama API base in the designated field (e.g., http://127.0.0.1:11434).

  4. Save Your Settings: Once entered, save your settings. PapertLab will now connect to your local Ollama models using the provided API base.

By following these steps, you can integrate local Ollama models with PapertLab and take advantage of its code editing capabilities. Whether you configure the connection through environment variables on the command line or via the settings page, PapertLab offers the flexibility to fit your development setup.
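After saving the API base, you can optionally confirm that the Ollama server is actually reachable at that address. This is a minimal sketch assuming the default address `http://127.0.0.1:11434`; it queries Ollama's `/api/tags` endpoint, which lists the models available locally:

```shell
# Quick reachability check for the configured Ollama API base.
# The address below is the default example; substitute your own if it differs.
OLLAMA_API_BASE="${OLLAMA_API_BASE:-http://127.0.0.1:11434}"

if curl -fsS "$OLLAMA_API_BASE/api/tags" >/dev/null; then
    echo "Ollama is reachable at $OLLAMA_API_BASE"
else
    echo "Could not reach Ollama at $OLLAMA_API_BASE" >&2
fi
```

If the check fails, make sure `ollama serve` is running and that the address you entered in the settings page matches the one the server is listening on.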

Steps to Use Ollama Models with PapertLab

  1. Pull the Ollama Model: Begin by pulling the model you intend to use:

    ollama pull <model>
  2. Start the Ollama Server: Once the model is pulled, start the Ollama server:

    ollama serve
  3. Install PapertLab: Ensure PapertLab is installed on your system. If not, install it using pip:

    python -m pip install papert-lab
  4. Set Your Ollama API Base:

    • Mac/Linux:

      export OLLAMA_API_BASE=http://127.0.0.1:11434
    • Windows:

      setx OLLAMA_API_BASE http://127.0.0.1:11434

      (Note: After using setx, restart your shell for the changes to take effect.)

  5. Using PapertLab with Ollama Models:

    • To use a specific Ollama model, such as llama3:70b, specify it when launching PapertLab:

      papertlab --model ollama/llama3:70b
  6. Manage Model Warnings: PapertLab may issue warnings when working with unfamiliar models. Refer to the model warnings section for details.
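One detail worth checking on Mac/Linux: the `export` in step 4 takes effect only in the current shell and the processes it launches. A quick way to confirm that `papertlab` will inherit the variable is to echo it from a child process:

```shell
# Export the API base in the current shell (Mac/Linux).
export OLLAMA_API_BASE=http://127.0.0.1:11434

# Any child process (such as papertlab) inherits exported variables:
sh -c 'echo "child process sees: $OLLAMA_API_BASE"'
```

If the child process prints an empty value, the variable was set without `export` (or in a different shell session), and PapertLab will not see it.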
