# Ollama

PapertLab can connect to local Ollama models, providing AI-assisted coding directly on your machine.

**Setting the API Base via PapertLab Settings**

<figure><img src="/files/1spqanJ2km6qcuc5NUwW" alt=""><figcaption></figcaption></figure>

Instead of setting the API base through environment variables, you can also configure your Ollama API base directly through PapertLab’s settings page:

1. **Open PapertLab and Access Settings**: Start PapertLab and navigate to the settings page.
2. **Locate the API Section**: Find the section where API settings are managed.
3. **Input Your API Base**: Enter your Ollama API base in the designated field (e.g., `http://127.0.0.1:11434`).
4. **Save Your Settings**: Once entered, save your settings. PapertLab will now connect to your local Ollama models using the provided API base.
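Before saving, you can check that the API base you entered points at a running Ollama server. A minimal sketch; the fallback address below is Ollama's standard local port:

```shell
# Verify the configured API base responds before saving it in PapertLab.
# Falls back to Ollama's default local address if OLLAMA_API_BASE is unset.
base="${OLLAMA_API_BASE:-http://127.0.0.1:11434}"
if curl -fsS "$base/" >/dev/null 2>&1; then
  echo "Ollama is reachable at $base"
else
  echo "No Ollama server at $base -- is 'ollama serve' running?"
fi
```

The check exits cleanly either way, so it is safe to paste into a setup script.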

By following these steps, you can integrate local Ollama models with PapertLab through the settings page. If you prefer the command line, the equivalent setup is described below; either way, PapertLab offers the flexibility to fit your development workflow.

**Steps to Use Ollama Models with PapertLab**

1. **Pull the Ollama Model**: Begin by pulling the model you intend to use:

   ```bash
   ollama pull <model>
   ```
2. **Start the Ollama Server**: Once the model is pulled, start the Ollama server:

   ```bash
   ollama serve
   ```
3. **Install PapertLab**: Ensure PapertLab is installed on your system. If not, install it using pip:

   ```bash
   python -m pip install papert-lab
   ```
4. **Set Your Ollama API Base**:
   * **Mac/Linux**:

     ```bash
     export OLLAMA_API_BASE=http://127.0.0.1:11434
     ```
   * **Windows**:

     ```cmd
     setx OLLAMA_API_BASE http://127.0.0.1:11434
     ```

     *(Note: After using `setx`, restart your shell for the changes to take effect.)*
5. **Using PapertLab with Ollama Models**:
   * To use a specific Ollama model, such as **llama3:70b**, specify it when launching PapertLab:

     ```bash
     papertlab --model ollama/llama3:70b
     ```
6. **Manage Model Warnings**: PapertLab may issue warnings when working with unfamiliar models. Refer to the model warnings section for details.
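Once the server is running, you can confirm which models are locally available before launching PapertLab. A short sketch using Ollama's `/api/tags` REST endpoint, which lists pulled models:

```shell
# List locally pulled models via the Ollama REST API (/api/tags).
# Uses OLLAMA_API_BASE if set, otherwise Ollama's default local address.
base="${OLLAMA_API_BASE:-http://127.0.0.1:11434}"
curl -fsS "$base/api/tags" || echo "Could not reach Ollama at $base"
```

Any model name shown in the response can be passed to PapertLab as `ollama/<model>`.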


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.papert.in/connecting-to-llms/ollama.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
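For example, such a query can be issued with curl; combining `-G` with `--data-urlencode` appends the URL-encoded question to a GET request. The question text here is illustrative:

```shell
# Ask this documentation page a question via the `ask` query parameter.
question="How do I point PapertLab at a remote Ollama server?"
curl -fsS -G "https://docs.papert.in/connecting-to-llms/ollama.md" \
  --data-urlencode "ask=${question}" \
  || echo "Docs endpoint unreachable"
```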
