  
You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

===== CLI Reference =====

==== Create a model ====

ollama create is used to create a model from a Modelfile.
<code>
ollama create mymodel -f ./Modelfile
</code>
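A Modelfile describes how the new model is built: the base model to start FROM, optional parameters, and a system prompt. A minimal illustrative sketch, assuming llama3.2 has already been pulled; the temperature value and system prompt are placeholders:
<code>
# Modelfile (illustrative example)
# Base model to build on
FROM llama3.2
# Sampling temperature (lower = more deterministic output)
PARAMETER temperature 0.7
# System prompt baked into the new model
SYSTEM "You are a concise assistant that answers in one or two sentences."
</code>
Once created, the model runs like any other: ollama run mymodel.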
==== Pull a model ====
<code>
ollama pull llama3.2
</code>
This command can also be used to update a local model. Only the diff will be pulled.

==== Remove a model ====
<code>
ollama rm llama3.2
</code>

==== Copy a model ====
<code>
ollama cp llama3.2 my-model
</code>

==== Multiline input ====

For multiline input, you can wrap text with """:
<code>
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
</code>

==== Multimodal models ====
<code>
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
</code>
Output: The image features a yellow smiley face, which is likely the central focus of the picture.

==== Pass the prompt as an argument ====
<code>
ollama run llama3.2 "Summarize this file: $(cat README.md)"
</code>

==== Show model information ====
<code>
ollama show llama3.2
</code>

==== List models on your computer ====
<code>
ollama list
</code>

==== List which models are currently loaded ====
<code>
ollama ps
</code>

==== Stop a model which is currently running ====
<code>
ollama stop llama3.2
</code>

==== Start Ollama ====

ollama serve is used when you want to start Ollama without running the desktop application.
<code>
ollama serve
</code>
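By default the server listens on localhost, port 11434. A quick way to check that it is up is to query the REST API from the same machine, for example the endpoint that lists local models:
<code>
# List locally available models via the HTTP API (default port 11434)
curl http://localhost:11434/api/tags
</code>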

===== How to Run Ollama and Connect to the Service API Through Internal Network or Internet =====

==== Setting Environment Variables on Linux ====

If Ollama is run as a systemd service, environment variables should be set using systemctl:

Edit the Ollama Service File: Open the Ollama service configuration file with the following command:

    sudo systemctl edit ollama.service

Add the Environment Variable: In the editor, add the following lines under the [Service] section:

    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"

Note #1: Sometimes 0.0.0.0 does not work because of your environment setup. In that case, try setting it to your machine's local IP address (for example 10.0.0.x) or hostname (for example xxx.local).

Note #2: Put this above the line "### Lines below this comment will be discarded". It should look something like this:

    ### Editing /etc/systemd/system/ollama.service.d/override.conf
    ### Anything between here and the comment below will become the new contents of the file
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"
    ### Lines below this comment will be discarded
    ### /etc/systemd/system/ollama.service
    # [Unit]
    # Description=Ollama Service
    # After=network-online.target
    #
    # [Service]
    # ExecStart=/usr/local/bin/ollama serve
    # User=ollama
    # Group=ollama
    # Restart=always
    # RestartSec=3
    # Environment="PATH=/home/kimi/.nvm/versions/node/v20.5.0/bin:/home/kimi/.local/share/pnpm:/usr/local/sbin:/usr/local/bin:/usr/s>
    #
    # [Install]
    # WantedBy=default.target

Restart the Service: After editing the file, reload the systemd daemon and restart the Ollama service:

    sudo systemctl daemon-reload
    sudo systemctl restart ollama
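
With OLLAMA_HOST set to 0.0.0.0, the service accepts connections on all network interfaces on its default port, 11434. A quick way to verify remote access; the 192.168.1.50 address below is a placeholder for your Ollama server's IP:
<code>
# On the server: confirm Ollama is listening on all interfaces (port 11434)
ss -tlnp | grep 11434

# From another machine: list the models hosted by the server
curl http://192.168.1.50:11434/api/tags

# From another machine: send a test prompt to the generate endpoint
curl http://192.168.1.50:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
</code>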
  
  