You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
| + | |||
| + | ===== CLI Reference ===== | ||
| + | |||
| + | Create a model | ||
| + | |||
| + | ollama create is used to create a model from a Modelfile. | ||
| + | < | ||
| + | ollama create mymodel -f ./Modelfile | ||
| + | </ | ||
Pull a model
<code>
ollama pull llama3.2
</code>
This command can also be used to update a local model. Only the diff will be pulled.

Remove a model
<code>
ollama rm llama3.2
</code>
Copy a model
<code>
ollama cp llama3.2 my-model
</code>
Multiline input

For multiline input, you can wrap text with """:
<code>
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
</code>
Multimodal models
<code>
ollama run llava "What's in this image? /path/to/image.png"
</code>
Output: The image features a yellow smiley face, which is likely the central focus of the picture.
| + | |||
| + | Pass the prompt as an argument | ||
| + | < | ||
| + | ollama run llama3.2 " | ||
| + | </ | ||
| + | |||
| + | Show model information | ||
| + | < | ||
| + | ollama show llama3.2 | ||
| + | </ | ||
| + | List models on your computer | ||
| + | < | ||
| + | ollama list | ||
| + | </ | ||
| + | List which models are currently loaded | ||
| + | < | ||
| + | ollama ps | ||
| + | </ | ||
| + | Stop a model which is currently running | ||
| + | < | ||
| + | ollama stop llama3.2 | ||
| + | </ | ||
| + | < | ||
| + | Start Ollama | ||
| + | ollama serve is used when you want to start ollama without running the desktop application. | ||
| + | </ | ||
| + | |||
| + | ===== How to Run Ollama and Connect to the Service API Through Internal Network or Internet ===== | ||
| + | |||
| + | Setting Environment Variables on Linux | ||
| + | |||
| + | If Ollama is run as a systemd service, environment variables should be set using systemctl: | ||
| + | |||
| + | Edit the Ollama Service File: Open the Ollama service configuration file with the following command: | ||
| + | |||
| + | sudo systemctl edit ollama.service | ||
| + | |||
Add the Environment Variable: In the editor, add the following lines under the [Service] section:
<code>
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
</code>

Note #1: Sometimes, 0.0.0.0 does not work due to your environment setup. Instead, you can try setting it to your local IP address, such as 10.0.0.x or xxx.local.
| + | |||
| + | Note #2: You should put this above this line ### Lines below this comment will be discarded. It should look something like this: | ||
| + | |||
| + | ### Editing / | ||
| + | ### Anything between here and the comment below will become the new contents of the file | ||
| + | [Service] | ||
| + | Environment=" | ||
| + | ### Lines below this comment will be discarded | ||
| + | ### / | ||
| + | # [Unit] | ||
| + | # Description=Ollama Service | ||
| + | # After=network-online.target | ||
| + | # | ||
| + | # [Service] | ||
| + | # ExecStart=/ | ||
| + | # User=ollama | ||
| + | # Group=ollama | ||
| + | # Restart=always | ||
| + | # RestartSec=3 | ||
| + | # Environment=" | ||
| + | # | ||
| + | # [Install] | ||
| + | # WantedBy=default.target | ||
| + | |||
Restart the Service: After editing the file, reload the systemd daemon and restart the Ollama service:
<code>
sudo systemctl daemon-reload
sudo systemctl restart ollama
</code>