====== Ollama ======

===== Installation and Basic Usage =====

  - After installation, the ''ollama'' command-line tool is available from your terminal.
  - Run models using simple commands. For example, to run the Mistral model, use:
<code>
~$ ollama run mistral
</code>
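A quick way to confirm the installation worked (assuming a standard install) is to print the CLI version and list the locally downloaded models, which will be empty on a fresh setup:
<code>
~$ ollama --version
~$ ollama list
</code>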
===== Supported Models =====
Ollama supports several popular large language models, including but not limited to Llama 3, Mistral, Gemma 2, Phi, DeepSeek-R1, Code Llama, and LLaVA (see the model library table below).
Running these models locally brings several advantages:
  - **Data Security:** Data remains on the local device, eliminating the risk of data breaches from cloud services.
  - **High Performance:** Inference runs directly on local hardware, avoiding network latency and making full use of the machine's CPU and GPU.
| + | |||
| + | ====== Model library ====== | ||
| + | |||
| + | Ollama supports a list of models available on [ollama.com/ | ||
| + | |||
| + | Here are some example models that can be downloaded: | ||
| + | |||
| + | ^ Model ^ Parameters ^ Size ^ Download Command | ||
| + | | DeepSeek-R1 | ||
| + | | DeepSeek-R1 | ||
| + | | Llama 3.3 | 70B | 43GB | `ollama run llama3.3` | ||
| + | | Llama 3.2 | 3B | 2.0GB | `ollama run llama3.2` | ||
| + | | Llama 3.2 | 1B | 1.3GB | `ollama run llama3.2: | ||
| + | | Llama 3.2 Vision | ||
| + | | Llama 3.2 Vision | ||
| + | | Llama 3.1 | 8B | 4.7GB | `ollama run llama3.1` | ||
| + | | Llama 3.1 | 405B | 231GB | `ollama run llama3.1: | ||
| + | | Phi 4 | 14B | 9.1GB | `ollama run phi4` | | ||
| + | | Phi 3 Mini | 3.8B | 2.3GB | `ollama run phi3` | | ||
| + | | Gemma 2 | 2B | 1.6GB | `ollama run gemma2: | ||
| + | | Gemma 2 | 9B | 5.5GB | `ollama run gemma2` | ||
| + | | Gemma 2 | 27B | 16GB | `ollama run gemma2: | ||
| + | | Mistral | ||
| + | | Moondream 2 | 1.4B | 829MB | `ollama run moondream` | ||
| + | | Neural Chat | 7B | 4.1GB | `ollama run neural-chat` | ||
| + | | Starling | ||
| + | | Code Llama | 7B | 3.8GB | `ollama run codellama` | ||
| + | | Llama 2 Uncensored | ||
| + | | LLaVA | 7B | 4.5GB | `ollama run llava` | ||
| + | | Solar | 10.7B | 6.1GB | `ollama run solar` | ||
| + | |||
| + | ==== Note ==== | ||
| + | |||
| + | You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models. | ||
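As a quick sanity check before pulling a larger model, you can compare the sizes in the table above with the memory actually available on the machine (standard Linux tooling, not Ollama-specific):
<code>
free -h
</code>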
| + | |||
| + | ===== CLI Reference ===== | ||
| + | |||
| + | Create a model | ||
| + | |||
| + | ollama create is used to create a model from a Modelfile. | ||
| + | < | ||
| + | ollama create mymodel -f ./Modelfile | ||
| + | </ | ||

==== Pull a model ====

<code>
ollama pull llama3.2
</code>
This command can also be used to update a local model. Only the diff will be pulled.
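The same command accepts a tag to select a specific size or variant instead of the default, for example the 1B version of Llama 3.2 from the model library table above:
<code>
ollama pull llama3.2:1b
</code>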
| + | |||
| + | Remove a model | ||
| + | < | ||
| + | ollama rm llama3.2 | ||
| + | </ | ||
| + | Copy a model | ||
| + | < | ||
| + | ollama cp llama3.2 my-model | ||
| + | </ | ||

==== Multiline input ====

For multiline input, you can wrap text with """:
<code>
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message.
</code>

==== Multimodal models ====

<code>
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
</code>
Output: The image features a yellow smiley face, which is likely the central focus of the picture.
| + | |||
| + | Pass the prompt as an argument | ||
| + | < | ||
| + | ollama run llama3.2 " | ||
| + | </ | ||
| + | |||
| + | Show model information | ||
| + | < | ||
| + | ollama show llama3.2 | ||
| + | </ | ||
| + | List models on your computer | ||
| + | < | ||
| + | ollama list | ||
| + | </ | ||
| + | List which models are currently loaded | ||
| + | < | ||
| + | ollama ps | ||
| + | </ | ||
| + | Stop a model which is currently running | ||
| + | < | ||
| + | ollama stop llama3.2 | ||
| + | </ | ||
| + | < | ||
| + | Start Ollama | ||
| + | ollama serve is used when you want to start ollama without running the desktop application. | ||
| + | </ | ||
| + | |||
| + | ===== How to Run Ollama and Connect to the Service API Through Internal Network or Internet ===== | ||
| + | |||
| + | Setting Environment Variables on Linux | ||
| + | |||
| + | If Ollama is run as a systemd service, environment variables should be set using systemctl: | ||
| + | |||
| + | Edit the Ollama Service File: Open the Ollama service configuration file with the following command: | ||
| + | |||
| + | sudo systemctl edit ollama.service | ||
| + | |||
| + | Add the Environment Variable: In the editor, add the following lines under the [Service] section: | ||
| + | |||
| + | [Service] | ||
| + | Environment=" | ||
| + | |||
| + | Note #1: Sometimes, 0.0.0.0 does not work due to your environment setup. Instead, you can try setting it to your local ip address like 10.0.0.x or xxx.local, etc. | ||
| + | |||
| + | Note #2: You should put this above this line ### Lines below this comment will be discarded. It should look something like this: | ||
| + | |||
| + | ### Editing / | ||
| + | ### Anything between here and the comment below will become the new contents of the file | ||
| + | [Service] | ||
| + | Environment=" | ||
| + | ### Lines below this comment will be discarded | ||
| + | ### / | ||
| + | # [Unit] | ||
| + | # Description=Ollama Service | ||
| + | # After=network-online.target | ||
| + | # | ||
| + | # [Service] | ||
| + | # ExecStart=/ | ||
| + | # User=ollama | ||
| + | # Group=ollama | ||
| + | # Restart=always | ||
| + | # RestartSec=3 | ||
| + | # Environment=" | ||
| + | # | ||
| + | # [Install] | ||
| + | # WantedBy=default.target | ||
| + | |||
| + | Restart the Service: After editing the file, reload the systemd daemon and restart the Ollama service: | ||
| + | |||
| + | sudo systemctl daemon-reload | ||
| + | sudo systemctl restart ollama | ||
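Once the service is listening on the network address, you can verify from another machine that the API is reachable, and point the ''ollama'' client at the remote server with the same ''OLLAMA_HOST'' variable (replace 10.0.0.x with your server's actual address):
<code>
curl http://10.0.0.x:11434/api/version
OLLAMA_HOST=10.0.0.x:11434 ollama list
</code>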
| + | |||
| + | |||
| + | |||
| + | |||
===== Learn More =====
For more detailed information and tutorials, visit [[https://ollama.com|Ollama's official website]].