# Gemma

Contents:

1. Model scale
2. Pull Gemma
3. Use Gemma
    1. Run Gemma
    2. Start a conversation
    3. End the conversation
- References
## Demo Environment
- Development board: Jetson Orin series motherboard
- SSD: 128 GB
- Applicability: whether a board can run a model depends on the system's available memory. Your own environment and any programs running in the background may prevent the model from running.
| Motherboard model | Run directly with Ollama | Run with Open WebUI |
| --- | --- | --- |
| Jetson Orin NX 16GB | √ | √ |
| Jetson Orin NX 8GB | √ (need to run the small-parameter version) | √ (need to run the small-parameter version) |
| Jetson Orin Nano 8GB | √ (need to run the small-parameter version) | √ (need to run the small-parameter version) |
| Jetson Orin Nano 4GB | √ (need to run the small-parameter version) | √ (need to run the small-parameter version) |
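Whether a model fits is decided at launch time by how much memory is actually free, so it is worth checking before pulling anything. A minimal sketch for a standard Linux/JetPack system (it reads `/proc/meminfo`, which is always present on Linux):

```shell
# Print currently available memory in MB (MemAvailable is reported in kB)
awk '/MemAvailable/ {printf "%d MB available\n", $2/1024}' /proc/meminfo
```

If the reported value is close to or below the model's download size shown on the Ollama library page, run the small-parameter (2B) version instead.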
## 1. Model scale

Gemma is a new open model developed by Google and its DeepMind team.
| Model | Parameters |
| --- | --- |
| Gemma | 2B |
| Gemma | 7B |
## 2. Pull Gemma

Use the `pull` command to fetch the model from the Ollama model library:

```shell
ollama pull gemma:7b
```
Small-parameter version: motherboards with 8 GB of memory or less should pull this model instead:

```shell
ollama pull gemma:2b
```
## 3. Use Gemma

### 3.1. Run Gemma

If the model is not already present locally, `ollama run` will pull the Gemma 7B model automatically and then start it:

```shell
ollama run gemma:7b
```
Small-parameter version: motherboards with 8 GB of memory or less should run this model instead:

```shell
ollama run gemma:2b
```
### 3.2. Start a conversation

Once the model is running, type a question at the prompt, for example:

```
print HelloWorld in C
```
The time needed to answer a question depends on the hardware configuration, so please be patient!

### 3.3. End the conversation

Use the `Ctrl+D` shortcut or type `/bye` to end the conversation.
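Besides the interactive prompt, the Ollama server also exposes a local REST API (port 11434 by default), which is handy for scripting. A minimal sketch, assuming the `gemma:2b` model has already been pulled and the Ollama service is running:

```shell
# Send one prompt to the running Ollama server; "stream": false makes it
# return a single JSON object with the full answer in its "response" field.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma:2b",
  "prompt": "print HelloWorld in C",
  "stream": false
}'
```

Swap `gemma:2b` for `gemma:7b` on boards with enough memory; the endpoint and request shape are the same.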
## References

Ollama

- Official website: https://ollama.com/
- GitHub: https://github.com/ollama/ollama

Gemma

- GitHub: https://github.com/google-deepmind/gemma
- Corresponding Ollama model: https://ollama.com/library/gemma