LLaVA-Phi3


Demo Environment

Development board: Jetson Orin series motherboard

SSD: 128 GB

Tutorial application scope: Whether a board can run the model depends on the system's available memory; your own environment and programs running in the background may cause the model to fail to run.

| Motherboard model | Run directly with Ollama | Run with Open WebUI |
| --- | --- | --- |
| Jetson Orin NX 16GB | | |
| Jetson Orin NX 8GB | | |
| Jetson Orin Nano 8GB | | |
| Jetson Orin Nano 4GB | | |

LLaVA-Phi3 is a LLaVA model fine-tuned from Phi 3 Mini 4K.

1. Model scale

| Model | Parameters |
| --- | --- |
| LLaVA-Phi3 | 3.8B |

2. Performance

(Figure: LLaVA-Phi3 performance benchmarks)

3. Pull LLaVA-Phi3

Using the pull command automatically downloads the model from the Ollama model library:
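For example, in a terminal on the Jetson board (the model name matches the Ollama library entry linked in the references):

```bash
# Download the LLaVA-Phi3 model from the Ollama model library
ollama pull llava-phi3
```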

(Figure: output of pulling the llava-phi3 model)

4. Use LLaVA-Phi3

Use LLaVA-Phi3 to identify local image content.

4.1. Run LLaVA-Phi3

If the model is not already present locally, Ollama will automatically pull the LLaVA-Phi3 3.8B model and then run it:
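For example:

```bash
# Start an interactive session; the model is pulled first if it is not already cached locally
ollama run llava-phi3
```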

4.2. Have a conversation

The time it takes to answer a question depends on the hardware configuration, so be patient!
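A sketch of what a turn might look like inside the interactive session; the image path below is only a placeholder for any local image on the board (for multimodal models, Ollama detects a local image path included in the prompt and loads that image):

```
>>> Describe this image. /home/jetson/Pictures/test.jpg
```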

(Figure: example conversation with LLaVA-Phi3)

4.3. End the conversation

Use the Ctrl+D shortcut or type /bye to end the conversation!
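For example, inside the session:

```
>>> /bye
```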

(Figure: ending the conversation)

References

Ollama

Official website: https://ollama.com/

GitHub: https://github.com/ollama/ollama

LLaVA-Phi3

GitHub: https://github.com/InternLM/xtuner/tree/main

Ollama corresponding model: https://ollama.com/library/llava-phi3