# Installation
RamaLama can be installed on multiple platforms using various methods. Choose the installation method that best fits your environment.

## Quick Install
### Universal Install Script (Linux and macOS)
The easiest way to install RamaLama is with the universal install script:
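For example (the script URL below is an assumption; check the RamaLama project README for the current location):

```bash
curl -fsSL https://ramalama.ai/install.sh | bash
```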
## Platform-Specific Installation

### Fedora
On Fedora systems, you can install RamaLama directly from the official repositories:
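For example, assuming the package is named `python3-ramalama` in the Fedora repositories:

```bash
sudo dnf install python3-ramalama
```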
### PyPI (All Platforms)

RamaLama is available on PyPI and can be installed using pip:
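For example:

```bash
pip install ramalama
```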
## Optional Components

### MLX Runtime (macOS with Apple Silicon)
For macOS users with Apple Silicon hardware (M1, M2, M3, or later), you can install the MLX runtime for enhanced performance:
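One way to do this, assuming the MLX runtime is provided by the `mlx-lm` Python package (verify the package name against the RamaLama and MLX documentation):

```bash
pip install mlx-lm
```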
:::note
The MLX runtime runs natively on the host, so it must be used with the `--nocontainer` option.
:::
## Verify Installation
After installation, verify that RamaLama is working correctly:
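For example (assuming the `version` subcommand is available in your release):

```bash
# Print the installed RamaLama version
ramalama version
```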
## Next Steps

Once RamaLama is installed, you can:

- Pull your first model: `ramalama pull ollama://tinyllama`
- Run a model: `ramalama run ollama://tinyllama`
- Explore available commands: `ramalama --help`
## Platform-Specific Setup
After installation, you may need additional platform-specific configuration:

- NVIDIA GPUs: See CUDA Setup
- macOS: See macOS Setup
- Ascend NPUs: See CANN Setup
