Quick Install
Universal Install Script (Linux and macOS)
The easiest way to install RamaLama is to run the universal install script:
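A sketch of the one-line install, assuming the `install.sh` URL published on the project's site (as with any piped installer, review the script before running it):

```shell
# Download and run the RamaLama install script
# (URL is taken from the project's published docs; verify before running)
curl -fsSL https://ramalama.ai/install.sh | bash
```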
Platform-Specific Installation

Fedora
On Fedora systems, you can install RamaLama directly from the official repositories:
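On Fedora this is a single dnf transaction; a sketch, assuming the package is published as `python3-ramalama` (check with `dnf search ramalama` if unsure):

```shell
# Install RamaLama from the Fedora repositories
# (package name python3-ramalama is an assumption)
sudo dnf install python3-ramalama
```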
PyPI (All Platforms)

RamaLama is available on PyPI and can be installed using pip:
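A minimal sketch of the pip install (consider doing this inside a virtual environment to keep it isolated from system packages):

```shell
# Install RamaLama from PyPI into the current Python environment
pip install ramalama
```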
Optional Components

MLX Runtime (macOS with Apple Silicon)
For macOS users with Apple Silicon hardware (M1, M2, M3, or later), you can install the MLX runtime for enhanced performance. The MLX runtime is designed specifically for Apple Silicon Macs and provides optimized AI model inference. To use MLX, run RamaLama with the --nocontainer option.
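A sketch of how the MLX setup might look, assuming the runtime is provided by the `mlx-lm` PyPI package and selected via RamaLama's `--runtime` option (both are assumptions; consult `ramalama --help` and the project docs for the exact package and flag):

```shell
# Install the MLX inference package (assumed to be mlx-lm)
pip install mlx-lm

# Run a model natively, outside a container, with the MLX runtime
# (--runtime=mlx is an assumption; --nocontainer is required for MLX)
ramalama --runtime=mlx --nocontainer run ollama://tinyllama
```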
Verify Installation

After installation, verify that RamaLama is working correctly:
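For example, a couple of quick checks (subcommand names are assumptions based on the CLI's usual layout; `ramalama --help` lists the exact set):

```shell
# Print the installed version to confirm the CLI is on PATH
ramalama version

# List locally available models (should succeed even when empty)
ramalama ls
```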
Next Steps

Once RamaLama is installed, you can:

- Pull your first model: ramalama pull ollama://tinyllama
- Run a model: ramalama run ollama://tinyllama
- Explore available commands: ramalama --help
Platform-Specific Setup
After installation, you may need additional platform-specific configuration:

- NVIDIA GPUs: See CUDA Setup
- macOS: See macOS Setup
- Ascend NPUs: See CANN Setup
