Synopsis
ramalama convert [options] model [target]

Description
Convert the specified AI Model to an OCI-formatted AI Model. The model can come from RamaLama model storage, Hugging Face, Ollama, or a local model stored on disk. Converting from an OCI model is not supported.

The convert command must be run with containers. Use of the --nocontainer option is not allowed.
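Following the synopsis, a minimal sketch of a conversion invocation; the model reference and target image name below are illustrative placeholders, not taken from this page:

```shell
# Convert a model pulled from Ollama storage into an OCI-formatted
# model image (both names here are hypothetical examples).
ramalama convert ollama://tinyllama:latest oci://quay.io/example/tiny:latest
```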
Options
--gguf=Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_K_S | Q4_K_M | Q5_0 | Q5_K_S | Q5_K_M | Q6_K | Q8_0

Convert Safetensors models into GGUF with the specified quantization format. To learn more about model quantization, read the llama.cpp documentation: https://github.com/ggml-org/llama.cpp/blob/master/tools/quantize/README.md

--help, -h

Print usage message.

--network=none

Sets the configuration for network namespaces when handling RUN instructions.

--type=raw | car

Type of OCI Model Image to convert.

| Type | Description |
|---|---|
| car | Includes base image with the model stored in a /models subdir |
| raw | Only the model and a link file (model.file) to it, stored at / |
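The options above can be combined in a single invocation. A hedged sketch, with the model reference and image name invented for illustration:

```shell
# Quantize a Safetensors model to Q4_K_M GGUF and package it with a
# base image in a /models subdir ("car" layout). Names are illustrative.
ramalama convert --gguf=Q4_K_M --type=car hf://org/model oci://quay.io/example/model:q4
```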
Example
Generate an OCI model out of an Ollama model.

See Also
ramalama(1), ramalama-push(1)

History

Aug 2024, Originally compiled by Eric Curtin <ecurtin@redhat.com>
