
Hugging Face

The huggingface provider allows you to use Hugging Face models served through Text Generation Inference.

script({ model: "huggingface:microsoft/Phi-3-mini-4k-instruct" })
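The model reference above is the provider name and the model identifier joined by the first `:`; the identifier itself may contain `/` (org/model). A small sketch of how such a reference splits (`parseModelId` is illustrative, not a GenAIScript API):

```javascript
// Split a "provider:model" reference on the first ":" only,
// since the model id may itself contain "/" but not the provider prefix.
function parseModelId(ref) {
  const sep = ref.indexOf(":");
  return { provider: ref.slice(0, sep), model: ref.slice(sep + 1) };
}

console.log(parseModelId("huggingface:microsoft/Phi-3-mini-4k-instruct"));
// → { provider: 'huggingface', model: 'microsoft/Phi-3-mini-4k-instruct' }
```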

To use Hugging Face models with GenAIScript, follow these steps:

  1. Sign up for a Hugging Face account and obtain an API key from their console. If you are creating a fine-grained token, enable the "Make calls to the serverless Inference API" option.

  2. Add your Hugging Face API key to the .env file as the HUGGINGFACE_API_KEY, HF_TOKEN, or HUGGINGFACE_TOKEN variable.

    .env
    HUGGINGFACE_API_KEY=hf_...
  3. Find the model that best suits your needs by browsing the Hugging Face models page.

  4. Update your script to use the model you chose.

    script({
        ...
        model: "huggingface:microsoft/Phi-3-mini-4k-instruct",
    })
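Step 2 lists three environment variables the provider can read the key from. A hedged sketch of that lookup in plain Node.js (`resolveHfToken` is illustrative, and the precedence order is an assumption):

```javascript
// Return the first of the documented variables that is set.
// Precedence HUGGINGFACE_API_KEY > HF_TOKEN > HUGGINGFACE_TOKEN is assumed.
function resolveHfToken(env) {
  return env.HUGGINGFACE_API_KEY ?? env.HF_TOKEN ?? env.HUGGINGFACE_TOKEN;
}

console.log(resolveHfToken({ HF_TOKEN: "hf_example" })); // → "hf_example"
```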

You can enable the genaiscript:huggingface and genaiscript:huggingface:msg logging namespaces to see more information about the requests and responses.
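For example, assuming GenAIScript follows the common Node.js `debug` convention of reading namespaces from the DEBUG environment variable (`myscript` is a placeholder script id):

```shell
# Enable both provider namespaces for a single run (sketch, not verified)
DEBUG="genaiscript:huggingface,genaiscript:huggingface:msg" npx genaiscript run myscript
```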

Aliases

The following model aliases are attempted by default in GenAIScript.

Alias        Model identifier
large        meta-llama/Llama-3.3-70B-Instruct
small        microsoft/phi-4
vision       meta-llama/Llama-3.2-11B-Vision-Instruct
embeddings   nomic-ai/nomic-embed-text-v1.5
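The aliases in the table above can be written as a simple lookup; this sketch shows how an alias would expand to a full provider:model reference (`resolveAlias` is illustrative, not a GenAIScript API):

```javascript
// Default huggingface aliases from the table above.
const aliases = {
  large: "meta-llama/Llama-3.3-70B-Instruct",
  small: "microsoft/phi-4",
  vision: "meta-llama/Llama-3.2-11B-Vision-Instruct",
  embeddings: "nomic-ai/nomic-embed-text-v1.5",
};

// Expand an alias to a full model reference; pass through non-aliases.
function resolveAlias(name) {
  return `huggingface:${aliases[name] ?? name}`;
}

console.log(resolveAlias("small")); // → huggingface:microsoft/phi-4
```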

Limitations