GitHub Models
The GitHub Models provider, `github`, allows running models through the GitHub Marketplace. This provider is useful for prototyping and is subject to rate limits depending on your subscription.
```js
script({ model: "github:openai/gpt-4o" })
```
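A complete script is just the `script` configuration followed by a prompt; for example (the prompt text is illustrative):

```js
// select the GitHub Models provider and a marketplace model
script({ model: "github:openai/gpt-4o" })

// the $ template builds the prompt sent to the model
$`Summarize the benefits of unit tests in one sentence.`
```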
Codespace configuration
If you are running from a GitHub Codespace, the token is already configured for you; it just works.
GitHub Actions configuration
As of April 2025, you can use the GitHub Actions token (`GITHUB_TOKEN`) to call AI models directly inside your workflows. Ensure that the `models` permission is enabled in your workflow configuration:

```yml
permissions:
  models: read
```

Pass the `GITHUB_TOKEN` when running genaiscript:

```yml
run: npx -y genaiscript run ...
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
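Putting these pieces together, a minimal workflow could look like the following sketch; the workflow name, job name, and script name are illustrative:

```yml
name: genai
on: push
permissions:
  contents: read
  models: read
jobs:
  genai:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # GITHUB_TOKEN grants access to GitHub Models via the models permission
      - run: npx -y genaiscript run summarize
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```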
Read more in the GitHub Documentation.
Configuring with your own token
If you are not using GitHub Actions or Codespaces, you can use your own token to access the models.

1. Create a GitHub personal access token. The token should not have any scopes or permissions.
2. Update the `.env` file with the token:

```txt
GITHUB_TOKEN=...
```
To configure a specific model, open the GitHub Marketplace and find the model you want to use. Copy the model name from the JavaScript/Python samples:

```js
const modelName = "microsoft/Phi-3-mini-4k-instruct"
```

Then use it to configure your script:

```js
script({
  model: "github:microsoft/Phi-3-mini-4k-instruct",
})
```
If you are already using the `GITHUB_TOKEN` variable in your script and need a different one for GitHub Models, you can use the `GITHUB_MODELS_TOKEN` variable instead.
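For example, a `.env` file could define both variables, with the dedicated token reserved for GitHub Models (values are placeholders):

```txt
GITHUB_TOKEN=...
GITHUB_MODELS_TOKEN=...
```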
o1-preview and o1-mini models
Currently these models do not support streaming or system prompts; GenAIScript handles this internally.

```js
script({ model: "github:openai/o1-mini" })
```
Aliases
The following model aliases are attempted by default in GenAIScript.
| Alias | Model identifier |
| --- | --- |
| large | openai/gpt-4.1 |
| small | openai/gpt-4.1-mini |
| tiny | openai/gpt-4.1-nano |
| vision | openai/gpt-4.1 |
| embeddings | openai/text-embedding-3-small |
| reasoning | openai/o3 |
| reasoning_small | openai/o3-mini |
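Since these aliases resolve to GitHub Models identifiers, a script can reference an alias instead of a full model name; a minimal sketch, assuming the default alias configuration:

```js
// "small" resolves to github:openai/gpt-4.1-mini by default
script({ model: "small" })
```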
Limitations
- Smaller context windows and rate limiting
- `listModels` is not supported
- `logprobs` (and `topLogprobs`) are ignored
- Prediction of output tokens is ignored