A comprehensive database of AI model specifications, pricing, and capabilities.
We welcome contributions to expand our model database! Follow these steps to add a new model:
If the AI provider doesn't already exist in the `providers/` directory:

- Create a new folder in `providers/` with the provider's ID (e.g., `providers/newprovider/`)
- Add a `provider.toml` file with the provider information:

```toml
name = "Provider Name"
```
Create a new TOML file in the provider's `models/` directory, where the filename is the model ID:

```toml
name = "Model Display Name"
attachment = true   # or false - supports file attachments
reasoning = false   # or true - supports reasoning/chain-of-thought
temperature = true  # or false - supports temperature parameter

[cost]
input = 3.00         # Cost per million input tokens (USD)
output = 15.00       # Cost per million output tokens (USD)
inputCached = 0.30   # Cost per million cached input tokens (USD)
outputCached = 0.30  # Cost per million cached output tokens (USD)

[limit]
context = 200_000  # Maximum context window (tokens)
output = 8_192     # Maximum output tokens
```
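Putting the two steps together, the resulting layout (using the placeholder IDs from the examples above) looks like this:

```
providers/
└── newprovider/
    ├── provider.toml      # provider metadata
    └── models/
        └── model-id.toml  # one file per model, named by its model ID
```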
To submit your changes:

- Fork this repository
- Create a new branch for your changes
- Add your provider and/or model files
- Open a pull request with a clear description
GitHub Actions will automatically validate your submission against our schema to ensure:
- All required fields are present
- Data types are correct
- Values are within acceptable ranges
- TOML syntax is valid
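If you want to sanity-check a file before opening a pull request, a local validation pass might look like the sketch below. Note the assumptions: it uses the `@iarna/toml` parser and `zod`, and it guesses that `app/schemas.ts` exports a `ModelSchema` (the export name is hypothetical); the GitHub Actions workflow remains the authoritative check.

```typescript
// validate-model.ts - a hypothetical local check, not the actual CI workflow.
// Assumes `@iarna/toml` and `zod` are installed, and that app/schemas.ts
// exports a Zod schema named ModelSchema (an assumption, not confirmed here).
import { readFileSync } from "node:fs";
import TOML from "@iarna/toml";
import { ModelSchema } from "./app/schemas";

const file = process.argv[2]; // e.g. providers/newprovider/models/model-id.toml
if (!file) {
  console.error("usage: validate-model <path/to/model.toml>");
  process.exit(1);
}

// TOML.parse throws if the TOML syntax itself is invalid.
const parsed = TOML.parse(readFileSync(file, "utf8"));

// safeParse reports missing fields, wrong types, and out-of-range values.
const result = ModelSchema.safeParse(parsed);
if (!result.success) {
  for (const issue of result.error.issues) {
    console.error(`${issue.path.join(".")}: ${issue.message}`);
  }
  process.exit(1);
}
console.log(`${file} is valid`);
```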
Models must conform to the following schema (defined in `app/schemas.ts`):

**Provider Schema:**

- `name`: String - Display name of the provider

**Model Schema:**

- `name`: String - Display name of the model
- `attachment`: Boolean - Whether the model supports file attachments
- `reasoning`: Boolean - Whether the model supports reasoning capabilities
- `temperature`: Boolean - Whether the model supports temperature control
- `cost.input`: Number - Cost per million input tokens (USD)
- `cost.output`: Number - Cost per million output tokens (USD)
- `cost.inputCached`: Number - Cost per million cached input tokens (USD)
- `cost.outputCached`: Number - Cost per million cached output tokens (USD)
- `limit.context`: Number - Maximum context window in tokens
- `limit.output`: Number - Maximum output tokens
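For orientation, the definitions might look roughly like the Zod sketch below. The real source of truth is `app/schemas.ts`; this is only a reconstruction of the field list above, and the actual code may differ (for instance, in the exact range constraints it enforces).

```typescript
// Hypothetical reconstruction of app/schemas.ts - not the actual source.
import { z } from "zod";

export const ProviderSchema = z.object({
  name: z.string(), // Display name of the provider
});

export const ModelSchema = z.object({
  name: z.string(),         // Display name of the model
  attachment: z.boolean(),  // Supports file attachments
  reasoning: z.boolean(),   // Supports reasoning capabilities
  temperature: z.boolean(), // Supports temperature control
  cost: z.object({
    input: z.number().nonnegative(),        // USD per million input tokens
    output: z.number().nonnegative(),       // USD per million output tokens
    inputCached: z.number().nonnegative(),  // USD per million cached input tokens
    outputCached: z.number().nonnegative(), // USD per million cached output tokens
  }),
  limit: z.object({
    context: z.number().int().positive(), // Maximum context window (tokens)
    output: z.number().int().positive(),  // Maximum output tokens
  }),
});
```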
See existing providers in the `providers/` directory for reference:

- `providers/anthropic/` - Anthropic Claude models
- `providers/openai/` - OpenAI GPT models
- `providers/google/` - Google Gemini models
Open an issue if you need help or have questions about contributing.