Recommended models balance speed, performance, and cost.
To request a custom prompt, or to edit an existing one, please email Zak Witkower (ZakWitkower@gmail.com). Your prompt will be reviewed for suitability and added to the list of available options (or we can provide you with the data directly).
Note: LLM providers charge based on the length and tokenization of their output, and long outputs are expensive. The following prompts -- and our pricing model -- are designed around single-character outputs. Custom prompts must be engineered (or priced) accordingly.
After payment, you will not be able to cancel.
Provider billing depends on tokenization, retries, and provider-side rounding, so exact costs cannot be known in advance. We estimate the provider's price, add a 20% buffer to ensure we do not lose money on your query, and charge a flat $2 for the entire service.
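The estimate-plus-buffer arithmetic can be sketched as follows. The per-token rates below are hypothetical placeholders for illustration, not any provider's actual pricing:

```python
def estimated_charge(input_tokens: int, output_tokens: int,
                     input_rate: float = 0.000003,
                     output_rate: float = 0.000015,
                     buffer: float = 0.20) -> float:
    """Estimate a provider's cost for one query, plus a safety buffer.

    The per-token rates are hypothetical; real rates vary by provider
    and model, and retries or rounding can push the true cost higher,
    which is what the buffer is meant to absorb.
    """
    base_cost = input_tokens * input_rate + output_tokens * output_rate
    return base_cost * (1 + buffer)

# A single-character answer keeps output_tokens (and therefore cost) minimal.
cost = estimated_charge(input_tokens=1500, output_tokens=1)
```

Because single-character outputs keep the output term near zero, the estimate is dominated by the input side, which makes the flat $2 price predictable to offer.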