Token Limits
Error Reporting
Model gpt-3.5-turbo has hit a token limit!
Input tokens: 768 of 16385
Output tokens: 4096 of 4096 -- exceeded output limit!
Total tokens: 4864 of 16385
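The arithmetic in the report above can be sketched as a small check. This is a minimal illustration, not aider's actual code: the limit constants (16385-token context window, 4096-token output cap for gpt-3.5-turbo) are taken from the error message, and the function name is hypothetical.

```python
# Limits from the error report above (assumed, per gpt-3.5-turbo).
CONTEXT_WINDOW = 16385  # max combined input + output tokens
MAX_OUTPUT = 4096       # max tokens the model may generate per reply

def check_token_usage(input_tokens: int, output_tokens: int) -> list[str]:
    """Return human-readable limit violations (empty list if within limits)."""
    problems = []
    if output_tokens >= MAX_OUTPUT:
        problems.append("exceeded output limit")
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        problems.append("exceeded context window")
    return problems

# The numbers from the report: 768 input + 4096 output = 4864 total,
# which fits the context window but hits the output cap.
print(check_token_usage(768, 4096))
```

Note that the two limits fail independently: a reply can hit the output cap while the total stays well inside the context window, which is exactly what the report above shows.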
To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Try using a stronger model like gpt-4o or opus that can return diffs.
Input Tokens & Context Window Size
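Before sending a large prompt, it can help to estimate its size against the context window. The sketch below uses the common rule of thumb of roughly four characters per token for English text; this heuristic and the function names are illustrative assumptions, not the model's real tokenizer or aider's API.

```python
# Rough token estimate: ~4 characters per token is a common rule of
# thumb for English text, NOT the model's actual tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int = 16385) -> bool:
    """Roughly check whether a prompt fits the model's context window."""
    return estimate_tokens(text) <= context_window

prompt = "Please refactor this function to use a list comprehension."
print(estimate_tokens(prompt), fits_context(prompt))
```

For exact counts, a real tokenizer library for the model in question would replace the heuristic, but the rule of thumb is usually close enough to tell whether a prompt is anywhere near the limit.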
The Problem
Solutions
Additional Tips
Output Token Limits
The Problem
Solutions
Other Causes
Troubleshooting