LetTheTinCanDoIt now supports the DeepSeek API
Let me remind you that LetTheTinCanDoIt is an application that sends the files you select to an LLM through its API, parses the response, and updates the files on disk.
You can read more about it on the project page.
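In a nutshell, the core loop looks roughly like this (a simplified sketch, not the project's actual code; the `### path` file delimiter is a made-up example):

```python
import re
from pathlib import Path

FILE_HEADER = re.compile(r"^### (.+)$", re.MULTILINE)  # hypothetical delimiter

def build_prompt(paths):
    # Concatenate the selected files, each preceded by its full path.
    return "\n\n".join(f"### {p}\n{Path(p).read_text()}" for p in paths)

def apply_response(text):
    # Split the model's reply into (path, content) chunks
    # and overwrite the corresponding files on disk.
    headers = list(FILE_HEADER.finditer(text))
    for match, nxt in zip(headers, headers[1:] + [None]):
        end = nxt.start() if nxt else len(text)
        content = text[match.end():end].strip("\n") + "\n"
        Path(match.group(1)).write_text(content)
```

For this to work, the model has to follow the file-delimiting format exactly, which is where DeepSeek caused trouble.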
The formatting rules have been rewritten
To support DeepSeek, I had to substantially rewrite the introductory prompt text. DeepSeek stubbornly refused to follow the rules and formatted the output however it wanted: it wouldn't specify file names, or would specify them without the full path, or in the wrong format. It also refused to return whole files, trying instead to return just a snippet with the changes.
I rewrote the introductory prompt (the formatting rules) to be more explicit, and that helped, though issues still occur occasionally.
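For illustration, "more explicit" means spelling out rules of this kind (hypothetical wording, not the project's actual prompt):

```
For EVERY file you modify, return the COMPLETE file, never a fragment or a diff.
Before each file, print its full path from the project root on its own line,
in exactly this format:
### src/app/main.py
Do not output anything else between files.
```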
Overall, deepseek-reasoner follows the introductory prompt much less reliably than OpenAI's models.
Still, the stricter rules made things much better across the board: even with OpenAI's models, I no longer see the occasional glitches with file names.
Multithreading has been added
I also had to introduce multithreading. Deepseek-reasoner responds to requests very slowly, slower even than a gpt-4o batch job (which is nominally allowed up to 24 hours but in practice finishes in a couple of minutes). Requests to deepseek-reasoner take so long that the operating system starts offering to force-quit the application as unresponsive.
To fix this, I had to run the request in a separate thread, display a spinning loader, and disable the buttons so the user wouldn’t be tempted to send more requests before the previous ones finished.
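The pattern itself is simple. Here is a minimal, self-contained sketch using stdlib tkinter (LetTheTinCanDoIt itself may use a different GUI toolkit, and send_request() is a hypothetical stand-in for the real API call):

```python
import threading
import time
import tkinter as tk

def send_request(prompt):
    # Hypothetical stand-in for the blocking LLM API call;
    # the sleep imitates deepseek-reasoner's long response time.
    time.sleep(5)
    return f"response to: {prompt}"

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.status = tk.Label(self, text="idle")
        self.status.pack()
        self.send_button = tk.Button(self, text="Send", command=self.on_send)
        self.send_button.pack()

    def on_send(self):
        # Disable the button and show progress so the user cannot
        # queue another request while this one is in flight.
        self.send_button.config(state=tk.DISABLED)
        self.status.config(text="waiting for the model...")
        threading.Thread(target=self.worker, daemon=True).start()

    def worker(self):
        result = send_request("hello")
        # after() marshals the UI update back onto the main thread:
        # tkinter widgets must not be touched from a worker thread.
        self.after(0, self.on_done, result)

    def on_done(self, result):
        self.status.config(text=result)
        self.send_button.config(state=tk.NORMAL)

App().mainloop()
```

The key point is that the blocking call never runs on the UI thread, so the window keeps repainting and the OS no longer flags the application as unresponsive.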
First impressions of the DeepSeek API for code generation
I'm very impressed. Yes, deepseek-reasoner is worse than o1, and it's very slow. But it's miles better than gpt-4o and comparable to o3-mini, possibly even slightly better. At least, that's my initial impression from using the DeepSeek API.
Deepseek-reasoner has now become one of my main tools, alongside o3-mini.