What is LetTheTinCanDoIt?
Let me start with a brief backstory.
After the release of GPT-4, I began actively using it, and it saved me tons of time. At first, I simply asked it to write small code snippets. But later, I started making requests like, “Here’s my file, do this and that…”, and it saved me even more time.
However, there was a catch: the workflow involved a lot of repetitive work, such as copying files from the editor, pasting them into the GPT window, writing the query, specifying constraints, and then copying the response back into the editor. It got even more tedious when I needed to feed multiple files into GPT-4.
And feeding multiple files was often necessary: sometimes to pass a new variable through several methods, other times simply because it was convenient to include a class definition from another file so GPT wouldn’t start improvising and would use the methods that actually existed. I liked the results and the time saved, but a lot of time was still spent on the mechanical task of copying and pasting text. Not exactly an intellectual challenge, wouldn’t you agree?
That’s why I decided to create a small Python app to send requests through the OpenAI API. The app allows me to open a project folder, select files using checkboxes, and attach them to my request. It also updates files on disk if the response includes changes.
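To give a rough idea of what happens under the hood, here is a minimal sketch of the request flow, assuming the official `openai` Python package. This is not the actual LetTheTinCanDoIt code: the prompt format and the `ask_about_files` helper are my own illustration, and the real app additionally parses the response in order to update files on disk.

```python
from pathlib import Path
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_files(file_paths: list[str], instruction: str) -> str:
    """Send the selected files plus an instruction as a single request."""
    parts = [f"### File: {path}\n{Path(path).read_text()}" for path in file_paths]
    prompt = "\n\n".join(parts) + f"\n\nTask: {instruction}"

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: attach two files and ask for a change
# print(ask_about_files(["model.py", "view.py"], "Add a new method ..."))
```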
I built this tool for my specific needs, refining queries and adding features as necessary. It turned out to be so useful that I decided to share it.
Where to get it?
You can download the Python project from GitHub.
You’ll Need an OpenAI Key
The app uses OpenAI's paid API. You’ll need to register, add some funds to your account, generate an API key, and configure it in the app.
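If you haven’t worked with the API before, the usual pattern with the official `openai` package looks like this (a sketch only; the app has its own place to configure the key, so the exact step may differ):

```python
# Set the key once in your shell environment:
#   export OPENAI_API_KEY="sk-..."
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically
# ...or pass it explicitly: OpenAI(api_key="sk-...")
```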
How to Use It
Break large tasks into smaller subtasks. The smaller the task, the higher the chance of getting working code without needing to roll back changes or fix mistakes. With experience, you’ll develop a sense of the model’s capabilities—what can be done in one request and what needs to be split into parts.
Using Git for source control is incredibly handy. Before each request, I commit my changes. After the response, I review the code. If it’s a mess (and that happens often, and it's not always GPT’s fault), I simply roll back and write a new query.
For large tasks broken into smaller steps, I use `git commit --amend` to combine changes. I get the updates, review and verify them, then amend the commit before moving on to the next query.
Example Use Case
A friend of mine is developing a game called TrappedTogether, and I was helping by creating a level editor. At one point, we needed to place many identical objects over a large area, a tedious process. That’s when I finally decided to implement multi-selection of objects on the map.
Here’s an outline of the queries I made (a rough sketch of the resulting code follows the list):
0. Selected `model.py`, `view.py`, and `controller.py`.
1. **Query**: “I need a new class in the model called `MultipleSelection`. It will contain x range, y range, and z (not a range).”
2. **Query**: “In the model, I have a method `GetCellItems`. Add a new method `GetSelectedCellsItems` that accepts a `MultipleSelection` parameter and returns all objects in the given range.”
3. **Query**: “In the view, replace `GetCellItems` with the new `GetSelectedCellsItems`. Convert single cell (X, Y, Z) into `MultipleSelection`. Update the controller if needed.”
4. **Query**: “Modify the `OnMouseClick` event in the view to handle mouse press and release. Use the press event to set the start coordinates and the release event to set the end coordinates for `MultipleSelection`.”
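For context, here is roughly what steps 1–2 might produce. This is a reconstruction from the queries above, not the actual editor code, so everything beyond the names mentioned in the queries is an assumption:

```python
class MultipleSelection:
    """A rectangular selection: ranges over x and y, a single z layer."""
    def __init__(self, x_range: range, y_range: range, z: int):
        self.x_range = x_range
        self.y_range = y_range
        self.z = z


class Model:
    def GetCellItems(self, x, y, z):
        ...  # existing method: returns the items in a single cell

    def GetSelectedCellsItems(self, selection: MultipleSelection):
        """Collect items from every cell covered by the selection."""
        items = []
        for x in selection.x_range:
            for y in selection.y_range:
                items.extend(self.GetCellItems(x, y, selection.z))
        return items
```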
Within 1-2 hours, the new functionality was ready. If I had done everything from scratch—reading documentation, debugging typos, etc.—it would’ve taken me 8-16 hours! It's really amazing!
There is no chat history
First, sending the entire chat history wastes tokens.
Second, chat history isn't necessary! When you provide the full content of a file and write a clear request about what to do with this file, that’s usually enough. In my experience, keeping a history only creates confusion when GPT starts pulling changes from earlier file versions instead of the latest one.
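In practice, that means every call is assembled from scratch: the current file contents plus the instruction, and nothing from previous turns. A minimal sketch (the prompt wording is just an illustration, not the app’s actual template):

```python
def build_messages(file_text: str, instruction: str) -> list[dict]:
    """Each request is self-contained: the full file text plus the task."""
    prompt = (
        "Here is the current version of the file:\n\n"
        f"{file_text}\n\n"
        f"Task: {instruction}"
    )
    return [{"role": "user", "content": prompt}]  # no history is appended
```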
Is LetTheTinCanDoIt a replacement for a programmer?
I’m a senior C++ developer, and this tool saves me a lot of time. Yes, GPT often produces nonsense, but I can quickly spot and fix it, or revert the changes and refine the query.
GPT’s code isn’t great: it tends to put everything in one file without splitting logic into methods or classes. In general, the architecture turns out terrible unless you make a conscious effort. You need to explicitly ask GPT to move code into a separate method or class, or to apply a specific design pattern; only then does the result become quite decent.
And yes, if you ask, "Write me a program that does this and that," nothing will come of it. You need to have a clear idea of what you want and give GPT specific, small tasks.
I assume that for someone completely unfamiliar with programming, this tool will be almost useless—they’ll end up with terrible code that "looks like it works" but won’t know how to make it actually run. Yes, they could endlessly feed GPT the errors that come up when running the code, and maybe, eventually, it will work somehow… but that seems like a very odd approach.
Batch tasks (Deferred Tasks)
Batch processing is a request mode that can take up to 24 hours to complete and costs 50% less. OpenAI intended it for resource-intensive tasks where results aren’t needed immediately.
In reality, batch jobs often finish within 1-2 minutes (during low-demand periods in the U.S.). During peak hours, they might take 4-8 minutes, and rarely a few hours.
So batch processing is fairly fast, and you can save some money.
Note: For newer models like o1 and o1-mini, batch tasks are not supported.
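If you want to try the Batch API directly, the flow with the official `openai` package looks roughly like this (a sketch; each line of the `.jsonl` file is one chat-completion request):

```python
import json
from openai import OpenAI

client = OpenAI()

# 1. Write the requests to a .jsonl file, one request per line.
request = {
    "custom_id": "task-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {"model": "gpt-4o", "messages": [{"role": "user", "content": "..."}]},
}
with open("requests.jsonl", "w") as f:
    f.write(json.dumps(request) + "\n")

# 2. Upload the file and create the batch job.
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# 3. Check back later; when the status is "completed", download the results.
batch = client.batches.retrieve(batch.id)
if batch.status == "completed":
    results = client.files.content(batch.output_file_id).text
```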
OpenAI Models
- 4o-mini: I only use it for testing. It’s useless for programming.
- 4o: My main workhorse!
- o1-mini: Marketed as an advanced programming model. In my experience, it’s not much better than 4o. It often makes mistakes and tries to overcomplicate the code. However, it’s great for refactoring.
- o1: OpenAI’s flagship model for scientific tasks. I’ve tried it a few times, but I wasn’t impressed.
Comments and Suggestions
If something isn’t working, or something is missing, or you just want to show some respect to the author—leave your comments at the bottom of the page. I love comments!