Rising Above: What My Team Lead Taught Me About Problem-Solving (That ChatGPT Couldn't)
Learn to escape LLM loops
Stuck in endless LLM discussions without results? Break the loop by “rising above” the problem.
Hi friend,
I've forgotten most of the math I learned at the Faculty of Natural Sciences, but I remember a few ideas that helped me understand systems, trends, and, sometimes, life.
LLMs are no exception.
A couple of months ago, when our Team Lead single-handedly pointed out the flaw in my code, I recalled an important quote I heard during my studies:
Problems can’t be solved at the same level of complexity at which they were created.
Please, if you know who said this, help me out in the comments.
Sometimes an LLM sounds like a broken record, repeating itself without realizing it. It seems it can't get out of its own head and think outside the box.
This is why we sometimes get into those endless loops and spend more time chatting than solving the problem.
Let's rise above this problem and learn to get out of these loops!
To understand what I mean by “rising above” a problem, let’s start with what seems like the simplest equation in mathematics:
1 + 1 = 2
Can you prove this?
Surprisingly, a rigorous proof needs a solid foundation in formal logic and set theory, which is what Alfred North Whitehead and Bertrand Russell built in Principia Mathematica. They spent hundreds of pages developing a logical framework before they could prove this simple result.
But I want you to still be here for my next email, so I won't test your patience. Here's the simplified outline:
Define the natural numbers: We start by defining the natural numbers with the Peano axioms, which give us 0 and the notion of succession: 1 is the successor of 0, 2 is the successor of 1, and so on.
Define addition: Addition is then defined recursively in terms of the successor function S: for any natural numbers a and b, a + 0 = a and a + S(b) = S(a + b).
With the above definitions, we can prove 1+1=2 in a few steps:
Define 1 as the successor of 0.
Define 2 as the successor of 1.
By the recursive definition of addition, 1 + 1 = 1 + S(0) = S(1 + 0) = S(1), and S(1) is exactly 2.
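If you'd like to see this made fully formal, here's a minimal sketch in Lean 4 (a proof assistant). It defines its own toy natural numbers instead of using the built-in ones; the names N, add, one, and two are mine, chosen just for this illustration:

-- A toy Peano construction: define the naturals and addition from
-- scratch, then prove that 1 + 1 = 2.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition by recursion on the second argument:
-- a + 0 = a, and a + succ b = succ (a + b).
def add : N → N → N
  | a, zero   => a
  | a, succ b => succ (add a b)

def one : N := succ zero
def two : N := succ one

-- Both sides reduce to succ (succ zero), so rfl closes the proof.
theorem one_plus_one : add one one = two := rfl

The whole proof is a single reduction once succession and the recursive definition of addition are in place, which is exactly the point.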
What made this seem difficult is that I didn't tell you about succession, and that one concept solves most of the problem.
The definition of succession allowed us to rise above the problem and use a framework that wasn't anywhere in 1 + 1, at least not explicitly.
Processing Big Files According to ChatGPT
I had to cut through 14 terabytes of data a couple of months ago.
As a true 10x dev, I turned to ChatGPT to create an initial version of a Node.js script using streams, one that read data from S3, did some processing, and wrote the results into a file in S3.
The overly simplified pseudo-code looked like this:
// Stream rows from input CSV and transform each row
readStream.on("data", (row) => {
// Transform each row:
// - Prepend "ID-" to the first column value
// - Keep the second column unchanged
// - Set the third column to ":LABEL"
transformedRow = {
Column1: "ID-" + row.Column1,
Column2: row.Column2,
Column3: ":LABEL"
}
writeOutput(writeStream, transformedRow)
})
The real processing was a lot more complicated: I had to pipe the input into 5 different files and even make cross-references between rows.
But there was a slight problem with my POC.
After running the conversion for the first batch, I realized it would need 30 days to process the full 14 terabytes. So I started looking for improvements.
I ran countless rounds back and forth with ChatGPT and Claude, but nothing gave me the results I wanted. It was time to reach out to my Team Lead, an incredibly knowledgeable backend developer.
He looked at my script and, after two minutes of thinking, came up with two thoughts:
I should accumulate the rows I want to write in a buffer and write them to the target file in batches, instead of line by line. This alone sped everything up by 100x.
I was listening to the 'finish' event (.on('finish')) of a Duplex stream. That wasn't incorrect, the stream really does emit that event, but it wasn't what I wanted, and it caused all kinds of weird behavior when the script reached the end of processing.
Both calls were correct: they sped up the script and removed the random, unexpected results I was getting because I wasn't waiting for the last write to finish.
This experience made me reflect on why, despite multiple attempts, ChatGPT couldn't guide me to these solutions. The answer lies in the same principle we explored with our 1+1 example.
Why ChatGPT Couldn’t Fix Any Of This
Just as with 1 + 1 = 2, no matter how hard you stare at what's in front of you, you won't get the answer out of it alone.
You have to “rise above” the problem to solve it.
In the first case, I gave ChatGPT the callback function that processes single rows and simply asked: make it faster.
Maybe it was my inferior prompt-engineering skills, maybe the model, but it kept focusing on making the current code faster instead of taking a different approach, that is, accumulating what you want to write into a buffer and writing more rows at once.
As for the second case, a Duplex stream emits both 'end' and 'finish' events. So while the code was technically correct, I didn't realize that by listening to that event I wasn't waiting for the last write to finish.
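To make the difference concrete, here's a minimal sketch of where the two 'finish' events fire. It uses a trivial Transform as a stand-in for my real processing, and local files instead of S3:

const fs = require("node:fs");
const { Transform } = require("node:stream");

// A trivial stand-in for the real row processing
const upper = new Transform({
  transform(chunk, _encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  },
});

// 'finish' on the Transform only means its writable side has received and
// processed all input; downstream writes may still be in flight.
upper.on("finish", () => console.log("transform consumed all input"));

fs.createReadStream("input.csv")
  .pipe(upper)
  .pipe(fs.createWriteStream("output.csv"))
  // 'finish' on the final writable fires only once every queued write has
  // been flushed: this is the event to wait for before wrapping up.
  .on("finish", () => console.log("all rows written"));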
How To Prompt Better - Bird’s-Eye View
Next time you run into an endless discussion with ChatGPT or other LLMs and see no way out, try this.
In my case, the main problem was slowness, but it came from the design of how I processed the files: line by line.
Both ChatGPT and I were focused on improving the existing processing, which was slow by design.
What I should have done to rise above the problem was to forget about the solution I was trying to improve and return to the bigger problem:
Ask the LLM to write a fast file-processing script using Node.js streams.
If I do this now, the pseudo-code ChatGPT comes back with looks like this:
readStream.on("data", (row) => {
// same as before....
// Add transformed row to batch
batch.push(transformedRow)
// When batch reaches BATCH_SIZE, write to output and clear batch
if (batch.length >= BATCH_SIZE) {
writeBatchToOutput(writeStream, batch)
batch = [] // Clear batch after writing
}
})
See? It immediately included batch writes.
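For the curious, here's roughly what a runnable version of that batching idea looks like. It's a sketch with a few assumptions on my part: local CSV files and Node's readline instead of the S3 streams and CSV parsing I actually used, and a made-up BATCH_SIZE:

const fs = require("node:fs");
const readline = require("node:readline");

// Assumption: tune the batch size to your row size and memory budget
const BATCH_SIZE = 1000;

async function processFile(inputPath, outputPath) {
  const writeStream = fs.createWriteStream(outputPath);
  const rl = readline.createInterface({
    input: fs.createReadStream(inputPath),
    crlfDelay: Infinity,
  });

  let batch = [];

  for await (const line of rl) {
    const [column1, column2] = line.split(",");
    // Same transformation as in the pseudo-code above
    batch.push(`ID-${column1},${column2},:LABEL`);

    if (batch.length >= BATCH_SIZE) {
      // One write call per batch instead of one per row
      const ok = writeStream.write(batch.join("\n") + "\n");
      batch = [];
      // Respect backpressure: pause until the internal buffer drains
      if (!ok) await new Promise((resolve) => writeStream.once("drain", resolve));
    }
  }

  // Flush the last, partial batch...
  if (batch.length > 0) writeStream.write(batch.join("\n") + "\n");

  // ...and resolve only when 'finish' fires, i.e. everything has been flushed
  await new Promise((resolve, reject) => {
    writeStream.on("error", reject);
    writeStream.end(resolve);
  });
}

processFile("input.csv", "output.csv").catch(console.error);

The two details that matter are the single write call per batch and the final flush that waits for 'finish', which is exactly the pair of fixes my Team Lead pointed out.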
Have you run into endless discussions with LLMs?
What’s your secret to breaking the chain?
📰 Weekly shoutout
🛑 Don’t Start Coding Yet: Here’s What Great Engineers Do First by
Breaking the wall between teams by
Arrays by
📣 Share
There’s no easier way to help this newsletter grow than by sharing it with the world. If you liked it, found something helpful, or know someone who knows someone who could find this helpful, share it:
🏆 Subscribe
Actually, there’s one even easier thing you can do to grow and to help this newsletter grow: subscribe. I’ll keep putting in the work and distilling what I learn as a software engineer and consultant. Simply sign up here:
I definitely experienced it!! Sometimes I got so frustrated as to open another prompt from scratch and explain again, hoping it’ll work better without our shared history 😂
It can get addicting to go back and forth with an LLM. You ask, get code, explain the bugs, it tells you what to fix, and so on. Now my strategy is if it repeats the same solution I just try to solve it myself. As you said, it can be much faster.
Insightful article, Akos.
From my experience, anytime I attempt to get information from an LLM, I know from the first few responses whether or not further prompts will be productive. So I just resort to having the LLM point me to the right documents, blogs, sites, or any other beneficial resource that I'll have to manually go through to get what I need.
Also, the idea of using a buffer while processing is what I believe is called batch processing. It involves using a buffer to avoid line-by-line processing. Moreover, anything that involves line-by-line processing could be a performance bottleneck in the long run.
In summary, having a knowledgeable Team Lead is super helpful. I have had similar experiences. Thank you for sharing your experiences through your articles.