Hey friends,
I know. It’s Friday, and I owe you a newsletter from the beginning of the week, so let me make up for it.
You can get my latest book, Building Cloud-Based PWAs with Supabase, React & TypeScript, for free until August 18, 11:59 PM PDT. I don’t need anything in return, but if you liked the book or would love to support my work, a review on Amazon would mean a lot to me!
I wanted to title this post How AI-enabled Software Engineers Ship More and Solve Problems Faster in 2024.
Then I recalled the recent cases where ChatGPT/Gemini and other AI tools didn’t yield the expected results, so I felt I should give you a reality check instead.
I went back to my chat histories and grabbed screenshots as well.
Here we go.
Why it matters
LLMs came in like a wrecking ball.
They fundamentally changed the tooling for some professions, and they are here to stay. However, given how they work, thinking that any LLM is a magic bullet is still far-fetched.
Albert Einstein is often credited with saying:
“The significant problems we have cannot be solved at the same level of thinking with which we created them.”
I often remember this quote when I realize I want answers from an LLM about something that probably wasn’t part of the training data.
So here’s the good and the bad.
💾 Docs are dead
I’m working on a research project with Graph DBs. I picked up MySQL and relational databases over two decades ago, and I haven’t touched a concept this different from RDBs since.
Of course, my manager wouldn’t approve of me spending two decades learning this new world, and I’d be 56 by then, so…
I watched a simple YouTube tutorial, opened ChatGPT, and started asking questions like: “If this is a query in PostgreSQL, how do I write it in Cypher (a graph DB query language)? How does data modeling work compared to RDBs?”
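To make that concrete, here’s the flavor of translation I was asking about, with hypothetical table and label names (not from the actual project):

```sql
-- Relational version: find the names of Alice's friends
SELECT p2.name
FROM people p1
JOIN friendships f ON f.person_id = p1.id
JOIN people p2 ON p2.id = f.friend_id
WHERE p1.name = 'Alice';
```

```cypher
// Cypher version: the join table disappears; the relationship is first-class
MATCH (:Person {name: 'Alice'})-[:FRIENDS_WITH]->(friend:Person)
RETURN friend.name
```

The mental shift an LLM can walk you through quickly: in a graph DB, relationships aren’t rows in a join table, they’re edges you traverse directly.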
LLMs outperform reading the docs when you want to learn something new.
I was getting the hang of writing Cypher queries when we hit a technical plateau. I needed the queries in another language: Gremlin.
What would you do before LLMs? Open the Getting Started page for Gremlin, do the tutorials, learn some, experiment, and rewrite.
This prompt single-handedly saved me hours (and it worked):
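The gist of that Cypher-to-Gremlin translation, shown on a hypothetical friend-lookup query (illustrative labels, not the project’s real schema):

```cypher
// Cypher: who are Alice's friends?
MATCH (:Person {name: 'Alice'})-[:FRIENDS_WITH]->(friend:Person)
RETURN friend.name
```

```groovy
// Gremlin: the same traversal, expressed as chained steps
g.V().has('Person', 'name', 'Alice').
  out('FRIENDS_WITH').
  values('name')
```

Same pattern, different mindset: Cypher describes the shape you want to match, while Gremlin chains traversal steps imperatively.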
Now, here are the things that rarely work out; mostly, you should just read the docs and think instead of writing prompts.
🚦 LLMs Get Stuck
I wanted to build PWAs with React and Supabase for my second book without additional third-party libraries.
I was chatting with ChatGPT about the best ways to implement something (it’s good at inspecting code and spotting naming inconsistencies, both of which matter when you write teaching material) when it generated a code snippet that I knew wouldn’t work, so I followed up:
To my surprise, ChatGPT assumed I’d be using a library called next-pwa, which I had not previously mentioned.
After some digging, I found that most blog posts discussing similar examples with service workers use the same library.
There was no way to get a reasonable code snippet that didn’t use this library, so I had to come up with a new solution.
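For flavor, registering a hand-written service worker without next-pwa can be as simple as calling the browser’s own API. A minimal sketch (the `/sw.js` path and function name are illustrative, not the book’s exact code):

```javascript
// Hypothetical sketch: service worker registration without next-pwa.
// Assumes a hand-written worker file served at the site root as /sw.js.
async function registerServiceWorker(swUrl = '/sw.js') {
  // Guard: service workers only exist in secure browser contexts,
  // so bail out during server-side rendering or in unsupported browsers.
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null;
  }
  try {
    const registration = await navigator.serviceWorker.register(swUrl);
    console.log('SW registered with scope:', registration.scope);
    return registration;
  } catch (err) {
    console.error('SW registration failed:', err);
    return null;
  }
}
```

You’d call this once on app startup (e.g., in your entry module); the worker file itself then handles caching and offline behavior.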
🤫 Problem-Solving Skills
Another area where LLMs performed poorly was resolving trivial errors or updating solutions based on feedback.
Here’s a funny example.
My Amazon Neptune cluster was timing out for specific operations. I couldn’t figure out why, so I gave the error message to ChatGPT.
The solution I got was reasonable: instead of trying to do one operation, do smaller operations and control them from a bash script.
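That suggestion, sketched as a bash loop (the `run_batch` body is a placeholder, not a real Neptune call):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder for one small Gremlin/Neptune operation; in a real script
# this would issue a bounded query instead of the original all-at-once one.
run_batch() {
  echo "processing batch $1"
}

total_batches=3
for i in $(seq 1 "$total_batches"); do
  run_batch "$i"
  sleep 1   # pause between batches so each one stays under the timeout
done
```

Splitting work like this only helps when the timeout actually comes from operation size, which, as it turned out, wasn’t my problem.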
However, the single operations timed out with the same errors. These errors, to me, meant that we probably had an incorrect resource/policy or role configuration.
When I gave ChatGPT the error message, it acknowledged that the error could still occur, but it also promised to create an improved version of the script.
The improved version:
It was the same script with a 2-second delay instead of a 1-second delay. 😃
I keep hitting plateaus with LLMs because I use them a lot. I’m also starting to figure out the use cases where they’re almost unbeatable and the ones where I know it’s not worth the time to write up the prompt.
What are your experiences with this technology?
📰 Weekly shoutout
Ask First, Code Later: The single, most important question by
“20% for tech debt” doesn’t work by
and
📣 Share
There’s no easier way to help this newsletter grow than by sharing it with the world. If you liked it, found something helpful, or you know someone who knows someone to whom this could be helpful, share it:
🏆 Subscribe
Actually, there’s one easier thing you can do to grow and help grow: subscribe to this newsletter. I’ll keep putting in the work and distilling what I learn/learned as a software engineer/consultant. Simply sign up here: