Cursor Commands, Claude Code Skills, Hooks, MCPs: Essentials or Overkill?
When I first got into Linux, pro users jokingly said:
Every sysadmin dreams of automating themselves with a Bash script.
It didn’t take long to understand this, because:
There was a lot of typing on Linux, compared to Windows
You made mistakes in long commands, especially if you were new, like I was
You forgot commands, and you had to look them up again
Bash scripts delivered results fast.
How much easier it was to type stopapp 3000 to kill whatever was running on port 3000 than to google “stop app running on port”, because I couldn’t remember whether the flag in sudo lsof -i :<port number> was a capital I.
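That stopapp was just a tiny wrapper around lsof and kill. A minimal sketch of it (the name and exact behavior are my reconstruction, and it assumes lsof is installed) could look like:

```shell
# stopapp: kill whatever process is listening on the given port.
# Usage: stopapp 3000
stopapp() {
  local port="${1:?usage: stopapp <port>}"
  local pid
  # -t prints only PIDs, -i :PORT filters by network port
  pid="$(lsof -t -i :"$port" 2>/dev/null)" || true
  if [ -z "$pid" ]; then
    echo "nothing listening on port $port" >&2
    return 1
  fi
  kill $pid && echo "stopped $pid (port $port)"
}
```

Drop something like this in your ~/.bashrc once, and you never have to look up the flag again.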
Software engineering is no different.
I don’t believe that knowing the exact configuration file format for setting up, for example, Playwright, has any value.
Even before LLMs, I used to go to their website, copy-paste the example config, and tweak it to my liking. Now I just tell Cursor, Claude Code, or whatever, to go and do that for me.
The goal isn’t to set up Playwright, but to have e2e tests so we can iterate faster.
But today we face another problem:
AI has evolved so fast that today we have many ways to solve our problems.
And if you think this is true:
It is not, and you might want to read further.
This is my unbiased overview of the current AI coding possibilities.
Before we dive in, a quick word from my sponsor: me. These newsletters take some time to write because I write them by hand like a caveman instead of using AI. If you want to see more unbiased reviews and software engineering stuff from the trenches, consider getting one of my books on Gumroad. Now you can use BLACKFRIDAY_2025 for 50% off! I also have purchasing power parity enabled, so if you live in the middle of nowhere like I do, the discount is even better! Here are the books:
Thanks for reading this, and I appreciate your support! ❤️
Advantage
We come from diverse backgrounds: some of us are marketing specialists and e-commerce managers, others are data scientists, and many readers here are likely software engineers.
When evaluating AI coding tools, it’s not enough to focus solely on their raw capabilities.
For someone new to coding, simply typing a prompt into ChatGPT could double their productivity overnight by cutting out endless googling and trial-and-error. But for others, something like Claude Code can be too much. They lose interest, scrap it, and return to ChatGPT.
This is why the learning curve is crucial—it’s not that these tools are “bad” or over-engineered; they just might not deliver a worthwhile payoff for everyone. Instead, I prefer to frame this in terms of relative advantage: the boost a tool provides in your daily work, tailored to your skill level and needs.
While this post focuses on software development use cases, I’ve seen people from many different industries leverage tools like ChatGPT effectively, so it’s worth highlighting broader applications. (Note: This is a subjective, non-scientific metric I’ve devised—no hard data here, just observations.)
Consider an e-commerce specialist who occasionally fixes broken online stores:
ChatGPT could double their effectiveness by delivering ready-to-use code snippets, eliminating guesswork.
Cursor might push that even further (say, to 3x) by accessing the full codebase and spotting existing solutions, saving them from reinventing the wheel.
Claude Projects (with the right Skills) could amplify it to 4x or more, allowing them to apply updates across multiple stores in one go.
Looking at the chart, one key takeaway stands out: no tool leaves you worse off. But based on your starting point, a small investment in learning could unlock 2x, 3x, or even greater productivity gains.
Having this in mind, here’s the market right now, for software engineers:
ChatGPT
When software engineers tell me they use ChatGPT for coding, I think:
How do they paste their entire codebase into the application?
They don’t.
Instead, they take a piece of an app, maybe a function or a small requirement, and ask ChatGPT to do it.
I think this is the worst way to develop software.
Why?
The bad
Existing code
What if the code you are about to write can rely on existing code? ChatGPT will happily rewrite a helper function for you because it doesn’t know whether it already exists and can’t look it up.
Conventions
It’ll give you whatever it thinks is best to achieve your goal - it won’t consider your existing codebase for patterns or practices. You can either manually adjust the code to match the codebase or just introduce the 5th way of doing things.
Errors
What if introducing this thing actually breaks something else?
You’re about to find out. Cursor and other tools have context, so they can check how the generated code fits with the rest of the codebase.
The Good
This is the case for software development, but as I mentioned earlier, I have friends who aren’t developers but still do minimal coding. They can get very far with ChatGPT.
Cursor
This is your one-stop shop if you want to get into AI-driven coding. It has many things set up out of the box, and it is easy to learn.
Cursor became famous for its “Tab” feature, which, as you make edits, suggests where to jump next and what to change.
To be honest, I rarely use that; instead, I mostly Plan/Ask and finally reach for Agent.
But I’ve been a long-time Cursor fan, and it’s still my main driver.
The Good
It’s really cheap.
You can run this thing for $20/mo, and never run out of requests if you are willing to accept the LLM lottery and land on a model that’s not that smart.
Pro Tip: Use Plan mode often, since it always uses the smarter models.
However, it has some downsides:
The bad
In Cursor, you can use many models. This should be an upside, and it is. However, if you stick to a specific model, you’ll burn through your quota even on side projects.
To avoid this, you’ll likely opt into Auto mode:
Auto mode, as the tooltip says, will pick a model for you, which we already discussed. It’s enough to land on a weaker model while debugging some code, and you’ll end up running in circles. 😬
Claude Code
I’ve been using Claude Code only for a week now, but its positioning is clear to me:
Claude Code is an ecosystem.
If Cursor is VSCode, then Claude Code is vim.
It has plugins, a plugin marketplace, hooks, skills, subagents, all the stuff you’ve been reading about on the internet.
But do you need all this?
You might.
With Claude Code and the correct setup, you could prompt for something like this:
Review my Plausible analytics from last week and suggest improvements.
But how is this possible?
Using Skills, you can tell Claude how to do specific tasks. Cursor has Commands as well, but they’re far less advanced, and you have to run them manually. In Claude Code, Skills are picked up automatically as the LLM sees fit.
So with the right Skills setup, Claude Code could:
pull your Plausible analytics for the last week (using https://github.com/alexanderop/claude-plausible-analytics from Alex) - Plausible Skill
review the worst-performing articles - SEO Export Skill
pull your other writings - Ghost Writer Skill
suggest improvements to those articles - Tech Writer Skill
review the changes (and maybe even run the commands) - Reviewer Skill
Of course, you’d have to code most of these skills to your liking, which can be a long, long experimentation before you get it right.
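For the curious: a Skill is essentially a folder with a SKILL.md in it, with YAML frontmatter that tells Claude when to reach for it, followed by plain markdown instructions. Here’s a minimal, hypothetical sketch of how the Plausible one might be laid out (the skill name, steps, and environment variable are my assumptions, not Alex’s actual implementation):

```shell
# Skills live under ~/.claude/skills/<name>/SKILL.md; the frontmatter
# description is what lets Claude pick the skill up automatically.
mkdir -p ~/.claude/skills/plausible-analytics
cat > ~/.claude/skills/plausible-analytics/SKILL.md <<'EOF'
---
name: plausible-analytics
description: Pull weekly traffic stats from the Plausible API when the user asks about site analytics or article performance.
---

1. Read the API key from the PLAUSIBLE_API_KEY environment variable.
2. Query the Plausible API for the requested period.
3. Summarize the top and bottom pages by visitors.
EOF
```

The description field does the heavy lifting: it’s what the LLM matches your prompt against when deciding which Skills to load.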
The Good
Claude Code is not an assistant; it’s an extension of you that you can teach to perform specific tasks and then let it decide which tools it’ll use to accomplish them.
Of course, this goes beyond writing blog posts.
Last week, I saw a colleague debug a production issue where some flags in the DB weren’t updated after jobs completed.
His Claude Code would look at the codebase and the database, using SELECT queries, because he had a Database Expert Skill configured.
The Bad
It’s a lot of money.
I’ve been on the €18/mo subscription for a week now, and every day I’ve hit my 4-hour limit.
This window resets every 4 hours, so you can go on, unless you hit the weekly limit. I can’t imagine coding with Claude Code without the Max plan, which starts at $100/mo.
Conclusion
Here are some truth-bombs from me, for software engineers, based on more than a year of using different AI tools:
Get into AI-assisted coding, NOW
Cursor + good prompts can do 90% of the work for 1/5 of the money. Use Plan/Ask mode to shorten your iteration cycles. Use the Agent mode only when you have figured out exactly what needs to be done.
Claude Code is probably where coding is headed. If you can’t afford it now, don’t stress: you’ll be just fine with Cursor/Windsurf etc. Basic prompting skills still apply.
📰 Weekly shoutout
Choosing Between Normalization Or Denormalization by Saurabh Dashora
Pareto Principle: The Significant 20% by Michał Poczwardowski
📣 Share
There’s no easier way to help this newsletter grow than by sharing it with the world. If you liked it, found something helpful, or you know someone who knows someone to whom this could be helpful, share it:
🏆 Subscribe
Actually, there’s one easier thing you can do to grow and help grow: subscribe to this newsletter. I’ll keep putting in the work and distilling what I learn/learned as a software engineer/consultant. Simply sign up here: