13 Comments
Fran Soto:

I love the pragmatic place from which you approach it, Akos!

Not sure if you knew the name, but you just described "the Lindy effect". The longer something has survived, the longer it will survive in the future. That's why people with a lot of experience or people with big audiences are expected to last longer than people just starting out.

Rather than going to the extremes of ignoring AI or losing hope that we'll keep our jobs, we have to adapt and use it as a new tool.

Akos Komuves:

I'm all for adopting it. In fact, I've been using GitHub Copilot since it came out, and I've been a ChatGPT Plus subscriber for 6+ months. They simplify many things in my professional life and in my hobbies.

I didn't know about the Lindy effect. Thanks for bringing it to my attention. For those who are also interested, here's what ChatGPT told me on the subject:

"...For example, if a technology or idea has been in use for 10 years, the Lindy Effect implies that it can be expected to be in use for another 10 years. If it survives for 20 years, then it might be around for another 20 years. This pattern contrasts with many natural phenomena where aging or wear-and-tear reduces life expectancy over time.

The concept originated in a 1964 article by Albert Goldman in The New Republic, based on discussions in Lindy's delicatessen in New York City. It was later popularized and formalized by mathematician Benoit Mandelbrot and physicist Nassim Nicholas Taleb, particularly in the context of Taleb's work on randomness, probability, and uncertainty."

Anton Zaides:

I loved that overview, fun and to the point :)

I think, though, that the AI programmer will evolve, and the instructions will be at a higher level than the example you gave. There's nothing stopping us from explaining the problem instead of asking for microservices - and once we get used to it, the usage will change.

I completely agree on the tests though 🙃

Akos Komuves:

Thanks Anton!

We're yet to see an AI that can argue intelligently. All it does is repeat itself, and the whole thing ends up being a big circlejerk. Sure, it can do something like write this and that in Go, but it can't reflect constructively on code.

Here's a good read from a curl maintainer: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/. Also, ThePrimeagen's coverage on it: https://www.youtube.com/watch?v=e2HzKY5imTE. Such OSS projects are currently in big trouble. Imagine getting hundreds of potential vulnerability emails a day.

And there's an excellent point between the lines. The more these AI hallucinations look human, the harder it will be to filter out the real things. We'll ultimately waste time and effort looking at generated stuff instead of dealing with real potential errors.

Anton Zaides:

We have yet to see those things because it's so early - barely 15 months since the first version of ChatGPT. I think people focus on the current limitations, which may well be solved in GPT-5 (and if not, then in GPT-6).

When it comes to code, personally, I'm sure we'll have a tool that's a much better version of Devin in a couple of years at most - and one that won't introduce security risks...

I don't think we are there yet, and it's irresponsible to use it as such, but we will be imo.

Akos Komuves:

I guess the point when we reach that is when people will call AI sentient.

I can't imagine how far we are from that or what implications it'll have, but I guess it'd be something similar to the industrial revolution.

---

In the post above, it didn't introduce a security risk, btw; it tried to patch a security risk that didn't exist. Its argument looked human, except the argument was bad.

If you haven't read the whole post, basically, this was the code:

```c
if(randlen >= sizeof(keyval))
  return CURLE_FAILED_INIT;

strcpy(keyval, randstr);
```

and it argued that strcpy is unsafe, which it is, but it completely ignored that there's an if-return guard right before the call. This is something a person with a _slight interest_ in programming could catch.

Anton Zaides:

Sorry, I haven't read it 😅 So much content to read... Thanks for the summary.

I think that the 'AGI' holy grail is not that far, but even before it, the coding part is easier to reach than a general 'sentient' AI. We are already in the middle of that revolution, which in my opinion will be even bigger than the industrial one.

Akos Komuves:

No worries, the curl maintainer made a good point, so I thought it was worth summarizing 😊

Coding is about rules, and machines are great at following rules. But I'm curious what a future will look like when they're able to deviate from those rules as needed.

I'm not 100% sure it'll happen. Back to the Future left me hanging with the idea of flying cars too. 😃

Saurabh Dashora:

This is a great point of view, Akos.

I feel that tools like Devin will level the playing field when it comes to productivity for a developer. But it will also increase competition. The one who can leverage all these tools will have a better chance of success in the long run.

Also, thanks for the mention!

Akos Komuves:

Thanks, Saurabh!

I have high hopes for AI, but I don't want to be disappointed. We're still at the generative stage where it mimics what has been written, but it can't really come up with new ideas – kind of a must in our field. I wonder when we'll reach that point. Would it be the same as becoming sentient?

Basma Taha:

You tackled this in a great way, @Akos. I like that you defined the directions in which AI can really help us as engineers. I think many aspects of the job are time-consuming, repetitive, and more akin to donkey work. If AI software engineers can tackle that, I will jump for joy.

Besides refactoring, prototyping, and testing, as you mentioned, I would also add writing boilerplate code and handling technical debt.

There are many other things that we as humans should focus on, like planning, prioritization, designing, and so on.

There's much more to software engineering than just coding. Coding is really just the tip of the iceberg, as you described it!

Akos Komuves:

There's a long way to go, for sure. I use AI daily and know the limitations of the current solutions, but I wonder what the next stage is and when we'll reach it.

Basma Taha:

We'll definitely know the answer in the near future, I think.
