10 Comments

I definitely experienced it! Sometimes I got so frustrated that I opened another prompt from scratch and explained everything again, hoping it'll work better without our shared history 😂

It can get addictive to go back and forth with an LLM. You ask, get code, explain the bugs, it tells you what to fix, and so on. Now my strategy is: if it repeats the same solution, I just try to solve it myself. As you said, it can be much faster.

author

My strategy proved itself helpful in situations where I had no time to learn the tech! For example, when I started working with GraphQL a couple of months ago I had to produce some queries quickly.

I had no time to learn the language and when the solutions didn't work, I stepped back and tried to solve it from a different angle/level.

Glad I'm not the only one spending time inside chats. 😛

Oct 29 · edited Oct 29 · Liked by Akos

Insightful article Akos.

From my experience, anytime I attempt to get information from an LLM, I know from the first few responses whether or not further prompts will be productive. If not, I just resort to having the LLM point me to the right documents, blogs, sites, or other beneficial resources that I then go through manually to get what I need.

Also, the idea of using a buffer while processing is, I believe, called batch processing: you fill a buffer and handle it in one go instead of handling each line individually. Anything that involves line-by-line processing can become a performance bottleneck in the long run.
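To make the buffering point concrete, here is a minimal Python sketch of my own (not from the article) contrasting line-by-line processing with reading fixed-size chunks into a buffer; the file path and chunk size are assumptions chosen for illustration.

```python
# Illustrative sketch: line-by-line vs. buffered (batch) processing of a file.
# The path and the 1 MiB chunk size are arbitrary example values.

def process_line_by_line(path):
    total = 0
    with open(path, "rb") as f:
        for line in f:              # one small read and one pass per line
            total += len(line)
    return total

def process_in_batches(path, chunk_size=1 << 20):
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)   # fill a 1 MiB buffer per call
            if not chunk:
                break
            total += len(chunk)          # process the whole batch at once
    return total
```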

In summary, having a knowledgeable Team Lead is super helpful. I have had similar experiences. Thank you for sharing your experiences through your articles.

author

Thanks, Amos, glad you liked the article! It's great that you can catch unproductive conversations early, it really saves a lot of time!


I've read one of your articles, and it felt like you were drawing the reader into your world.

Thanks Akos 🤩

author

Thanks Terresa, what does my world look like to you? Do you think I should prompt less? 😃


Thanks, Akos, I enjoyed these examples!

So, prompting would involve first asking the LLM for potential high-level strategies for a given problem?

author

Yes Michał, this is what I do now! Glad you enjoyed the article!


Nice one Akos,

LLMs are a black box, so the only way to get a different output is by changing our inputs. This shows that we still need the engineer to use the tool properly; the tool can't work on its own yet.

It's better if we already know what we need to do and the LLM is just faster at doing it. For me, LLMs save the time spent googling, not the time spent thinking.

Also thanks for the mention!

author

Yeah, it can't work on its own for sure... it saves time for me in a similar fashion. I rarely read the docs these days (and actually got bitten by that, as I shared in the story), but 95% of the time it works.

Enjoyed your article Fran! 👍
