I definitely experienced it!! Sometimes I got so frustrated that I opened a new chat from scratch and explained everything again, hoping it would work better without our shared history 😂
It can get addictive to go back and forth with an LLM: you ask, get code, explain the bugs, it tells you what to fix, and so on. My strategy now is that if it repeats the same solution, I just try to solve it myself. As you said, that can be much faster.
My strategy proved helpful in situations where I had no time to learn the tech! For example, when I started working with GraphQL a couple of months ago, I had to produce some queries quickly.
I had no time to learn the language, so when the solutions didn't work, I stepped back and tried to solve the problem from a different angle/level.
Glad I'm not the only one spending time inside chats. 😛
I once had to validate a user input field against a particular pattern, so I decided to use a regex. I managed to write half of it but couldn't complete it. As a last resort, I asked ChatGPT. I kept giving it different prompts, but it couldn't give me anything useful.
I had to read about regex myself (I had been refraining from doing so earlier). I modified the prompt with some regex-specific terminology, and ChatGPT gave me a slightly better version. Even then I had to fix it myself a little, but I realised that GPT is not as smart as I thought (not with regex, at least) :D.
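For readers curious about the kind of validation described here: the actual pattern from the story isn't given, so the one below is a made-up stand-in (a simple username rule: 3–16 letters, digits, or underscores) just to sketch the approach.

```python
import re

# Hypothetical pattern for illustration only -- the real rule from the
# story above is not known. Allows 3-16 letters, digits, or underscores.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def is_valid_username(value: str) -> bool:
    # The ^...$ anchors make the whole string match, rejecting
    # inputs that only contain a valid substring.
    return USERNAME_RE.match(value) is not None

print(is_valid_username("alice_42"))    # True
print(is_valid_username("no spaces!"))  # False
```

Anchoring the pattern (`^...$` or `re.fullmatch`) is exactly the kind of detail that's easy to get wrong when half-writing a regex by trial and error.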
Interesting observation! I’ve never tried to get a regex out of ChatGPT before, so I’m surprised. Regex is a well-defined area, but who knows what happened. It can get stuck on secondary details, and I have to tell it what to focus on when solving a problem.
Right, I believe that taking some time to come up with a good enough prompt for GPT puts you way ahead in getting the exact information you need.
Insightful article Akos.
From my experience, whenever I try to get information from an LLM, I can tell from the first few responses whether further prompts will be productive. If not, I just have the LLM point me to the right documents, blogs, sites, or other useful resources, which I then go through manually to get what I need.
Also, the idea of using a buffer while processing is, I believe, called batch processing: you accumulate items in a buffer and handle them in chunks instead of one at a time. Anything that involves line-by-line processing can become a performance bottleneck in the long run.
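The buffered approach described above can be sketched roughly like this — the batch size and the `process_batch` function are hypothetical choices for illustration (in practice `process_batch` would be something with per-call overhead, like a bulk database insert):

```python
def process_batch(batch):
    # Stand-in for real per-batch work, e.g. a bulk insert or API call.
    return [line.upper() for line in batch]

def process_in_batches(lines, batch_size=100):
    """Accumulate lines in a buffer and flush it once it is full,
    instead of paying the per-call cost for every single line."""
    buffer = []
    results = []
    for line in lines:
        buffer.append(line)
        if len(buffer) >= batch_size:
            results.extend(process_batch(buffer))
            buffer = []  # reset the buffer after flushing
    if buffer:  # flush whatever is left at the end
        results.extend(process_batch(buffer))
    return results

print(process_in_batches(["a", "b", "c"], batch_size=2))  # ['A', 'B', 'C']
```

The win comes from amortizing the fixed cost of each `process_batch` call over many lines, at the price of a little buffering logic and memory.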
In summary, having a knowledgeable Team Lead is super helpful. I have had similar experiences. Thank you for sharing your experiences through your articles.
Thanks, Amos, glad you liked the article! It's great that you can catch unproductive conversations early, it really saves a lot of time!
I've read one of your articles, and it felt like you were drawing the reader into your world.
Thanks Akos 🤩
Thanks Terresa, what does my world look like to you? Do you think I should prompt less? 😃
Thanks, Akos, I enjoyed these examples!
So, prompting would involve first asking the LLM for potential high-level strategies for a given problem?
Yes Michał, this is what I do now! Glad you enjoyed the article!
Nice one Akos,
LLMs are a black box, so the only way to get a different output is by changing our inputs. This shows that we still need an engineer to use the tool properly; it can't work on its own yet.
It's better if we already know what we need to do and LLMs just do it faster. For me, LLMs save googling time, not thinking time.
Also thanks for the mention!
Yeah, it can't work on its own, for sure... It saves time for me in a similar fashion. I rarely read the docs these days (and actually got bitten by that, as I shared in the story), but 95% of the time it works.
Enjoyed your article Fran! 👏
That was my first use case for ChatGPT at work. Even before there was search and links, I prompted ChatGPT about how to do something in Cypress tests and then searched the original docs for the keywords ChatGPT brought back 😛