
Cheating at work with ChatGPT


My first recorded chat with ChatGPT was on 9 December 2022. I asked it to "Write a prequel to Humpty Dumpty". It was pretty good, telling me that:

Humpty Dumpty had a great desire to see the world beyond the castle walls, but he knew that his round, egg-like shape made it impossible for him to venture out on his own. So he spent his days dreaming of far-off lands and adventures...

This one wasn't particularly useful for me at work, but not long after, I was evidently having a bit of trouble with PowerShell, and asked it "unset env var in powershell". Sure enough, it gave me the answer, in probably about the same amount of time as a Google search would have taken.
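
The gist of that answer, for anyone curious, is a one-liner along these lines (the variable name here is just a placeholder):

```powershell
# Remove an environment variable from the current PowerShell session.
# "MY_VAR" is a placeholder - substitute the variable you want gone.
Remove-Item Env:\MY_VAR

# Setting it to $null achieves the same thing:
$Env:MY_VAR = $null
```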

I got a bit more ambitious. I had just had my work machine rebuilt, and was getting frustrated with IIS sites shutting down in the middle of the day, so I wanted to increase the idle timeout. I found some instructions on how to do it using the UI, but I have a lot of sites running, so I pasted in those instructions with the prefix "Write me a powershell script to achieve the following:" and sure enough, it gave me almost what I wanted.
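
For reference, a script like that (a rough sketch, assuming the WebAdministration module is available and that it's the application pool idle timeout you're after) looks something like this:

```powershell
# Rough sketch: raise the idle timeout on every IIS application pool.
# Assumes the WebAdministration module (installed with IIS) and that
# two hours is the timeout you want - adjust to taste.
Import-Module WebAdministration

foreach ($pool in Get-ChildItem IIS:\AppPools) {
    Set-ItemProperty "IIS:\AppPools\$($pool.Name)" `
        -Name processModel.idleTimeout `
        -Value ([TimeSpan]::FromHours(2))
}
```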

And that's the trick with ChatGPT: you need to take its answers with a grain of salt. I even asked ChatGPT to write me a LinkedIn post about this very point. I think that, at the moment at least, generative AI tools like ChatGPT are great at helping you if you already know what you're doing. Code examples often don't work first time, and you need a bit of experience to troubleshoot the problem. That's not to say, though, that the technology won't improve.

If I'm stuck and think my AI friend can help, my workflow depends on the problem. When the question isn't tied to existing code, I'll just ask it directly. Other times I'll decide that giving ChatGPT the code I'm having trouble with will help. In that case, I do my best to anonymise it first, so I'm not breaching any confidentiality obligations. A big concern for companies is that these AIs can use your input as part of their training data, and your confidential material may well end up in a future response to someone else's prompt.

It's still early days with tools like ChatGPT. As services start to offer commercial terms that are more appealing to businesses, and users' inputs are ringfenced so they don't end up being returned to other users, the need to anonymise inputs will become a thing of the past.

But for now, I will often try to pare it back to just the thing that's giving me problems, much the same way as you might when asking on Stack Overflow, or writing a blog post. This is where your own skill and experience become important. You need to know enough about your code to form a good prompt around it.
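
As a made-up illustration, rather than pasting a whole deployment script full of internal names, I'd trim it down to something like this (the scenario and names here are purely hypothetical):

```powershell
# Hypothetical, pared-back example: internal names replaced with
# placeholders, and everything except the problem area removed.
$sites = Get-Website | Where-Object { $_.State -eq 'Stopped' }

foreach ($site in $sites) {
    # This is the call I'm actually asking about:
    Start-Website -Name $site.Name
}
```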

It's fairly well known that ChatGPT is not always right. Sometimes this is because it will just make things up (although arguably everything it says is made up), and other times it's because its training data is misleading it. Knowing when it's wrong is another aspect of using ChatGPT that relies on your own expertise.

Sometimes when I'm reading a response, it will be clear to me that something is wrong. In these cases, I'll just say, for example, "method doSomething does not exist on class Foo", and it will invariably reply with an apology for the mistake and a new solution. Maybe that solution will also have a problem, so I'll point out the mistake again. I might also help it by telling it what the solution is - "doSomething doesn't exist but doThisThing does" - which might save us a bit of back and forth in coming up with the answer.

There are times when I can't see the mistake myself. In those cases, I can just paste in the error (or a description of it) that I get when the code compiles or runs, and it will respond in the same way.

My job is not just about writing code. I've used ChatGPT to help me formulate arguments for and against certain plans. I've given it a user story (anonymised first) and had it produce a set of acceptance criteria - it's quite good at these. For non-work purposes, I've pasted in a whole lot of text and asked it to rewrite it so it reads better.

I'm really looking forward to the day that I can safely input private company data. I could send it a meeting transcript and get it to give me a summary of the key points. It could take all our Slack conversations and give me answers to questions about our sprint.

ChatGPT isn't a threat to my job yet. But it can definitely be a very useful tool.

What do you think? Have you used ChatGPT and friends in your day-to-day work? Let me know in the comments.