Wesley Dean

DevSecOps Engineer, Author, and Mentor

I’m a DevSecOps Engineer, Author, and Mentor. I help organizations build secure software faster.

Picture of Wesley Dean wearing a gray hoodie

Latest 3 Posts ↓

View all posts →

Documentation in the age of AI

Documentation Is Not Dead in the Age of AI. It Matters More Than Ever.

LLMs (Large Language Models) and AI are very good at producing code quickly. Sometimes, AI can generate in minutes what would take an experienced developer days to write. It's similarly very common to use AI to "update this function to..." or "modify this data structure to..." or "optimize this code to..." and let the machine do the heavy lifting.

It's tempting. It's really tempting. In fact, not only is it faster and easier to let the computer worry about edge cases or obscure syntax, there's often pressure to get more done, faster. Some places of business mandate the use of AI. There's definitely an incentive to "do more with less." Would you rather pay a human for hours of effort to write a function that may not address edge cases and may not be optimally efficient, or would you rather have an LLM spend twenty seconds to put together something quickly? After all, LLMs are trained using samples of code that were already written, already checked for edge cases, already optimized, and already made available to the public; why not take advantage of those existing efforts and investments?

So, if an AI will be drafting code, who cares about documentation? Why would someone take even more time to explain their code, document their choices, work through the details and consequences of a block of code? If the incentive is to get more code written faster, if the consumer of those comments will most likely be an AI that can figure out how things already work on its own, why in the world would someone look for new and creative ways to make that process take even longer?

That line of thinking is understandable. It is also wrong.

Read More

LLM Hallucination and Long Delays (Technical)

I recently wrote about how LLM hallucinations are unfortunate, several failure models, and how to work around them. This is a rewrite of that article for a more technical audience. Same advice, same problems, just worded differently.

Long Processing

This one is my least favorite pattern. It sounds so reasonable and so plausible. When asked about it, the LLM's responses can sound positively rational and well-grounded.

Example

It often looks like this:

Read More

LLM Hallucination and Long Delays

LLMs Don’t "Do" Things, They Talk About Doing Things

Artificial intelligence tools like ChatGPT can be incredibly helpful. They can explain ideas, rewrite emails, summarize documents, and answer questions in seconds.

But there’s something important to understand:

They don’t actually do things.

They generate text that describes doing things.

That difference matters.

One of my biggest pet peeves is when I ask an AI or an LLM (Large Language Model) to do something and it happily responds that it'll do the thing I requested, but:

  • it needs my go-ahead to do it
  • it's working, it just has a long processing period
  • it can do something it really can't actually do

These responses are insidious in that they imply -- no, they explicitly state -- that the LLM is going to do something. However, the LLM is fundamentally incapable of carrying out that task. This can result in a small inconvenience, or it can turn into hours or days of wasted time.

Here are some common ways AI tools can accidentally mislead people because of how they are designed, not because they are malicious.

Read More

11 more posts can be found in the archive.