Wesley Dean

DevSecOps Engineer, Author, and Mentor

I’m a DevSecOps Engineer, Author, and Mentor. I help organizations build secure software faster.


Latest 3 Posts ↓


LLM Hallucination and Long Delays (Technical)

I recently wrote about LLM hallucinations, several of their failure modes, and how to work around them. This is a rewrite of that article for a more technical audience. Same advice, same problems, just worded differently.

Long Processing

This one is my least favorite pattern. It sounds so reasonable and so plausible. When asked about it, the LLM's responses can sound positively rational and well-grounded.

Example

It often looks like this:

Read More

LLM Hallucination and Long Delays

LLMs Don’t "Do" Things, They Talk About Doing Things

Artificial intelligence tools like ChatGPT can be incredibly helpful. They can explain ideas, rewrite emails, summarize documents, and answer questions in seconds.

But there’s something important to understand:

They don’t actually do things.

They generate text that describes doing things.

That difference matters.

One of my biggest pet peeves is when I ask an AI or an LLM (Large Language Model) to do something and it happily responds that it'll do the thing I requested, but:

  • it needs my go-ahead to do it
  • it's working, it just has a long processing period
  • it claims it can do something it actually can't

These responses are insidious in that they imply -- no, they explicitly state -- that the LLM is going to do something. However, the LLM is fundamentally incapable of carrying out that task. This can result in a small inconvenience, or it can turn into hours or days of wasted time.

Here are some common ways AI tools can accidentally mislead people because of how they are designed, not because they are malicious.

Read More

Running Renovate Locally in Jenkins

All of the repositories I own on GitHub -- public and private -- have Dependabot configured to update repository dependencies. Since almost all repos have at least MegaLinter configured to run when commits are added to a pull request, there's always something that needs to be watched. My default template repo has seven workflows, none of which I want to manually review daily, especially when there are hundreds of repositories.
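For readers unfamiliar with Dependabot, the setup described above is driven by a `.github/dependabot.yml` file in each repository. The ecosystems and intervals below are a minimal illustrative sketch, not the author's actual configuration:

```yaml
# Hypothetical .github/dependabot.yml -- the ecosystems and schedules
# shown here are examples, not the author's real setup.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```

Each `updates` entry tells Dependabot which package ecosystem to watch, where its manifest lives, and how often to check, which is exactly why a template repo with several ecosystems generates a steady stream of pull requests.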

I have very little problem putting source code out on GitHub that's intended for public consumption, even if I'm the only one who ever looks at that code. That said, I have a certain discomfort with storing Infrastructure as Code (IaC) in GitHub, even in private repositories.

Where it Hurts

Modern repositories multiply quietly. One service becomes three. Three become twelve. Before long, you are maintaining dozens or hundreds of repositories, each with its own workflows, linters, scanners, test runners, and release logic. Each repository may carry five, six, or seven GitHub Actions or Jenkins pipelines. Every dependency bump becomes a pull request. Every pull request triggers validation. The noise compounds.

Now multiply that by time.

A single dependency update rarely touches only one repository. Shared libraries drift. Container base images age. Transitive dependencies surface CVEs. Without automation, you are left manually scanning changelogs, running npm update, go get -u, or pip install --upgrade, committing changes, opening pull requests, and waiting for pipelines to pass. Each action may be individually small. The aggregate burden is not.
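The manual loop described above can be sketched as a small shell script. The `~/src` checkout layout and the manifest-to-command mapping are illustrative assumptions, not a real tool -- the point is only to show how much per-repo busywork automation replaces:

```shell
#!/bin/sh
# Sketch of the manual dependency-update pass the post describes.
# The directory layout and command mapping are assumptions for illustration.

update_cmd_for() {
  # Print the ecosystem-appropriate update command for a repo directory,
  # based on which manifest file is present.
  repo="$1"
  if [ -f "$repo/package.json" ]; then
    echo "npm update"
  elif [ -f "$repo/go.mod" ]; then
    echo "go get -u ./..."
  elif [ -f "$repo/requirements.txt" ]; then
    echo "pip install --upgrade -r requirements.txt"
  else
    echo "unknown"
  fi
}

# For every checked-out repo, report what a manual pass would have to run.
# After that you would still commit, open a PR, and wait for pipelines.
for repo in "$HOME/src"/*/ ; do
  [ -d "$repo" ] || continue
  printf '%s -> %s\n' "$repo" "$(update_cmd_for "$repo")"
done
```

Run against dozens or hundreds of checkouts, every line this prints is a command, a commit, a pull request, and a pipeline run -- the aggregate burden the post is talking about.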

Read More

10 more posts can be found in the archive.