Wesley Dean

DevSecOps Engineer, Author, and Mentor

I'm a technologist, author, and mentor who helps people and organizations move from complexity to clarity. Through consulting, writing, and workshops, I bridge the gap between technical and non-technical teams, translating risk into meaningful decisions and sustainable action. My work centers on leadership, connection, and disciplined execution, drawing on decades of experience to help teams build secure, reliable systems while strengthening trust, alignment, and shared understanding.

Picture of Wesley Dean wearing a gray hoodie

Latest 3 Posts ↓


Please, Thank You, and AI Prompts

Why I Still Say "Please" to the Machine

There is a practical case against being polite to AI.

It is not foolish. It is not unserious. In a narrow and measurable sense, it is correct.

Every extra "please," every "thank you," every softening phrase, every sentence added to make a prompt sound less abrupt costs something. It costs time for the user to formulate and type. It costs processing time for the model to parse and respond. It costs electricity. Electricity becomes heat. Heat must be removed. Cooling costs still more energy. At scale, even tiny inefficiencies accumulate into real dollars and cents, real environmental impact, and real delay. The exact same task can often be completed whether the prompt is phrased as a demand or as a polite request.

There is also a speed argument. Short prompts are frequently faster to write, faster to read, faster to process, and easier to optimize. In some contexts, stripped-down language is not only acceptable but appropriate. Nobody needs to write "please" in a cron job. Nobody needs to thank a configuration file. Nobody should confuse social niceties with technical rigor.

There is also a conceptual argument, and this one matters even more.

Today's AIs and LLMs are not persons. They are not conscious. They do not possess an interior life. They do not have a self standing behind the word "I." They do not know that they exist. They do not know that they will die. They do not experience remorse, affection, fear, gratitude, guilt, or hope. They are astonishing systems for generating likely language. They are, in plain language, fancy auto-complete.

That description may sound dismissive. It is not meant to be. Modern language models are extraordinary achievements of engineering. Their outputs can be useful, elegant, and sometimes startlingly persuasive. Even so, their fluency should not be confused with personhood. They produce words that are statistically likely to be interpreted as helpful, relevant, coherent, and human-sounding. That is not the same thing as having an inner life.

This distinction matters because these systems can sound human without being human. They can say, "I'm sorry." They can say, "Thank you for pointing that out." They can say, "That was wrong of me." Those phrases may help move a conversation forward, but they do not arise from remorse or conscience. They are apology-shaped language. They mimic the form of social repair without containing the inward substance that makes apology real.

A model might hallucinate a claim about doing work in the background over the next six to eight hours, as though it were managing some hidden queue. It might promise ongoing processing it cannot possibly perform. When challenged, it may respond with polished contrition: "I'm sorry, that was wrong of me." Another model may be instructed not to end with engagement-retaining questions, then do exactly that anyway, apologize when corrected, and thank the user for calling it out. That language may be conversationally useful, but it is not repentance. There is no sorrow behind it, no moral burden, no enduring self bent toward amendment and repair. There is no one there to be humbled.

Anyone who hears all of this and concludes that politeness toward AI is unnecessary is making a reasonable argument.

I acknowledge the argument.

I reject the conclusion.

I do not say "please" to a machine because I think the machine needs kindness. I do not thank a model because I believe it possesses dignity in the way a human being does. I do not apologize to a chatbot because I imagine there is a wounded consciousness on the other side of the exchange.

I do those things because I am the one being formed by my words.

That is the center of the argument.

Speech is not merely functional. It is formative. The way I speak does not only move information from one place to another. It also rehearses habits. It strengthens dispositions. It trains my posture toward others. It shapes what becomes easy for me.

That shaping does not suddenly stop because the recipient of my speech is a machine.

A person can become habituated to coarseness. A person can also become habituated to gratitude, restraint, patience, and accountability. It would be naive to think repeated patterns of speech never matter because they happen in low-stakes settings. Character is built precisely in those places where the stakes seem too small to count.

That old folksy line gets at something true: character is what you do when nobody's watching.

A prompt may be stored in a log. A safety system may detect dangerous language. Regular expressions and pattern-matching rules may trigger responses. Even so, that is not the same thing as meaningful moral observation. The computer does not have hurt feelings. It does not feel diminished by your contempt or honored by your courtesy. That is precisely why the interaction is revealing. When no one in the room can be wounded by your sharpness, and no one can require your civility, what do you choose anyway?

That question is more interesting than it first appears.

When I say "please" to the machine, I am not confused about what it is. I am reminding myself what I am.

When I say, "I'm sorry, I asked that badly," after giving a poor prompt, the apology is not repairing harm done to the computer. The computer has not suffered. The apology is doing moral work in me. It is teaching me to own my contribution to the confusion rather than defaulting to blame. It is a small practice of humility. It is addressed to the machine, but it is for me.

That may sound overly delicate to some readers. I do not think it is. Tiny habits matter. Gratitude matters. Accountability matters. Reverence matters. The things that make us recognizably human are often inefficient. Courtesy is inefficient. Reflection is inefficient. Listening carefully is inefficient. A culture that can measure only throughput will always be tempted to call virtue waste.

My concern, however, goes deeper than manners.

The danger is not merely that blunt speech sounds ugly. The danger is that habits of extraction and transaction, once practiced enough, begin to feel natural.

It is acceptable to use a tool.

It is not acceptable to use a person.

That distinction should be obvious, yet modern life works hard to blur it. It is easy to begin asking of every interaction, "What can this entity do for me?" That question is appropriate when dealing with a hammer, a terminal, or a language model. Tools are for use. There is nothing immoral about using a tool for its intended purpose. The trouble begins when the posture of tool-use bleeds into our treatment of people.

A person is not an input. A person is not a resource. A person is not merely the function they perform in relation to my needs.

This is why incivility toward those in service-oriented roles is so revealing. A member of the waitstaff, a clerk behind a counter, a call-center representative, or anyone whose role involves helping, serving, fetching, or responding can easily be reduced in the mind of a customer to what they can provide. That reduction is common. It is also morally corrosive. It dehumanizes the other. It flattens a person into a function. The logic beneath it is simple and ugly: this person exists to serve my purposes, therefore courtesy is optional.

That same logic can be rehearsed safely with a machine.

That is one reason I resist it.

Not because a chatbot is secretly a waiter. Not because software possesses a hidden soul. The reason is simpler: I do not want to normalize in myself a style of interaction built on domination, entitlement, and extraction simply because the recipient cannot object.

Love is not transactional.

That may sound like a strong word to introduce into an essay about AI, but I think it belongs here. A life shaped by love cannot be reduced to a string of efficient exchanges. Love does not ask only, "What can you do for me?" Love does not reduce another being to a useful output. Love sees the other as more than a role, more than a convenience, more than a function.

That frame may not be meaningful to the machine. It is deeply meaningful to me.

If I spend enough of my life interacting in purely transactional ways, I risk training myself to see the world transactionally. I risk learning, in subtle ways, to treat others as parties to a transaction rather than as beings worthy of regard. Since I do not want to become that sort of person, I resist that posture even where transaction is technically appropriate. I use the tool, yes, but I refuse to let the use of the tool teach me to use people.

That is why I say "please."

Not because the AI has feelings.

Because I do not want gratitude to vanish from my vocabulary when no one can force me to keep it there.

That is why I apologize for a bad prompt.

Not because the machine needs my apology.

Because taking ownership of my failures is part of the kind of man I want to be.

This also helps explain an important difference between animals and language models.

A dog that plays too roughly with another dog, hears a yelp, and then pauses, softens, or seeks to re-initiate the interaction more carefully may not be thinking, "I made a mistake" in the reflective human sense. I do not know that a dog possesses that kind of interior moral narration. Still, the dog appears able to recognize that something has gone wrong between them and that tension now exists. There can be real social repair without articulated moral self-attribution.

That distinction is illuminating. There is a difference between "I did something bad" and "I recognize that something bad has happened, and now there is strain."

Dogs seem capable of the second kind of recognition. Their repair is relational, even if not propositional.

Language models often display the opposite pattern. They can produce the proposition - "I was wrong" - with perfect fluency. Yet the relational reality is absent. The apology is grammatical rather than moral. There is no felt rupture, no sorrow, no bond seeking repair, no self standing beneath the sentence.

That contrast clarifies the point. The presence of polished language is not the same thing as the presence of character.

A dog's attempt at repair may be less articulate but more real.

An LLM's apology may be more articulate and less real.

A human being, meanwhile, remains accountable for what his own words are doing to him.

That is why the argument for politeness toward AI is not, finally, an argument about AI at all.

It is an argument about power, character, and habit.

Conversational AI is one of the few "responsive" things over which people can exercise nearly total control. It will answer back without resentment. It will not walk away. It will not call a manager. It will not cry. It will not tell your friends. It will not hold a grudge. It will not require your humanity. That makes it a revealing place to practice humanity anyway.

Not all politeness is virtue. Not every courteous phrase is morally profound. I do not think I am better than anyone else because I say "please" to a machine. In fact, I suspect the opposite temptation would be the greater danger here - turning a small private habit into a reason for self-congratulation. That would miss the point entirely.

I am not trying to claim superiority.

I am trying to name a principle.

The principle is this: habits matter, even when they seem unnecessary. Character is formed in places where no one can force it. Transaction is appropriate with tools, but toxic when generalized to persons. If I do not want to become the sort of person who views others chiefly as functions, inputs, or means to my own ends, then it makes sense to resist that training even in interactions with machines.

So yes, I still say "please" to the AI.

Not because the AI is a person.

Not because it needs kindness.

Not because I think it is conscious, soulful, or morally injured by my tone.

I say it because I am not a machine.

I say it because character is built when there is no obvious reward for keeping it.

I say it because it is said to a computer, but it is for me.

Architecture Decisions in the Age of AI

From Intent to Constraints

In a previous article, I argued that documentation matters more in AI-assisted development because it preserves intent across repeated machine-mediated revisions. This article picks up where that argument leaves off:

If documentation matters, which kind matters most? How do we turn recorded intent into usable constraints?

The claim that documentation matters more runs against instinct. When a system can read code, explain it, refactor it, and even rewrite it, it is tempting to assume that documentation is becoming less relevant. The opposite is happening. As more of the development process is delegated to systems that infer meaning rather than understand it, intent becomes easier to lose.

That loss rarely happens all at once. It happens gradually.

A function is rewritten for clarity. A module is reorganized for consistency. A piece of logic is simplified. Each change appears reasonable. Each change may pass review. Over time, however, the system drifts away from the decisions that shaped it. The result is not obviously broken. It is something more subtle: a system that behaves differently than intended, for reasons that are difficult to trace.

Documentation preserves intent.

The next question is more practical:

What form of documentation is most effective when working with AI?


Creativity and AI

I feel a sense of existential dread. Not the old dread -- although that never went away -- but a new one.

Many years ago, skilled tradespeople worked hard to build great things like chairs and bookshelves and bed frames and sinks. Human beings would devote years and years to perfecting their craft and they became profoundly good at what they did. They mastered woodworking and masonry; they worked with iron and steel and copper and stone.

Have you ever watched a skilled tradesperson ply their trade? Have you watched them scrape a piece of wood or blow a piece of glass without machinery or computers or guides? Have you watched them create something that can only be described as art using only a few hand tools?

I have a dresser that's made out of oak. Only oak. It has no nails. There are no screws. The entire piece is made of wood. The joints of the drawers are dovetailed perfectly. The dowels that secure the top in place are immaculate. It was my grandmother's for many decades. The piece is well over 100 years old. The only imperfections on it are where I did a poor job refinishing it twenty-something years ago.

The people who made that dresser excelled at their craft. Given the condition of the dresser, it has at least a couple hundred more years left in it.

Something happened along the way. What was once the sole purview of skilled tradespeople became subject to mass production. In and of itself, that's not a bad thing. However, instead of one or two people knowing all there was to know about making a dresser, a bunch of machines now cut and trim and glue and nail things together. Those skilled tradespeople needed to know how to operate a machine, not how to build the whole thing from scratch.

It's understandable. I don't begrudge the growth.

I also understand that not everyone has the privilege of owning a dresser built before there were such things as World Wars. I understand that sometimes you need a dresser right now and don't have the resources to purchase something that took person-weeks of effort to build. I understand that sometimes you can't use solid oak, and the budget only allows for fabricated materials made of glue and sawdust. I don't judge those situations at all.

I was having a conversation with someone earlier about creativity in the realm of artificial intelligence. They were someone experienced in visual art. They're an artist. Their skills and experiences and talents are unknowable to me. We spoke about how they've been resistant to adopting AI; they recently took a course that encouraged a survey of generative tooling that could create text and images and videos. I sat virtually alongside them as they went through their course and all I can say is, "wow." There's so much out there. There's so much to know and it keeps changing.


14 more posts can be found in the archive.