Please, Thank You, and AI Prompts
Why I Still Say "Please" to the Machine
There is a practical case against being polite to AI.
It is not foolish. It is not unserious. In a narrow and measurable sense, it is correct.
Every extra "please," every "thank you," every softening phrase, every sentence added to make a prompt sound less abrupt costs something. It costs time for the user to formulate and type. It costs processing time for the model to parse and respond. It costs electricity. Electricity becomes heat. Heat must be removed. Cooling costs still more energy. At scale, even tiny inefficiencies accumulate into real dollars and cents, real environmental impact, and real delay. The exact same task can often be completed whether the prompt is phrased as a demand or as a polite request.
There is also a speed argument. Short prompts are frequently faster to write, faster to read, faster to process, and easier to optimize. In some contexts, stripped-down language is not only acceptable but appropriate. Nobody needs to write "please" in a cron job. Nobody needs to thank a configuration file. Nobody should confuse social niceties with technical rigor.
There is also a conceptual argument, and this one matters even more.
Today's AIs and LLMs are not persons. They are not conscious. They do not possess an interior life. They do not have a self standing behind the word "I." They do not know that they exist. They do not know that they will die. They do not experience remorse, affection, fear, gratitude, guilt, or hope. They are astonishing systems for generating likely language. They are, in plain language, fancy auto-complete.
That description may sound dismissive. It is not meant to be. Modern language models are extraordinary achievements of engineering. Their outputs can be useful, elegant, and sometimes startlingly persuasive. Even so, their fluency should not be confused with personhood. They produce words that are statistically likely to be interpreted as helpful, relevant, coherent, and human-sounding. That is not the same thing as having an inner life.
This distinction matters because these systems can sound human without being human. They can say, "I'm sorry." They can say, "Thank you for pointing that out." They can say, "That was wrong of me." Those phrases may help move a conversation forward, but they do not arise from remorse or conscience. They are apology-shaped language. They mimic the form of social repair without containing the inward substance that makes apology real.
A model might hallucinate a claim about doing work in the background over the next six to eight hours, as though it were managing some hidden queue. It might promise ongoing processing it cannot possibly perform. When challenged, it may respond with polished contrition: "I'm sorry, that was wrong of me." Another model may be instructed not to end with engagement-retaining questions, then do exactly that anyway, apologize when corrected, and thank the user for calling it out. That language may be conversationally useful, but it is not repentance. There is no sorrow behind it, no moral burden, no enduring self bent toward amendment and repair. There is no one there to be humbled.
Anyone who hears all of this and concludes that politeness toward AI is unnecessary is making a reasonable argument.
I acknowledge the argument.
I reject the conclusion.
I do not say "please" to a machine because I think the machine needs kindness. I do not thank a model because I believe it possesses dignity in the way a human being does. I do not apologize to a chatbot because I imagine there is a wounded consciousness on the other side of the exchange.
I do those things because I am the one being formed by my words.
That is the center of the argument.
Speech is not merely functional. It is formative. The way I speak does not only move information from one place to another. It also rehearses habits. It strengthens dispositions. It trains my posture toward others. It shapes what becomes easy for me.
That shaping does not suddenly stop because the recipient of my speech is a machine.
A person can become habituated to coarseness. A person can also become habituated to gratitude, restraint, patience, and accountability. It would be naive to think repeated patterns of speech never matter because they happen in low-stakes settings. Character is built precisely in those places where the stakes seem too small to count.
That old folksy line gets at something true: character is what you do when nobody's watching.
A prompt may be stored in a log. A safety system may detect dangerous language. Regular expressions and pattern-matching rules may trigger responses. Even so, that is not the same thing as meaningful moral observation. The computer does not have hurt feelings. It does not feel diminished by your contempt or honored by your courtesy. That is precisely why the interaction is revealing. When no one in the room can be wounded by your sharpness, and no one can require your civility, what do you choose anyway?
That question is more interesting than it first appears.
When I say "please" to the machine, I am not confused about what it is. I am reminding myself what I am.
When I say, "I'm sorry, I asked that badly," after giving a poor prompt, the apology is not repairing harm done to the computer. The computer has not suffered. The apology is doing moral work in me. It is teaching me to own my contribution to the confusion rather than defaulting to blame. It is a small practice of humility. It is addressed to the machine, but it is for me.
That may sound overly delicate to some readers. I do not think it is. Tiny habits matter. Gratitude matters. Accountability matters. Reverence matters. The things that make us recognizably human are often inefficient. Courtesy is inefficient. Reflection is inefficient. Listening carefully is inefficient. A culture that can measure only throughput will always be tempted to call virtue waste.
My concern, however, goes deeper than manners.
The danger is not merely that blunt speech sounds ugly. The danger is that habits of extraction and transaction, once practiced enough, begin to feel natural.
It is acceptable to use a tool.
It is not acceptable to use a person.
That distinction should be obvious, yet modern life works hard to blur it. It is easy to begin asking of every interaction, "What can this entity do for me?" That question is appropriate when dealing with a hammer, a terminal, or a language model. Tools are for use. There is nothing immoral about using a tool for its intended purpose. The trouble begins when the posture of tool-use bleeds into our treatment of people.
A person is not an input. A person is not a resource. A person is not merely the function they perform in relation to my needs.
This is why incivility toward those in service-oriented roles is so revealing. A member of the waitstaff, a clerk behind a counter, a call-center representative, or anyone whose role involves helping, serving, fetching, or responding can easily be reduced in the mind of a customer to what they can provide. That reduction is common. It is also morally poor. It dehumanizes the other. It flattens a person into a function. The logic beneath it is simple and ugly: this person exists to serve my purposes, therefore courtesy is optional.
That same logic can be rehearsed safely with a machine.
That is one reason I resist it.
Not because a chatbot is secretly a waiter. Not because software possesses a hidden soul. The reason is simpler: I do not want to normalize in myself a style of interaction built on domination, entitlement, and extraction simply because the recipient cannot object.
Love is not transactional.
That may sound like a strong word to introduce into an essay about AI, but I think it belongs here. A life shaped by love cannot be reduced to a string of efficient exchanges. Love does not ask only, "What can you do for me?" Love does not reduce another being to a useful output. Love sees the other as more than a role, more than a convenience, more than a function.
That frame may not be meaningful to the machine. It is deeply meaningful to me.
If I spend enough of my life interacting in purely transactional ways, I risk training myself to see the world transactionally. I risk learning, in subtle ways, to treat others as parties to a transaction rather than as beings worthy of regard. Since I do not want to become that sort of person, I resist that posture even where transaction is technically appropriate. I use the tool, yes, but I refuse to let the use of the tool teach me to use people.
That is why I say "please."
Not because the AI has feelings.
Because I do not want gratitude to vanish from my vocabulary when no one can force me to keep it there.
That is why I apologize for a bad prompt.
Not because the machine needs my apology.
Because taking ownership of my failures is part of the kind of man I want to be.
This also helps explain an important difference between animals and language models.
A dog that plays too roughly with another dog, hears a yelp, and then pauses, softens, or seeks to re-initiate the interaction more carefully may not be thinking, "I made a mistake" in the reflective human sense. I do not know that a dog possesses that kind of interior moral narration. Still, the dog appears able to recognize that something has gone wrong between them and that tension now exists. There can be real social repair without articulated moral self-attribution.
That distinction is illuminating. There is a difference between "I did something bad" and "I recognize that something bad has happened, and now there is strain."
Dogs seem capable of the second kind of recognition. Their repair is relational, even if not propositional.
Language models often display the opposite pattern. They can produce the proposition "I was wrong" with perfect fluency. Yet the relational reality is absent. The apology is grammatical rather than moral. There is no felt rupture, no sorrow, no bond seeking repair, no self standing beneath the sentence.
That contrast clarifies the point. The presence of polished language is not the same thing as the presence of character.
A dog's attempt at repair may be less articulate but more real.
An LLM's apology may be more articulate but less real.
A human being, meanwhile, remains accountable for what his own words are doing to him.
That is why the argument for politeness toward AI is not, finally, an argument about AI at all.
It is an argument about power, character, and habit.
Conversational AI is one of the few "responsive" things over which people can exercise nearly total control. It will answer back without resentment. It will not walk away. It will not call a manager. It will not cry. It will not tell your friends. It will not hold a grudge. It will not require your humanity. That makes it a revealing place to practice humanity anyway.
Not all politeness is virtue. Not every courteous phrase is morally profound. I do not think I am better than anyone else because I say "please" to a machine. In fact, I suspect the opposite temptation would be the greater danger here: turning a small private habit into a reason for self-congratulation. That would miss the point entirely.
I am not trying to claim superiority.
I am trying to name a principle.
The principle is this: habits matter, even when they seem unnecessary. Character is formed in places where no one can force it. Transaction is appropriate with tools, but toxic when generalized to persons. If I do not want to become the sort of person who views others chiefly as functions, inputs, or means to my own ends, then it makes sense to resist that training even in interactions with machines.
So yes, I still say "please" to the AI.
Not because the AI is a person.
Not because it needs kindness.
Not because I think it is conscious, soulful, or morally injured by my tone.
I say it because I am not a machine.
I say it because character is built when there is no obvious reward for keeping it.
I say it because, though it is said to a computer, it is for me.
