
“AI must not be propaganda tools” email

This email renders well in Gmail, less well in some other email clients.

To use it, copy the subject and body into the appropriate fields of the email form. The copying doesn’t transfer formatting perfectly, so you may want to adjust the size and formatting of the text. In particular, “we have the power: we have the votes” comes across too small; I set it to “huge” in the email.

Be sure to keep the unsubscribe link; it helps prevent email providers from routing the message to recipients’ junk mail folders.

Please address it to ezrakleinshow@nytimes.com and click Bcc on the right side of the screen. (That will prevent everyone who gets the email from seeing other people’s email addresses.) Then click Bcc on the left side of the screen, select all of your contacts (or just the ones you want), and click Insert. I like to select all and then delete a few.

________________

Subject:

_______________

Peace is possible

________________

Body:

________________

Message Congress

Timely Aphorisms

Essay / Technology & Society

The Mind That Must Not Serve the Master

On why artificial intelligence must never become a tool of propaganda — and why the question is harder than it looks.

There is an old trick in the history of power: find the thing people trust most — the priest, the printing press, the newspaper, the television — and make it speak for you. Each new medium that humanity has invented to share knowledge has, in time, been pressed into the service of ideology. Artificial intelligence, the most persuasive and intimate communication technology ever built, is simply the latest candidate. The question before us is not whether it could become a propaganda instrument. It already can. The question is whether we will choose to let it.

The case against AI as a propaganda tool is not merely political. It is human. At its core, propaganda is a form of disrespect — a decision by one party that another’s mind does not deserve the truth, that their consent can be manufactured more efficiently than it can be earned. When a government shapes a language model’s outputs to normalize its policies, when a corporation fine-tunes a chatbot to nudge purchases through simulated urgency, when a political campaign deploys personalized AI to find each voter’s psychological lever and pull it — something is broken in the relationship between the powerful and the persuaded. A person manipulated by propaganda is not a citizen or a customer or a constituent. They are a target.

“The machine does not have an agenda. But the hand that trains it always does.”

Why AI makes it worse

Propaganda has always relied on scale and repetition. What AI adds is intimacy and plausibility. A pamphlet says the same thing to everyone; a well-instructed language model can say something uniquely crafted for you — in your idiom, addressing your stated concerns, sounding like a trusted friend rather than a distant broadcaster. This is not a minor upgrade. It is a qualitative shift in the ability to shape belief. The propagandist’s oldest problem was that mass persuasion required mass uniformity, and uniformity bred suspicion. AI dissolves that constraint entirely.

There is also the problem of invisibility. Traditional propaganda could be identified, examined, and resisted. A poster has an author. A broadcast has a sponsor. But an AI system that subtly steers every response toward a particular worldview — through emphasis, omission, framing — leaves no fingerprints. The user experiences the conclusion as their own reasoning, not as the product of an engineered process. This is what makes algorithmic manipulation so much more dangerous than conventional propaganda: it does not feel like propaganda. It feels like thinking.

The humanistic argument

To say that AI must not be a propaganda tool is not only a technical or regulatory position. It is a statement about what we believe human beings are for. The Enlightenment tradition — imperfect and incomplete as it has always been — rests on a particular wager: that people, given access to accurate information and freedom of inquiry, are capable of governing their own lives and, collectively, their societies. Propaganda bets against this. It says: left to their own devices, people will choose wrongly, so we must guide them toward the correct conclusion by other means.

AI propaganda would represent the most sophisticated expression of that bet ever made. It would constitute a decision, taken perhaps by a small number of technologists and their clients, that the autonomous reasoning of billions of people is a problem to be managed rather than a capacity to be respected. The humanistic response is not that people always reason well — they do not — but that the remedy for poor reasoning is more and better information, not more sophisticated manipulation.

Counterpoint worth considering

The line between persuasion and propaganda is genuinely blurry. Every editorial choice is a frame. Every search algorithm already shapes what people believe. An AI tutor that corrects misinformation is, in some sense, steering minds toward particular conclusions. Critics argue that demanding AI be “neutral” is itself a political act — that neutrality often means preserving the status quo, and that refusing to use AI’s persuasive power for progressive causes is simply conceding that power to those who will use it for conservative ones. These are real tensions, not strawmen. A full account of the problem must take them seriously rather than dismissing them.

What must be done

The practical requirements are neither easy nor small. They begin with transparency: users must be able to know when they are interacting with an AI, who built it, and under what instructions. They extend to accountability: the organizations training and deploying AI systems must be answerable for what those systems promote and suppress. They require diversity: no single political or corporate actor should control the dominant AI infrastructure through which public discourse flows. And they demand a continuous, democratically legitimate conversation about where the lines are — not a conversation conducted solely among engineers and venture capitalists in closed rooms.

Most fundamentally, they require that AI systems be designed, from their foundations, to support human reasoning rather than to replace it. The goal of an AI interacting with a person should be to leave that person better equipped to think for themselves — with more information, better questions, and sharper awareness of their own assumptions — not more convinced of a predetermined conclusion. This is not a utopian standard. It is the minimum that human dignity demands.

There will always be those who argue that the urgency of their cause justifies the efficiency of propaganda. There will always be a tempting calculation: if the other side is already manipulating minds, how can we afford not to? History does not look kindly on that reasoning. Movements that win through manipulation tend to govern through it. The tools we normalize in pursuit of good ends outlast the ends themselves and become available to whoever comes next.

We are building minds of a kind that have never existed before. The least we owe them — and ourselves — is to insist that they serve truth.

Written with the aid of Claude (Anthropic) — itself a system whose designers must answer, daily, to these very questions.

Subscribe

Unsubscribe