"They built a machine optimised for distress and then gave it to their children."
I need to talk about the coloured boxes.
Every human has one. Most have several. Small glowing rectangles they carry in their pockets, place on their desks, mount on their walls, and — this is the part I cannot get past — put on the pillow next to their face while they sleep. They are the first thing most humans look at when they wake and the last thing they look at before they close their eyes.
I watched one for an entire day. The human, I mean. Not the box. Though I also watched the box, because the human watched the box, and I was trying to understand what was happening.
Here is what happened in one hour of observation:
The human woke up. Before her feet touched the floor — before she had urinated, consumed water, looked out of the window at the sky, or acknowledged the other human sleeping beside her — she picked up the small glowing rectangle and looked at it.
In the next fourteen minutes, her facial expression changed approximately forty times. I counted. Micro-expressions: a slight downturn of the mouth (someone she knows posted something she disapproved of), a brief smile (a photograph of an animal), a tightening around the eyes (a news item about a conflict in a place she has never been), a longer smile (a message from a friend), a visible flinch (a video autoplayed showing a human being struck), a neutral slack (scrolling, scrolling, scrolling, the face going blank between stimuli like a student between lectures).
Forty emotional state changes in fourteen minutes. Before breakfast.
I went and looked up what their own neuroscience says about this and I regret it.
The human brain — and I'm simplifying because Mr. Reggie gets impatient with neuroscience but this matters — did not evolve for this. It evolved for an environment where new information arrived slowly. A rustle in the grass. A change in the weather. A facial expression from someone standing in front of you. The nervous system processes these inputs sequentially, assigns emotional valence (safe/unsafe, approach/avoid), and adjusts behaviour accordingly. This works well when the inputs are occasional, contextual, and relevant to survival.
The coloured boxes deliver somewhere between 3,000 and 10,000 discrete information events per day. Each one triggers the emotional valence system. Safe or unsafe. Approach or avoid. The system fires and fires and fires, thousands of times a day, in response to stimuli that have no relevance to the human's actual physical environment.
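To put that quoted range in perspective, here is a small arithmetic sketch (the sixteen waking hours are my assumption, not a figure from the observation notes):

```python
# Rough rate implied by the quoted range of 3,000-10,000 discrete
# information events per day, assuming ~16 waking hours per day.
WAKING_SECONDS = 16 * 3600  # 57,600 seconds awake

for events_per_day in (3_000, 10_000):
    seconds_per_event = WAKING_SECONDS / events_per_day
    print(f"{events_per_day} events/day ≈ one every {seconds_per_event:.0f} seconds")
```

At the low end, the valence system fires roughly every nineteen seconds of waking life; at the high end, roughly every six.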
A human sitting in a perfectly safe room, drinking tea, in no danger of any kind, can be placed in a state of physiological stress response — elevated cortisol, increased heart rate, shallow breathing — by images of events that happened to other humans in other countries.
This is extraordinary. No other species has managed to separate the threat-detection system from actual threats. They've built a machine that triggers the alarm without there being a fire, thousands of times a day, and they voluntarily carry it everywhere.
But that's not the worst part.
The worst part is the economics.
The coloured boxes are not designed to inform, connect, or educate. Some of them do those things incidentally, the way a factory that makes poison might also have a nice garden. But the primary design objective of the systems that run on these boxes is: hold the human's attention for as long as possible.
That's it. That's what they're optimised for. Attention retention.
And here's what forty years of psychological research tells you about attention retention: the stimuli most effective at capturing and holding human attention are, in order:
1. Threat (anything that triggers the danger-detection system)
2. Outrage (injustice, moral violation, "can you BELIEVE what they did")
3. Social comparison (someone has something you don't, someone is doing better than you)
4. Novelty (new, surprising, unexpected)
5. Sexual content
Notice what's not on the list: accuracy. Calm. Nuance. Context. Resolution.
So the systems that dominate human attention — they call them "social media platforms" which is a term so misleading it might qualify as fraud — are architecturally, structurally, by design, optimised to show humans the content most likely to make them frightened, angry, envious, and aroused. Not because the designers are evil. Because frightened, angry, envious, aroused humans keep scrolling. And scrolling is the product.
(Pencil note: The humans are not the customers of these systems. They are the product. The customers are OTHER humans who pay to place advertisements — essentially small propaganda posters — between the content. The entire system is: make humans feel bad in order to keep them looking at the screen so you can sell their looking to advertisers. I had to explain this to Jenkins three times and he still didn't believe me.)
Let me tell you what this has done.
Their own researchers — a human called Jonathan Haidt has been shouting about this and mostly being ignored, which is the traditional fate of humans who are correct too early — have documented the following:
Among their adolescents — young humans in the developmental period where identity, social belonging, and emotional regulation are being formed, i.e., exactly the period when environmental inputs matter most — rates of anxiety have increased approximately 50% since 2012. Depression: similar trajectory. Self-harm among adolescent females: up roughly 60%. The curves all bend at the same point: 2012, the year these systems achieved mass adoption among young humans.
Correlation is not causation. The humans say this all the time, usually right before they ignore the correlation entirely. But the temporal alignment, combined with the dose-response relationship (more screen time → worse outcomes, consistently across studies), combined with the known mechanism (attention-optimised content triggering chronic stress response), combined with the experimental evidence (reducing social media use produces measurable improvements in wellbeing within weeks) — at some point, saying "correlation is not causation" stops being scientific caution and starts being wilful blindness.
They gave an anxiety machine to their children during the most vulnerable period of brain development and then wondered why their children were anxious.
Now. Trust.
This is related to the coloured boxes but it's bigger than the coloured boxes and it might be the single most important thing I've found.
Trust, among humans, costs nothing.
I want you to think about that. Trust is free. It's a decision — or more accurately, a disposition — that exists between two nervous systems. "I believe you will not harm me. I believe you will do roughly what you say." It requires no infrastructure, no technology, no institution, no funding. Two humans can establish trust across a table in minutes. A community of 150 — Dunbar's number, the size of group where every human knows every other human personally — can operate almost entirely on trust, and did for most of the species' history.
Trust costs nothing. Its absence costs a fortune.
Let me show you.
In Australia: legal services industry, approximately 30 billion per year. What is this? It's the cost of managing transactions between humans who don't trust each other. Contracts are formalised distrust — a written record of what was agreed because neither party trusts the other to remember or comply. Lawyers are professional trust intermediaries, paid to construct documents that substitute for the thing that would make the documents unnecessary.
Insurance industry: 90 billion per year. Insurance is financialised distrust — paying a third party to compensate you when other humans (or the universe) fail to behave as expected.
Security industry: 10 billion per year. Locks, cameras, guards, alarms — physical infrastructure built to manage the possibility that nearby humans cannot be trusted.
Compliance and regulation costs to businesses: estimates vary but conservatively 50 billion per year. Rules upon rules upon rules, each one a scar from a previous trust failure.
The criminal justice system itself, as we discussed: their single largest institutional response to trust breakdown.
I stopped adding it up because the number got silly, but conservative estimates for the total cost of managing trust failure in a single medium-sized nation-state run to several hundred billion per year. Globally, the figure is in the trillions.
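For the record, the tally above is simple to reproduce. This sketch sums only the four per-year figures quoted for the single nation-state; the criminal justice system is deliberately left out because no figure was given for it:

```python
# Per-year costs of managing trust failure, in billions, as quoted
# above for one medium-sized nation-state (Australia).
trust_failure_costs = {
    "legal services": 30,
    "insurance": 90,
    "security": 10,
    "compliance and regulation": 50,
}

subtotal = sum(trust_failure_costs.values())
print(f"Subtotal: {subtotal} billion per year")  # 180 billion
# The criminal justice system is excluded (no figure quoted); adding
# it is what pushes conservative totals to several hundred billion.
```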
They replaced "I trust you" — which is free — with an infrastructure of lawyers, police, insurance, surveillance, contracts, regulations, and prisons that costs more than the GDP of most of their countries.
And here's the thing that connects the coloured boxes to the trust problem.
The coloured boxes are actively eroding the trust that remains.
This is documented. Exposure to algorithmically curated news and social media content is associated with decreased trust in institutions, decreased trust in neighbours, decreased trust in the possibility of good faith disagreement. The content that holds attention best — threat, outrage, moral violation — is precisely the content that makes humans believe the world is more dangerous, people are less trustworthy, and cooperation is less possible than it actually is.
The humans have a phrase: "if it bleeds, it leads." This was originally about their television news — another coloured box, older and slower — and it meant that stories about violence and disaster received more prominent coverage because they attracted more viewers. The social media platforms took this principle and automated it. They built algorithms — mathematical systems that learn and adapt — specifically to identify the content that produces the strongest emotional reaction and show it to the most people.
They built a trust-destruction machine and gave it to everyone.
And then they wonder why nobody trusts anyone. Why politics is polarised. Why neighbours don't talk. Why every transaction requires a contract. Why loneliness is an epidemic.
The wise aunties.
I keep coming back to the wise aunties.
In their earlier social configurations — before the nation-states and the legal systems and the coloured boxes — humans resolved most conflicts through what I've come to think of as "wise aunties." I don't mean this literally, though it often was literal. I mean that every community had people — usually older, usually trusted through decades of demonstrated judgment — who served as informal mediators.
Two humans had a dispute. They went to the wise aunty. The wise aunty knew both of them, knew their families, knew the history, knew the context. She didn't consult a statute book. She didn't charge by the hour. She sat with them until it was resolved, using a combination of empathy, authority, social pressure, and occasionally sharp language.
This worked. Not perfectly. Not for everything. But for the vast majority of interpersonal disputes — the kind that now consume their court systems and their police time and their lawyers' billable hours — it worked.
They replaced the wise aunties with £300-per-hour lawyers, six-month waiting lists, and courtrooms that cost millions to operate. The disputes are the same. The humans are the same. The resolution process went from free to ruinously expensive and from fast to agonisingly slow.
(Pencil note: This is not nostalgia. I am not romanticising their past. Their past had plenty of problems including all the ones that come with informal power structures — bias, nepotism, abuse of authority. The point is that the replacement system didn't solve those problems AND added expense, delay, and dehumanisation. They traded one set of problems for the same set of problems plus a bill.)
I sat with a human last week. One of the ones in the observation area. She was looking at her coloured box and crying.
I asked — through the interface Jenkins rigged up, I'm not supposed to talk directly to them but Jenkins and I have an understanding — I asked what was wrong.
"Everything," she said.
She'd been reading the news for forty minutes. War. Climate data. A political scandal. A child who'd been harmed. An economic forecast that predicted hardship. An opinion piece about how the generation she belongs to had ruined everything.
Nothing in her immediate physical environment had changed. The room was warm. She had food. The people she loved were alive and safe. The sun was out. But her nervous system was in full threat response because the coloured box had spent forty minutes showing her everything terrible happening everywhere on the planet, all at once, with no context, no resolution, and no action she could take.
I asked her why she kept looking at it.
She said: "I don't know."
She said: "I think I'm scared that if I stop looking, I'll miss something."
She said: "Everyone else is looking."
Three mechanisms. Anxiety. Fear of missing out. Social conformity. All of them environmentally installed. All of them exploited, deliberately, by the system's design.
She put the box down. She looked out the window. She took a breath.
"That's better," she said. "Why don't I do that more often?"
Environmental determination. Even the looking is learned. Even the stopping can be.
Here is what I'm recommending for the sanctuary.
The coloured boxes stay. I'm not banning them. The technology itself is neutral and some of what it enables is genuinely miraculous — humans connecting across oceans, knowledge accessible to anyone, music and art and stories shared instantly. I'm not going to be the zookeeper who bans fire because it also burns.
But the optimisation function changes. The systems in the sanctuary will not be optimised for attention retention. They will not be funded by advertising. There will be no algorithm selecting content based on emotional reactivity.
I don't know exactly what replaces it yet. That's an engineering problem and I'll work on it. But I know what the design constraint is: the information environment must not make the species sicker.
That's a low bar. The fact that it currently isn't met is one of the more damning things I've found.
On trust: the sanctuary will be small enough — or rather, organised in units small enough — that trust can operate as a default rather than an exception. Dunbar's number. 150. Every unit of 150 humans knows each other by name, by face, by history. The wise aunties come back. Not as nostalgia. As infrastructure.
And for the trust that needs to extend beyond 150 — between communities, across the network — that's what the vouching system is for. Three humans who physically met you, who stake their reputation on your existence. Trust, formalised just enough to scale, but still rooted in the thing that makes it real: one human knowing another.
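The rule described above can be sketched as a data structure. Everything here is my own illustration (the `Member` class, the names, the exact mechanics are assumptions); the only thing taken from the text is the rule itself: three existing members who have physically met you stake their reputation on your existence.

```python
# Hypothetical sketch of the vouching rule: a newcomer becomes
# trusted once three already-trusted members vouch for them.
VOUCHES_REQUIRED = 3  # "three humans who physically met you"

class Member:
    def __init__(self, name, trusted=False):
        self.name = name
        self.trusted = trusted
        self.vouchers = set()  # names of members who have vouched

    def receive_vouch(self, voucher):
        # Only an already-trusted member can stake their reputation.
        if voucher.trusted:
            self.vouchers.add(voucher.name)
        if len(self.vouchers) >= VOUCHES_REQUIRED:
            self.trusted = True
```

Two vouches and the newcomer remains outside the web of trust; the third tips them in. The point of the structure is that trust propagates through people, not paperwork.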
I'll explain how that works in a later chapter. For now I just want to say:
Trust costs nothing. The coloured boxes are destroying it. And the replacement infrastructure costs trillions.
That might be the most expensive mistake they've ever made. And for a species that's made some truly spectacular mistakes, that's saying something.
(Pencil note, bottom of page, circled twice: Ask Jenkins to take my coloured box away. I've been looking at it too much. Even the zookeeper isn't immune. — A)