Do The Right Thing


Much of the best new writing online originates from activities in the real world — music, fine art, politics, law, to cite recent subjects of these letters.

But there is also writing which belongs primarily to the world of the Internet, by virtue of its subject-matter and of its sensibility. In this category I would place the genre that calls itself Rationalism, the raw materials of which are cognitive science and mathematical logic.

I will capitalise Rationalism and Rationalists when referring to the writers and thinkers who are connected in one way or another with the Less Wrong forum (discussed below). I will do this to avoid confusion with the much broader mass of small-r "rational" thinkers — most of us, in fact — who believe their thinking to be founded on reasoning of some sort; and with "rationalistic" thinkers, a term used in the social sciences for people who favour the generalised application of scientific methods.

Capital-R Rationalism contends that there are specific techniques, drawn mainly from probability theory, by means of which people can teach themselves to think better and to act better — where "better" is intended not as a moral judgement but as a measure of efficiency. Capital-R Rationalism contends that, by recognising and eliminating biases common in human judgement, one can arrive at a more accurate view of the world and a more accurate view of one's actions within it. When thus equipped with a more exact view of the world and of ourselves, we are far more likely to know what we want and to know how to get it.

Rationalism does not try to substitute for morality. It stops short of morality. It does not tell you how to feel about the truth once you think you have found it. By stopping short of morality it has the best of both worlds: It provides a rich framework for thought and action from which, in principle, one might advance, better equipped, into metaphysics. But the richness and complexity of deciding how to act Rationally in the world is such that nobody, having seriously committed to Rationalism, is ever likely to emerge on the far side of it.


The influence of Rationalism today is, I would say, comparable with that of existentialism in the mid-20th century. It offers a way of thinking and a guide to action with particular attractions for the intelligent, the dissident, the secular and the alienated. In Rationalism it is perfectly reasonable to contend that you are right while the World is wrong.

Rationalism is more of an applied than a pure discipline, so its effects are felt mainly in fields where its adepts tend to be concentrated. By far the highest concentration of Rationalists would appear to be in the study and development of artificial intelligence; so it is hardly surprising that the main fruit of Rationalism to date has been the birth of a new academic field, existential risk studies, born of a convergence between Rationalism and AI, with science fiction playing a catalytic role. Leading figures in existential risk studies include Nick Bostrom at Oxford University and Jaan Tallinn at Cambridge University.

Another relatively new field, effective altruism, has emerged from a convergence of Rationalism and Utilitarianism, with the philosopher Peter Singer as catalyst. The leading figures in effective altruism, besides Singer, are Toby Ord, author of The Precipice; William MacAskill, author of Doing Good Better; and Holden Karnofsky, co-founder of GiveWell and blogger at Cold Takes.

A third new field, progress studies, has emerged very recently from the convergence of Rationalism and economics, with Tyler Cowen and Patrick Collison as its founding fathers. Progress studies seeks to identify, primarily from the study of history, the preconditions and factors which underpin economic growth and technological innovation, and to apply these insights in concrete ways to the promotion of future prosperity. The key text of progress studies is Cowen's Stubborn Attachments.


The central institution of the Rationalist school is Less Wrong, an online public forum where substantive new writing is published and discussions pursued. Less Wrong was founded by the artificial intelligence researcher Eliezer Yudkowsky, who explained the need for Rationality thus:

We need a concept like “Rational” in order to note general facts about those ways of thinking that systematically produce truth or value — and the systematic ways in which we fall short of those standards.

Yudkowsky divided Rationality into two fields of application. The first was epistemic rationality, which he defined as "systematically improving the accuracy of your beliefs". The second was instrumental rationality, which he defined as "systematically achieving your values", and which others might call "decision theory".

For Rationalists, Rational Beliefs are those which can be justified by means of probability theory, in particular the school of probability theory known as Bayesianism. Bayesianism (named after its founder, the 18th-century English clergyman Thomas Bayes) is a set of rules for assigning probabilities to particular future events — for example: "Will America and China go to war before 2050?"

Very roughly speaking, to assign a Bayesian probability to a given question you would begin by saying what likelihood you initially assigned to the imagined event; you would also specify the arguments on which your expectation was based, which would very probably include various related or analogous events; and you would adjust your expectations over time as and when the intervening events did or did not unfold as expected.

As will readily be seen, your initial position as to whether America and China will go to war need owe nothing to Bayes. It may be calculated or arbitrary. But by specifying related events against which your hypothesis may be tested, eg ...

America and China will go to war by 2050
because China will be encouraged by Russia's invasion of Ukraine, and
because China will develop an effective missile shield by 2030, and
because China will invade Taiwan once it has its missile shield in place, and
because America will still regard itself as obligated to defend Taiwan

... you will be obliged to clarify your own thinking on the matter in question and you will be obliged to update your initial expectations (your "priors") when any of your related expectations changes materially. In this case, if the outcome of Russia's invasion of Ukraine is not at all encouraging, and/or if China does not develop a ballistic missile shield by 2030, then clearly something is going wrong with your prediction of war between China and America by 2050.
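
To make the mechanics concrete, here is a minimal sketch in Python of a single Bayesian update. The numbers are invented for illustration, not taken from any Rationalist source: the prior is your initial credence in war by 2050, and the evidence is one of the related expectations above, China's development of a missile shield by 2030.

    # A single Bayesian update, with invented and purely illustrative numbers.
    # H = "America and China go to war by 2050"
    # E = "China develops an effective missile shield by 2030"

    p_h = 0.10              # assumed prior credence in war by 2050
    p_e_given_h = 0.80      # assumed: if war is coming, a shield likely precedes it
    p_e_given_not_h = 0.30  # assumed: shields are sometimes built in peacetime too

    # Total probability of seeing the shield built, war or no war.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

    # Bayes' rule: revise the prior once 2030 arrives and the shield is, or is not, there.
    p_h_given_e = p_e_given_h * p_h / p_e                  # ~0.229
    p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # ~0.031

The point of the exercise is less the output than the discipline: having stated your conditional expectations in advance, you are committed to moving your credence up or down by a calculable amount when the related events resolve.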


I doubt there is any wholly original scientific content to Rationalism: It is a taker of facts from other fields, not a contributor to them. But by selecting and prioritising ideas which play well together, by dramatising them in the form of thought experiments, and by pursuing their applications to the limits of possibility (which far exceed the limits of common sense), Rationalism has become a contributor to the philosophical fields of logic and metaphysics and to conceptual aspects of artificial intelligence.

Here are some of the rules of thumb which I have retained from my relatively casual readings in Rationalism, for convenience quoting Yudkowsky throughout from his book, Map And Territory:

— The more complex a proposition is, the more evidence is required to argue for it
— Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation
— For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it
— Curiosity and morality can both attach an intrinsic value to truth. Yet being curious about what's behind the curtain is a very different state of mind from believing that you have a moral duty to look there. If you are curious, your priorities will be determined by which truths you find most intriguing, not most important or most useful
— What set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world; truth as an instrument
— Probabilities are subjective degrees of belief — often operationalized as willingness to bet
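
The third of these maxims, that no cunning device can make you expect your confidence to rise on average, is not a slogan but a small theorem, sometimes called conservation of expected evidence. A minimal sketch in Python, again with invented numbers, verifies that the posterior, averaged over the possible outcomes, always equals the prior:

    # Conservation of expected evidence: whatever numbers you choose,
    # the expected posterior equals the prior.
    p_h, p_e_given_h, p_e_given_not_h = 0.4, 0.9, 0.2  # invented numbers

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    post_if_e = p_e_given_h * p_h / p_e
    post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

    expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
    assert abs(expected_posterior - p_h) < 1e-12  # equal to the prior

You may well expect the evidence to move you, but the expected movements in the two directions must cancel exactly; anything else would mean your prior was mis-stated.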

I have also learned from Rationalism to cherish well-crafted thought-experiments as gifts that keep giving. A particular favourite is the endlessly-analysed puzzle known as Newcomb's Problem, summarised below. I will leave it to you to decide whether the point ultimately at issue here is the upper bound of artificial intelligence, or the predictability of human behaviour, or something else entirely.

In Newcomb's problem, a superintelligence called Omega shows you two boxes, A and B, and offers you the choice of taking only Box A, or both boxes A and B.

You know that Omega has already predicted your behaviour, and has prepared the boxes as follows:

— Omega has put $1,000 in Box B.

— If Omega has predicted that you will take Box A only, Omega has also put $1,000,000 in Box A.

— If Omega has predicted that you will take both boxes, Omega has left Box A empty.

— Omega has played this game many times, and has never been wrong in predicting whether someone will take both boxes or not.

Do you take only Box A? Or both boxes, A and B?

My supplementary questions: What if Omega had put $1 billion in Box A, rather than $1 million? Is this really a problem about relative returns?
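
For what it is worth, the expected-value arithmetic behind that second question can be set out in a few lines of Python. Treating Omega's accuracy as a free parameter is my own variation, not part of the standard problem:

    # Expected winnings from one-boxing versus two-boxing in Newcomb's problem,
    # with Omega's predictive accuracy treated as a free parameter.
    def expected_values(accuracy, big=1_000_000, small=1_000):
        one_box = accuracy * big                # the big prize, iff Omega predicted correctly
        two_box = small + (1 - accuracy) * big  # the small prize always, the big one only if Omega erred
        return one_box, two_box

    for accuracy in (0.5, 0.501, 0.9, 1.0):
        print(accuracy, expected_values(accuracy))

    # One-boxing pays once accuracy exceeds 0.5 + small / (2 * big):
    # about 0.5005 for a $1 million prize, and barely above 0.5 for $1 billion.

On this arithmetic the returns do much of the work: the larger the prize in Box A, the less accurate Omega needs to be before one-boxing wins. Whether expected value is even the right tool, given that your choice has already been predicted, is the part the arithmetic cannot settle.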


Pen-names are common among Rationalist writers. "Scott Alexander" conceals very lightly the offline identity of Scott Siskind, a practising psychiatrist whose double life as doctor and blogger you can read about here. I do not know the offline identity of Applied Divinity Studies, though I know people who say that they do. Only Tyler Cowen seems to know the offline identity of Gwern Branwen, a polymath upon whose every word I hang. And not even Tyler Cowen knows the true name of the sage who styles himself Pseudoerasmus.

I mentioned earlier that Rationalism is more an applied than a pure discipline; Rationalists recognise in one another a common intellectual style, a common way of framing arguments, but they do not spend much of their time discussing Rationalism as such. They apply its methods to real-world problems in a spirit of curiosity, often with striking results.

To sample the Rationalist sensibility, after looking in on Less Wrong, a good first destination is Scott Alexander's blog Astral Codex Ten, which includes periodic evaluations of psychiatric medications, invariably extended by a well-managed comment thread.

The blogger Zvi, though no epidemiologist, proved himself to be one of the two or three best interpreters of Covid statistics and one of the very few to foresee correctly the path of the pandemic.

Scott Aaronson of Shtetl-Optimized is the go-to expert on quantum computing.

Stuart Armstrong, a research fellow at the Future Of Humanity Institute, is the Rationalist to whom questions about existential risk from superintelligent computers or invisible aliens should probably be directed in the first instance — but don't always expect to understand the answers.

Tyler Cowen is beloved of Rationalists but would hesitate (I think) to identify with them. His attitude towards cognitive biases is more like that of Chesterton towards fences: Before seeking to remove them you should be sure that you understand why they were put there in the first place. The true ambassador of the Mercatus Center to the Rationalist community is Robin Hanson, whose speculations about alien life and simulated humans are a constant reminder of the grave opportunity costs of common sense.


A diet consisting only or mainly of Rationality would become tedious, as might too concentrated a diet of anything. But if, like me, you spend most of your days amid emotivist arguments for the way the world should be, then dipping occasionally into the Rationalist view of things has a marvellous power of refreshment. At such times I feel towards Rationalism as Adrian Mitchell does towards his beloved Celia in this short poem:

When I am sad and weary
When I think all hope has gone
When I walk along High Holborn
I think of you with nothing on

In the hope that Rationalism may offer you a similarly unobstructed view of the world, let me conclude by recommending pieces from three of the writers already mentioned in this letter, which you may find thought-provoking, whether for their Rationality or for any other reason:

Cat Psychology And Domestication | Gwern

The common assumption I shared, that cats were naturals for domestication [in ancient Egypt] because they are such good vermin exterminators, is apparently not well-supported as there were many alternatives, some superior to cats in ways. Instead, the key to their domestication may be — and this is speculative, I should caution — their essentially arbitrary role as popular sacrifices, requiring countless "catteries" attached to temples and at least millions of sacrifices, on a scale staggering to contemplate ... We will never know how many cats were sacrificed this way. One shipment of cat mummies alone, sent to London, weighed nineteen tons, out of which just one cat was removed and presented to the British Museum before the remainder were ground into powder.

Would UFO Aliens Be Our Gods? | Robin Hanson

What if the world soon comes to a general consensus that some UFOs actually are aliens? And what if our direct physical relation to these aliens doesn’t change much? That is, they still don’t talk to us, we only see them rarely, and we don’t find their “bases”, their origins, or figure out any of their tech. And what if this situation persists for another century, or for many centuries?

In this postulated scenario, I think the main way that our world changes is this: in our minds, these UFO aliens take over the top of our status hierarchy; we see them as the top dog in our “pack”. And as status is a big deal to we social animals, this ends up being a big deal.

The first obvious implication is that acting or looking alien-like would start to become higher status. Hovering, fast movement and acceleration, bright fuzzy lights, making no sounds, geometric shapes, and smooth shiny surfaces without protuberances. Because that’s just how status works; if aliens are high status, we want to look like them.

Two Attitudes in Psychiatry | Scott Alexander

Attitude 1 says that patients know what they want but not necessarily how to get it, and psychiatrists are there to advise them. So a patient might say “I want to stop being depressed”, and their psychiatrist might recommend them an antidepressant drug, or a therapy that works against depression. This is nice and straightforward and tends to make patients very happy.

Attitude 2 says that people are complicated. Sometimes this complexity makes them mentally ill, and sometimes it makes them come to psychiatrists and ask for help, but there’s no guarantee that the thing that they’re asking about is actually the problem. In order to solve the problem, you need to unravel the complexity, and that might involve not giving the patient what they want, or giving them things they don’t want. This is not straightforward and requires some justification.

Robert Cottrell


