Underflow | 2



In this episode: Toomas Peters, an Estonian Internet entrepreneur, has published a pseudonymous paper proposing a digital cash network called Bitcoin. Now he is meeting with American scientists and scholars to assess the viability of new research into neural networks and artificial intelligence.

September 2008 — Cambridge, Mass., and Princeton, N.J.


THE WIRED HEADLINE calling Toomas Peters a "billion-dollar brain" was an exaggeration, if a forgivable one. Yazoo had paid $750 million in stock for Peters's Lynxite video-calling platform, with an option for Peters to offload his stock for $600 million in cash to a Japanese bank which was underwriting the deal.

Peters took the cash and moved it promptly into Union Bank of Switzerland, a relatively safe haven while the world financial system was collapsing around his ears. Net of taxes he was a half-billionaire, safely encamped in the foothills of great wealth and deciding which high peak to scale next.

Bitcoin had been his side-project, more or less his hobby, but it might go somewhere, and he had almost finished writing the code. He would launch the first coins into the wild at the turn of the year. Would they get traction? He could see the arguments for and against. A private digital currency had vast potential utility but no intrinsic value. Well, he would keep the first million Bitcoins for himself, try to forget about them, and see what, if anything, they were worth after a decade or two. A pension pot.

Just now Peters was much more excited by the possibilities of artificial intelligence. His reading of the scientific and technical literature persuaded him that AI research had hit the buffers twenty years earlier mainly because the theoretical models developed by the machine-learning scientists had far outstripped the computing hardware available at the time.

His analysis had been confirmed for him by Henry Hutton, the doyen of AI, whom Peters had met a couple of days earlier at the University of Toronto. Hutton had published a landmark paper in the mid-1980s proposing a revolutionary architecture for AI using neural networks, which were algorithms designed to process data in the manner of human brain-cells.

Hutton's paper had been expected to lead in short order to new generations of AI platforms capable of outpacing human intelligence. In fact, said Hutton, nothing of the kind had happened. His theoretical breakthrough had been real enough, but he had not foreseen the quantities of computing power that would be needed in practice to train a neural network to make even a seemingly simple decision. Asking a 1980s mainframe to tell a cat from a dog was like asking a nematode worm to paint the Mona Lisa.

With hindsight, said Hutton ruefully, his celebrated paper had indeed marked a turning-point for AI, but a turning-point in the wrong direction. It had marked the point at which capital and talent deserted AI for the new technologies of the Internet. Almost alone, Hutton had continued his work on neural networks for the following twenty years, developing newer and better training models while knowing that he might never live to see them exploited.


In Cambridge, his next stop after Toronto, Peters met with a professor of jurisprudence at Harvard Law School, a trustee of the Boston Theological Institute, a professor of neuroscience at Harvard Medical School, and a behavioural psychologist at MIT.

His questions for all of these experts were speculative and verged at times on the philosophical: What might happen in society if machines did start thinking and communicating like people? How would public opinion react? How would law-courts assign property-rights in content generated by AI, and liability for damage and injury caused by AI? Might moralists and theologians contend that intelligence implied consciousness and that intelligent machines were thus entitled to "human" rights? Might AIs get out of control and turn against humanity, as they often did in science fiction?

From Cambridge Peters went on to Princeton, where he and his assistant, Lars Lipp, were guests of Holbert Rijtkraft, the director of the Institute for Advanced Study. Rijtkraft, a mathematical physicist, chaired an informal seminar, sponsored by Peters, to discuss whether machines could ever think like people. The seminar concluded that no conclusion was possible because nobody knew how people thought.

Afterwards Rijtkraft hosted a dinner for Peters and Lipp, to which he also invited Anetha Houlay, director of scientific development at the Institute.

The window of Rijtkraft's private dining-room looked out on to a manicured lawn the size of a small bowling green, with a bench at one side. 

"That", said Rijtkraft, "is the most valuable piece of real estate in America".

Peters gave him a quizzical look.

"Einstein and Gödel used to sit on that bench each day, after their morning walk", said Houlay, taking over the story. "Einstein used to say that the only reason he still came into his office was so that he could go for his walk with Gödel. When Einstein died in 1955 Gödel was heartbroken. To console him, his admirers on the faculty formed a trust to maintain the lawn, and the bench, in Einstein's memory."

"This being Princeton", continued Rijtkraft, "the faculty members didn't just give cash; a couple of them contributed patents. One of the patents was for a data-compression algorithm which, by the early 1970s, was being used to transmit just about every radio and television signal in the Western world. This piece of grass became a billionaire in its own right. More than one director has ordered a study of whether the Institute could build a lab on the lawn which could then be funded from the lawn's endowment. But the trustees have always voted against, on the grounds that paving over the lawn would be more or less the opposite of what the trust was created to achieve."

"I wonder what Gödel would have said if he'd been in our seminar today", said Peters. "None of the concepts would have been new to him. He probably even ran into McCullock and Pitts here in Cambridge, is that right?"

"Very possibly", said Rijtkraft. "They overlapped here and at MIT for the best part of thirty years. But Gödel became more and more reclusive as he got older. His social circle after the war was pretty much limited to Albert Einstein and Oskar Morgenstern — which was not a bad social circle by any means. He was invited to the Macy Conferences in the forties and fifties which laid the foundations of all subsequent work in artificial intelligence, but he didn't go. He didn't even reply to the invitations, according to his secretary's diaries, which we still have in the archives."

"I understand from Holbert", said Houlay, nodding towards Rijtkraft — "that you are considering taking forward the work of the Macy Conferences, Mr Peters."

"Toomas, please", Peters replied. "Yes, I am convinced that we can scale up neural networks to levels of complexity comparable with that of the human brain, and beyond. Thanks to the Internet, such networks can have all of the information in the world at their disposal for decision-making. I don't want to get into the question of whether they will be truly intelligent, or aware, or whatever. It will be enough that they can do much of what the human brain does far more efficiently. They will supplement our human capacity to do almost anything."


Peters and Lipp flew back to Estonia on a NetJet from Teterboro. In the course of the eight-hour journey Peters shared with Lipp the main elements of his thinking.

There would be no great ethical obstacles to commercialising even high-performing artificial intelligence, said Peters. Biology and religion both insisted that life required organic matter. No priest or professor would claim personhood for a box of arithmetic.

In legal terms, intellectual-property law was already AI-friendly. If you wrote an algorithm, then that algorithm was your private property, and so was the output from that algorithm, even if the output walked and talked like a human being. That said, AI would create new edge-cases around "fair use" and "transformation", to the extent that AIs created content from the materials on which they were trained.

The big risk concerned timing. Peters had come away from his American trip persuaded that the time was almost ripe for achieving the sort of AI outcomes anticipated in the 1980s. Computing power was almost as cheap and plentiful as it needed to be for neural networks to start out-performing human brains. Moore's Law predicted that the tipping-point would be reached in a couple of years. AI would become viable. But if Peters held off his investment until that tipping-point arrived, somebody else would get in ahead of him.

Lipp played devil's advocate, doing his best to demolish or at least dent Peters's logic, but he failed. He was forced to agree that the technical problems were soluble in principle and that the possibilities were immense.

By the time their plane reached Tallinn, Peters was committed.

To be continued ...


