System Error: Where Big Tech Went Wrong and How We Can Reboot


Uri: Hello. I'm delighted to be here today with three Stanford professors – philosopher Rob Reich, political scientist Jeremy Weinstein and computer scientist Mehran Sahami – who are the authors of the new book System Error: Where Big Tech Went Wrong and How We Can Reboot. Thank you all so much for being here today.

We're going to play a very simple game we call The Last Word, where we ask you to answer difficult questions in a very specific number of words. Rob, we'll start with you. Could you please tell us what this book is all about in exactly ten words?

Rob: [smiles] Alright: [counts on fingers] Reenergizing democratic institutions through the sensible regulation of Big Tech.

Uri: That was fantastic.

Jeremy: Wow!

Uri: Obviously the relationship between Big Tech and the democratic process, and our values as a society, is a very prominent topic on everyone's minds these days, though often with more heat than light. I was wondering if you could tell us about the three perspectives you're bringing to it, and what you hope to achieve with the book.

Jeremy: So let me start by building on Rob's ten-word answer: in this moment, many people around the United States and around the world feel that the effects of technology are washing over them – that it's a wave they have no agency in shaping or influencing. And our view is that we need to pivot that discussion and recognise the profound agency that people have – as technologists who design technology, as users of technology, as citizens in a democratic society – and that ultimately the effects of technology are something we can impact: by ensuring that our values are reflected in technology as it's designed, and by shaping the way that government mitigates the harms of the technology that is all around us.

Mehran: I think part of the message of the book as well is thinking not only about the big picture but also understanding the details of the technology and how they're impacting people's lives. So things like automated decision-making systems that now use AI techniques to make consequential decisions in people's lives; what happens with the future of work as AI scales; issues around privacy, as information about us is gathered online and aggregated; and ultimately something many people are familiar with, the misinformation and disinformation that flows through social networks. Being able to disaggregate those technologies and understand the forces at play creates a greater urgency about why we need to do something about them.

Rob: The spirit of the book comes out of four years of teaching a class together at Stanford – in the belly of the beast of Silicon Valley, as it were – where we tried to reach really talented undergraduates using a technological lens, a policy lens, and a philosophy lens to broaden the conversation.

And as Jeremy described, the book has answers of a certain kind to the dilemmas or problems of Big Tech, but they're not a policy blueprint – “if only Congress would take our answers, things would miraculously get much better”. Rather, it's a way of shaping a new conversation, and a new framework for thinking about the trade-offs that are encoded in the various products that Silicon Valley and Big Tech have brought to the world – and for ensuring that the decisions made in the corporate boardrooms and product development lifecycles of the big tech companies are not imposed upon the rest of us because we haven't exercised our own agency in trying to shape a technological future worth having.

Uri: I have to say that the book was very uncomfortable for me, as a young person who went through a similar university experience and had the feeling that these questions of values didn't come up as much – that we did all feel a little powerless, part of a bigger system that shaped us and was out of our control. Which I think a lot of people feel. And that's something really great about the way you've approached this: you make us aware of how we've been shaped so far, but also tell an empowering story about what we can do, which I really appreciated.

Rob: Let me just add to that, if I can, Uri – I'm a long-time Browser reader and subscriber, so I have some sense, maybe, of the community of people who are likely to be listening. And of course it's important to say that technological and scientific progress have delivered extraordinary benefits to societies and to individuals. The question is not about, as it were, a values conversation where the philosopher or the policymaker shows up and says: stop, we need to slow it all down and make sure we have a broader conversation that effectively brings a halt to technological progress.

To the contrary, the idea is that the interesting aspects of an ethics conversation and a policy conversation are really not about right and wrong, or true and false choices about technology or science, but rather about better and worse social outcomes. So many of the technological advances of the past hundred or two hundred years, when they are brought to market – typically by private companies, after which the market consolidates – exercise an extraordinary effect on society. And it's the task of all of us to harness the enormous benefits and then to try to mitigate some of the harms. That's a task that goes far beyond the decision-making of people in companies alone.

This is why at the end of the day, I think ethics is an energising way of thinking about technology, not “the moral police have shown up to the technologists and told them when to stop.”


Uri: Absolutely. And well, on that note – Jeremy, you are, I believe, a philosopher who has spent time in government. I don't know if that's a rare beast.

Jeremy: Not a philosopher. I'm a political scientist who spent time in government, which is also a relatively rare beast.

Uri: So I was wondering if you could tell us, in exactly five words, what you think are the main challenges in the way that social values get stymied, or challenged, or fail to be implemented through the process of government?

Jeremy: [thinks] Building consensus around shared goals.

Uri: You are all so good at this, I'm absolutely gobsmacked.

Jeremy: Now can I add two sentences beyond that?

Uri: Please do, please do.

Jeremy: So in the book we write about democracy as a technology. Democracy is the technology that our society, and many other societies, have chosen to help us navigate really difficult value trade-offs: as a collective of human beings living together, we can't have everything we want, not everyone can get the outcomes they want, and so we have to make some choices.

And you can think about lots of different ways of making those choices. You could think about those choices being made by a single individual, like a king or the Pope, which was one way that societies used to organise themselves. You could think about leaving those decisions to companies, and that's been a bit of the mode that we've been in with Big Tech. And this book is an argument about the role of our democratic institutions in making those choices. And the reason it's hard to make those choices, and why I chose the words that I did, is that people want different things and they want them very enthusiastically, and they're very unhappy when they don't get the things that they want.

So this process of deliberation, and negotiation, and contestation – that's what politics is all about. And right now we're at a moment of tremendous lack of faith in our democratic institutions, and an inability to bridge the partisan divides in the United States. But that doesn't mean there's some alternative way to accomplish that underlying task; it is the task of our democracy.

Rob: There's a mistake that I perceive technologists making sometimes – and we discuss this in the book – and it's the important thing for any reader to understand if they're trying to figure out what's going on in Big Tech: you don't need to understand all the details of a particular technology. What's helpful to understand, we say, is the optimisation mindset of the technologist – always choosing some objective or goal to optimise around.

And the mistake I think technologists frequently make is that they complain about government because it seems so sub-optimal in delivering outcomes. To my mind, that just fundamentally mistakes what democratic governance is about. As Jeremy said, it's about a fair process for refereeing, in an ongoing way, the contestation of citizens' own choices and preferences. We shouldn't expect the optimal production of something through democratic government, because we don't have unanimous consensus around what the objective is. That's why democracy is an extraordinary technological and institutional arrangement for the always-evolving, ever-present work of updating the regulatory rules of the game for the entire social order.

We shouldn't expect optimising out of government: we should expect, at a minimum, a guardrails approach to avoid the worst possible outcomes, and fairness, in order to give everyone an equal voice and to assign equal status to their interests.

Mehran: And the translation of that into technology is, as Rob mentioned, thinking not just about the technology itself but about the value trade-offs that are encoded in what someone chooses to optimise, and how those things get traded off. To the extent that someone needs to understand the technology, it's not the details but the societal and personal values at stake when a new technology comes into play. How do we trade off, for example, something like privacy versus national security in the devices that we have? Those are the things we really need to think about and deliberate. It requires some understanding of the technology, but more importantly it takes an understanding of values and the deliberation of those values.

Uri: One thing I found really interesting in your book was this angle about mitigating harms – the idea that we want to use government to prevent bad things just as much as to try to do good things. It's a lens that has obviously become more salient in our world as a whole over the last couple of years. Do you have thoughts about that?

Jeremy: Yes, I'll pick up on that to say that on campus, and in the region where we live, regulation is a loaded word, right? Even among our undergraduate computer science majors, when they hear the word regulation, they just assume it's bad – that it's going to get in the way of good things they want to achieve as individuals, as people working in companies, or as technologists.

And we go to great lengths to remind people of the degree to which regulation undergirds everything they're able to achieve in the world. So you start by asking someone: did you drink milk this morning? And when you drank that milk, were you sick to your stomach afterwards – and why do you think you weren't?

Or the clothes that you're wearing – have they caused a skin rash? Where were those clothes made, and why do you think your body is able to wear them without having a reaction? So regulation is basically the set of decisions we make collectively about how to create the future we want to have together: how to live in society, amplify the benefits of new technologies and new products, and also mitigate their harms.

And so we need to break people out of a kind of binary mindset where markets are good and have all the solutions, and government is bad and will slow things down, and recognise that while democratic institutions aren't perfect, they are the vehicle we have for bringing these key value trade-offs out of the secret places where they're made – by the people designing technology, or in boardrooms with the people who finance companies and run them – and squarely into public debate. So we can decide: what is the right balance between the benefits of algorithmic decision-making and our commitment to fairness and due process? How do we want to balance the privacy we care about against the potential benefits of access to data? And how do we want to take advantage of what automation enables, while preserving people's ability to find meaningful work and meet the needs of their families?

Those are societal trade-offs, and trade-offs that can't really be left to companies on their own. If you want to call that regulation, fine, but recognise that in disparaging regulation, you're effectively disparaging the role of democracy and the role of the collective in basically making choices that benefit all of us.

Rob: Another thing to add here, for me, is that personally I'm kind of exhausted by the conventional framework for thinking about Silicon Valley that I've perceived over the past 20 years. First there was an early enthusiasm for the liberatory potential of social media and all of the digital gizmos and tools and gadgets that we have – you know, Silicon Valley would spread freedom and democracy and improve human lives, a huge utopian streak in the work of hackers and technologists – and then, over the past decade, certainly the past five years, the complete opposite: Big Tech is rotting democracy from within, serving us up clickbait, driving us into echo chambers and extremism, and AI is displacing human labour.

It's time to have a different conversation, in which tech is neither the saviour of society nor its destroyer – and to try to harness the extraordinary benefits and to tamp down, through our collective agency, the harms. And one last idea that's essential here: stop focusing on the personality of founders. Whether Mark Zuckerberg is a good or a bad person, or Jack Dorsey is a good or a bad person, is at best a secondary issue. The really key thing is to understand the broader ecosystem in which these frontier technologies emerge and then achieve great scale in society – and then what to do about them when they exert enormous power over us.

Mehran: Yeah, maybe one way to think about it is that technology is often cast as a matter of personal choice, right? If you don't like the policies of Facebook, you should get off Facebook, or #deletefacebook. So that notion of libertarianism – that people just have individual liberty, and they should just make their individual choices, and that's somehow going to solve the problem – that ethos is part of what actually creates the problem.

A simple analogy we sometimes like to use is driving on the roadways: if you were to tell someone, “there are no rules on the road; you can either make the choice to drive and be careful yourself, or just not drive,” you can see the flaw there, right? Because there's value in driving, and telling people they should just be personally responsible for their driving doesn't solve the problem.

What we got instead was a set of regulations that gave us things like lanes and stop signs and traffic lights, and they created a system that made driving safer for everyone. Now, at the same time, you still have your personal choice about how safely you drive, how quickly you want to drive, and whether you want to drive at all. But we got a system that works better for everyone because we got regulation. That's the moment we're at with technology.

Uri: Absolutely. Mehran, I understand you worked in technology before becoming a professor.

Mehran: That's right. I was at Google and a few other companies for a little over a decade.

Rob: The fact that you don't get so much spam, Uri, has a lot to do with Mehran's work.

Uri: Thank you very much. That really is one of the greatest technologies.

Mehran: I hope you don't get a lot of spam. If you do, sorry.

Uri: Google Maps and anti-spam are the two technologies I feel unequivocally positive about, they've just made my life so much better.

With that experience and background, I was wondering if you could tell us how we got to where we are now – and since this of course won't be the last big societal change from new technology that government has to respond to, perhaps also what we can do to change things in the future… in exactly three words?

Mehran: In exactly three words [laughs]: AI changes life.

Uri: Oh, interesting.

Mehran: Well, I think we're at a moment where artificial intelligence has gotten sufficiently powerful, and sufficiently concentrated, that substantial decisions are being made about our lives in a lot of places we may be unaware of. So when you apply for a loan, chances are AI is making the decision as to whether or not you're approved for credit. In our personal lives, people use dating apps, where AI is making decisions about who they should potentially match with. In the criminal justice system, decisions about who gets kept in jail and who gets out on bond are more and more being made by algorithms.

And as life continues to progress, there are going to be more places where it gets automated. So it becomes crucial to understand how those systems are being evaluated; what sort of transparency and due process there is, so that we can understand what they're doing and challenge the decisions of those algorithms; and at the same time, what data is being collected about us and fed into these algorithms.

Also, are they being audited for things like bias that might exist in the data, or for reinforcing historical patterns that we don't actually want to see but that we think are somehow more objective because they're made by a computer? Really, what AI gives us is a mirror to our society. A bunch of historical data is fed into these systems, which then gets turned into models that make future decisions. What that means is that we're codifying the past. And part of codifying the past means holding a mirror up to ourselves and understanding: what have we actually done that we like and don't like? What do we want to change?

And the only way we can do that is by having structures in place that force us to look critically at these algorithms – how they're used, what their impacts are – and even to tease apart the details of the particular predictions they make, so that we can ensure a future that's positive for everyone, rather than just reinforcing the past and concentrating power in the hands of the few people who know how to work with AI.
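To make that concrete, here is a minimal sketch of the kind of audit being described – not code from the book, just an illustration on entirely synthetic data. It fabricates a “historical” lending record in which approvals were tilted toward one group, trains a model on that record, and then checks whether the model reproduces the disparity. Every name and number in it is hypothetical.

```python
# Hypothetical bias-audit sketch (synthetic data throughout): does a model
# trained on biased historical lending decisions codify that bias?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)    # a protected attribute: 0 or 1
income = rng.normal(50, 15, size=n)   # a stand-in for creditworthiness

# Historical approvals depended on income AND, unfairly, on group membership.
approved = income + 10 * group + rng.normal(0, 5, size=n) > 55

# Train on the historical record -- the model learns the past, bias included.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Audit: compare predicted approval rates across groups (demographic parity).
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"parity gap: {abs(rate_1 - rate_0):.2f}")  # a large gap flags encoded bias
```

A large gap isn't a verdict on its own, but it is exactly the kind of signal that the transparency and audit structures described above would force into the open.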

Jeremy: If I can add on top of what Mehran said, in response to your question: the book identifies three key reasons that we're in the mess we're in.

The first is the optimisation mindset of technologists that prioritises one particular value or outcome that you're optimising for and ignores the rest.

The second is the structure by which our technology companies are financed, that privileges and prioritises scale in a way that imposes those values on everyone else before we even understand the consequences of these technologies.

And the third ingredient is the deliberate indifference of our political system to the growth of Big Tech and its increasing concentration of power, up until the present moment. In fact, our politicians paved the way for that concentration of power by creating a regulatory oasis in the 1990s to seize the opportunities of the internet and the information revolution.

So what does that mean for a future beyond the current moment? It means we need to address all three of those things. We need a mindset among technologists that isn't firmly rooted in optimisation, but that actually grapples with the relevant trade-offs – the different values that are, or could be, encoded in technologies – and we need to approach that through the work we do educating technologists, and within companies themselves.

Second, we need a different orientation of companies. Basically, we need checks on the power of the concentrated and dominant players in the tech landscape, and we think that power is going to come in part from the check that workers have on companies: who they partner with, what technology they design, how those technologies are used in the world. Because the competition for talent gives workers in the tech sector extraordinary, unrealised power.

Then the third change is going to come from a government that is no longer asleep at the wheel. We're seeing the very beginnings of that in the moment we're in now, but we're going to need a government that is capable, and adaptable, and flexible, so it can govern technology in democratic ways. That's going to be generations of work – not just solving the problems of the moment, but rebooting the structure of our government so it can navigate technology going forward.

Uri: Fantastic. Well, we've reached The Last Word. I'm just going to ask each of you to tell us in one word what you'd like people to take away from this book.

Rob: In one word?

Uri: In one word.

Jeremy: My word is agency. That we as individuals have agency in our technological future. And it's a question of whether we use that agency.

Mehran: My word is education. I'd say people need to educate themselves about the impact of technology – the impact it's already had on their lives, and what's coming down the pipe, because that's going to be significant – and to be able to make their own value judgments about what they actually want to see in the future. Otherwise they cede control to a group of people who will make those decisions for them.

Rob: So I'll go with the predictable, then – I'll say democracy: the historic role that democracies have played, and the opportunity now before us to re-energise our very dysfunctional democratic institutions to rise to one of the great challenges of our era, steering the technological revolution in a beneficial way in the future.

And I'll add here that this is not just a question of domestic politics in the United States. I view it as a geopolitical consideration, and this comes up at the very end of the book. At the moment, the kind of arrangement or expectation we have is that America innovates and Europe regulates, and we get competition between democratic societies on the technological frontier – while in the meantime an alternative geopolitical power has arisen that fuses the regime and the technology itself: China, of course. And it would be far better if our democratic societies were cooperating on the geopolitical questions rather than competing.

Uri: Absolutely. Well, like I said, I did genuinely come away from the book feeling both a little ashamed at how I had previously given up my agency, but also energised to reclaim my agency. And I thought you all did a really wonderful job of that. Please, can you tell our listeners and viewers where they can find the book?

Mehran: The book's available from HarperCollins; it came out on September 7th. As you mentioned, the title is System Error: Where Big Tech Went Wrong and How We Can Reboot, and you can find it at your favourite local or online bookseller.

Rob: In audiobook, e-book, and hard copy form.

Uri: [laughing] You can get the information through all your senses. We'll obviously include all the links in the descriptions. I really recommend the book to anyone who's interested in these topics – and if you're not interested in these topics, I really recommend becoming interested in these topics.

Rob, Jeremy, Mehran, thank you so much for coming and joining us today.

Rob: Thanks for The Browser.

Jeremy: It's been such a pleasure.

Mehran: Thanks very much. Really enjoyed it.

