Lea Steinacker on Ada and ada


Baiqu Gonkar: Welcome to The Browser. Today, I’m with Lea Steinacker, who is an award-winning journalist, researcher and entrepreneur. She’s also the co-founder and COO of ada, which is a learning platform for corporate training, looking specifically at the future of work. Did I get that right?

Lea Steinacker: Yes, absolutely.

Baiqu Gonkar: Lea and I go way back -- I’ve known her since I was 15 years old, and it’s really surreal to be interviewing her now. I’ve been following her journey, which is incredible. She’s just such a… how do I describe her? A sunshine person: when you’re in her presence, you just feel like you’ve got to smile.

Since we left school you’ve obviously done a huge amount, including, apparently, just finishing a PhD. So can you tell everyone a bit about your journey? Specifically, I’m really interested in the inflection points and the different crossroads that took you from Germany to Wales, to the US, and now back to Germany again.

Lea Steinacker: Yes, okay. Well, I’ll do a quick round of inflection points -- I like that focus. So I was born and grew up in Germany until I was 15. And actually, before we met in Wales, I spent a year in Australia.

And I think that was an eye-opening experience for me of how people -- in this case, me in particular -- could reinvent themselves at the age of 15. People can really have almost different personalities, but certainly different options, when they discover that a new place can be called home. And I definitely had that in Australia for the first time; I really dived into the language and loved the Aussie accent and the beautiful continent that it was.

And then -- talking about languages and accents and people and cultures -- I had the great pleasure of spending two years at the United World College of the Atlantic, or AC as you and I know it. So that was something that really brought out in me this portfolio, this bouquet of options of who I wanted to be, of what I wanted to do with my life. And I think it really also underlined that I had an interest in international understanding and diplomacy more broadly. But specifically, through a few courses and encounters in those two years, I got really interested in the issue of human rights and social justice.

And so from the United World College, I applied to some US colleges and ended up going to Princeton for four years, an experience that allowed me to explore different languages. I took some Arabic, I took some Swahili, and I spent a semester abroad in Cairo to practice my Arabic. I ended up choosing a major in international affairs, which at Princeton meant that you had to take politics and history and ethics and all of these different disciplines, which, again, I really liked. You can see -- and this has been the same in my PhD research -- I’m a big fan of interdisciplinarity.

And basically, I spent my summers in East Africa to practice my Swahili and to spend some time in Kenya and Tanzania, and really got interested in the options and methods and practices for dealing with conflict and social dynamics in really fraught and maybe protracted situations. So after Princeton, I got a grant to spend an entire year researching, in different settings, how to deal with what I consider a really difficult and thorny human conflict -- in this case, a human rights violation even: sexual violence.

The next year, I wanted to see -- not just from a research perspective, but from a practical perspective -- if things could be done, if this issue is actually something that we can address, and if yes, in what ways. And I ended up at an NGO in Eastern Congo, worked with them for a year, and got really interested in the interplay of human measures and projects with technological inventions and interventions.

While I was in Eastern Congo, there were a few projects that used various technological methods, some really simple: we’re talking radio technology to train local journalists to report on human rights abuses, or GIS mapping -- literally navigation and mapping systems -- to understand how different rebel movements were affecting communities.

But I got really interested in the technology behind these, and so I decided to go back to university for a master’s program and really look at that intersection of technology, social justice and journalism. I did two years at the Kennedy School in Boston. And, again, I realized that -- for me at least -- it really helps to have exposure to different disciplines, to end up in the place where I am now, at the intersection of technology, society and journalism. I really feel I have grown into that throughout all of these years.

And while still in America, working on these technological issues, I connected with a journalist from Germany and started working with her at Wirtschaftswoche -- I would almost say it’s a German version of The Economist or Business Week, the German Business Week. I worked there for a number of years looking at how to cover technology: what do technology, algorithms and artificial intelligence mean for society, for markets, for the various sectors? What do they do to jobs? And out of that came two big things that I’m still working on, that I would say occupy most of my work at the moment.

So number one, work-wise, we actually ended up founding our own company. We came up with the idea based on all of these conversations we were having with CEOs of large companies, tech experts, and futures researchers -- or futurists, as they like to be called -- in various disciplines. We realized that all of the experts who were starting to consider what would happen in the next five or ten years were talking about upskilling their employees, about how much the workforce would be changing. And my colleague Miriam and I realized that, basically, we needed to report more on these issues. But going a step further, we would love to contribute to upskilling, to education basically.

And we also thought -- kind of self-confidently -- that so far in this realm of the digital revolution a lot has happened in the US and a lot has happened in Asia, so a lot of progress is coming from those two corners of the world, and we believe that Europe also has a role to play. And so we came up with the idea of ada. It’s named after Ada Lovelace, the wonderful British programmer who lived 100 years before Alan Turing -- whom I think many people know because of the Turing Test for artificial intelligence. Ada Lovelace, a century earlier, thought up, in summary, one of the first algorithms to be programmed into a machine. So we call it ada.

And the ada learning company is what we’re now using as the vehicle to train, at the moment, about 600 to 900 employees every year, from about 30 companies all over Europe. The German government is on board, we have large telecommunications companies and retailers, many different sectors, and they each send around 30 people every year, who then participate in three things.

Number one, they do some online learning with us. Number two, they participate in live events. We have a large conference called Morals and Machines, where we bring together international people to discuss the ethics of technology. And we have an Ada Lovelace Festival every year.

And then the third thing is that they participate in very practical projects with each other. They do a lot of networking with each other, and they work together inter-organizationally, too. And that altogether is the ada fellowship. So that’s what I’m working on at the moment.

And the other thing that came out of the last few years was my interest in researching all of this further, of course, and so I started a PhD and just finished it -- actually just defended it -- looking at the impact of AI. So you see, this is a long-winded way of saying I started with this interest in just going abroad, seeing a different place and trying it on. And then, funnily enough, after 11 years abroad I ended up back here in Germany, but with a very, very refined sense of myself, I think.

Baiqu Gonkar: Thank you for taking us through that. A couple of things really popped up for me. One is just how funny it is that life takes you down these winding ways, and I think only when you look at it retrospectively do you see how all the pieces fit together.

And the other thing that’s really interesting for me is how you talk about trying things on, trying on identities. Identity is such a funny thing, and I think a lot of people struggle with it, especially when they change jobs or change environments. They’re kind of asking: who am I now? What does this change for me?

And I find it so endearing that from such a young age you were like, “Oh, I’m going to go into this with an experimental mindset.” And I really liked this idea of having these difficult or jarring moments that make you crystallize something, or find out something about yourself, right? Because it’s almost a filtration system where you then have to make decisions, and you kind of have to ask yourself, “Why am I making these decisions? Where does that come from?”

And the other point that I wanted to talk about -- and maybe this is just me projecting my own interests onto you -- is that there also feels to be a thread of comradeship with women. I mean, from your studies of sexual violence in conflict regions, to calling your company ada -- funnily enough, I’m drinking from a cup about women who changed the world, and I have Ada right here, you see.

Lea Steinacker: Love that. Ada’s on the mug, nice.

Baiqu Gonkar: She’s on there, I love it.

Lea Steinacker: Did you notice this before or did you just look at this and realize she was on it?

Baiqu Gonkar: When you were talking, I was thinking, “Ada must be on this cup.” And she is. Obviously, since your company is called ada, I did a little bit of research into Ada before we spoke.

From everything that I see, anyway, your relationship with your co-founder, Miriam, also seems to be just such a wonderful collaboration. So am I right in saying that that’s been also kind of the focus in your journey?

Lea Steinacker: Absolutely. I think when you’re interested in how humans work and how social dynamics work, you cannot get around the fact that there have been structural, infrastructural differences for men and women, or anyone who falls in between on that spectrum. And that really fascinated me.

And it’s not the only -ism that I got really interested in. I’ve definitely had my own evolution on other issues of inclusion and exclusion. Through the nature of my parents’ work, I have a lot to do with people with disabilities. I ended up having a lot of friends who are working on issues of racial inequality. So all of these structural layers and dimensions really do fascinate me.

And to stick to the issue of women, and empowering women and lifting them up, in certain senses, to their potential -- which I think they could definitely have unleashed without my help, but structural barriers at points have not really helped us, certainly in the distant past. I think Ada Lovelace is a great example. She was working with a famous British mathematician called Charles Babbage. And Charles Babbage, of course, was the one who, for most of the time, got most of the credit for the machine they were working on, the Analytical Engine.

And only in retrospect did people realize that it was Ada Lovelace who came up with these brilliant ideas, in the notes she wrote about a paper that he had written. I mean, I’m really shortening the story here, so I hope no Babbage experts will jump on me. But basically, she did not get a lot of credit at the time; it was Charles.

If anyone has the time to look into her notes, they are fascinating and hilarious. I read them for my PhD research. She goes into this cynical, high-level intellectual and philosophical musing about Babbage’s work and what she thinks about it, then counters it with her own ideas, basically making it better. Sure, he had the idea, but she is really improving the work, and she is the one who ends up having the idea that proved impactful -- if not, in a way, coming up with the idea of artificial intelligence.

So that is all to say that people like that, women like that, inspire me to make sure that we’re listening to those voices, that we’re making visible everyone who, not because of their own wrongdoing but because of structural barriers, is usually not at the table. I was an LGBT peer educator during my time at university, making sure -- in this case not just focusing on women -- that the LGBTQ+ community was included. So yes, that is definitely a focus of mine.

At the same time, I’m also happy to report that in the ada fellowship, we really strive for diversity in all kinds of ways. We do have 50:50 in terms of the gender ratio, but we also have a big age range and a big hierarchical range. I think it’s really important, and it’s just more interesting -- more representative of what’s going on in the world, more representative of the truth, if there is such a thing -- if you include everyone whose voice is available. And I don’t think we have always done that.

Baiqu Gonkar: And that goes to what you were saying about your interest in interhuman relationships, right? And I think especially when you’ve been afforded certain kinds of privileges through education, or just by the way you were born, it becomes harder and harder to broaden that circle of active engagement with people who are very different from you.

With your interest in AI, how do you think that comes into play? Because I’m not an expert -- you’ve just done a PhD, so I’m sure you can give me a bit of schooling on this. But how does it fit into your worldview and your passion for elevating voices, for making sure that we have this intermingling community of people who are very different and have different perspectives? How do you do that with technology, especially given all the biases we’ve already built into the existing platforms that we have at the moment?

Lea Steinacker: Well, that’s exactly the right word to focus on at the beginning of this question: the biases that we already bring into it, right? We, as humans, have a bunch of cognitive biases that are actually important for navigating the world -- otherwise we wouldn’t be able to categorize anything, and we would have to make from scratch every split-second decision about something that we’ve maybe done hundreds, if not thousands, of times before.

So biases in and of themselves, I would say, are just a psychological phenomenon. There’s nothing bad about them as such, but in practice, and certainly within our existing structures, they lead to conditions in the real world that are definitely discriminatory towards some.

And we then translate that into technology -- by, for example, designing these technologies, designing devices, with male characteristics in mind: say, the measurements on helmets, glasses, VR glasses, wearables, all of these devices where it is important to have an average beta user. And a lot of these things, in the past and up till now, have often -- or really most of the time -- been designed with men in mind.

And then when we take this further, when we look not just at devices and hardware but at software, I think there are at least two issues. Number one is the data that is being fed into it. We usually think of data as really binary: this is either right or wrong, this is just the fact, we collected the data, look at the data sheet, right? But we forget that the data were collected in a biased way in the first place.

So for example, if you look at policing data in America, don’t forget that the people who were being arrested more were being arrested based on a human bias of racially profiling non-white people more than others. So if we look at this spreadsheet and say, “Look, the data shows you non-white people end up in jail more,” then that is a biased data sheet for sure. Or you could call it a representation of a really biased world.

So if, for example, we now use this data to feed an algorithm that should ostensibly help with predictive policing, then the model you are training -- the neural network or whatever model you’re using -- will learn based on this, I’m going to say, faulty, or at least biased, data.

And the other aspect is not just in the data but in the way you design these models. For example, you have to make decisions about what you are looking for, what variables you are putting in, how you’re weighting them, what is most important when thinking about policing. So some people -- and I’m saying this because there was a real example of this a few years ago -- would include a zip code, because they know that, according to the data, there were neighborhoods that were more difficult.

But if you’re going to focus on zip codes, where we already have discriminatory structures in terms of who gets housing where and when, then you have taken a variable that brings with it so much societal meaning, so much biased, racist history, so much classist history, that even in the way you are designing the system you are already making big decisions with societal impact -- decisions you might not even realize you’re making, because you’re thinking, “But I just chose a zip code, right?”
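To make that zip-code point concrete, here is a minimal synthetic sketch -- a hypothetical illustration, not code from ada or from the interview, with all numbers made up. Two neighborhoods have identical underlying offence rates, but one was historically over-policed, so its offences were far more likely to be recorded as arrests. A model trained on those arrest records, with zip code as its only feature, duly rates that neighborhood’s residents as higher risk:

```python
# Hypothetical sketch: a zip-code feature smuggling historical bias
# into a "neutral" predictive model. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two neighborhoods (zip A = 0, zip B = 1) with IDENTICAL offence rates.
zip_code = rng.integers(0, 2, size=n)
offence = rng.random(n) < 0.05          # same 5% base rate everywhere

# Zip B was historically over-policed, so offences there were far more
# likely to become recorded arrests -- the biased label we train on.
detection = np.where(zip_code == 1, 0.9, 0.3)
arrested = offence & (rng.random(n) < detection)

# "Neutral" model: predict arrest risk from the zip-code feature alone.
X = zip_code.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

# The model rates zip B residents roughly 3x riskier, although true
# behavior is identical by construction: the bias lives in the labels.
print(model.predict_proba([[0], [1]])[:, 1])   # approx [0.015, 0.045]
```

The model never sees race, income or actual behavior; the zip code alone carries the bias baked into how the labels were collected.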

So it’s the data, it’s the design of the software products, and it’s also who uses them. I think there is a lot of great technology out there that is technologically fascinating, that represents advances in science and is really exciting, but some of the use cases are horrific. I would never want those out in the real world. Or maybe I would want the technology used by, let’s say, democratic governments, but would I want, I don’t know, anonymous hackers to have it? Maybe not. Or maybe it’s the other way around: maybe I would never want a government to have that technology, and I’d much rather have, let’s say, a benevolent hacker have it.

The point is, there are all of these dimensions of a technology that I think we often overlook. And that is why it is connected to making sure everyone is represented and included. Because I really think that for a long time -- you can look at the history of technology and impact assessments, going back to the beginning of the 20th century -- we considered technology to be almost this inevitable thing. Like, we build it and it couldn’t have been any other way. This glass: this is obviously a glass. It couldn’t have been any other way.

What if we have people with very small hands? This is not a glass for children, right? This was very clearly designed for an adult. A child couldn’t even hold this glass. So this is not something that just inevitably came to be. Somebody thought about this, somebody designed it with an average user in hand. Sorry, in mind. Well, in hands, but that would be weird. What would the user be doing in their hands?

So basically, I think we always need to remember what Melvin Kranzberg, the technology historian, said: “Technology is neither good nor bad; nor is it neutral.” It is never neutral. And it’s neither good nor bad, because you don’t need to label a technology itself as either of those two. But because technologies are being used, they are never neutral. They are loaded with normative forces. So there’s definitely a strong connection to an interest in social justice, in representation, and in debiasing certain systems.

Baiqu Gonkar: And what about the role of the government? And what about the role of corporations? I mean, you guys are training corporations to upskill their workers to prepare for the future of work. And you also work with the German government, and they send employees to your training platform.

What about the relationship between these three? Because sometimes, at least from what I observe, it feels they’re still very siloed. There’s this perception, I think, from technologists, of government being really dumb and clunky and far behind. Then you’ve got governments who are, I don’t know, questioning Mark Zuckerberg when he was on trial, or whatever, and who seem to have very little understanding of what it is they’re actually trying to get at. And then you’ve got corporations, on the other hand, who have their own drivers and their own incentives. How do those things interplay with each other? And not to put this burden on your shoulders, but through the work that you’re doing, what kind of stuff are you thinking about when it comes to this?

Lea Steinacker: Yes. Many, many other people have done this work before me, and I’ve been trying to do this work -- as you said, not to put this on my shoulders, or even to take any credit for it. But let me start with something you just mentioned: the trial of Mark Zuckerberg -- or not trial, but the hearings, rather. Sorry, wishful thinking here.

Baiqu Gonkar: I know. I think that was just my own thing just coming out.

Lea Steinacker: At the first hearings, a few years ago, there was a moment that, for a lot of people who have been in technology for a while, really crystallized the gap in knowledge. That was when a senator asked Mark Zuckerberg, “Well, how do you make money then?” And Mark Zuckerberg looked at him and just said, “We run ads, Senator.”

The fact that, after existing for over a decade, Facebook was able to remain so obscure and also, quite frankly, so untransparent about the details of its business model, at least to the US government. Or you could say it was the fact that a senator, or maybe the Senate on average, did not know after a decade -- because you could also say they could have researched and requested this information before. But the fact that there was a lack of knowledge, an ignorance of the basic business model -- the very basic business model of one of the most impactful companies of our age -- is kind of staggering.

I’m not even sure that moment shocked us, because we had seen this lack of knowledge, and also almost a lack of interest, before. I mean, Facebook has only been facing these hearings in the last few years, when researchers and experts were ringing the alarm bells ten years ago.

So you have a great point in that, because of the speed of technology today and because of who gets to develop it and who is in it, there is a huge lag among regulators and legislators -- those who make the decisions that could rein these companies in -- and also a lack of knowledge, often on the part of users. I mean, the Cambridge Analytica scandal, again, was something that researchers and experts had pointed out before, but it really brought home the fact that users did not really know what Facebook had been doing with their data.

And I think what’s interesting is that, at least in the last few years, there have been improvements. Look at the recent hearings with the whistleblower, Frances Haugen. The questions being asked of her, and then of the Facebook representative, were very different: much more detailed and at least much more knowledgeable. I think governments -- and I don’t mean just the US government -- have realized that they really do need this skill to understand and contextualize technological issues.

So, bringing this back to the work of ada, I actually think that’s exactly why the German government was so quick and so certain in participating. They’ve been there from the beginning. They send people every year, every cohort; there are many applications from within the government to participate in this program. The Swiss government -- the Defense Ministry of Switzerland -- is joining us this December, and we’re really hoping that more European governments will join us, and even governments internationally, quite frankly. Because, as you said, we are bringing them together with industry, with companies, but also with other institutions.

We have a hospital participating, a smart hospital. We have the Goethe Institute, a cultural institute. Because, as you said, these sectors have been kind of siloed, we really believe in the exchange, the interorganizational communication, around all of this.

And I cannot emphasize enough what kinds of societal impacts these technologies have. If you ask me how important it is that government knows about this, think about the example of facial recognition technology. My friend Joy Buolamwini, a really amazing researcher at MIT, basically pointed out a few years ago -- with her co-authors Timnit Gebru and Deborah Raji, on two different papers -- the discrimination against non-white people by facial recognition technology. She was the first one to really show this empirically, right?

And in the years since Joy’s study, companies have had to respond to this, have had to improve their systems. And as I’m sure you’ve seen -- and many of The Browser’s readers and listeners and viewers have seen, of course -- facial recognition regulation has been appearing in all kinds of places. This April, the EU was the first to put forward a draft AI regulation which, if it came into law, would be the first really strict and far-reaching AI regulation worldwide. And it includes biometric systems like facial recognition technology.

So what I’m trying to say is, this again brings together the question of who’s included -- pointing out that people who are non-white don’t get recognized as accurately by systems that we’re currently putting in place in all of these public spaces -- and connects it to the legislators who for a long time did not know what this was about, had not heard of facial recognition, did not know about discriminatory systems, right? And now we’re hopefully getting to a place where we’re negotiating and balancing out, more and more, where we stand on these technological issues with huge societal impact.

Baiqu Gonkar:  A comprehensive overview. Thanks, Lea.

Bringing it back a little bit more to you: you’ve interviewed everyone from Angela Merkel to Chimamanda Adichie. What do you think defines success? What do you think constitutes being successful?

I ask this because I was speaking to Jodi Ettenberg, who is this wonderful human being -- she was a lawyer, then a successful travel and food writer, and then she essentially became bedbound after a badly done lumbar puncture. And she was saying: I think when people look at me, they’ll think, “Oh, she’s living a very small life, and she’s not very successful.” We talked about that a little bit, so I want to ask your thoughts on success.

Lea Steinacker: I almost want to say, I’m not quite sure you can judge people’s success from the outside unless you think of success as a measure of career achievement. And I just do not believe that at all. So for me, I guess it’s a very personal answer, or there has to be a very personal answer.

For me, success is something I almost equate with contentment, with happiness. I think I will, at some point, feel successful when I have managed to align what I do, who I spend my time with and what I’m engaged with, with what I actually stand for and what makes me who I am. If I find that sort of alignment -- I think success, to me, is alignment.

Because, in a way, sure, you could have career markers and say, “Well, that looks like a very successful life.” I wonder if that is something to strive for if the person is deeply depressed on the inside, or actually completely distant from their soul.

So basically, for me very personally, success means continuously exploring all those multitudes. Also, by the way -- and I think this is the pinnacle of success -- exploring who you are and what makes you you when success falls away.

So there’s a line at the end of a poem that I really like. The poem is called The Invitation, and the line is, “What sustains you from the inside when all else falls away.” Actually, I’m realizing I don’t think it’s the very last line -- it’s a very long poem. The last line is, “Do you like the company you keep in the empty moments?”

And both of those really fit this. For me, the sense of success comes when I realize I don’t feel completely misaligned. When there’s stillness, in the empty moments, when all else falls away: have I actually lived enough, lived deeply enough, intensely enough? Loved intensely enough -- people, communities, my work, what I do, how I have spent my time? Do I experience it all intensely enough to actually know what sustains me from the inside when all else falls away?

I think that, for most people, that might be the definition of happiness. But success to me, if you want to consider it from an outside perspective, is basically aligning what you do on the outside, or for the outside world, as best as possible with what’s going on on the inside.

You could also think about impact. Success, I think, can be something we would consider very small but that has actually deeply impacted people -- I think that is hugely successful in some sense as well. And I often find that when we align our inside and outside practices, we are incredibly impactful. Self-efficacy is also a feeling, I think, that makes one feel successful: very effective in what we do. Whether it’s inspiration, because people watch you from afar, or it’s the people you encounter, I think we are incredibly impactful when we feel self-efficacious and know that we really stand behind what we’re doing. I think that would be it.

Baiqu Gonkar: That’s an incredible answer. Did you just come up with this on the spot? But, obviously, you’ve thought about this. And I agree. When you take away external structures of validation and external votes of confidence in who you are -- when that’s gone -- what parts are left, and what parts can you be happy with, with the company that you keep, which at the end of the day is just yourself, right?

Lea Steinacker: Yes, absolutely.

Baiqu Gonkar: I really love that.

Lea Steinacker: I mean… can I just share one example? I was thinking, since you mentioned Chimamanda Ngozi Adichie, whom Miriam and I recently interviewed alongside Angela Merkel. She went through a really quite horrific experience last year, during the pandemic, losing both of her parents. And she talked about that not just in the really stunning book she just brought out, Notes on Grief, but also on stage. And forget the success of her books: it was the most touching, most gripping exploration of grief I have ever listened to. Because she was, I think, really very real with the audience -- in the book, not just on our stage.

But I think it was really that what was going on inside her was incredibly aligned with what she was sharing -- or even just working on. She could have been writing it only for herself and I would still have found it incredibly real, aligned, impactful. This is just one example. She doesn’t need to win all these awards for this particular book -- which she might, actually. Just the energy that flowed into the book: wow, that is successful. That is incredible. That, I think, will touch so many people -- when somebody has found something like she has in writing, in being that authentic in this moment right now. I think that is impactful.

Sorry, I just really had to think about the other person on stage, who has been the German Chancellor for 16 years. Angela Merkel is the first one to be incredibly modest. We were trying to get her even to comment on her own legacy, and she just wouldn’t. She basically says, “I have been a civil servant for 16 years. That’s it. I’m not the leader of the free world.” She does not appreciate those terms. And I really think, again, that shows you that success is not about having all of these checkmarks on anything.

Baiqu Gonkar: No, that’s incredible. And I’ve always found Merkel’s quiet dignity so powerful; I think that’s what instills this authority in her. And just as with Chimamanda on grief, I think that sense of alignment requires a lot of courage. It’s often easier to rely on external sources for a sense of self than to be really reflective inside and try to see whether the two match up. And I think when you meet people who have that sense of alignment -- at least for myself -- it’s so evident. You can kind of feel their gravitas, feel their center of gravity. It’s so strong that it kind of pulls you in, doesn’t it? And you just feel…

Lea Steinacker: Absolutely.

Baiqu Gonkar: You just feel this kind of stillness, which I think is incredibly attractive both from a work perspective, but also from a personal perspective.

Lea Steinacker: Yes. I like the way you put it, dignity and gravitas.

Baiqu Gonkar: So I guess my final question Lea is, seeing as we are The Browser, can you think of a book that you would like to recommend to everyone?

Lea Steinacker: Absolutely. I read this in the past year and found it a captivating nonfiction read, which, as we both know, is not always the case with nonfiction. It is If Then by Jill Lepore, who’s a Harvard historian.

Jill Lepore has actually written a number of gripping nonfiction books. And this is an incredible story that, even having researched technology for a number of years now, I hadn’t heard about. It’s basically the history of this corporation, Simulmatics, which -- I believe it was the 1950s -- tried to invent this machine called the People Machine. They thought that, back then, they could model everybody’s preferences and behaviors -- everyone in the United States they could model and map, for polling purposes, basically for political gain.

First of all, Lepore writes this in a very captivating way, and the characters come alive. I hope there’ll be a movie about this. But it’s also interesting how history repeats itself: Simulmatics really sounds like Cambridge Analytica, in a way. And it’s fascinating how we humans have, for a very long time, tried to model everything that we do, tried to map and calculate and, even better, please, predict what we’re going to do. Which, with my psychological interest, again, I just find almost amusing, because we really cannot deal with uncertainty very well.

I think humans are really terrified of holding ambiguity, of not knowing. Maybe somebody on Election Day will decide differently -- we can’t really stand that. I don’t know what my neighbor thinks -- I don’t like it. And for a long, long time, we’ve tried to model absolutely everything. And we continue to try to do that, with many scientific advances and wonderful progress. But that particular part -- trying to predict the whole electorate, what they like and what they’ll do, and that happening 70 years ago -- I found that really fascinating. So: Jill Lepore, If Then.

Baiqu Gonkar: Thanks. And finally, actually, I’m going to ask you to give an ask. So we touched upon this before we started the interview, but do you have an ask for anyone who’s watching this interview?

Lea Steinacker: Well, actually, I would love to hear from anyone who is also interested in the impact of technologies, since I spent my PhD research on a framework that I call CODE Capital. People have thought for many decades now about social capital and human capital -- not me, I haven’t thought about it for decades; I will, but I’m not there yet. But many other people have thought and researched and worked on social capital, human capital, intellectual capital, all these kinds of capital. And we’ve come to appreciate capital as something that exists not just in the financial, monetary and economic sense but in a societal sense: something with value, something that gives leverage over others. I really think we need to start talking about this phenomenon of CODE Capital.

For me, it’s an acronym, C-O-D-E. It has to do with how we conceive of technology; how we operationalize it, what we put into it; how we design the software and how we use the data -- that’s the D; and then the environment, how technologies interact with everyone who is using them and designing them, all the people parts, right? So: CODE.

And I guess my ask would be: does anyone find this concept, calling it CODE Capital, helpful in summarizing something that we’ve talked about for a number of years now -- the power of algorithms, the power of AI, algocracy as a version of algorithmic democracy, all of these terms that people have been trying to use to capture the power of these systems? Not positive, not negative, nor neutral, right -- don’t forget Kranzberg.

So my question would be: does that resonate? Does the framework of CODE Capital resonate? And everything you and I have talked about today in terms of the societal aspects of all of these technological applications -- does it make sense to people to really start considering more the social aspects of how we use technology? Because I’m definitely an optimist about using technology for the better, for the good. That’s why we’re trying to upskill people to understand it better. So I would love to hear from people about that.

Baiqu Gonkar: I love that and I always love a good acronym. So thanks, Lea.

Lea Steinacker: Me too. Also alliterations -- I like acronyms and alliterations.

Baiqu Gonkar: Alliterations are very satisfying when they’re good.

Lea Steinacker: Yes.

Baiqu Gonkar: Ah. The sound of saying it just feels so good on the tongue.

Lea Steinacker: I think alliteration is half of why Harry Potter was such a success.

Baiqu Gonkar: A success. OK, thanks so much for your time, Lea. It was so good talking to you.

Lea Steinacker: Thank you.

@Léa Steinacker

ada education

UWC colleges

Reading recommendations:

Ada Lovelace’s notes on the Analytical Engine created the first computer program

Racial bias in facial recognition systems

The Invitation

If Then: How One Data Company Invented the Future by Jill Lepore
