Riskgaming

The Orthogonal Bet: What AI Can Learn from Human Cognition

Hello and welcome to the ongoing miniseries The Orthogonal Bet

Hosted by Samuel Arbesman, Complexity Scientist, Author, and Scientist in Residence at Lux Capital

In this episode, Samuel speaks with Alice Albrecht, the founder and CEO of re:collect, a startup in the AI and tools for thought space. Alice, trained in cognitive neuroscience, has had a long career in machine learning and artificial intelligence.

Samuel wanted to talk to Alice because of her extensive experience in AI, machine learning, and cognitive science. She has studied brains, witnessed the hype cycles in AI, and excels at discerning the reality from the noise in the field. Alice shares her wisdom on the nature of artificial intelligence, the current excitement surrounding it, and the related domain of computational tools for thinking. She also provides unique perspectives on artificial intelligence.

Episode Produced by Christopher Gates

Music by George Ko & Suno


Transcript

This is a human-generated transcript; however, it has not been verified for accuracy.

Danny Crichton:
Hey, it's Danny Crichton here. We take a break from our usual Riskgaming programming to bring you another episode from our ongoing miniseries, The Orthogonal Bet. Hosted by Lux's scientist in residence, Samuel Arbesman, The Orthogonal Bet is an exploration of unconventional ideas and delightful patterns that shape our world. Take it away, Sam.

Samuel Arbesman:
Hello, and welcome to The Orthogonal Bet. I'm your host, Samuel Arbesman, complexity scientist, author, and scientist in residence at Lux Capital. In this episode, I speak with Alice Albrecht, the founder and CEO of re:collect, a startup playing in the space of AI and tools for thought. Alice was trained in cognitive neuroscience, and has had a long career in the realm of machine learning and artificial intelligence. I wanted to talk to Alice because of her experience in the fields of AI, machine learning, and cognitive science. She has studied brains, she's seen the hype cycles in AI, and she is great at helping cut through the noise of what is really going on here.
Alice is full of wisdom around the nature of artificial intelligence, and how to think about the current excitement, as well as the related domain of computational tools for thinking, and how to think about advances there too. She is also good for many orthogonal views on artificial intelligence. I am so pleased that I had a chance to speak with Alice about her background, the world of AI, and how she thinks about everything happening right now. Let's dive in.
Hello, Alice Albrecht. Welcome to The Orthogonal Bet.

Alice Albrecht:
Hi.

Samuel Arbesman:
I have many, many questions for you about AI, how you got into it, and all these different things. And maybe the best way to start, is to just share your own personal history of how you got to where you are today, and kind of what you're doing.

Alice Albrecht:
Yeah. Well, first of all, thanks for having me. I always enjoy our conversations, so I'm really excited.

Samuel Arbesman:
Likewise.

Alice Albrecht:
In terms of background, I used to be an academic a long time ago, and so I studied something called cognitive neuroscience, which, for those of you who aren't familiar, is a mix of psychology and computer science and neuroscience. I am really interested in this idea of how humans deal with the amount of information in the world. I was an academic studying that and human attention, and then I left about 10 years ago and went into tech. One of the breaking-off points there, from academia to tech, was that I had done machine learning work back in 2010, and I'd studied humans and how they attend to information. And so all those things combined at a time in tech where it became very useful and valuable to be able to model what's important to an individual and what they are excited about.
Fast-forward to now: I'm the founder of re:collect, three years in, and we're really focused on enhancing human intelligence. So I've remained really committed to this idea of how do we take humans and make them better? How do we help boost their cognition in all these different ways? I'm sure we'll jump into all those pieces.

Samuel Arbesman:
When you switched from academia to the world of tech, you mentioned that machine learning, and kind of the implications of thinking about intelligence and cognition, obviously have become very relevant for the tech world. I assume that wasn't necessarily the reason you kind of... It wasn't like, "Oh, this is an area where I can be valuable. I'm going to jump in." I assume there was another reason you were jumping in. Can you share a little bit more about what prompted the shift into the tech world?

Alice Albrecht:
Yeah, it was a couple of things. One of them was that academia didn't seem like a good fit for me as an individual. And a part of that, which is really prevalent in the tech space, is that I wanted to work on things in a really cross-functional way. I was very interested in lots of the different aspects of how we would understand human cognition, and how we would change that. And so leaving academia and coming into industry felt much less constraining. I felt like there was much more open space there to actually apply it and build things that would affect people. On the more research side, we were doing a lot of observation, a lot of learning about how humans work, but not a lot of making things that would make that work better. There was that piece. Certainly, it was not like, "Oh, there's better machine learning over there."
The other non-obvious piece that I think a lot of people forget, or don't notice, is that there was so much more data, and it was already there. As an academic, having to run experiments and bring people into the lab, where you maybe get 10 or 15 people, it would be really arduous. And with tech at the time, and still, if you get people using a product, you get so much data and information from them. So there were a couple of draws there. I also had a good mentor when I was at Berkeley doing my postdoc, who was helpful in helping me see the transition point from one space to another.

Samuel Arbesman:
Do you also have a feeling the tech world is either more inherently interdisciplinary, or just kind of more disciplinarily agnostic? I imagine in academia, you kind of have to be a little bit more siloed?

Alice Albrecht:
So academia is siloed in that you're focusing on a particular topic, and you keep whittling that down. And so even if I'd wanted to work on something really different within my given field of cognitive neuroscience, which was fairly interdisciplinary, it would've been a career killer. I built up this repertoire of papers that I was publishing on this specific idea. Moving to tech, it is much broader, but within an organization, I think there's still a lot of siloing. The benefit for me, though, was that breaking down those silos was much easier. I could basically prove that I knew about all these different areas, and go talk to the product people, talk to the user research people, while being in the engineering org, and really have a much wider purview within one organization. And then I was in fintech for a little while. I was at Yahoo for a bit. I've done lots of applied research work.
So I was able to, even if there were those silos, look at them and say, "Those seem kind of fake." Within academia, they feel more real, another department's not going to welcome me with open arms to go start setting up shop in their space probably, unless it's changed.

Samuel Arbesman:
Given all the different experiences in the world of tech, do you think there's a natural size, or style of organization, that is most amenable to interdisciplinary thinking, or more flexibility? And the context for this is, I like thinking a lot about non-traditional research organizations, and creating places where people can do lots of different things. And in the tech world, do you feel like there's a certain size, or style of organization, that is most amenable to, I don't know, the most holistic kind of research-y tech creation, making things that you wanted to do?

Alice Albrecht:
Yeah. And I've done all of them now. I think different sizes, different kinds, I really span the gamut here. My answer right now, would be either fairly small, so I think under 20 people. And in that case, the thing you're working on is probably a little bit more malleable. You have a lot of impact. You can span a lot of different spaces. Or really big. So all of the labs, there were the Yahoo Labs, Google Labs, Microsoft has a great research group over there. There are spaces in those huge organizations. They want to see the innovation, they want to see what the latest thing is. That's usually, I think, going to come from these kinds of groups. I do feel like we're still missing this medium-sized one, like a NASA [inaudible 00:06:57], or something like that, where the specific goal is to do some of this cross-disciplinary, or cross-idea work with lots of different people.

Samuel Arbesman:
Okay. And then is that one of the reasons behind starting your own company, is being able to say, "Okay, I've realized this is the size that's kind of most natural for, or at least one of the natural sizes for this kind of thinking, and therefore I'm going to build one for myself?"

Alice Albrecht:
I mean, it was that. It was also that in starting something on your own, I get to write the script, and say, "Yeah, this is the kind of work we're going to do. This is the way we're going to engage with it." We're a fairly small team, but if it got bigger, I think we could still hold that. But past that 20 or 30 people, I think it becomes harder. I do like the small. I like the small, and I've done small within big, and that's also okay.

Samuel Arbesman:
You've been kind of in the machine learning and AI world well before a lot of people who are now coming to it, and certainly well before the kind of transformer revolution. We discussed there's almost a hipster AI vibe of getting involved in it before it was cool. But you also came from a very different domain. You weren't coming from computer science or traditional artificial intelligence, you were coming from cognitive neuroscience. What has been your experience? And you can take this in whatever direction you want, but what has been your experience both coming from a different sort of field into the world of machine learning and AI, as well as having seen it go through these changes?

Alice Albrecht:
There are a couple of pieces in there. To start with coming from a different field: it is a really different perspective. A lot of the people I've worked with over the years have had that happen; there are a lot of academic expats who end up doing something adjacent in AI and all of this. Having not come from computer science, the way that I start to solve all of the problems, even algorithmically, in the design, starts with humans as a model. And I think that's such a difference from thinking about computers sort of as adjacent to human minds, but not really caring that that's true. You sort of cherry-pick, right? And not to dump on computer scientists, but I think historically, the field is really interesting, because if you look at some of the early folks, like Turing, or I.J. Good who was working with him, they were really focused on replicating human intelligence in a machine, but it was really about getting the machine there.
And I think people coming from disciplines like mine bend more toward, "Oh, this is such an interesting way to help understand humans, but also to help build better humans." I've gotten to work, though, with computer scientists; there are lots of physicists who have been into that, statisticians, people from different fields who have come to this space over the years. And then before it was cool, yeah, the first time I encountered the machine learning world was as a cognitive neuroscientist, and we were doing fMRI studies back then, so we were scanning people's brains and running experiments. And we realized we could use something called support vector machines, and we could decode what they saw. So you could take the pattern of activity in your brain, and we could say, "Oh, we know what happened perceptually then." And that was huge. And Jack Gallant got very excited about this idea around decoding dreams, and it was an exciting thing, but it was a tool. It wasn't necessarily the path to anything in particular.
And what I've seen over my career now is that the computer science space was aiming towards something. They were trying to create machines that had some idea of memory and attention, and ways of handling information, and cognitive neuroscientists were trying to understand the brain using some of these methods. And I think we're now getting to a point where more of this is around a thing that simulates, or approximates, a human, especially with the transformer. So that big change, where we pushed into language, past vision and deep learning models in that sense, I think made it suddenly more human. With those two threads, being in it for a long time and coming from a different space, I've been beating this drum forever now, but we really should be borrowing a lot more from the models of human cognition than we do, in a real way. I've said this for years and years and years now. When I first learned about attention in neural networks, I thought, this is not right, this is completely wrong. What is this?
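
(For readers who haven't seen the fMRI "decoding" work Alice describes: the idea is to train a classifier, such as a support vector machine, to predict the stimulus category from the pattern of brain activity. The sketch below is a minimal, hypothetical illustration with synthetic data and made-up voxel counts, not any particular study's analysis.)

```python
# Minimal sketch of fMRI "decoding": predict what a person saw from a pattern
# of voxel activations using a linear support vector machine. All data here is
# synthetic; real studies use preprocessed voxel responses per trial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500

# 0 = "face" trial, 1 = "house" trial (hypothetical categories)
stimulus = rng.integers(0, 2, n_trials)

# Synthetic activity: a small stimulus-dependent signal buried in noise
signal = np.outer(stimulus, rng.normal(0, 1, n_voxels)) * 0.5
activity = signal + rng.normal(0, 1, (n_trials, n_voxels))

# Cross-validated accuracy above chance means the stimulus category can be
# "read out" from the activity pattern.
decoder = SVC(kernel="linear")
scores = cross_val_score(decoder, activity, stimulus, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```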

Samuel Arbesman:
Right. You're using the word attention, but it means something very, very different than the way humans attend to things.

Alice Albrecht:
Yeah. And the reason humans have attention is to be able to cope with all this information in a way that's computationally efficient. So if we're talking about building huge models, and we want them to be computationally efficient, it makes a lot more sense that we would use the human definition of attention. So there are a few places where the machine learning space and machine learning research never quite grasped onto those ideas. I'm still trying, right now.
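
(For contrast, the "attention" in neural networks that Alice is pushing back on is the transformer's scaled dot-product attention: a softmax-weighted mixing of value vectors based on query-key similarity, rather than a mechanism for selectively filtering what gets processed at all. A minimal NumPy sketch of that standard formulation, with made-up dimensions:)

```python
# Scaled dot-product attention as used in transformers: every position's output
# is a weighted average of all value vectors, with weights given by a softmax
# over query-key similarity. Dimensions here are arbitrary.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, model_dim)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # blend of value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # four tokens, eight-dimensional embeddings
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8): each token becomes a mixture of all tokens
```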

Samuel Arbesman:
What are the kinds of ideas, whether it's how humans learn, or how we filter all the information that we're receiving, are those the kinds of things that we should be integrating into how we think about AI? We learn through lots of information, but presumably, we're not learning through millions and millions of examples in order to figure out a single concept. We are learning in a very different sort of way. What do you think are the paradigms we should be importing from human cognition?

Alice Albrecht:
So learning and memory are certainly ones. Memory is an interesting one. It's one I've worked on a bit, and at my current company, and other people have been focused on it. The memory piece of it, though, if we take that from humans, is really just a necessary component of decision-making, creative thinking, predicting things. We have to have a way to access information, otherwise we would just, every second, be making the world new, which wouldn't make any sense.
So I think taking memory in a slightly different direction, not just as, "Okay, well we have a storage unit for this stuff." Memory plays really heavily with attention, and it plays with your goals. So what are people's goals in the moment? What's important to them? And taking those in as a way to say, if we wanted to create machine learning models that were more human, or more like how humans think, we would need to have something that had goals, and would have to constrain through that. And so I think there's a lot to borrow from memory, attention, and learning, but in a slightly different way than just how we would make those physical substrates. Memory in a computer is a concept that makes sense, but it's very different from what a human would remember.

Samuel Arbesman:
So when you hear people saying scaling is the key to getting to artificial general intelligence, how do you respond to those kinds of people?

Alice Albrecht:
I don't think it's going to happen. I'm pretty good at predicting these things. I think scaling compute got us here, which is great. I think that has given us a really solid foundation. I think we will hit a plateau with that. Having something embodied, having something that actually goes through the world and communicates with other beings in the world, and explores that world, and has something in it that wants to explore that, those things are going to be really, really important. AGI is something I'm not that excited about. I think it's fine. I think that we have been drawn to this idea for a long time, we, meaning people that have lived and died before I came along. Because we want something like us, or something smarter than us, or sort of this next phase of evolution, and starting with humans feels too messy from an engineering perspective.
And so I think for AGI, the idea is that we would create something that is almost an AI. Just to step back for a second, the way I define these things is: there's artificial narrow intelligence, ANI, then we'll get to AGI, which is more flexible across domains, across pieces. Getting to actual AI would mean that it would have to meet human intelligence across lots of different domains, I think including things like understanding people's emotions, which is a hard one. And so AGI, on the step to AI, is fine. It's not that exciting to me. And I think it means we've given up on the idea that humans, who are already at that AI level, are a good starting point for things. I think we should start with the humans. We already meet an AI per that definition, and we could build on top of that one. Lots of people disagree. And the scale and compute piece has been about pushing, and pushing, and pushing for this AGI, pushing for the Turing Test, although I don't think Turing even wanted this, or would've been all that excited. I think he wanted something slightly different.

Samuel Arbesman:
Do you think we've already hit some of these plateaus?

Alice Albrecht:
Yes and no. I think what we're doing now feels, to me, like what we did maybe 10 years ago. We've got a new way of working with this stuff now, with these foundation models. Generally, I think we'll hit a plateau at some point around the language capabilities; I think we already have. They're fine in terms of syntax, and they feel like they can communicate.
And now what I'm seeing happen is we're basically taking those, any reasoning capability they have, any kind of decent knowledge, and saying, "Okay, do these menial human tasks." It is almost going back to ANI, this narrow version of it, or the robotic process automation stuff, the RPA work, where we say, "Okay, now we'll show you a bunch of stuff humans can do, and you do that, and replicate that." And so I think the plateau is that it's not getting smarter necessarily, in that sense. For me, it's not getting to some sort of superintelligence, some farther-out thing. I think to get past where we're at, we would need a different model architecture, possibly different data sources altogether, and a clear picture of what we're trying to get to, like what is the actual goal?

Samuel Arbesman:
Do you think... One of the reasons why for many people it feels like some of these things are closer than ever, is because they use language so well. Humans are so linguistically-oriented, we're almost being fooled by language manipulation. Do you think that's a factor?

Alice Albrecht:
I think we, evolutionarily, cannot handle a thing that speaks intelligently and is not human. We had no context for that, nor should we really. So if my cat started speaking, I would be blown away. And I don't think that's just because I've learned cats can't speak. It's really because language is fundamental to humans, our language rather. And so it's a little bit of fooling. The Clever Hans example is what I think of as a ruse, where we would be tricked by this thing. In fact, I think there is language inside there, and so the trick is really on our end, where we can't really fundamentally understand that it's not human-adjacent. And it's using words that make sense to us, so it seems like a slightly smarter human than somebody who couldn't come up with them.

Samuel Arbesman:
Right. So it's not as much being fooled, and more like we're just giving it more agency or power than it might merit because of the sophisticated use of language, which it is a very real and powerful use of language.

Alice Albrecht:
And we also love invisible things we can't explain, like religion, or science, or anything. All of these fields are really about, we don't know. I think there's also something in that that's really intoxicating, that we can't see it working. No one can say, "Oh, I know exactly why it came out with this thing." So there's almost like a mystical piece of it, I think, that humans are also attaching themselves to.

Samuel Arbesman:
Right. Because when you think about the first chatbot, Eliza, looking at the code, you're like, "Okay, there's nothing sophisticated going on under the hood." These you can't interrogate in quite the same way. And as a result, it's easier to place onto them your hopes and dreams and fears, and whatever else, and they might seem more sophisticated than they actually are.

Alice Albrecht:
Yep, I think that's right. And people are doing some work now, to be able to see a little bit more on these embedding spaces, which is interesting, what kind of concepts are coming out of this. But those are just, as you know, going to be what we fed into it too. It's not going to be novel. There's no actual magic behind this, which is hard because I think people want to believe it.

Samuel Arbesman:
Is that more the reason, in your opinion, behind these concerns around existential risk, rather than anything more real being there? Is that your take?

Alice Albrecht:
I get the question around the fear piece a lot when I'm out in the world, speaking places. And with not understanding it, with it feeling magical, I think a worry about something that doesn't have a moral system is ingrained in that. When I talk to people, I say I'm not worried about the models going awry; that feels unfounded to me. I am worried about humans over-indexing on how capable these are, and hooking them up to our power grid, and saying, "Let's outsource decision-making to this thing because it watched humans make those choices."
The foundation models really are predicting the next word; they're not predicting the way other predictive machine learning models we have do. And so especially when people think, "Oh, we could take this one model, hook it into something, and it'll do all these amazing things." I don't want an LLM controlling the power grid whatsoever. I think it's a really bad idea. If that happens and it goes down, it's not a malicious actor, it's not the Terminator. It's not this sort of post-apocalyptic future we see, where the machines decide we're paper clips and get rid of us. Yeah, I think the veneration, I think the fear, I think a lot of that is wrapped up in not understanding the technology. But also it's very dangerous if you do put it in the hands of humans who don't understand the technology, and sort of apply it to things they shouldn't be applying it to.
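
(A toy illustration of the "predicting the next word" point: the bigram model below simply counts which word follows which in a tiny made-up corpus and then repeatedly emits the most frequent successor. Foundation models condition on far more context with billions of parameters, but the training objective is next-token prediction in the same spirit, which is the distinction Alice is drawing.)

```python
# Toy next-word predictor: a bigram model built from word-pair counts.
from collections import Counter, defaultdict

corpus = ("the grid stays up the grid goes down "
          "the model predicts the next word").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1          # count how often `nxt` follows `word`

word = "the"
generated = [word]
for _ in range(6):
    word = following[word].most_common(1)[0][0]  # greedy: most frequent successor
    generated.append(word)

print(" ".join(generated))  # e.g. "the grid stays up the grid stays"
```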

Samuel Arbesman:
Going back to the human aspect, I feel like what you were arguing for is less about building these super complex things that control systems on their own, and more about human-machine partnership, or these computational tools for thought. And the truth is there's a deep history for that as well, which has run alongside, but is somewhat distinct from, the AI world. What are kind of the features of the tools for thought, or human-machine partnership, world that you feel are actually the fruitful stuff that should be brought into these discussions around AI and machine learning?

Alice Albrecht:
I think we can think of AI and machine learning, like you're saying, as a way to augment humans and augment their thinking. And I call this the cyborg view, where we have the human first, and then we have machines attached to it. That world really does intersect a lot with other tools that humans use that are really not connected to the human. So if we look at the history of tools for thinking, long ago we had people trying to record information, trying to save it, trying to give it to other people. We have the printing press, we've got encyclopedias, we've got this idea that we can use tools to expand our memory and what we actually know. And then we have things like hypertext, and then Tim Berners-Lee comes along and adds a protocol to that, and we get the web. We've got all of these things that people may not think of as tools for thought, the internet being one of those, that are really important in this journey.
And not just because all of the data from the internet then gets trained into these machine learning models, and that's, I think, a piece of how those took off. But if we think about the human-first approach, what we can take from the thinking tools approach is: what are the ways that we've been trying and failing to augment humans, without machine learning, without AI? And if you look at modern computing, many people along this road have tried to say, "Okay, what if we made a machine that could do things for us?" Or, a calculator being an example of this, "What if we made the machine fill in something that we can't actually do very well as humans?" I'm looking at my desktop now, right? I'm looking at a computer. A lot of what we've built in that space is trying to take physical environments, make them digital first, and then make them do something that we can't do easily in the physical space.
And if we take AI, and we bring that into the fold here, then what we have is a new kind of calculator in a sense. You can think of it that way. It's giving us this thing, and the fullness of that hasn't quite landed yet, I think, but we've got a thing that can take vast amounts of information and make predictions over it. We can't keep all that information in our heads. We couldn't even put it all on our desktop and look at it, and say, "Okay, I got the gist. I can summarize this whole corpus of information I'm seeing." And so the AI piece of it, to me, is sort of another interesting tool in that toolkit. But I honestly think we took the wrong path at some point on this road, and it was such an enticing path, and I can see, looking back, okay, it makes sense that this person came up with this idea, and met another person.
But I think we missed an opportunity, at least when we started to bring things into any kind of digital space, of rethinking how that information was going to be represented, how it's going to be stored, how you're going to interact with it. And I'm worried we're making a similar error now, if we're talking about AI augmenting humans, not just running off and being an AGI, by not thinking broadly enough around what are the constraints that we had and don't have anymore. So for instance, when we created GUIs, and word processors, and things, we had physical paper, we had typewriters, and then we didn't have those anymore because we could digitize it. But we could have done a lot more with the way we represented text at that point. We could have taken it out of a linear page format, and we didn't. And now we've got something where everybody can plug in OpenAI, or Claude, whatever your preferred model is, and have it summarize something. But it's really standard, it's really banal, it's general, for sure.

Samuel Arbesman:
For the wrong definition of general, right?

Alice Albrecht:
Right, but goal completed in that sense. But we were sort of like, "Okay, well now we've got, again, all of this information that was in the physical world. We've replicated it digitally in kind of a similar way, and then we put a summarize button on top of it that everybody has." And I feel like if we just unwound that a little bit more, went back a ways, and said, "Okay, what if that wasn't true? What did we already give up and what are we giving up now," we would be in a better place in, let's say, five years from now.

Samuel Arbesman:
If you could paint a picture of, if it had kind of gone the right way, or even if people just tried something a little bit differently, what is that vision, like that human-machine partnership allowing us to navigate large amounts of information very intuitively, what does it look like?

Alice Albrecht:
To me, it's a lot more spatial in a sense. When we think about your modern desktop, we have a file system that's replicating a filing cabinet that you had in your office or your home. I think if we had been able to not just navigate that information hierarchically, but in a spatial way, that would've made a lot more sense from the get-go. If you transported me back to, what was it, let's say 1980, when a lot of this stuff was shifting, I think it should have gotten weirder first before we settled on this, like really explored that. The retrieval part was always the issue. So if I have a really interesting way of representing all this stuff spatially, then I have to have a way to navigate that space. It can't just be like a crazy person's desk with papers everywhere, and you're like, "Good luck."
Because we know those people. I definitely have seen them, where you're like, "I don't know how you even get by." Having things be more spatial, having things be changeable: as soon as we stopped shipping floppy disks, there was no reason that software needed to be the same for everybody. It doesn't make any sense to me, actually, now, in retrospect. So all of your applications should have been adaptable to you to some degree; they shouldn't have been, "This is how it works for you. It's very rigid." Whoever made this thing is very opinionated. It's somebody giving you a Trapper Keeper with all of the things labeled already, saying, "Good luck, you have to use this now." And like, no.

Samuel Arbesman:
I appreciate the mention of Trapper Keepers. So anyway, continue.

Alice Albrecht:
I loved those, right? Those are so good. So I think it's sort of like, okay, once we were actually sending updates to software through the internet, who cares? It's a problem of software development at that point, around wanting code that is reliable and doesn't break all the time. But we could have given it a lot more flexibility then. And I really feel like we started to, if you remember being able to put skins on things, and kind of change a little bit around the way it worked. And then that died off because people were like, "I don't want to put all this effort into customizing this." You still see people with their terminal that's got total customization of all the colors and everything, set up the way they want to see it. But more on the interaction side, I think we had some power there that we left on the table.
I wonder if a piece of that is the desire to create something, as a tool for thinking, as a machine that we're working with for our knowledge work, that works for everybody blanketly, to get the most market share. If I need everybody to like this pair of shoes, I've got to make a pretty standard pair of shoes, because everyone has different tastes in shoes, and different needs: you have construction workers, you've got runway models. But if somebody was like, "We need the shoe, everybody's got to have the shoe," it's like a loafer or something, with a small heel, and that doesn't work for every situation. And so how do we create something that is going to be in every single home?
And I get the push for that, but I think it created some unnecessary constraints. And I think we're also doing that again with AI, where these models are starting to consolidate; the big large language model providers are consolidating a bit. There is some daylight between them, but I'm with everybody else, shifting, where someone's like, you know, Claude this week is way better than the OpenAI one last week. And the offering isn't actually that different in those cases.

Samuel Arbesman:
There is almost this financial need for scaling up, and kind of making a one-size-fits-all. Do you think that, historically then, this kind of narrowing of the possibility space, in tools for thought and in AI, is due to the investment incentives and the need to scale very quickly?

Alice Albrecht:
I think it's partially that. I think it's partially a side effect of any smallish community knowing each other. We're all connected by the internet now too. And so everyone's ideas start to kind of smoosh together, and you lose the big variations, I think, in that. The financial incentive to create something really different, that some people might like but not everybody, is not really there. But the way we're doing it now, I think that these smaller communities tend to cluster around these things.
I mean, I didn't know what tools for thinking was, honestly, until I started my company. I'd never heard the phrase before. And so, when was this, five years ago or so, I fell backwards into this. I was like, "Oh, fascinating. There's a whole group of people thinking about something that I hadn't named," because I thought about it as human thinking, I guess. And then I realized, "Oh, there are a couple of themes, people do latch onto those." I also think, going back to the cross-disciplinary piece, or coming from different fields, that might change now that coding has become a little bit easier. In software development, the barrier, and this is the biggest shift down that I've seen, is that you really just need to understand the basics, not a particular language, but really, just the basics.

Samuel Arbesman:
Right, like the fundamentals. And there's been this massive democratization for this, for software development.

Alice Albrecht:
Right.

Samuel Arbesman:
Yeah.

Alice Albrecht:
So then I think about what happens, and we saw this with machine learning, where I had expertise really early on, and Caffe was the framework, and a lot of the stuff wasn't built. Over the years, that bar has lowered, and the software development bar lowering now, I think, gives the opportunity for a lot more people from different backgrounds to start to create things digitally that they couldn't have done before. You can get more artists in this game. You can get people that are anthropologists. I don't know. Really, let's broaden it and see, then, what comes out of it.

Samuel Arbesman:
Right. And also building the tools that each fit their own community, as opposed to having to find the one-size-fits-all. And going back to what you were saying, that there was kind of this narrowing of the possibilities in 1980, or whenever it was. I love seeing the path dependence of how technological evolution occurs, and then looking at old magazines and understanding technological history. Because sometimes there's a narrowing of the possibility space faster than maybe there needed to be. But sometimes there are these paths not taken that we just kind of forget about. And I wonder if, for some of these kinds of things, it's worthwhile digging into the archives and finding old advertisements, like, "Oh, this is an interesting idea that someone tried, and it's worth revisiting."
Because I also feel that in the tech world, there's just an unreasonable amount of historical ignorance about technology itself, sometimes proudly so, which is not good. But sometimes, people come by it honestly, they just don't know the history there. And there's so much, and so much that even I don't know. But yeah, maybe that's one key, is just actually looking and seeing, "Okay, what were people doing? What were people trying," and seeing if maybe with new machine learning techniques, or whatever, some of these ideas that have been left by the wayside, are actually worth revisiting.

Alice Albrecht:
I think that's absolutely right. I like looking at the history too, and seeing, "Okay, how did we get here? Who did what, and for what field? And how did those interact?" It's fascinating. And imagining who those people were, and how those decisions were made, it feels so much more human if you go through the history of it, and look at, "Oh, they just knew each other." They were thrown together in some way, and, "Oh, okay, so now we get this coming out of it." Going back to sort of the almost being proud of the ignorance of how we got here, people use this term "first principles thinking" a lot, and I think they really use it incorrectly. And someone recently was like, "What that means is you know nothing. You didn't read a book." And I'm like, "Oh, I love that." It's true. There's this cult around, "Oh, if we just start from scratch, and we don't have anything influencing our thinking, we're going to come up with something totally new." And by and large, you're probably going to converge on the same easier, lower-hanging-fruit ideas after a bit.
And you'll lack the historical context for, "Okay, yes, this was tried, and this is kind of why it didn't work." So I think blending those approaches. And the other thing I'll say is reading broadly, and I think we've talked about this before, but that, to me, is one of the best ways to think differently. There's tons of science fiction; it's such a great place for these ideas, because somebody was unconstrained when they thought of these things. They didn't care about, "Okay, how am I going to sell all these Apple machines to all these people?" So those people, I think, are really visionary in a sense. I think looking at old academic work, or pseudo-academic work, that wasn't in this field whatsoever, where somebody was trying something for some other purpose, is a great space for that. But if you bring more of that in, it has to make contact with the constraints today also, which is hard.

Samuel Arbesman:
What are your favorite examples of science fiction in this space, that people could use as touchstones, or things to think about, or even paths that one should not ever take? And there are lots of reasons why science fiction can be helpful here.

Alice Albrecht:
Yeah, I think some of the easiest examples come from Neuromancer, right, where we've got all of these hybrid human-machine interfaces. William Gibson's fine. I don't love that book for lots of reasons. Some of Ursula Le Guin's work, for me, gets you to another planet, another space, another society. The ones where it's more that the society has completely changed have been helpful in my thinking.
There are the classics that tell the technological future, where we will have floating cars, or we'll have neural implants, or we'll have all of these things. For me, some of the science fiction that does help to broaden horizons is more around this: okay, the society is actually different, and what if we started from that space instead? If we didn't have these societal constraints, what would we need technologically? Because I think the two play off each other. We have the society we have because we have the technology we have. We have the internet, and so now we have a global economy, huzzah. There are lots of things I think that can happen, and taking those pieces of inspiration, I think, is really helpful along the way.

Samuel Arbesman:
And this goes back to what you're saying of this human-centric approach, figuring out what do we want humans to be doing? How do we think about the best version of ourselves? And then from there, think about how to use technology in a way that kind of aids us to do those things, as opposed to take these technologies as givens, and then reorient our societies around them. Yeah, that feels like the wrong way to go.

Alice Albrecht:
And another example that I like to give is that H.G. Wells wrote this book, World Brain, and he wasn't asking for technology, really. He was asking for all of people's knowledge to come together to solve world problems. But if you look at that, there's a big problem he's trying to solve, and we have technology that could help solve it now; the internet and connecting and encyclopedias, all of this is in service of that. His ideas were a little more nuanced in terms of the ethical obligation of somebody who has expertise to participate in something and share that knowledge, and that's like the open-source community pushing science to be more open. I think we can look to some of these too, and see what were the problems they were trying to address globally, and what is the technology we have now sitting around that might be used to solve those problems too.

Samuel Arbesman:
Right. Yeah, how can we center the human aspect, figure out the problems, and then say, "Okay, we have all these technologies that were not available decades ago, how do we make our world better?" That actually might be a perfect way to, a hopeful way to end the discussion about artificial intelligence, and all these different things. Yeah. Thank you so much, Alice. This is amazing. I really appreciate it.

Alice Albrecht:
This was really fun. Thank you.