Seriously Curious: From Algorithms to Ethics | AI’s Evolution (Part 1)

Podcast Episode 5

Host Chris Rockwell

Guest Katie Johnson

The Seriously Curious podcast covers the most important topics in UX/CX strategy and design for business results. Hosted by Chris Rockwell and the team at Lextant, this podcast brings actionable insights from leading industry experts and the latest customer research. Each month, Seriously Curious unlocks human behavior, uncovers common design challenges and explores advances in new technology. Watch the latest episode below, or listen on Spotify, Apple Podcasts, and YouTube.

In this episode of Seriously Curious, Chris Rockwell, founder and president of Lextant, sits down with Katie Johnson, global head of consumer insights at Panasonic Well, to explore the rapidly evolving landscape of artificial intelligence. 

Together, they unpack the implications of AI from a human experience perspective, discussing how algorithms shape decision-making, the ethical dilemmas AI presents, and what the future holds for consumers and businesses alike. This thought-provoking conversation kicks off a two-part series on AI’s profound impact on the world of experience design.

CHRIS: Welcome back to Seriously Curious, a podcast about all things business strategy and design. We’re covering how to understand human behavior and desires to help businesses succeed, delivering value through better experiences. My name’s Chris Rockwell, I’m founder and president of experience firm Lextant. Today I’m super excited to be joined by Katie Johnson, she’s global head of consumer insights for Panasonic Well. Katie and her team are developing innovative AI-powered solutions for wellness, and I’m super excited to learn more about her background and her history at Google, where she and her team were instrumental in the development of Bard, the AI LLM engine, which I think was pretty pivotal in, sort of, our technology innovation for UX. But not only is Katie one of the more brilliant researchers I know, she’s also a mom of a growing family, and from what I understand, an amazing lead singer and rock-and-roller. Welcome, Katie.

KATIE: Hi, Chris. Nice to be with you today.

CHRIS: Today we’re going to be talking about the future of AI-powered experiences. There are a lot of aspects of AI that the UX community is really impacted by these days, including what does it mean for research and what does it mean for our roles in the design process? But today, I really want to talk about what AI means for experience delivery. What does it mean to have AI-powered experiences, and how do we use the inherent benefits of AI to sort of deliver better value to humans and better business differentiation to companies? So first, Katie, let’s talk a little bit about you and your background specifically. How did you come to this place in your career? Where did it all start?

KATIE: Yeah, so it started, you know, now quite a while ago. I grew up always wanting to be an astronaut. And so I went to college to be an astronaut. That was the plan. I chose to go to Washington University in Saint Louis because they had a small satellite program, which was an incredible opportunity to both sing a cappella and build small satellites, which is perhaps the high point of nerd-dom — the intersection of those two things. So I got to do that for four years, including flying in zero gravity to test propulsion systems, which was just such a blast. And at that point, I was even more convinced, like, this is the path for me. So I actually went to Georgia Tech to do my doctorate, under an NSF fellowship that I won while I was a senior at Wash U. And when I got to Georgia Tech, I was there, you know, very laser focused on getting that doctorate, because other than going into the military at that time, that was the fastest track to getting into the astronaut corps, basically, which is, of course, still incredibly difficult to do. And while I was there, I started taking classes in what was called cognitive engineering, and human factors. And basically I realized very quickly that I had been building small satellites, and I had been taking classes in orbital mechanics and flight dynamics. And yet that was boring. It was boring! Which is kind of a wild thing to say. And yet, you know, all of a sudden I was in classes studying pilots, piloting the, you know, the Boeing cockpit, and talking about how the layout of buttons in the cockpit was affecting their, you know, cognitive load, their ability to fly, their situational awareness. And, before I knew it, I was basically cheating on my aerospace degree with a psych degree. And the aerospace school approached me and said, hey, you’re supposed to take oral qualifying exams soon in these very tough aerospace classes that you are basically ditching to sing rock and roll and, you know, take psych classes. And I was like, maybe it’s time for me to think about my life. And so I ended up leaving Georgia Tech with just a master’s in cognitive engineering and got this incredible job, that I can’t believe I got, being a human factors engineer for a big company called Emerson Electric, working for this incredible man, Steve Little, who literally met me in an airport and let me become his ward of the state, effectively, for two years. So I spent two years traveling around the world with Steve, teaching human-centered design, building products, you know, ranging from big industrial pipe cutters all the way to thermostats — really all over the map. From there, one of the Emerson business units needed a head of innovation, which — I was 26 years old and had no business having that job, and yet they gave me that job. So I learned how to do innovation. I built them a 15-year roadmap, in oil and gas. And I learned so much about the fact that innovation is not just this light bulb moment that people have, but it’s actually a, you know, thoughtful layering of perspectives that you’re doing deliberately so that you’re finding meaningful whitespace for a business to innovate in. I thought that was really cool. It was probably a pivotal moment for me in my career. But I missed being with users.
And so at that point, I actually took a 40% pay cut and became a junior associate in basically the only UX firm that would have me remotely from Boulder in San Francisco, and spent two years doing UX research, you know, all over the world, which was really awesome. Again, I was younger and didn’t have a family, so I was able to take on those assignments, going international, doing international research on the ground, which was a blast. From there, I spent two years in blockchain, running programs for a blockchain incubator out of New York, and then most recently went to Google, and spent three years in the Assistant family doing 0 to 1 projects there, leading to the culmination at Google, which was Bard. And then, yeah, from there, basically Panasonic Well arrived and said, you’ve done everything in your career that you need to be effective in the role that we’re looking for. And on top of that, we will let you do it fractionally, which, as the mom of two kids, is just an outstanding thing to be offered: to not take a step back in my career, to do the most challenging work of my life, and to feel whole. And so I left Google actually almost exactly this time last year, and have spent the last year at Panasonic Well running research for innovation’s sake for them.

CHRIS: Pretty amazing trajectory. And to take on leadership roles and innovation roles, I mean, you think about AI and blockchain, I mean, it doesn’t get much more cutting edge than that. I remember when we first met each other at Emerson, one of the things that was pretty profound in my career was the fact that Emerson was this huge conglomerate making like, fluids and engines and, you know, all this kind of stuff. And at the very top, the leadership was saying, human-centered design is the future. It’s the future for Emerson. And that, like, was amazing for me. And I think it was really a validation for what we were doing and I think what you and Steve were doing. So it was amazing to see that sort of launchpad, and to see where Emerson has gone since, you know, has been impressive.

KATIE: Yeah, I think for me, you know, when people look at my resume, I think they think that I have, you know, issues committing to anything, which may be true. But I think when you zoom out, first of all, I think everything in life is a question of altitude, right? If you’re looking at my resume and my trajectory with the right altitude, you’re going to see that I have spent my entire career at the intersection of deep human need and emerging technology. I’ve worked on everything from, you know, cockpit design to VR/AR to blockchain to, you know, AI and LLMs, which has just been beautiful. And being that voice in the room arguing for, why are we building this, why are we using this technology, why do we care, who is going to benefit from this — has been just such a gift, because I have spent time in engineering land, so I can speak engineering. You know, I think having gotten two degrees, one of which is from Georgia Tech, is a legitimizing factor, right? And that helps a lot in being able to go into rooms and have legitimacy and be able to argue for the thing I think matters most, which is: let’s build something really, really cool technologically that actually does something for people that they need.

CHRIS: Yeah, yeah. Being able to speak the language of your collaborators makes you much more effective. I agree with you on that. Tell us a little bit about the Google Bard development. I think, you know, I see that as a real seminal moment in innovation. What was it like to be, you know, in that crazy environment and all the issues surrounding it? Tell us about that.

KATIE: I think, again, it kind of comes back to this philosophy that it may seem like ChatGPT and Bard and all of these things were a light bulb moment, but they weren’t, right? I mean, we were working on that technology, you know, the LLM technology specifically, but also technology that helped make our LLMs better, long before that. And so, you know, even just working in Assistant was a lot of exposure to how people talk and how people engage with different, you know, types of things like, you know, non-sentient beings like a Siri or Alexa. And so I think it’s easy to think, okay, you were working on Bard and that, you know, you just arrived, but that’s not exactly what happened. I had been working in that 0 to 1 space in Assistant for almost three years when I became the lead UX researcher on Bard. And I was only the lead UX researcher on Bard from about December of 2022 to about May of 2023. And now there’s a much more significant team of really accomplished folks that are running all that stuff at Google. I think for me, what was interesting is, you know, it’s easy to get in a headspace when you’re at a big corporation of thinking, okay, we can’t launch anything quickly. Everything has to be perfect, everything has to be globally accessible, and all these things that just hinder the ability to go quickly. And what I learned in the Bard experience was that if you have motivation and you have — especially — executive buy-in, you can do anything in a very short period of time. I mean, we went from nothing to shipped with that product in three months. Now, of course, the underlying technology had been there for a long time. You know, LaMDA had even been in the news the previous year. But yeah, to go from we didn’t have a product to having a product in that kind of a time frame, at a company the size and scale of Google, coming out of a global pandemic, is just mind-blowing to me. And so I think that’s a lesson I’ll probably take with me in my career, of just, you know, big companies get a lot of red tape just because they want to be safe and cautious as they’re going to market, right? They can’t afford to have the brand risk. They can’t afford to have these things. And when there’s motivation, you can cut through that red tape with a buzz saw if that’s necessary. So I thought that was a really cool experience. And on top of that, just, you know, I think you can get this idea in your head of working at a big company that people are just, you know, resting and vesting, or they’re cutting checks and, you know, just doing the job. The team that put that product together, the team that I was a part of, that went from zero to launched in that kind of timeframe, gave it everything they had. I mean, cellularly, they were committed to getting it out. And so being able to work with folks that were that ferociously dedicated to their craft — not just to the product, but to their craft — on the team, was such a privilege. So yeah, those were probably the two lessons that I learned. And on top of that, you know, I guess just to not leave this unsaid — it could not have happened without standing on the shoulders of the giants and the products that were built before that product, right? Like there were teams that were literally building things that became the primitives of Bard for months and years before, and it could not have been possible without leveraging all of their work as well.
So, yeah, I think understanding again, more lessons around innovation, about the building blocks, having those things in your wheelhouse, being able to repurpose people as you need to to really move quickly, and then, yeah, being comfortable with falling down because obviously that happened on the trajectory between December and March. There were a couple of missteps, not just for us, but for everybody, right? And so that — being able to get back up and dust off and try again — was really, really critical.

CHRIS: So, you mentioned, within Google, it looked like an evolution, probably, of the things that you had already been doing, even though from the outside that may have looked pretty revolutionary, you know, the things that you were coming up with. You had good support from upper management, which I think is huge. What do you think drove the dedication and passion, or the drive, of the team? Like, what was really motivating to them? Was it the nature of the work? Was it the kind of impact it could have? What was really driving that?

KATIE: I think the pride of being able to launch something and do something basically at startup speed, but global corporation scale. I mean, you don’t get to do that that many times in your career. I think that that was an incredible, incredible thing. And so, you know, making changes that were, you know, they weren’t small changes, and they were going out in product the next day, right? I mean, if you’re used to making changes to something like Search, right, you have to go through a billion reviews and go all the way up to executive management, because if you screw something up on Search, that’s the end, right? And of course, if you screw something up on Bard, it could be bad too. But it was a very different feeling. It was a different feeling. And I think to be able to be operating like that and have the global impact, but the startup speed, is just a rare experience. And frankly, all of my roles at Google — so I had three different real roles in the time that I was at Google, in the three years I was there, three different products — all of my product work felt like that. Bard simply was the most accelerated of the three, and the only one that ended up getting to market, like completely launched. There were other experiences that I, you know, got to field in different ways, but with that one, we got to market in, you know, like I said, I think three or four months. It was bonkers. So I think being able to do that at Google is an incredible privilege for people that aren’t used to getting to do that at a corporation, or they’re used to doing it at a startup, but it impacts like three people, right?

CHRIS: I think it’s sometimes easy for professionals to miss the impact that they are having, you know, and so it’s easy to get discouraged, I think, sometimes. So it’s great to hear amazing accelerated success stories. And like you said, it wasn’t perfect, but it’s amazing to see what a team can accomplish when they really, you know, pull together. You know, this is amazing technology, the AI that’s coming out, and a lot of people are saying it’s revolutionary, transformative. Is this another, like, Gutenberg printing press moment that we’re having? I mean, what’s your take on AI and its impact on humans?

KATIE: Yeah, so I think that’s a great question. And I think the answer is yes, provided we remember what the printing press was all about, right? Like, the printing press made it possible for humans to share ideas further, wider, more impactfully than ever before, right, because you could write something down. You weren’t relying on this, you know, oral history telephone. You know, access, proximity, all these things, right? I think in this moment in human history, it can be very easy to misunderstand what AI can do for us. And if we come back to this, if we remind ourselves of the fact that this technology is meant to enable us to be more human, not to replace us, I think that that is the core. And when you talk about the printing press, that’s what I think about. It didn’t, you know — I guess you could argue like, yes, there were people that were handwriting books and it put those people out of business, right —  but what it really enabled was the ability to legitimately, repeatedly share information broadly so that we could evolve faster as a species, which is a cool thing to think about. And so I love that analogy because that’s kind of what I keep coming back to when I hear people get super excited about AI. 

CHRIS: One of the things I always say when I’m teaching is that work practice, or life, you know, human existence, is sort of fundamentally evolutionary, even though these sort of revolutionary technologies can enter the scene, like the printing press, where all of a sudden we can massively communicate in a consistent way, whereas we really couldn’t before at that speed. But, you know, the need to communicate, the need for consistent communication, still existed, right? I mean, it existed before and after the Gutenberg printing press. So that’s one of the things I always think about when we think about AI as a technology. I think more about what it can do. What are the benefits? What are the value propositions around AI technology that can move the human condition? And what are the characteristics of AI as a technology that we didn’t have before, that will help us make better experiences? It’s kind of funny, clients come to me and say, hey, we’re going to do AI, which just makes me laugh. But I try to change the way they think about it and say, you know, what are the value propositions that we can really drive using AI technology as a fuel for experiences? And then, what are the characteristics of AI experiences that we can use now as a design tool that we couldn’t use before? So when I think about this idea of smart experiences, I think about things like trust or confidence, you know, these are the things that humans need from experiences — authenticity, empathy, empowerment. And then there are the attributes of what AI can give us, things like anticipation of need, radical personalization, sensing, knowledge, sense-making, and the like, that maybe we couldn’t do before. Tell me a little bit about, when you think about AI kind of as an engine to fuel experiences, what’s it really all about?

KATIE: Yeah, so I think one of the beautiful things about my career is that, having been a part of hype cycles in the past, I think my awareness of hype cycle mania is high. And so, I mean, I’m saying this as literally someone who worked on a project that was blockchain, but in space, right, which is the highest point that you can possibly reach of hype — blockchain and space. And so, in this case, I think, yeah, remembering what it does and what we do is really important. And then I also reflect back — I mean, this is an outdated term, but it is the phrase — I reflect back on human factors, back when we were talking about flying airplanes and that kind of thing, and when automation was first coming into the cockpit, and there was this list that was called the MABA-MABA list, right — the men are better at, machines are better at (I’m sure it would be the PABA-MABA list today) — but this is the same thing, right? It is your list that you’re making now, which is that it’s important to understand that if we put automation in the cockpit and, you know, pitot tubes freeze on the outside and your automation tells you that you are going down and you’re pulling up, and you don’t realize that you’re stalling the plane, then you’re not doing the human part of flying, right? And that is literally how Air France 447 crashed into the Atlantic Ocean. It was dark. They couldn’t see the horizon, right? And so overreliance on automation isn’t a new concept for human beings. We have this also in cars now. Car cockpits present very similar problems, where we have too much automation and not enough focus on what we’re doing, which is driving. And so I think coming back to that, you know, being aware of these things like hype cycles and the task that we’re actually trying to accomplish — with LLMs specifically, it’s very important to remember what LLMs are. And so when we’re talking about LLMs instead of just AI — because AI is a very blanket term that can mean a lot of things — when we’re talking about large language models, specifically, we are talking about a model whose entire job is to predict the best next word to say, right? And so that means that it can be really effective at things where it’s really easy to tell it what the next word should be. So one of the funny things I always witness when I see people playing with something like ChatGPT for the first time is that they ask it a really hard question, right? Just to see, to establish its goodness. And what happens is they ask it a question that was hard for them to learn. Usually it’s a question that took them years to learn — something really complicated, something perhaps that required additional schooling, something from a medical school textbook. Okay, well, that took them a lot of reading and time and coursework and education, possibly mathematics, etc., to learn. But the LLM is really, really gifted at reading that medical textbook and saying the next word should be blah, blah, blah, blah, right? And so it’s interesting because people establish truth of these models based on a human way of establishing truth, not a machine way of establishing truth. And therein they almost mis-code what it is good at, right, because they then attribute their own knowledge bank, their own understanding, and all of the work that got them to understand how to treat cleft palates, for example. They attribute that to the machine, and it’s not true. The machine is really, really good at reading medical textbooks that say you should treat cleft palate this way.
So I think it’s really important as a species to understand and guard closely the things that we do that it should not be doing. And so when I think about that, I think about, you know, like you said, even something as seemingly innocuous as differentiating between sympathy and empathy is a critical component of understanding what these models can do. Can it tell you, “That sounds really hard, Chris, I’m really sorry that you broke your leg. That sounds hard”? Yes. But if you are someone that climbs mountains, can it understand, you know, what it is for you to have lost access to a hobby that matters? No, it cannot have empathy for your loss. Right? You need a human for that. So I think there’s some interesting nuance to what we should and should not be using these things for. Now on the other hand, they are some of the greatest synthesizing engines known to mankind, right, and one of the things that the internet has given us as a species is the ability to exhaustively review information. I mean, you can spend literally hours and hours and hours forever trying to understand, you know, which type of dog you should buy, right? And it’s still going to be too much. Or which car you should get for your growing family. It’s going to be too much information for you to process, even if you really, like, make a chart and you’re really deliberate about it; there’s just always going to be another blogger, another link, or another place to go. And in this case, being able to synthesize information quickly so that you can decide more effectively is a really good thing that AI and LLMs can do. And I’ll end by saying that to me, the thing that we must guard as humans is the ownership and the accountability of the decision that comes at the end. Because it can synthesize, it can recommend, but ultimately you have to choose, you have to act. And as long as that is the case, you own that outcome. And I think if we don’t protect that, we will lose something of ourselves.

CHRIS: Yeah and it doesn’t care about the outcome either, right? 

KATIE: Right. It can’t. It’s not sentient. 

CHRIS: Right. Yeah. So a couple of things that you mentioned: the knowledge that the LLMs provide is imperfect because knowledge is imperfect, right? I mean, it’s not all completely known. But our LLMs do something that humans don’t do well, and that is synthesize large amounts of information. So back to your point about what humans do well and what machines do well: machines are great at computation, at gathering and synthesis and analysis. But interpretation, or action from that information, can be more of a human instinct, right? So if that’s the case, then AI doesn’t replace humans. And that’s what you said originally, and it’s a sentiment that I share: it’s not really a replacement, but an accelerant or an empowerment to humans and what humans can do. So what if we talk about that for a minute? Like, what does it mean to have AI now as sort of a copilot in, you know, the experiences you have and what you’re capable of — a true collaborator, maybe, that you didn’t have in previous kinds of interactions? What kind of experiences can we deliver? I mean, how do we elevate work practice or safety or the like?

KATIE: Yeah, I think, again, it comes back to a really important balance that we have to strike, and hopefully a lesson that we will have learned from, you know, 25, 30 years of access to the internet, right, which is that there is a limit to what the human brain can process. And so there’s, you know, this idea in work practice of like, oh, good, now I can just generate all my emails with ChatGPT and I can generate all my Word documents with ChatGPT, and I can generate PowerPoints. And it’s like, okay, but then it’s just a robot, you know, talking to another robot, because the other person, if you’re sending them any documents, is probably using ChatGPT to synthesize the stuff you’re sending. And now we have a very broken system, right? Like, I think it comes back to the fact that, you know, some of the easiest things for us to offload, if we’re not cautious, are things that make us uncomfortable or bored, right? And sometimes tedious things are important for understanding and earned expertise, right? And so, you know, I think performance reviews are a really great example. If you have to write a bad performance review for someone, that’s a great opportunity for you to say, I don’t want to do this. I don’t want to write something bad about Chris’s performance. So I’m going to have ChatGPT do it. But again, coming back to owning the accountability of that decision, that’s not where we should be going as a species. So I think a good, you know, litmus test for whether or not you’re doing this correctly is, am I enhancing my ability to think critically? If the answer to that question is yes, then yes, you should be using it. If the answer to the question is no — and especially if the answer to the question is I’m trying to avoid something I either don’t like or am dreading — then, yikes. Run. Right? But I think there are things in the world that are just so complicated that getting your arms around them for the first time can be really overwhelming. I mean, say your child has been diagnosed with a, you know, a specific kind of illness or some kind of learning disability. Being able to just get information and know that you’re synthesizing appropriately — and especially as these models get better and they can cite sources — that’s going to be really important, to be able to know that you’re not just synthesizing a bunch of blogs that may or may not be grounded in medical science, right? Like, I think there are some really powerful things that people can do with these models. So, you know, when you’re starting a new job, for example, getting up to speed on all of that stuff, it’s a great way to just say, like, hey, here are the 10 most important things about this company based on my (the AI’s) catalog of the last 30 days, 60 days, 90 days of what’s been going on; this is what you need to know to get started effectively. Awesome, right? And you’re removing that individual human bias that you would get from talking to any one stakeholder about what you should be doing in your new role. There are some really great opportunities like that. Coming into a meeting where someone has done a lot of critical thinking and written a very long document — and, you know, there are lots of practices like this; even Amazon is famous for its practice of making people come into a meeting and read first, right? That’s a great example of, like, if you’re supposed to read a 300-page document or, you know, a bill for Congress, maybe people don’t have time for that.
I mean, frankly, that’s how Congress gets its bills read today. It just uses humans. It uses the more junior staff in the House of Representatives to read these things, right? So like those are great opportunities to just get familiar enough to have a meaningful conversation and engage. And again, that to me passes the test of like, am I using this to engage more critically to bring my human perspective and my brain and my accountability into a decision? Great. I think we should absolutely be using it for stuff like that and for things like — and you mentioned anticipation, I’ll just end with this — for things like mundane responses to emails, scheduling, calendar, moving things around, like I can’t wait for it to take on a lot more of that stuff, because that stuff is tedious but not value add, right? It’s not tedious because you’re reading, you know, a medical science book that you really need to know because you’re going to do surgery and you need to know that book, right? Like, it’s tedious, boring stuff that is just moving calendar appointments around to free up some time in your calendar. That’s the kind of stuff that I’m excited for it to take on.

CHRIS: Yeah. So, I’m with you on that. So, it’s giving us the ability to make the best decision based on the best knowledge. So if we assume that our knowledge with these tools is better than our knowledge without them, then we can make a better decision. Because we know more. We can be more prepared. We can contribute to the collaboration or conversation. We talked about skill, though, a little bit too, which is an interesting thing. You know, that whole 10,000 hours thing, right? Like it takes you 10,000 hours to learn anything. And we have robotic surgery. We have chatbots now that give doctors some time to actually do diagnosis and not information gathering, which is like non-value-add time, right? And then we have sort of the human-machine paradox, right? Which is, okay, now if I ever need that skill set, I’m not going to have it because the machine’s been doing it for me, which I think is a real risk when it comes to skill sets like maybe driving or things like that. You know, if we rely more and more on the technology, when the technology can’t hack it and we need to drive, will we have the skills to be able to drive effectively? So those are, I think, some of the more interesting issues. What things do we need to take as opportunities to maintain skills? What non-value-add work practice can we get rid of because it’s just not value-add time? And how do we sort of elevate our human roles to things that are of higher value? I think it’s really about optimization, right? Like, from the very beginning, like the calculator in your hand, you know. It can do exponential calculations way faster than I can. So let it do that, so I don’t have to do that. That’s a fundamental principle of human factors.

KATIE: Right. And I think we should give ourselves the opportunity to revisit, you know, not just because of the advent of AI, but, you know, some of the practices that perhaps have gotten a little out of hand. I mean, meeting culture is a great example, right? Like we spend all this time in meetings because we’ve forgotten that information transference should not be happening in meetings. Meetings should be a place where people are coming to make a decision. Only the people that need to be in the room to decide should be in the room. Everybody should be informed before they get there, and the meeting should be about healthy debate so that we can decide, effectively commit and move forward. But often, just because there’s so much information out there — so much, you know, huge companies generating so many documents — we end up spending these hours and hours, which are very expensive, which we don’t value appropriately, we don’t calculate. You know, we need one of those clocks that’s like the national debt clock, but for meeting time, in every room in an office. Because you should know: This meeting is costing us X number of dollars. And all we’re doing is reading documents. Like all these people could be doing this on their own. And then we come together for a very different purpose. And that’s assuming virtual meetings. So it’s not assuming we’re flying everybody in, right? So I think this kind of technology can give us an opportunity to pause and say, hey, where have we kind of lost the plot on some of the things that we’re doing that are not value add that we’ve convinced ourselves, perhaps, are value add? I mean, being available 24/7 is another really great example. Like just because you’re responding to emails or Slacks at three in the morning doesn’t necessarily mean that I’m getting the best version of your brain. And I might not choose, if I really knew, to have an answer right now versus six hours later, but it’s going to be a better answer, right? So I think this is an opportunity for us as a species to revisit, you know, things like daily digests, waking up in the morning and having things organized in a way that’s meaningful, helping sort things in: oh, you have free time this afternoon that’s heads-down, let me slot in, in order, the things that you’re going to knock out. Like those are the kinds of things that I think are really helpful and assistive. That would be to me like what the robot surgeon is doing. Right? Like if anything goes wrong in that surgery, that doctor is taking over; the robot is not going to improvise, right? Like that’s really the core of this, is that we need to think about things that are algorithmic in nature, predictable and repeatable. And when that does break down, to your point, we need to be able to step in and act effectively. The example I always give with autonomous driving is, you know, everything is going great — this actually happened to me a while ago, it’s a very random story — I was driving on a very big road, going, like, I think 55 miles an hour, and a Christmas tree, like a full-blown Christmas tree, blew into the road in front of me, right? And it’s a great example of, like, my car now has, you know, a lot of automated features that would sense that and try to throw on the brakes, right, because that’s all it is supposed to do. But in that moment, if I throw on the brakes, we are getting in a ten-car pileup at 55 miles an hour, and what actually needs to happen is I need to bite it and drive through that Christmas tree at 55 miles an hour, knowing that if it’s blowing across the road like a tumbleweed, it’s going to explode over the top of my car, right? And in that moment, that is a human decision-making engine with accountability that’s happening live, and it is way better than a machine is ever going to be. Like it’s just not going to be able to improvise in the moment like we are.

CHRIS: Yeah. So it’s interesting, this sort of collaborative nature between humans and technology. I think it’s ironic (I don’t know why I think it’s ironic) that we developed this technology that is supposed to sort of take over some of the human condition, but at the same time, humans are necessary for that to be successful, right? Because we are making the decisions. We do have the instinct to know when the technology isn’t correct. And by the way, in that situation — you know, we’re doing a ton in human experience, in automotive and future mobility — if your brake assist was on, it would probably not allow you to override that and drive through the Christmas tree. So you might have the big pileup, unless there’s a whole bunch of automatic braking systems in the vehicles behind you that are activated, and then it might, you know, be able to navigate that situation. But I do think it’s interesting the way that we can use some of this technology. I mean, just another automotive example is that many, many customers won’t use the automatic lane keep assist because it tends to either hug the edge lines, or it keeps you right in the middle of the lane. And most people don’t drive right in the middle of the lane. And so in either case, it seems like the vehicle is fighting the human all the time, and so they just turn it off and they won’t use it. So it’s a great example of, sort of, the imperfections around some automation and whether or not this AI can really learn what humans need from experiences and be able to provide that or assist in that kind of experience. You know, it’s an example of incorrect learning, I suppose, or incorrect programming in that case.

KATIE: As someone who lives up a windy mountain road that I don’t exactly drive perfectly every time, I turned that system off the day I bought my car. So yes, I often think about it when I’m on, like, a highway, I’m like, man, I should have this on. But the amount of time that I mostly spend driving up and down a winding mountain road, where you’re going out of the lane or you’re going around the corner or whatever, and it would just be beeping and fighting me the whole time — I agree. And it comes back again to all these lessons from the cockpit. I think, for me, one of the biggest lessons is that we don’t have to relearn all of these things every single time. Like there’s so much — and it’s interesting because I think people don’t think of aviation and the cockpit as a place to look for automotive, but it’s so much a place to look for automotive, because, yes, okay, we don’t have people that are getting trained to fly 787s driving around, and maybe we should, given how dangerous driving a car really is; we don’t talk about that. But like the amount of automation in the cockpit of a 787 is now not that dissimilar to the amount of automation that’s in a car. I mean, my husband just bought a new car, and some of the buttons — I mean, this goes back to grad school for me — some of the buttons are digital buttons, which I already have an issue with, because you’re supposed to be driving, which means you’re supposed to be able to use your hands and get comfortable, you know? That is why, like, landing gear is shaped a certain way, different from flaps, for pilots, so that they’re not having to look down and find these things, right? Wayfinding is hard. And his car now has digital buttons that shift. If you hit one of the digital buttons, it shifts and all of the buttons change to a different set of buttons, and it drives me absolutely bonkers. I can’t use it. Like, it changes the knobs so that they go from climate control to volume, and it depends on which digital setting you’re on. It’s a disaster. And so I think being able to, you know, look back and learn from areas that feel not necessarily adjacent, but really, truly are adjacent, where we’re talking about managing a lot of information, a lot of stimuli. And simultaneously, as these cars get more and more automated, dealing with boredom, right? Like, what are you going to do when you’re bored? Because, to your point, if I’m bored and I’m playing Tetris on my phone, you know, and letting the car automatically drive, when that tree rolls out onto the road, even if I wanted to take over, I don’t have situational awareness. I don’t know who’s behind me. I don’t know how fast I’m going. You can’t come into a very high stakes situation and take over. You have to still be engaged in the experience if you’re going to have enough situational awareness to drop in at the moment, and these are things that we can bring into LLM and AI learning as well.

CHRIS: Yeah, it is interesting that now that we’ve automated things in the vehicle, we’ve built really complex active driver management systems which watch you and make sure you’re paying attention to the technology driving the car. So it’s like, where’s the value add? You know, now what was an active, you know, driving scenario becomes one of vigilance. Now I have to pay attention. But as you say, as more automation comes, then we get into a state of boredom. And what happens with that? What happens to the six to eight seconds that it takes for people to become reacquainted with the headway scene and be able to take control? That’s pretty precarious. So there is a lot to learn, I think, between all these types of automation that have been happening over time and the elevation to now, sort of, the intelligent aspects of it. Tell me a little bit more about, like, how do we build trust and sort of authenticity in these kinds of experiences? You know, that seems to be a lot of where we find things begin to break down. You know, for example, if the LLMs are incorrect because the knowledge is incorrect, that’s going to come out in some different kinds of ways, you know, hallucinations and all these kinds of things. How do we establish and maintain trust with these AI-powered experiences?

KATIE: Yeah. So you mentioned hallucinations, which are really insidious and not solved yet. And it’s not just because knowledge is imperfect. It can be. It can also be because, again, this is a language prediction model. So, you know, if I ask ChatGPT, or any LLM, to schedule me an appointment, it knows that the next thing it should say is “I scheduled your appointment.” Now, it did not do that. Right? It doesn’t have any way to do that today, right? It’s not connected to actually be able to act. And yet if I think it’s scheduled my appointment because it said it scheduled my appointment, and I’ve already established trust with it because it knows about cleft palates, then, you know, great. I believe it and I walk away, and then I go to my appointment and they’re like, what are you talking about? You don’t have an appointment. And so I think the most important thing is for every person that uses these tools to get familiar enough with them to be able to spot them when they’re being used. You know, it happens now where people will send me something that they claim to have authored — they don’t just not say, they actually claim to have authored it — and it’s not written by them. And I think because people don’t know necessarily about my background and the time that I spent in the field, they don’t think that I’ll notice. But if you learn these models, if you use them and you’re familiar with them, you’ll notice they have a signature way of talking, and you can see it. It’s almost like seeing the magic eye coming out of the two-dimensional book, right? And once you learn to see it, you can’t not see it. So that’s, like, thing number one. I think people need to be familiar enough to be able to recognize these things. You know, it’s really hard to do as it gets better, but that’s something people want to do. And then on top of that, you know, like you said, and we’ve been talking about this the whole time, recognizing what it’s good at and what it’s not good at is critical. So that when you’re in a situation where (a) you either know or are suspicious that you’re interacting with a model, and (b) you know that you’re getting close to territory where you shouldn’t be operating or trusting it, then you need to suspend, you know, your trust and move forward with caution. It doesn’t mean you can’t do it, right, but it does mean that you have to move forward in a different place. And that’s where these models are incredible for things like creativity. Right? Like if you wanted to write haikus, it can write them all day long and they’re going to be awesome. But if you’re talking about trying to make a decision that really matters for you or your family or country or whatever, like you should be proceeding with caution as you’re getting into more and more high stakes, complicated, ethical, those kinds of places. Those are places you need to be very suspicious. And like I said, I think this gets much, much muddier if (a) you just don’t know what these things are good at, or (b) you’re not sure you’re interacting with one.

CHRIS: Katie, thank you. We’re out of time for today, but I want to continue our conversation on trust as it relates to AI experiences, so I encourage everybody to come back and tune in for our next episode of Seriously Curious with Katie Johnson as we dive deeper into AI.
