Andra Keay: Robo-Pragmatist, Humanoids, Technological Shifts, Laws | Turn the Lens with Jeff Frick podcast Ep42
English Transcript
© Copyright 2025 Menlo Creek Media, LLC, All Rights Reserved
Cold Open:
So I will count us down
and we will go
in three, two, one.
Jeff Frick:
Hey, welcome back everybody. Jeff Frick here, coming to you for another episode of 'Turn the Lens,' and I've got what's turning out to be a pretty frequent guest. What's interesting is I've gotten to interview her a lot of times on other shows, and this is the first time I've actually had her on my show. I think it's like the fourth time I've interviewed her. So I'm really excited to welcome in, through the magic of the internet, she's Andra Keay, the Managing Director of Silicon Valley Robotics. Andra, great to see you.
Andra Keay:
So good to be here Jeff. And you're right, too many times on other stages. It's really good to be here with you.
Jeff Frick:
It took a while, but we finally got it together, so I have a long list of topics, which I always have every time we talk. And I don't know if it was you that said it or somebody else, but really 'Humanoids are having a Moment' right now. What is going on with humanoids? You know, they've always kind of been there. They're kind of spacey and, you know, they're kind of the cool things that people dream about and draw in cartoons. But what's happening in humanoids that's so special right now, that's different than what it's been in the past?
Andra Keay:
Well once again, it's taken 20-odd years for humanoids to become an overnight success, but we really needed to reach a certain critical tipping point for humanoids to have this moment. And those of us that have been enthusiastically following companies like Agility Robotics, Apptronik, 1X, and some of the other companies from Firia, Unitree, Deep Robotics, we've watched them get commercial trials, and in the case of a couple of them, go from commercial trials to actual commercial contracts. And that is a significant step. And that's opened the doors to say there may be a business case for humanoids now that wasn't previously possible. It's possible now. And here's the thing. If you remember what a mobile phone looked like back then, oh, let's see, I was working in film and television and I think I got my hands on one of the first ones, a kind of mobile car phone, in around 1987. And it was.
Jeff Frick:
That's about right. I had the big giant Mitsubishi brick. You could jam it in your car. You could pop out the brick and bring it inside and plug it into the wall.
Andra Keay:
Now, they never got bigger, heavier, or more expensive. And that's what we need to think about with humanoids today. They are never going to be any worse than the current state of technology. They are going to continue to improve, they're going to get more capable, and they're going to get more affordable. And these are the things that are going to mean it's not just one or two things that humanoid robots can do. They're here to stay. Because the range of use cases for them is pretty broad. I think I've come up with about 11, maybe 12 different business models, business use cases, for humanoid robots. And part of it is what you might say is being labor. But a robot is capable of more than simply, you know, looking like a human and being a human replacement. You know, in some ways robots are far less capable than humans, but in other ways they are far more capable. They're capable of doing repeatable precision tasks, and now they are capable of being flexible, right, as well. So there's a range of different activities where we can start to augment. It's about doing something perhaps better, doing something in a way that we weren't able to do before, and about being able to do it affordably. And the big reason that humanoids are very appealing to many people: one, anthropomorphically, we're just predisposed to find things that are like us more appealing
Jeff Frick:
Right
Andra Keay:
But we also don't need to create infrastructure and change our world. The primary reason for creating a humanoid robot is that it can use human tools. It can operate in human spaces designed for humans, operate equipment designed for humans, and it's also much easier for humans to train them, whether it's through remote teleoperation, having that direct kinematic mapping to the shape. So this translation: if my hands do this, then that robot's hands do this. If I reach up, it reaches up, and it's really a parallel mapping. And that makes humanoids better in many ways than saying, no, let's create a custom purpose-built robot that could potentially do this job much, much better. And these are the things that, you know, I see as being more of the future thing, when we're talking maybe about creating completely lights-out operations of something.
Jeff Frick:
Right
Andra Keay:
Then we wouldn't need it to be. If we're custom designing the full automation, the full building, the full workflow, then no, we wouldn't need it to be a humanoid. But there's clearly cases for both, and I think that there's significant demand for both sorts of robotic automation, right, humanoid and non-humanoid. And with such incredible demand I predict that there will be 1 million humanoid robots in commercial operation by 2030. And that's simply based on projecting the plans of the top half dozen or so humanoid companies, the ones that are seriously getting traction. And some of the analysts have been looking further down the track to 2035, 2040 and beyond and seeing an incredible step change. It might have taken us five-plus years, or if we count perhaps the last five years of development, ten years, to have gotten to 1 million humanoids. In the next ten years, we'll probably be at 100 million.
Jeff Frick:
Right, right. So was it just a combination of kind of classic technology curve effects, the price of hardware, the sophistication of software, the speed of processors, and the connectivity of cloud, just kind of all those things, you know, moving, that got it to this? Or was there a single event, or was there a Cambrian event, I think you like to use that phrase, that happened to change kind of the trajectory?
Andra Keay:
Certainly for robotics technologies in general, there's been a Cambrian explosion event that we started to see in the world from 2010 onwards. It's been building for quite some time, but it's what I call 'Robotics 2.0.' It's about the ability of robots to do real world, real time navigation, and you can see that as being navigation like an autonomous mobile robot, a sidewalk delivery robot, a mobile robot in a factory, or a self-driving car. But you can also see it as three dimensional navigation like a drone, something in the air or underwater, or a robot arm moving in all directions but capable of responding, sensing what is actually in the space around it and responding in real time if there's something out of plan. So, for example, if a pedestrian comes in front of the vehicle, if somebody reaches out their arm to stop the robot arm, any of these things that are interrupting the planned action, now a robot can sense it, think about what needs to happen to avoid it, to prevent collision, or just to redirect around something. It could be as simple as bags of things falling from the shelf, right. But previously any robot technology was only as good as 'Stop! There's an obstacle.' And then you had to wait for some kind of help to reset or restart. Now it's not just about stop and go, it's about go around. It's about making changes in plans in a real world, real time fashion. So that is a fundamental step change in terms of competencies that has opened the door for an incredibly wide range of use cases for robotics technologies, and that's robotics technologies across the board.
What you were asking before that, though, was, is there that same thing for humanoids? Certainly they need the ability to navigate. Arguably humanoids are doing more complex navigation. The bodies of humanoids have far more actuators, far more points of movement, and in some cases humanoids are robots that are moving into social situations. And we're still not really great at working that out. So there's a lot of complexities ahead for humanoids. But the tipping points came on us really quickly. There have been a whole sea of things: Moore's law in terms of computing technology, the same thing having an impact on sensing technology, the same thing happening in terms of just us knowing how to build robots better so that we can now build a smaller, far better robot, and the role that AI and simulation have played. And so what I see now is it's been exponential growth meeting exponential growth.
Jeff Frick:
Right, right.
Andra Keay:
That was this incredible speed up in competencies. So robotics has met generative AI, or large scale learning, and not just large language models but multi-modal models, right. And that's very much important for robotics. And that has meant simulation. So what we're seeing now, and this goes back to, say, the Google arm farm in the 2014 period, where a whole roomful of robot arms working individually on the same tasks would come up with a whole lot more scenarios than one robot arm. But then those same strategies would be tested in simulation and through AI, and that would multiply the range of options and the ability to find the most successful pathways out of them, which would then be fed back into the real action of the robot arms the next day. So you had this to and fro happening between what the robots were exploring during the day informing what they were learning and training during the night, and then going back into the real world during the day. So you started this incredible flywheel of feedback loops, and we've just multiplied, in a sense, the number of feedback loops and the transitions from simulation to real world, back to simulation, to the real robotics deployments, back to simulation, and so on. So it's compounding. We're seeing incredible exponential growth compound.
Jeff Frick:
Lots of great topics here. Lots of great paths of development. So let's just jump in a little bit deeper to the one you just touched on, which is the change in training. And as you've kind of touched on briefly, as we dig in, there's two things that I saw at least at the show. One is kind of LLMs, and not necessarily always large language models, but kind of the robotics version of those for different types of activities, changing the way that these things are trained. And like you said, it's about variability, the ability to change and adapt, to not exactly follow the script and just say yes or no. And then the other one is, as you mentioned briefly, kind of this teleoperation piece, where you're integrating a human's behavior into the training process to accelerate it. So it's a very different way than, you know, kind of a classic factory implementation where you're just mapping out, you know, the point at the end of the laser welder to put a car together. This is a really different way to think about training, and it opens up both speed and flexibility of the training. It's completely different. Absolutely.
Andra Keay:
And there's several different ways that training is happening for robotics, and I've been listening with interest to different discussions around, is this the right way? Is this the way that training is going to be happening in the future? And at the moment I would say the general agreement is that we need all of these methods, and quite possibly all of them together. So there's real world teleoperation, then there's having a person beside the robot doing pose and movement training. There's programming training, there's using English to instruct, there's training from videos, there's training through simulation, and there's training through extracting ideas from a range of other ways that you're mining data around how these things have happened. And it's opening, I think, many people's eyes to the fact that we have very few, we have limited insights into how humans operate in the world. You know, we've been able to take many great strides with artificial intelligence through text based learning, even if we count video based learning, and there's very much a limit to what's on the internet about a whole range of things, like how a particular job happens in a factory, for example. And we're only just starting to see some companies instrument things like production lines to learn exactly how different tasks are done, and what's the range between a task being performed successfully or really, really well and being performed not up to scratch? And how we can learn across those things and improve those outcomes. It's a little bit like when robotics went into warehousing, into logistics and factories. There was a lot of feedback that within the United States up to 75% of factories were not digitized in terms of their inventories and operations. It was kind of moving from an Excel spreadsheet to a printout to somebody recording things on clipboards to transferring them back in. There wasn't a kind of digitized real time base of knowledge, and with a lot of these things you needed to build in some of these foundational steps before you could go and add more sophisticated automation or autonomous mobile robots, for example. There are some instances where the robots are able to step past the lack of previous digitization. But in terms of training, one of the big reasons that I think humanoids are really having the moment is because they're going to be ideal platforms for data collection about doing all of these human type tasks in the world. And they'll be developing this digital library of what is the range of how you do things. And even simple tasks, for example, the number of different door closures that there are in the world is almost infinite. They open outwards, they open inwards, they open sideways, they open as dual doors. You push, you turn, you clip, you [psst] There are so many different ways that we as people can navigate doors. That's such a basic thing. And yet collecting enough of that information to have the sort of materials that we need to do successful training... Well, maybe we do need robots out there just exploring and discovering a whole lot of those edge cases that they're still having trouble with.
Jeff Frick:
Right, right. So, you've covered ethics a lot in your time. You've spoken on ethics and robotics a lot, and some people might say robots are kind of the embodiment of software, or some people might say these days it's AI with arms and legs. As the robots move from the factory floor out into this world, what are some of the ethical considerations, both historically, the ones that have not really changed, but also what's new now that they've left the confines behind the yellow tape and the glass doors?
Andra Keay:
That's a really great question. And sometimes I like to say I'm on the side of robo-topia, because I don't want to be on the side of AI apocalypse. I can see a lot of things potentially going wrong in the use of AI, and one of the reasons that it is perhaps, what I consider to be, more dangerous is because at some levels it's cheap and easy and invisible, and so we don't see what might be happening under the hood, and therefore we are not paying the attention that we need to pay to it. Whereas robots are inherently more visible, being physical, and they cost more to deploy in the world. We're not going to be spamming the world with robots, and that makes us take more care. So I think that there are inherently some guardrails built into robotics that AI might not have. And yet there is an incredible overlap between the issues. And oftentimes people think of robotics only in terms of physical issues, whereas we're going to see the same financial and emotional issues as well. One of the areas that really got me started in robotics, and that I am most passionate about, is the impact of the technology on society, and on all parts and levels of society. There are ways that we can address these things with, say, ethical or user centered design. There are other ways we're talking about democratization of technology, getting it out into more people's hands. But fundamentally we also need to broadly understand what are the potential issues of this technology moving into society. And we don't need to be too scared, because we've dealt with new technologies multiple times as societies. And there are guidebooks for the sorts of ways that we as society deal with major technological shifts. I talk about it very often in terms of, here are my '5 Laws of Robotics,' and they're the antithesis of Isaac Asimov's '3 Laws of Robotics,' which many people still look to. They express our needs and desires from robots. We don't want robots that can harm us or harm anybody else. Those particular rules that Asimov came up with, I always thought that they were examples of rules that you could never deploy successfully, because all of his stories were about how they went wrong. But a lot of people still look to them as, these are the sorts of rules we need. It reflects on the things that we want, the things that are our priorities, but it doesn't reflect on, pragmatically, how would we actually go about doing this? And how is this possible with a robotics technology? So I like to consider myself a techno pragmatist. Not a techno optimist or a techno pessimist, but I believe that we're going to be moving forward technologically speaking. It's almost an imperative, but that doesn't mean that we should do it thoughtlessly. I want to be one of the people that's moving forward, helping to put a brake on when it's needed in the right spots. And I want to share that information and that ability to see where the potential problems are. So my '5 Laws for Robotics' start by sounding like Asimov, because I say robots should not kill. The thing is, this is one of our most fundamental laws, rules, moral principles across society. So we actually have frameworks to help us monitor that or control that. We have laws. If a robot is going to be acting lawfully, then it would not kill people. Physical safety, though, is just the starting point. See, the next thing from that is robots should be designed to be law-abiding. Now that starts to raise the question of, but whose responsibility is this? Is this the responsibility of the people building the robot?
The people operating the robot, or the people around the robots? You know, where is the responsibility sitting? So we need robots to be abiding by laws, and we have a lot of past record of this. For example, where is the liability for an automobile accident? And we've come up with different ways of assigning responsibility based on the circumstances.
Jeff Frick:
Have we modified that for Waymos though? It's funny you just said that, because I was going to say, you know, if a Waymo gets in an accident, as a proxy robot, now you've got who wrote the software, who built the thing. I mean, you've got a whole nother layer of complexity on whose responsibility it is. I'd be curious. I don't know if that's gone to court to kind of have some precedent set, or maybe it has.
Andra Keay:
Many court cases. One was an Uber vehicle driving autonomously in Phoenix, and a pedestrian was killed. Now, in this instance, the safety driver, who was there as the responsible person, was the one that was held accountable. But I think that was largely a victory for the companies with the best paid lawyers versus what would be repeatable case law.
Jeff Frick:
Right. And there's no safety drivers anymore. We've moved past safety drivers.
Andra Keay:
And this doesn't mean that people in the legal profession and legal scholars have not been thinking long and hard about this in the last 20 years, and really diving into it. Where we're at is that these things haven't happened frequently enough for there to be more established precedents and understandings.
Jeff Frick:
Go to number three. The third law.
Andra Keay:
Robots need to be good products, because the same thing happened in a non-fatal accident in San Francisco and it caused the complete shutdown of Cruise, which was GM's self-driving vehicle play. It killed the company.
Jeff Frick:
Yeah. Well, arguably in that case they weren't ready for prime time, because when the data came out on the number of interventions, they were so far from autonomous, their intervention rate was like once every two miles or something [actually 4-5 miles]. It was like, you guys aren't ready for prime time. Put those things away.
Andra Keay:
Without kind of litigating that one further, because maybe they shouldn't have been able to be testing. Maybe we should be talking to whoever was certifying that or allowing that. But these things can become kind of Jenga towers, and all I'm saying really is that there is this kind of hierarchy. If you start by saying robots should not harm, should not kill, they should obey the law, then that's talking about where law is one of our major frameworks for dealing with this. And then if we talk about, they need to be good products, then we recognize that to be commercially viable there is a really strong incentive for robotics companies to be incorporating this early. And certainly I see this with the roll out of humanoids. The CEOs of the humanoid robot companies want to see standards. They want to see benchmarks for what is safe. They want to be able to assure deployments and customers that they are as good as is reasonable to expect from this technology, that they're doing everything right. So they want to create these benchmarks so that the industry can move forward and so that they as companies can be successful. Certainly a dependency, right, for mass commercialization. Absolutely. And this is where I say my laws, it's about techno pragmatism, because each of these laws speaks to how and where we should look to be implementing this. But the next two really deal with financial and emotional harm, which we rarely talk about when we talk about robots. And yet to me they are potentially deeper, darker problems waiting for us if we don't factor that in. And yet it really plays into being good commercial products. Robots should be identifiable. And to me that means that any mobile robot is like a vehicle, and it needs to have an identification number, it needs to have a registration. And then you can look at what the processes are that go behind that. And we accept that in other vehicles: if it's out there, then it has to go through certain licensing, right, and registration processes. Now then you go one beyond that, and it speaks to, they should be transparent and they should be truthful. And truthful not just as in speaking the truth, but transparent as in, is this robot speaking to me as an autonomous entity? Is this a remote operator speaking through it? Is this a commercial script? And it's designed for what particular outcome? If a robot is being nice to me... well, we're okay with assessing if someone's being nice to me: what does that mean? What are they gaining from that? And potentially, how are they trying to exploit me? Is this harmful for me? Now with robotics, just like with AI, only it's more invisible, there are many levels at which that interaction can be steered or guided for other purposes. Right. And that can be as simple as saying robots and AI are going to be really good at being salespeople. They might know our particular interests and history, or they might be able to sense and detect our eye gaze and our interest levels, things like pupil dilation, all of those signals of interest that as people we often unconsciously pick up. These things can be picked up by camera technology today. They might also know exactly what to offer as the next step. The advertising industry is really good at coming up with sort of psychological profiles of who are the people that they want to have buying things and how to appeal to them.
Jeff Frick:
I think we need to apply your rule number five, and number four actually, to social media. Maybe we wouldn't have some of the problems that we have in social media. But it's funny, I can't help but think, right, one of the big potential markets for the humanoids is senior care and taking care of people in assisted living situations or whatever. And, you know, I can see people with dementia asking, you know, where's Susie, and Susie might be dead. Susie, you know, maybe was an old friend. You know, you start thinking about what are the different priorities for the robot and the caregivers in that situation in defining the answer to that question, both to be transparent but also to be careful, safety appropriate for whatever that medical situation may require, which might not be just bald faced, you know, black and white facts.
Andra Keay:
And I think that there are a lot of parallels in health care and the way that we've developed guardrails in that, so things like informed consent and developing consent plans ahead of time. For example, in some situations you could see that a medical provider might say it is the better outcome for this particular person, say with dementia, if there is a robot caregiver that can take on whatever character they think it is, and that will keep them far more comfortable. Or there could be a case where there is a strong desire from the person as an individual to say, I never want to be lied to, regardless, and that might then take precedence. And we need to work these sorts of things out. But this is really great, because I did the five laws, and this is based on some of the best work from some of the best think tanks around. And I've taken part in a lot of these different activities. Most of them, I kept thinking, how on earth are you ever going to apply these ideas or ideals? And I very much like the direction that these Five Laws are in, in terms of the pragmatic ability to be deployed. But I went beyond that and I went, how would I think about that? And I've got five things that I think could be answers. One is, we talked about already, the robot registry, licensing. A second one we've talked about in AI ethics cases, which is algorithmic transparency, having things like model cards, having things where the hidden workings have to be detailed and accessible if people need to look up these things. A third one, taking from health, is to have independent ethical review boards, and maybe it means that you're not looking at something on a case by case basis, but you're saying in these circumstances the best outcome for people is if a robot is allowed to behave like this, whereas in all of these other circumstances it's far better if a robot does the opposite. So we'll build up these things if we have, I think, independent ethical review boards. Beyond that, I think there could be a very interesting role for the idea of a 'Robot Ombudsman' to represent concerns that individuals and or groups of society have, that they don't necessarily have the ability to voice. They don't necessarily know who to take these concerns to, or may not want to, or be scared to. But if you can resort to an ombudsman who can then, say, aggregate these concerns and say, okay, maybe as a state or as a company or as a country, this needs to become something that we develop policy or legislation around, because it is having an impact on parts of our society, then we're letting their voices be heard. Finally, I think, rewarding examples of what good robots are, making it something that's desirable for companies to create. Let's not just punish poor behavior or poor corporate behavior, but let's reward good robots, good robot design, and good applications of this technology.
Jeff Frick:
Those are lofty goals. I hope we do a better job than we've done, again, on social media. And the other one is privacy, where, you know, we've just not been diligent in keeping up with, you know, something as simple as the cookie policies. And, you know, you get GDPR in the EU, where they get a whole bunch of countries that can come to an agreement, but we still fall back on our fundamental states versus national head banging and can't even get a national 'Disclosure When Breached' regulation done. So fingers crossed. At least you've got a framework, which I like. Getting ready for this I came across one of your older interviews, and you talked about robots being the AI canary in the coal mine. So what is special about robots and their connection with AI that gives them the potential to give visibility into positives and negatives that maybe, as you said, were kind of hidden behind a screen inside of a computer before?
Andra Keay:
Exactly. Robots are the physical embodiment of AI, and because they're so visible and because they are capable of physical damage, we take a lot more care with what's happening with robots. And so I think that will allow people to understand what some of the problems with AI might be, where AI is that invisible, potentially toxic gas that might be all around us and we might not be aware of it. And because robots are the physical embodiment of AI, when we see things happening that we don't like, it can inform us as to where these things might also be happening invisibly.
Jeff Frick:
It's really interesting that, you know, we react more viscerally and maybe more aggressively to physical harm than emotional harm, because you can see it, right? They can put a picture of it on the front of the newspaper if there's a crash in an autonomous vehicle. Where the emotional harm and some of the other problems that can come, again just picking on social media because I like to pick on social media, aren't necessarily as visible, kind of in your face. So maybe they don't get the priority that they should have in terms of addressing them. So a very different situation. You're all over the place in the Bay Area with Silicon Valley Robotics. You've got a great newsletter. You keep up on a lot of good things. What's happening in the funding world? Has the funding world figured out that this is not yesterday's robots, and that the opportunity and the technology have progressed to a point where they shouldn't be making a comp to, you know, when they were looking at the little Sony dog or Pepper or some of those early cute iterations of things?
Andra Keay:
A lot of the investment community has worked out that not only is robotics ready for investment, but also a lot of other deep tech. And what I think is most exciting is, in the last ten years, and this is in spite of there being a whole lot of obstacles in funding, in venture, at the moment, we're seeing a completely new class of investor, and that's the engineer, that's the scientist, that's the investor that can have a better appreciation of a complex system that they're investing in. And if you look at the background experience and educational background of investors that were able to do really well during, I suppose, the rise of social media and the internet and smartphones, many of them were coming from financial or liberal arts backgrounds as opposed to being engineers. You didn't necessarily need to understand the technology to understand what might be making good business cases. Whereas now, I think there's so much of this deep tech, and robotics is a great example of it, because robotics is a complex system, and it really is something that is new, as new as this really complex Robotics 2.0. So we need to have investors that are capable of understanding the complex systems. Now, I don't mean that every successful investor in robotics necessarily has a physics degree or has had a background as an engineer, but there's certainly a lot more investment firms that are focusing on these technologies that do have a strong background in those areas. And this is actually more similar to the first wave of venture capital in the semiconductor and early computing days, the kind of deeper tech foundational investment.
Jeff Frick:
Another thing I want to get your take on, and I know you have an opinion on it, is autonomy. And, you know, you said earlier on that one of the big breakthroughs was when, you know, a robot could make a course correction when there was something that was interrupting its standard path. And for me, just in terms of fun, when Skydio introduced the autonomous drone, and we've talked about this before, I no longer had to actually pilot the drone, but now I'm really instructing the drone as to what I want it to do, whether that's in a program or coordinates or mapping it out, whether it's, you know, inspect a transmission wire or inspect an ugly factory where I don't want to climb all over the silos. But it changed the relationship with the operation of the thing, where now you're giving it instructions to fulfill a task as opposed to actually operating it. Where do you see autonomy, and how is that going to change things? And then just to throw it in with Waymo, was that a long time or a short time they've been working on autonomous vehicles? You know, it's like they've been talking about it, well guess what, it's here. At least 30 years. Is that short or long?
Andra Keay:
I think it's appropriate for the difficulty that we're facing. And I think it's not a mature tech by any means, yet. But it's starting to be. Let's see, it was very, very much kept away from the rest of the world, in limited use case scenarios or research only, for the last 30 years. Whereas what we are now seeing is we have the early adopters, the early use cases, and I was a little hesitant to put myself into the autonomous taxis around the Bay Area, even knowing that some companies have definitely got, you know, much better track records than other companies. I was still not wanting to be the first person to make that leap, but I've started using them recently. Yeah. And I think, though, that it's an interesting area, because whenever there is going to be a problem, and arguably there will be fewer problems than with human drivers, it's just they're going to be different problems.
Jeff Frick:
Oh, they're completely different problems. The problems with human drivers have nothing to do with driving at all. It's distraction. It's I had a fight with my spouse in the morning. My boss pissed me off. You know, I got a bunch of bills in the mail this morning. It's I got three texts coming in. My kid's having a bad day at school. Those are the things that make human drivers such bad drivers.
Andra Keay:
But we think that we can understand those, or maybe predict if somebody's acting under the influence of something that's going to make them a bad driver. Now, we don't have that insight level into that occurring with an autonomous vehicle. And so I think we've still got to go through that shaking out period. We might find that there are a couple of things where, as experienced users of autonomous technology, if we see this happen then we go, 'Oh, I'm putting pause.' I don't like that happening. We know that that might be an indication that things aren't working perfectly, seamlessly. So we haven't yet worked out these kinds of codes, codes of conduct, codes of behavior, and codes of reaction to the technologies. I thought you were going to go somewhere else when you said, you know, let's talk about autonomy, because one of the things is we're seeing increasing autonomy, and yet this is just shifting the level. As you talked about, we're no longer giving step by step instructions. We're now expecting we can say, okay, take me from this point to that point. We're reaching a point which is democratizing the technology, in the sense that it's no longer something that you need to program or code. It's something where you can make a request: do this, do that. And that's much more like when, for example, someone throws a ball to us, we don't say, okay, now I want to extend my elbow, I want to move my shoulder up 90 degrees, I want to open my fingers, rotate my wrist, etc. We automatically do that. We just need to say 'catch that ball.' And I think it's really going to democratize the technologies and the ability for them to be out in the world and being used by a wide, wide range of people, if we can reach a point where English becomes the common operating system
Jeff Frick:
What a concept.
Andra Keay:
We're getting a lot closer to that. That's one of the things that's been happening pretty rapidly. It's really exciting.
Jeff Frick:
Yeah. So we're getting to the end of our time. I know one of the topics you are very passionate about and speak often about is the demographic trend, and where robots play in the reality of kind of some of the demographic trends that we're seeing in terms of just having enough people to do the jobs, period. Especially in more developed countries, right? The birth rate is not sustaining, or it's heading in the negative direction. So there's the demand for these, both in terms of great opportunities and all the crappy jobs that traditionally robots have been targeted for, but it's going to go way beyond there in terms of the opportunity for them to fill a real lack of just flat out people and labor.
Andra Keay:
I'm going to paraphrase a tweet that went viral from an author, and I've forgotten her name, but she said, I don't want AI to do my art and write my stories. I want to do that while AI does the dishes and takes out the trash. We need robots for all of that. Do the dishes, take out the trash. All of those jobs, particularly factoring in the unpaid labor that so many of us do as parents or with aging parents, all of the caretaking for society as our society is aging, in so many ways. We have a loneliness epidemic. We have aging populations. We have a lack of labor in most all of the dirty, dangerous, and dull jobs. People are saying, I would rather do anything else than do that. So across the board, there is no way we can maintain the level of society without developing robots that can take some of the load off us.
Jeff Frick:
Yeah. All right, Andra. Well, I'll give you the last plug. Give a quick little plug for your newsletter, and what's going on with Circuit Launch in the Bay Area for our Bay Area folks.
Andra Keay:
Thanks, Jeff. Well, once a week, sometimes even more often, I put out a newsletter, 'Robots and Startups' on Substack [https://robotsandstartups.substack.com/]. And those are my two favorite things, especially when they come together. And that can include things like what are the upcoming conferences or events, not always in the Bay Area, mind. I attend a lot of robotics events around the world, and I like to just bring all the news that interests me about robotics into the weekly newsletter. And I'm very pleased that a lot of my audience are people in robotics, and they say it's the one thing that they read each week that makes them feel like they're up to speed with what's happening. And Circuit Launch, my favorite place. I'm hoping that we see more Circuit Launches, not just in the Bay Area, but in Australia, around the world. It's hardware acceleration, particularly for robotics but also for biotech, hardware, electronics, and IoT, an open ecosystem of acceleration. So it's not about giving you money and taking equity, it's about making it really easy and affordable for you to have a full community around you, with access to the prototyping and small batch manufacturing equipment that you need, and being in the company of other people building similar technologies. So I am now at Circuit Launch Mountain View most of the time, and Circuit Launch Oakland is of course the first Circuit Launch. And as I said, in 2026 I'm hoping that I get to spend time in some other great Circuit Launch locations, and I'd love to show you around. And we have lots of events, like the Robotics Network event, often on the first Wednesday of each month. But we have things like Robotics and GenAI Hackathons, we have discussion meeting groups, we have workshops, and we just love to be a place where the community that is into deep tech comes, hangs out, and helps get their deep tech built.
Jeff Frick:
It's a great community. That's how we met. And Andra's newsletter is fantastic. I'll have links in the show notes, and if you're in the Bay Area, check it out. Like she said, there's a lot of events. There's a beer and robots thing, I think once a month, so it's definitely worth checking out. Well, Andra, I still have pages of notes. We could go for three hours, but I think we're going to have to save it for our next get together, which hopefully won't be too far in the future.
Andra Keay:
Yes, let's make sure we have more discussions.
Jeff Frick:
Absolutely. All right. Well thanks again, Andra.
Andra Keay:
Thanks so much Jeff.
Jeff Frick:
She's Andra, I'm Jeff. You're watching Turn the Lens with Jeff Frick. Thanks for watching. Thanks for listening on the podcast. Catch you next time. Take care.
Cold Close:
Okay.
Super.
Thank you.