The Indicator from Planet Money



About


A bite-sized show about big ideas. From the people who make Planet Money, The Indicator helps you make sense of what's happening in today's economy. It's a quick hit of insight into money, work, and business. Monday through Friday, in 10 minutes or less.

    Episode • May 12, 2025 • 9 min

    It's actually really hard to make a robot, guys

    Robots have been a thing for a long time, but they've never quite met expectations. While AI has changed the game for chatbots, it's not quite so clear for robots. NPR science desk correspondent Geoff Brumfiel spoke to our colleagues over on our science podcast Short Wave about how humanoid robots are actually developing with the help of artificial intelligence. It was a fascinating discussion, so we are sharing that conversation with you today on The Indicator.

    Related episodes:

      • Is AI underrated? (Apple (https://podcasts.apple.com/us/podcast/the-indicator-from-planet-money/id1320118593?i=1000663256517) / Spotify (https://open.spotify.com/episode/449pYMEzLj6wQ2XDLfUeLq?si=2579122c740e4ace))
      • Is AI overrated? (Apple (https://podcasts.apple.com/us/podcast/the-indicator-from-planet-money/id1320118593?i=1000663366364) / Spotify (https://open.spotify.com/episode/0Cx1SvScerT2OEP353JVLK?si=cee769eae89a4a3e))
      • Dial M for Mechanization (Apple (https://podcasts.apple.com/us/podcast/planet-money/id290783428?i=1000615476788) / Spotify (https://open.spotify.com/episode/0jD1scmbkibfWuMaaMV13d?si=7613412566694e23))

    For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org (http://plus.npr.org/).

    Fact-checking by Sierra Juarez (https://www.npr.org/people/g-s1-26724/sierra-juarez). Music by Drop Electric (https://dropelectric.bandcamp.com/).

    Find us: TikTok (https://www.tiktok.com/@planetmoney), Instagram (https://www.instagram.com/planetmoney/), Facebook (https://www.facebook.com/planetmoney), Newsletter (https://www.npr.org/newsletter/money).


    Transcript

    0:00
    NPR.
    0:11
    This is The Indicator from Planet Money. I'm Darian Woods.
    0:14
    And I'm Geoff Brumfiel, one of NPR's science correspondents.
    0:17
    Geoff, you recently went down a rabbit hole into artificial intelligence.
    0:22
    Yeah, I feel like I'm always down a rabbit hole in artificial intelligence, actually. It's a confusing place to be.
    0:28
    I can imagine.
    0:29
    Recently, I have been sort of looking at how AI has been moving out of the online world and into reality. I don't know if you caught Tesla's big marketing event last year, but AI was there.
    0:42
    Tesla, the car company, of course, led by CEO Elon Musk.
    0:46
    Speaking of robots.
    0:48
    Yeah. A big part of that event was about a humanoid robot powered by AI called Optimus.
    0:55
    The software, the AI inference computer, it all actually applies to a humanoid robot.
    1:02
    Are we meant to be, like, cheering this on? I don't know. It sounds scary to me. Yeah.
    1:08
    I mean, robots have been around for a long time in sci-fi as technological marvels, and sometimes they're the villains. And that's been true long before AI
    1:17
    came around, but they've never quite met expectations.
    1:21
    Yes, exactly. And that's why I set out to understand the truth about this new AI revolution in robotics.
    1:27
    And.
    1:27
    And I think I found it in a bowl of trail mix.
    1:31
    An intriguing hook. Today on the show, what happens when artificial intelligence moves into the meatspace world, the world of you and me. We bring you Geoff's conversation with Regina Barber on Short Wave.
    1:48
    Okay, so, Geoff, you are interested in finding out more about how AI works in robots. Where did you start?
    1:53
    Well, I didn't go to Tesla or Google, but I did drive right by them on my way to Stanford University.
    1:59
    Okay.
    1:59
    And specifically the IRIS Laboratory, which stands for Intelligence through Robotic Interaction at scale. I got a tour from a graduate student named Moojin Kim. Moojin works on a new kind of robot powered by AI, similar to the AI used in chatbots.
    2:15
    It's one step in the direction of, like, ChatGPT for robotics, but still a lot of work to do.
    2:21
    So, Geoff, what did the robot look like?
    2:23
    Well, this wasn't some humanoid robot that the big tech companies are rolling out. It's just a pair of mechanical arms with pinchers.
    2:31
    Okay.
    2:32
    But what made it interesting was that it's powered by an AI model called OpenVLA. So first we should probably just say quickly, you know, a regular robot must be very, very carefully programmed. An engineer has to write detailed instructions for every task you want it to perform.
    2:48
    Yeah, and AI is supposed to change that.
    2:49
    Exactly. This robot is powered by a teachable AI neural network. The neural network operates kind of how scientists think the human brain might work. So in practice, this means Moojin can just teach OpenVLA a task by showing it.
    3:04
    So basically, whatever task you want to do, you just keep doing it over and over, maybe like 50 times or 100 times.
    3:12
    The robot's AI neural network becomes tuned to that task, and then it can do it by itself. Moojin brought out a tray of different kinds of trail mix, and I typed in what I wanted it to do. Okay, so scoop some green ones with the nuts into the bowl, see what happens.
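(The "teach it by showing it" idea described here is what machine-learning researchers call imitation learning, or behavior cloning. A minimal sketch of the idea, using fabricated demonstration data and a simple linear policy standing in for OpenVLA's large neural network:)

```python
# Illustrative behavior-cloning sketch -- NOT OpenVLA's actual code.
# Each demonstration pairs a robot observation (e.g. camera features)
# with the arm command a human teacher produced for it.
import numpy as np

rng = np.random.default_rng(0)

true_mapping = rng.normal(size=(8, 3))    # hidden "correct" behavior
observations = rng.normal(size=(100, 8))  # ~100 demonstrations, as in the episode
commands = observations @ true_mapping    # the teacher's commands

# "Training" = find the policy that best reproduces the teacher's commands
# (least squares here; a real system fits a large neural network instead).
policy, *_ = np.linalg.lstsq(observations, commands, rcond=None)

# The learned policy now maps a brand-new observation to a command on its own.
new_obs = rng.normal(size=(1, 8))
predicted_command = new_obs @ policy
print(predicted_command.shape)  # (1, 3)
```

The robot in the lab does the same thing at much larger scale: images and text in, motor commands out.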
    3:28
    Okay. So, Geoff, personally, I've been waiting for something like AI in robotics, because you can teach it to do something, you can ask it to do something to, like, make me an ice cream sundae or something without, like, any fancy programming or special knowledge.
    3:39
    That's exactly it, you know, and this really is the dream of the researcher who runs this laboratory. Her name is Chelsea Finn.
    3:46
    So in the long term, we want to develop software that would allow the robots to operate intelligently in any situation.
    3:54
    Chelsea also has co-founded a startup called Physical Intelligence. It recently demonstrated a mobile robot that could take laundry out of a dryer and fold it. Again, this robot was taught by humans training its powerful AI program.
    4:09
    Okay, so ice cream sundaes, is that too advanced? Is folding an easier start?
    4:14
    I mean, I'd actually argue, Gina, that folding is harder. Okay, let me show you a video.
    4:19
    Okay. It's going to the dryer. It's pulling stuff out, putting it in a basket. It has the concentration I have when I'm going to do laundry. It almost looks annoyed with folding, like I do. Oh, my God. It's doing really well, actually.
    4:36
    Yes, it is. Right? And this is a complicated task. It's gotta pull these clothes out. It's gotta figure out what they are.
    4:42
    Okay, so is it really as simple as, like, just teaching a robot, like, what to do? Because if it was, wouldn't these robots be everywhere?
    4:51
    Yeah, I mean, right? It looks cool on the video. The truth is that, you know, when you get out and these robots are trying to do these tasks over and over again, they get confused, they misunderstand, they make mistakes, and they just get stuck. So, you know, it might be able to fold laundry 90% of the time or 75% of the time, but the rest of the time, it's going to make a big mess that then a human has to get in there and clean up.
    5:15
    Got it. Okay.
    5:16
    I spoke to Ken Goldberg, a professor at the University of California at Berkeley. And he is pretty emphatic that AI-powered robots aren't here yet.
    5:24
    Robots are not going to suddenly become the science fiction dream overnight.
    5:29
    Okay, so, like, tell me why. Because, like, AI chatbots have gotten, like, way better super fast. So why are these robots getting stuck?
    5:36
    Chatbots have a huge amount of data to learn from. They've taken basically the entire Internet to train themselves how to write sentences and draw pictures.
    5:45
    But Ken says for robotics, there's nothing. Right? There are no examples online of robot commands being generated in response to robot inputs.
    5:57
    And if robots really need as much training data as their virtual chatbot friends, then having humans teach them one task at a time is going to take a really long time.
    6:07
    You know, at this current rate, we're going to take 100,000 years to get that much data.
    6:11
    What? Okay, that's so long. Like, are there any alternatives? There must be.
    6:16
    One might be to let the AI brain of the robot learn in a simulation. A researcher who's trying this is a guy named Pulkit Agrawal. He's at the Massachusetts Institute of Technology.
    6:28
    The power of simulation is that we can collect, you know, very large amounts of data. For example, in three hours worth of simulation, we can collect 100 days worth of data.
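(A quick back-of-envelope check of that ratio: 3 wall-clock hours yielding 100 days of experience implies roughly an 800x effective multiplier from running many simulated robots faster than real time. The 800x factor below is inferred from the quoted numbers, not stated in the episode:)

```python
# Sanity-check the simulation claim: 3 hours of wall-clock simulation
# producing 100 days of robot experience implies an ~800x effective
# speedup (parallel simulated robots running faster than real time).
# The speedup factor is an assumption derived from the quoted numbers.
wall_clock_hours = 3
effective_speedup = 800  # assumed: parallel envs x faster-than-realtime

experience_hours = wall_clock_hours * effective_speedup
experience_days = experience_hours / 24
print(experience_days)  # 100.0
```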
    6:40
    So this is a really promising approach for some things, but it's much more of a challenge for others. So, for example, let's talk about walking. When you're just dealing with the Earth and your body, the physics of walking around is actually kind of simple. But if you want your robot to, say, try and pick up a mug off a desk or something, that's a lot more complicated.
    7:00
    More forces, you know, if you apply the wrong forces, these objects can fly away very quickly.
    7:05
    Basically, your robot will fling things across the room if it doesn't understand the weight and the size of what it's carrying. And there's more. You know, if your robot encounters anything that you haven't simulated 100% perfectly, then it won't know what to do. It'll just break.
    7:20
    Okay, so, Geoff, you've taken me from, like, optimist to pessimist. It's, you know, the road I take every day. I'm starting to think that AI is, like, never going to work that well in robots or, like, it's going to be a really long time.
    7:35
    You know, I'm sorry if I've, like, turned you into a pessimist here, Gina. And now I'm going to have to sort of whipsaw you back, because AI is already finding its way into robotics in ways that are really interesting. So for example, Ken Goldberg has co-founded a package-sorting company. And just this year they started using AI image recognition to pick the best points for their robots to grab the packages. And I think we're going to see a lot of that: AI being used for parts of the robotic problem, you know, walking or vision or whatever. It just may not arrive everywhere all at once. And to really end on a high note here, let's get back to that Stanford lab. Remember, I asked it to grab some trail mix, right? So the robot correctly identified the right bin, to Moojin Kim's relief. And then very, very slowly and kind of hesitantly, it reached out with its claw and picked up the scoop.
    8:36
    It's doing it.
    8:37
    Moojin, did I just program a robot?
    8:40
    You did. Looks like it's working.
    8:42
    And to my mind, it's incredible. Like, remember, nobody really programmed the robot, exactly. This is all the neural network learning how to move the claws and respond to the commands on its own. And to me, it's pretty wild that that works at all. And I think it's going to lead to some very cool developments.
    9:02
    Geoff, thanks for bringing us this piece on the frontiers of technological development.
    9:07
    My pleasure.
    9:10
    This episode was originally produced by Rachel Carlson and engineered by Jimmy Keeley. It was edited by Berly McCoy. Tyler Jones checked the facts. The Indicator version was produced by Cooper Katz McKim. Kate Concannon is our editor, and The Indicator is a production of NPR.
