The robotics industry is on the cusp of its own “GPT” moment, catalyzed by transformative research advances. Enter Memo, the first general-intelligence personal robot, focused on taking on your chores...
Nobody wants to do their dishes, nobody wants to do their laundry. People would love to spend more time with their family, with their loved ones. So what we believe is that if the robot is cheap, safe, and capable, everyone will want our robot. And we see a future where we have more than 1,000,000,000 of these robots in people's homes within a decade.
Thanks, Memo. Hi, listeners. Welcome back to No Priors. Today, we're here with Tony Zhao and Cheng Chi, cofounders of Sunday, makers of Memo, the first general home robot. We'll talk about AI and robotics, data collection, building a full stack robotics company, and a world beyond toil.
Welcome. Cheng, Tony, thanks for being here.
Thanks for having us. Yeah.
Okay. First, I wanna ask, like, why are we here? Because classical robotics has not been an area of great optimism over time or, like, massive velocity of work. And now people are talking about a foundation model for robotics or a ChatGPT moment. Can you just contextualize, like, the state of AI robotics and why we should be excited?
I would say I think we're kind of in between the GPT moment and the ChatGPT moment. Like, in the context of LLMs, what it means is that it seems like we have a recipe that can be scaled, but we haven't scaled it up yet. And we haven't scaled it up so much that we can have a great consumer product out of it. So this is what I mean by GPT, which is like a technology, and ChatGPT, which is a product.
Yeah. And so we're seeing across academia, there's consensus around what the method for manipulation is, but everybody's talking about scaling up. It's like we know there are signs of life for the algorithms people are picking, but people don't know, if we have more data, whether what happened with GPT-2 and GPT-3 will happen here. But we see a clear trend, and there's no reason to believe that robotics doesn't follow the trajectory of other AI fields, where scaling up is gonna improve usefulness.
Maybe even if you took a step back, like, what was the process for deploying a robot into the world, like, ten years ago, pre this set of generalizable AI algorithms? Like, why was it so slow as a field?
Yeah. So previously, you know, classical robotics had this sense-plan-act modular approach, where there's a human-designed interface between each of the modules, and those need to be designed for each specific task and each specific environment. In academia, that means every task is a paper. A paper is: you design a task, you design an environment, you design the interfaces, and then you produce engineering work for that specific task. But once you move on to the next task, you throw away all your code, all your work, and you start over again.
And that's also kind of what happened in industry. So for each application, people build a very specific software and hardware system around it, but it's not really generalizable. And therefore, it feels like we're just running in loops. We build one system and then we build the next one, but there's, like, no synergy between them. And as a result, the progress has been somewhat slow.
I feel like that's a good segue into some of the amazing research work that you guys have contributed over the last five years to the field. Should we start with diffusion policy? What was the impact of that?
Yeah. Diffusion policy is a specific algorithm for a paradigm called imitation learning. That's really the most intuitive way to use machine learning for robotics. You collect paired data, action and observation, of what the robot should do, you use that to train a model with supervised learning, and then the robot does the same thing. The problem is that in the field, it's known to be very finicky.
So when I talked to researchers, when I started in the field, people were like, the researcher themselves, the specific researcher, needs to collect the data so that there's exactly one way to do everything. Otherwise, either model training will diverge or the robot will behave in some weird way. And the diffusion model really allows us to capture multiple modes of behavior for the same observation in a way that still preserves training stability. And that really kinda unlocked more scalable training and more scalable data collection.
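To make the imitation learning setup above concrete, here is a minimal sketch of a diffusion-style action head trained with supervised learning, in the spirit of the diffusion policy idea but not Sunday's or the paper's actual code. The observation and action dimensions, network, and noise schedule are illustrative assumptions.

```python
# Minimal, illustrative sketch (not the Diffusion Policy implementation):
# a DDPM-style action head for imitation learning. Dimensions, network size,
# and noise schedule are placeholder assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_STEPS = 64, 7, 100  # assumed sizes / diffusion steps

class NoisePredictor(nn.Module):
    """Predicts the noise added to a demonstrated action, conditioned on the observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs, noisy_action, t):
        # t is the diffusion timestep, normalized to [0, 1]
        return self.net(torch.cat([obs, noisy_action, t], dim=-1))

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 2e-2, N_STEPS)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def train_step(obs, action):
    """One supervised step: corrupt the demonstrated action with noise and learn
    to predict that noise. Because the model only regresses noise, several valid
    actions for the same observation do not get averaged into one blurry mean;
    the multimodality is recovered during sampling instead."""
    t = torch.randint(0, N_STEPS, (obs.shape[0],))
    ab = alpha_bars[t].unsqueeze(-1)
    eps = torch.randn_like(action)
    noisy = ab.sqrt() * action + (1 - ab).sqrt() * eps
    pred = model(obs, noisy, t.float().unsqueeze(-1) / N_STEPS)
    loss = nn.functional.mse_loss(pred, eps)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

At test time you would start from Gaussian noise and run the reverse denoising steps conditioned on the current observation, which is where the ability to represent multiple valid actions for the same observation shows up.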
So it doesn't have to be you personally wearing, you know, a teleop setup in order to make a robot learn.
Yep. Yep. So, like, we can have multiple people, sometimes even untrained people, collecting data, and the result will still be great.
Where do ALOHA and ACT play into
this? Yeah. So these two papers are actually, like, super close to each other. They're, like, one or two months apart. That's actually how Cheng and I know each other.
It was about looking at each other's papers, and we met on Twitter, I think, when Cheng was back at Columbia. Before ALOHA, I think the typical way people collected data was with, like, a teleoperation setup with a VR headset. And it turns out to be very unintuitive to do, and it's hard to collect data that is actually dexterous. What ALOHA brings is a very simple and reproducible setup, so it's very intuitive.
Sorry, just for most people who haven't worn a teleop setup, is it the lag? Is it, like, just, you know, how should I compare it to, like, playing a video game or something?
Yeah. I think ALOHA makes it feel more like playing a video game. Normally, it feels kinda disconnected
Mhmm.
That you're just, like, moving in free air, and the robot is moving with some delay. Mhmm. But ALOHA reduces that delay by a lot, and that contributes to the kind of smoothness and how fast a human can react. Like, once we got that really dexterous data, what it allowed us to do is investigate algorithms that are actually solving things that are difficult. In this case, it's through introducing transformers in the case of robotics.
And there was a long period of time when I think robotics was stuck with three-layer MLPs and ConvNets, and as you made them deeper, they worked worse. But it turns out that once you have very strong and dexterous datasets, like, just throw a transformer at it, and it works quite well.
Actually, like, just in terms of the progress of the industry over time, transformers didn't make sense without a certain level of data collection capability. Okay.
And also other systems around it, for example, action chunking, which is to predict a trajectory as opposed to predicting single samples of actions. All these things combined to make dexterous, bimanual tasks more scalable.
Why is chunking important here, if I think about, like, the analogy to LLMs and, like, text sequence prediction?
I think it just kind of throws the model off if you're trying to force it to react every millisecond. That's not how humans act. We perceive, and we can actually move quite a bit without looking at things again. And that turns out to make the motion a lot more consistent and our performance a lot better.
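As a rough illustration of action chunking, the sketch below predicts a block of future actions from a single observation and executes the whole block before perceiving again. The `policy` and `robot` objects and the 50-step horizon are placeholders, not the actual ACT configuration.

```python
# Illustrative sketch of action chunking (placeholder interfaces, assumed horizon):
# one forward pass yields a short trajectory segment instead of a single action.

CHUNK = 50  # assumed: ~1 second of actions at a 50 Hz control rate

def run_chunked(policy, robot, n_steps=1000):
    step = 0
    while step < n_steps:
        obs = robot.get_observation()
        # Predict a whole chunk of future actions, shape (CHUNK, act_dim).
        action_chunk = policy.predict(obs, horizon=CHUNK)
        for action in action_chunk:
            robot.apply_action(action)  # executed open-loop within the chunk
            step += 1
        # Only after the chunk do we perceive again, so the model is not forced
        # to re-decide at every control tick and the motion comes out smoother.
```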
And you discovered that transformers actually did apply to robotics architecturally. Cheng, you felt then that data collection was still a problem, so enter UMI.
Yeah. So after ALOHA and diffusion policy, I was super excited about imitation learning. But at the time, both of us were still doing teleoperation, and that just felt super limiting. I think the problem is that a teleop setup at the time took a PhD student a couple of hours to set up in the lab. That pretty much restricts data collection to the lab.
But in order for a robot to actually work as a product, it needs to work in the wild, in unseen environments, and that requires data to also be collected in the wild. And at the time, I was thinking, okay, is there a way we can collect robotic data without actually using a robot? That, like, forced me to think, okay, what is actually the most essential part of robotics data? And after diffusion policy and ACT, actually, the paradigm is kinda simple.
You just need paired observation and action data. In our case, the observation is the video clip. The action is the movement of your hand plus how the fingers move. I realized that all of this information you can get from a GoPro. You can track the movement of the GoPro in space, and you can track the motion of the gripper and also the fingers through images as well.
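Here is a hedged sketch of what "paired observation and action data from a GoPro" can look like. The tracking and width-estimation callables are hypothetical stand-ins for visual pose tracking and image-based finger tracking; this is not UMI's actual code or API.

```python
# Hedged sketch of the paired-data idea: observations are camera frames, and
# actions are recovered from the tracked pose of the handheld gripper plus its
# finger width. The three callables below are assumed, illustrative components.
from dataclasses import dataclass
import numpy as np

@dataclass
class Transition:
    image: np.ndarray        # what the policy will see at deployment time
    delta_pose: np.ndarray   # relative 6-DoF motion of the gripper between frames
    gripper_width: float     # finger opening, also read off the images

def build_dataset(frames, track_camera_pose, estimate_gripper_width, relative_pose):
    """frames: list of RGB images from the gripper-mounted camera.
    Assumed helper signatures (hypothetical, for illustration only):
      track_camera_pose(frame)      -> pose of the camera/gripper in the world
      estimate_gripper_width(frame) -> finger opening in meters
      relative_pose(a, b)           -> 6-DoF motion taking pose a to pose b
    """
    poses = [track_camera_pose(f) for f in frames]
    widths = [estimate_gripper_width(f) for f in frames]
    data = []
    for i in range(len(frames) - 1):
        # Action = how the gripper moved next, plus the next finger width.
        data.append(Transition(frames[i],
                               relative_pose(poses[i], poses[i + 1]),
                               widths[i + 1]))
    return data
```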
And that's why I built this UMI gripper. It's 3D printed. At the time, the project had three PhD students. We just took the grippers everywhere. It was two weeks before the paper deadline.
Every time I went to a restaurant, before the waiter came, we'd just collect some data. And very quickly, we got, I think, 1,500 video clips of this espresso cup serving task. And that turned out to be one of the biggest datasets in robotics, collected by just three people. And that's where the power shines. And then with that amount of data, it allowed us to train the first end-to-end model that can actually generalize to unseen environments.
So we could push the robot around Stanford. Actually, Tony was there as well. You know, push the robot arm around the Stanford campus, and then anywhere, you
know, the robot can serve you a drink. Yeah. I think that is the moment I was like, hey, maybe we should start a company. This is actually working so well.
I remember, like, just following along and...
A few times it doesn't work well.
Yes. I think the only exception I saw was when it was under direct sunlight. Yeah. Right? And I think the reason was, like, over that whole, like, two, three weeks of data collection,
those two weeks it was all raining. So it's like, oh, there's no sunlight data, so, like, it fails. That also demonstrated the importance of distribution matching. So in order for a robot to work in a sunny environment, it must have seen sunny environments in the training data.
Yeah. It's really interesting, because I remember when I first met you guys, it was like you'd spent, I don't know, $200,000 across all of your academic research. And yet the scale of data collection, as translated to model capability, was leading. Right? So it's very interesting that, you know, we look at where we are, maybe going back to Tony's point about scaling and massive capital deployment.
But that entire paradigm actually wasn't relevant before people realized, like, you should train on all of the Internet data, and we just don't have that in robotics. So the entire field is just blocked on having any scale of data that's relevant.
Yeah. I think these days, there are still, like, ongoing debates about what is even the right way to scale. There are, like, world models. There are simulations. There is teleoperation.
There are, like, all these new ideas. And I think this is the sort of area where we really want to innovate, where we want to differentiate. We want to find something that is both high quality and scalable.
And then you guys decided to start a company, pushing this cart around Stanford. Tell me about that decision, and congratulations on the launch and sort of the direction and team you've built.
Yeah. It's a very interesting journey. I remember in the beginning, it was just the two of us in Cheng's apartment. On his desk, we clamped a robot and tried to do some tasks. And it soon became, I think, an eight-person team toward the end of 2024, and now we're at around, like, 30 to 40 people. We're not the best at everything.
Right? But starting a company allows us to find people who we really love working with and bring all the expertise together, from mechanical engineering, supply chain, software engineering, controls, and build a system together that is not a demo, but a real product.
You've built this amazing team. What are people actually signing up for? What's the mission of Sunday?
Yes. It is to place a home robot in everyone's home. I think there is a lot of AI trying to make you more efficient at work, but there is not enough AI that actually helps you with all these mundane things that are not creative, that really have nothing to do with what makes us intrinsically human. What's ideal for people to spend more time on is actually their hobbies, their passions, as opposed to spending more time doing chores.
So you guys are going from these amazing research breakthroughs to actually shipping a home robot, and for a product you have to talk about cost and capability and robustness. What's the design philosophy?
As these AI models become more capable and as hardware costs continue to go down, home robots, or all kinds of robots, will be everywhere. So if we start from the most surface level, which is the design of the robot: when we design it, we think about what the robot should look like if it is ubiquitous. You just see it every single day. What should it look like? And where we ended up is that we really think the robot should have a face.
It should have a cute face, and it should be very friendly. So instead of, like, a Terminator doing your dishes, we want the robot to feel like it's out of a cartoon movie. And then a huge decision is, like, how many arms should the robot have? Should it have, like, four arms? Should it have one arm?
Should it have legs? Should it have, like, five fingers, two fingers, three fingers? It's a huge space.
Why isn't the obvious answer that it should just be, like, a full human arm? I
think the core motivation for us is: how can we build a useful robot as soon as possible? So whenever we see something that we can accelerate with simplification, we'll go simplify that. One example of that is the hand that we designed, which has three fingers. We kind of combined three of the fingers that we have into one. Mhmm.
And the reasoning there is just that most of the time when we use those fingers, we use them together, whether it's grasping a handle or opening the dishwasher. So it really doesn't make sense to multiply the cost by, like, 3x to have them separated into three when we can do one with most of the benefits. And this is how we think about the whole robot as well. It's with the constraint that we are building a general purpose robot that can eventually do all your chores, and we will simplify everything we possibly can so that the robot can be as low cost and as easy to repair as possible.
Yeah. I just wanna add a little bit more about the actuators and, like, mechanical design. Traditionally, most robots are designed for industrial use cases.
Uh-huh.
And those robots are very fast, very stiff, and very precise. The reason is that all the industrial robots are blind, so they're blindly following a trajectory that's programmed by someone.
It's not reacting to perception. Correct.
But because of the breakthroughs we had in AI, now a robot has eyes, so it can actually correct its own mechanical and hardware inaccuracies. So that kinda, like, opened up a different design space.
Intuitively, it's like: I can't tell you exactly what the distance is on a millimeter scale, but I'm gonna get to the cup because I can stop.
Yeah. Exactly. So that allows us to use these low-cost actuators that are cheap and compliant but imprecise. But because of the AI algorithms and systems we build, we can build robots that are mechanically, inherently safe and compliant, while simultaneously being able to achieve the accuracy we need for home tasks.
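To illustrate the contrast being described, here is a toy comparison of blind trajectory replay versus vision-in-the-loop correction. The `camera`, `arm`, and `policy` interfaces are placeholders; the point is only that closed-loop perception lets cheap, compliant, imprecise actuators still reach the target.

```python
# Toy sketch (placeholder interfaces, not a real robot API): why "eyes" relax
# precision requirements. Open-loop replay demands stiff, precise hardware;
# a closed visual loop corrects cheap-actuator errors on the next step.

def open_loop(arm, trajectory):
    for waypoint in trajectory:      # industrial style: blind replay, so
        arm.move_to(waypoint)        # accuracy must come from the hardware

def closed_loop(camera, arm, policy, n_steps=200):
    for _ in range(n_steps):
        image = camera.read()
        delta = policy(image)        # small relative motion toward the goal
        arm.move_relative(delta)     # imprecise execution is fine: the next
                                     # image shows the residual error
```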
Where are we in that timeline? You said we're between GPT and ChatGPT. And so, like, when do consumers get ChatGPT and when will you guys ship something?
Yeah. It's actually a really exciting time because, like, we have so many prototypes internally. What we will do next year, 2026, is actually start doing beta programs. We'll put these robots, all kinds of different ones, into people's homes and see how they react. That will be when we learn the most about, like, how people... like, do people want to talk to their robots?
Do people want to have their robots maybe teach their kids some new knowledge about the world? And this will inform what the eventual product should look like. Internally, we just have an extremely high standard for the minimal consumer product we want to ship. It needs to be extremely safe. It needs to be extremely capable and low cost.
Do you feel like you know something now that you didn't when you started the company?
Absolutely. So I think at the beginning, I would describe it as, like, we see light at the end of the tunnel along two axes. There's dexterity. There's generalization. When we add more data, things work better.
And what this company is about is the cross product of those two: how can we scale and have both dexterity and generalization. And this is something we were able to show in our generalization demo, which is, like, we can pick up these very precise things, like actual metallic forks, off ceramic plates with very high success rates. And, honestly, this is not something we thought would work so easily just by having so much more data.
Yeah. So actually, I just wanna expand a little bit. The process was actually long and painful. There are so many issues. Just scaling up a system, a robotic system, is very, very hard.
There are mechanical issues, like reliability issues. There are, like, data quality issues that come out
of it. In the beginning,
I actually thought it was gonna be much easier than this, but, really, it just takes time and effort to grind out all the little details for this to work. I also think, like, compared to teleop, it's much harder to get this system scaled up. But once it's scaled up, it's very powerful and very repeatable.
So it is both harder than you thought it would be to get to here, and you are further than you thought you would be. Yes. Yeah.
And I remember in the beginning, we were having this, like, funny conversation of, if we build this, someone can just, like, take our glove, and they'll build the same thing. Like, what moat do we have? Are we worried about that? And anyway, in the beginning, actually, we were a little bit worried because we thought, like, oh, you know, they can probably just replicate it. But as we went along the path, it turns out things are so much harder than we thought.
There are so many small... Nuances. Yeah. Yes.
And when you say scaling up a robotic system, you mean the data collection to training pipeline and the hardware itself?
Yeah. So, actually, for this to work at all, you need the data collection system. Yeah. You need the robotics and control system to be able to deliver the hand to where you want it to go.
Yeah. You also need the data filtering pipeline and data cleaning pipeline and the training pipeline. And all these things need to be iterated together. So we've actually gone through several loops of these. It's kinda hard to imagine how this could even be done without having a full stack team in house.
Yeah. The glove we're using right now, we call it, like, v5. Mhmm. And from v0 to v5, each version has, like, around 20 iterations.
Okay. So, a hundred.
Yes. Yes. And, also, like, just when you make these at scale... right now, we have more than 500 people using these gloves in the wild. Like, all the things that could go wrong will go wrong. For example...
They did. They did. Yes. For example, like, how things are assembled. If you don't specify exactly how it should be done, people will assemble it in creative ways. And the creativity doesn't help us here, because, like, we really want the data collection device to be extremely precise.
So you guys obviously can't know everything that's happening in every company in academia and industry. But from what you know, how would you compare the scale of training data that you have today relative to the industry?
At this point, we are at almost 10,000,000 trajectories collected in the wild. And those trajectories are not just, like, oh, pick up a cup. They're these long trajectories with, like, walking, with navigation, and then doing these long-horizon tasks.
Tony, as you mentioned, like, it's an open question, actually, what the right way to scale data up is. And so there are strong theories around Teleop, around, like, pure RL, around video and world models. Like, how did you think about all of these?
Yeah. So from our perspective, actually, it's somewhat surprising. In the beginning, we worried that, you know, the data from the glove, or UMI-like data, has higher quantity but lower quality compared to teleop. Because for teleop, you're using exactly the same hardware and software stack between training and testing. It's perfectly distribution matched.
But what we realized is that this glove form factor actually encourages people to do more dexterous, more natural movement, and that actually results in more intelligent behavior on the modeling side. And in terms of, you know, data quality, we don't really see a gap between teleop and glove data.
After we did the, like, 20 engineering iterations? Yeah. Yeah. Like, because, apparently, there is a mismatch, right, that in the camera frame, there's a human hand instead of the robot. And there are just a lot of things that we need to do to convert the human data one to one, like, as if it is robot data, and have the model not be able to tell the difference.
Yeah.
And that kinda, like, relies on, again, the full-cycle iteration between hardware and software.
What about RL?
We see a lot of promise for RL in locomotion, and we think that will continue to be true for locomotion. So we see RL as a method that is very powerful, but it is much less sample efficient compared to imitation learning. And we see that it works great in environments that are easy to simulate. In the case of locomotion, you only need to worry about rigid body dynamics and rigid body contact between the robot and the ground. And, you know, because you engineered the robot, you know everything.
But for manipulation, it's kinda hard for us to imagine simulation actually having the same amount of diversity and the same distribution of real objects, in terms of matching both their appearance and their physical properties. And we think that's gonna be challenging compared to glove data collection and teleop.
Yeah. I think it's really about which method can get us there faster. There might be different methods that will eventually get there. For example, like, you know, simulation world model. Right?
And, like, it's almost a tautology to say that if I have a perfect world simulator, anything can be done there. Like, as long as you can do it in the real world, you can do it in the simulation, and you can, like, you know, cure cancer in a simulator. Right? But what it turns out for robotics is that some things are just harder than others, and it really depends on the problem itself. So in the case of locomotion, as I mentioned, all we need to model in a simulator are point contacts with a somewhat flat ground.
Mhmm. Like feet.
Yes. Yeah. But the behavior we want out of it is actually very difficult to model. Like, it's all these reactive behaviors: when you feel like your leg is hitting something, you should, like, retract and, you know, step again. These are very, very hard to describe or to learn from demonstrations directly.
But in the case of manipulation, I think the difficulty is flipped. It's a lot easier to capture the behavior itself, and it's a lot harder to simulate the world. Mhmm. For example, if you were to grasp a transparent cup with some orange juice in it, it's ridiculously hard to simulate how, like, your hand deforms around the cup and how all those ripples and the color of the juice result in the rendering and what the policy ends up seeing. Simulating that is very expensive and difficult.
But all we need to learn is just to, like, get your hand in front of the cup and then close with the appropriate amount of force, and that's actually very easy to learn. That's why we see so much success with imitation learning in the case of robotic manipulation: the behavior itself is actually not as hard as simulating the world, and that's why we see faster progress there.
Is there anything that you have changed your point of view on in data over the last year?
It's one thing I wouldn't say changed, but data quality really matters. I think I always knew data quality matters, but once you scale it up, it really matters. Because the diversity of behavior that you experience in the wild is very hard to control, and hardware failures are hard to control. You need to constantly monitor them. You need to spend a huge amount of engineering effort just to make sure that, you know, the data is clean.
Yeah.
And also building all those automatic processes. Yeah. Right? We have our own way of calibrating the glove before we ship it out. And we have this whole, like, software system to catch if something is broken on the glove, and we can detect it automatically.
It's like, the importance of data quality kinda translates into all these repeatable processes, so we don't need a human to be staring at the data to know that something is wrong.
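As an illustration of that kind of automated gate, here is a small, hypothetical trajectory filter. The specific checks and thresholds are invented for this sketch; the idea is simply that hardware faults and calibration problems get caught by code rather than by a human staring at the data.

```python
# Hypothetical data-quality gate (checks and thresholds are illustrative only):
# flag trajectories whose pose tracking drops out or whose motion is implausible,
# which typically indicates a broken glove or a calibration / time-sync problem.
import numpy as np

MAX_SPEED_M_S = 3.0   # assumed plausibility bound on hand speed
MAX_TRACK_GAP = 5     # assumed max consecutive frames with lost tracking

def passes_quality_checks(traj):
    """traj: dict with 'positions' (T, 3), 'tracking_ok' (T,) booleans, and 'fps'."""
    pos = np.asarray(traj["positions"])
    ok = np.asarray(traj["tracking_ok"])

    # 1. Tracking dropouts: long runs of lost pose tracking suggest hardware faults.
    gap, worst_gap = 0, 0
    for flag in ok:
        gap = 0 if flag else gap + 1
        worst_gap = max(worst_gap, gap)
    if worst_gap > MAX_TRACK_GAP:
        return False

    # 2. Implausible velocities: "teleporting" poses usually mean a calibration
    #    or synchronization issue rather than real human motion.
    dt = 1.0 / traj.get("fps", 30)
    speeds = np.linalg.norm(np.diff(pos, axis=0), axis=1) / dt
    return bool(np.all(speeds < MAX_SPEED_M_S))
```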
When you described the beta for next year, a lot of it sounded like, we just wanna understand behavior, like how people actually want to use it. We can make some design decisions for the actual product. What technical challenges do you still see?
So to me, I think there are two kinds. The number one is really figuring out the training recipe at scale. We as a field just entered the realm of scaling, and we just got the amount of data that we need. I think now is a perfect time to start doing research and actually figure out what exact training recipe we need to get robust behaviors. And I think we're in a unique position because of the amount of data and the entire pipeline we built around data.
The second point, I think, is just that hardware is hard. We're still pushing the performance envelope of the hardware. It's not really clear what is needed, what is necessary for the hardware to be reliable, because whenever the mechanical team builds hardware, the learning team will try harder to push it against the boundary, and then it'll break at some point. But I think what's interesting in this company is that everybody's under the same roof. So immediately after something breaks, it goes straight back into mechanical design, and then we have another iteration, like I said, for the hand parts very quickly.
Hardware is hard, but it is important, and I think it's a hard but right thing to do. And I think we as a field shouldn't avoid doing the hard things just because they're hard.
Yeah. I want to echo Cheng's point about, first, the research. I think when there is data scarcity, it is really easy to come up with, like, cute, fancy research ideas that don't end up scaling very well. And this is why, when we built the company, we actually focused on the infrastructure and a scalable data pipeline and operations before we started to, like, really dive into research, which we only started doing, like, three months ago. I think we really want to avoid doing research that doesn't scale.
We want to focus on things that contribute to the final product. The second point is, like, I think robotics is so intrinsically a systems problem, and right now there's no existing general purpose home robot out there, so we don't really know what the interfaces between the different systems should be. Like, what is even good? And in that case, if you're working with a partner, it's actually really hard for them to understand your standard of good, because your standard of good is changing all the time. Mhmm.
This is why we are, like, building everything in house in a more full stack approach: we build our own data collection device that is codesigned with the robot. We build our own operations team to figure out how we can most efficiently get the highest quality data out. And, of course, our own AI training team that makes the best use of this data. I think these are things that are really not easy. It makes the company a lot harder to build, in that you suddenly need, like, so many teams, and they need to orchestrate together.
But we believe it is the right thing to do.
Okay. I'm gonna ask you a few questions that require uncomfortable guesses now. When will people be able to buy robots commercially for the home?
Like, this is something we're really excited about, because we have so many prototype robots in our office, and we really wanna get them out there. So the next step of our plan is to have a beta program in 2026. And what that means is that people who sign up and that we select will have a real robot in their home, and it will start doing chores for them. And it's going to be a really interesting learning lesson for us, because we will see, like, how humans interact with the robots. We'll see, like, what kind of things people really want the robot to do.
I think this will be before we actually ship it to the masses, because we just have an incredibly high standard for what we are willing to ship from a consumer experience standpoint. We want the robot to be highly reliable, we want it to be capable, we want it to be cheap. The results of the beta program will really decide when is a good time to ship it. Is it 2027? Is it 2028?
But all of those are possible.
But it's not a decade away?
No. It's definitely not a decade away.
How much do you think it could cost?
Right now, for the prototype robots we have in house, I think the cost ranges from, like, $6,000 to something like $20,000. And this is actually pretty interesting: the big difference here is not, like, oh, we found a better actuator. They're using the same actuators, which are, like, very low cost. Actually, it's the cladding of the robot. When you're trying to make them at low scale, it's just really expensive.
Like, the cladding is, like, a few thousand dollars to make. But this is the type of thing that becomes, like, dirt cheap as we scale up. Because instead of, like, doing CNC, instead of hand painting them, it'll become injection molding. What we see is that as we get the scale to a few thousand units, we can drastically reduce the material cost, likely under 10k. And what that implies is that when we sell the robots, the price will be somewhere around that.
Okay. So that's, you know, two, three years out. If you look five years and beyond, when home robots are ubiquitous, like, what does life look like? How does it change for your average person?
This is a different answer for everyone. For me, like, I just really hate dishes. Like, in my sink, there are always, like, four or five dishes that are somewhat dirty, sitting out there, that kinda stink a little bit. And after a long day of work, it really doesn't feel good to come home and see a home like that. So I think the world we'll live in is... It's gonna be cleaner.
It's gonna be cleaner. And I was just thinking about it as, like, the marginal cost of labor in homes goes to zero.
The last thing I wanna make sure we do is, like, talk about demos. Right? There are a lot of robotics launch videos today. It's been years since you saw an Optimus serving drinks at a bar. Why are those not available, and what is actually hard?
Yeah.
I think the way I would put it is: make zero assumptions. No priors.
Okay.
As in... Nice. If you see a robot handing one drink to one person, first ask the question: is that autonomous, or is that teleoperated? So this is the first thing. So we should look at the tweet and see what they say about it. And then, does it show giving another, slightly different colored cup to the same person or not?
If they didn't show it, it means the robot can literally only pick up that single cup and give it to that same person. When we look at demos, we tend to put our human instinct into it. We're like, oh, if it can hand a cup to that person, it must be able to hand a different cup to another person. Maybe it can also do the dishes. Maybe it can do the laundry.
There's a lot of, like, wishful thinking that we can have about it, which is what's great about robotics, that there's a lot of imagination. But I think when we look at demos, only index on the things that are shown, and that's likely the full scope
of that task. I think another aspect is, at least for me as a researcher, I appreciate the number of interactions that happen in a demo. Usually, with every interaction, there's a chance of failure. So the longer the sequence is, the harder it actually is. And that's something we really emphasize here.
And that's actually somewhat uniquely easy for us, because the glove way of data collection is so intuitive to people.
Yeah. It's really about, like, generalization and reliability.
So can you explain the demos that you guys are showing?
Yeah. Of course. So we are showing basically three categories of demos. The first one, as you saw, is we have this whole messy table, and what the robot does is clean up the whole table: you know, dump the food into the food waste bin, load the dishes into the dishwasher, and then operate the dishwasher. What makes this demo really hard is that it's a mix of really fine-grained manipulation with this super long-horizon, full-range task, as in, like, you need to reach up high and go down very low.
It's a mobile manipulation task. Exactly. The reason we can show this is just how nimble and easy it is for us to collect these datasets, which makes this long-horizon, dexterous demo possible. And it's also about the forces as well. So you might have seen, like, we're trying to pick up two wine glasses with one hand.
Mhmm. I struggle with this, but yeah.
Yeah. It's actually really hard. And because they're, like, transparent objects, we need to also load them very precisely into the dishwasher. A lot of it is about how much force you apply. Mhmm.
Because if you are trying to grasp two in one hand and you squeeze a little bit harder, you're going to break one of the glasses. And when you load them into the dishwasher, if you're pushing in the wrong direction and it hits something, it's going to shatter. We did shatter a ton of glasses when we were, like, experimenting with it. So these are tasks that are really high stakes, where it's not just about recovering from mistakes, but about not making those mistakes in the first place. And this is generally the case in a lot of home tasks: you're just not allowed to make any mistakes.
And then we get into the generalization demos, where we basically book, like, six Airbnbs, and we get the robot there, zero-shot Mhmm. and see if it can do part of the task. So two tasks we use. One is to go around a table and collect all the utensils into a caddy. The other is to grasp a plate and then load it into the dishwasher.
What makes these demos very interesting is that we don't need any data when we enter that home. It's pure generalization. And this is as close to a real product as you can get. Because when someone buys our home robot, we really don't want them to, like, collect a huge dataset themselves just to unbox it. Also, in addition to the generalization, those two tasks are also really precise.
We're using the exact silverware in the home, and you need, like, basically a few millimeters of precision to grasp it properly. Those forks are also hard to perceive because they're reflective. Like, the light looks weird on them. We even had a home with a transparent table. I think to the robot the table looks like nothing, and it still, like, reacts very well to it.
And, again, the reason we can do it is because we have all these, like, more than 500 people, and we've seen so many glass tables in that dataset. So the robot is able to do it. I think the last bit of the tasks that we did is kind of pushing what's possible in terms of dexterity. The two tasks we chose: one is operating an espresso machine, the other is, like, folding socks.
Mhmm. What makes these hard is that they require very fine-grained force, which is hard to get if you're doing teleoperation. Because these days, there's not a good teleoperation system that can let you feel how much force the robot is feeling. Mhmm. So, basically, when you're teleoperating, your hand is numb.
Mhmm. And sometimes you are applying, like, a huge amount of force through the robot, but you don't know it. And that can result in, like, very low data quality, where the robot is also acting in that aggressive way, which we really want to avoid for our system. The sock is a very good example: when you're trying to fold it, your two fingers can touch. Mhmm.
And that forms what we call, like, a force closure. You have a closed loop for the force. And if your robot is stiff, you can apply an infinite amount of force there, and it doesn't look like anything. But for us, because we're using the glove to collect the data, the human who is collecting it can just naturally feel it. It's very intuitive.
I think we're the first in the whole industry to do sock folding and to operate, like, an espresso machine end to end.
One of the things that you will also need to scale, as you guys, you know, scale up the company, is the team. What are you hiring for? What are you looking for?
Thanks. Yeah. So one thing I'm really looking for is full stack roboticists, and people who aspire to become full stack roboticists. Really, what we've learned in this company is just that robotics is such a multidisciplinary field.
You need to know a little bit of mechanical, a little bit of electrical, a little bit of code, a little bit of data to actually fully optimize the system. And we have a couple of examples of training full stack software engineers to become roboticists, training engineers to become roboticists. And so if you want to learn about robotics, if you wanna learn the whole thing and not just be boxed into your small, you know, little cubicle, let us know.
And you told me that you didn't write code until you got to college or something.
Yeah. I was super enthusiastic about robotics, but I was mostly doing, like, mechanical and visual design before that. And then I realized, okay, the bottleneck is actually how the robot will move, and there's something called programming. And then the more I got into it, the deeper it got. And then toward the end of college, I realized, okay, there's a thing called machine learning, and you figure out how to train models.
I think these things just go on and on. It was very natural for me to gradually expand my skill set, because I'm always looking to build a robot.
Well, I hope you discover the next field because you're no longer doing dishes. Tough.
It's a very fun place to work. Whatever you can imagine about robotics and consumer products and machine learning, you can find it here because we're just fundamentally such a full stack company. We're not just about the software. We're not just about the hardware, but we're about the whole experience, the whole product, and making sure that product is general and, like, scalable in the future.
Awesome. Congratulations.
It's really exciting.
Find us on Twitter at no priors pod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at nopriors.com.