The Future of Voice AI: Agents, Dubbing, and Real-Time Translation with ElevenLabs Co-Founder Mati Staniszewski
Imagine learning chess from a grandmaster, or negotiating tactics from an expert FBI hostage negotiator. ElevenLabs' voice AI technology is making that unlock possible. Sarah Guo sits down with Mati ...
Hi, listeners. Welcome back to No Priors. Today, I'm here with Mati Staniszewski, the co-founder and CEO of ElevenLabs, which was founded to change the way we interact with each other and with computers with voice. Over three short years, they've skyrocketed to more than $300 million in run rate. Mati and I talk about the future of voice in education, customer experience, and other applications, as well as how to build a multi-segment business from self-serve to enterprise, and a combined research and product company.
Welcome, Mati.
Sarah, thanks for having me. And thank you for doing this at seven in the morning.
Our pleasure. Thank you for doing that at seven in the morning. It's great we got to finally do this together.
I think a lot of our listeners will have used or played with Eleven at some point, but for everybody else, can you just reintroduce the company?
Definitely. At ElevenLabs, we are solving how humans and technology interact, and how you can create seamlessly with that technology. What this means in practice is we build foundational audio models, so models in the space to help you create speech that sounds human, understand speech in a much better way, or orchestrate all those components to make it interactive, and then build products on top of those foundational models. We have our creative product, which is a platform for helping you with narrations, for audiobooks, for voiceovers, for ads or movies, or dubs of those movies into other languages, and our agents platform product, which is effectively an offering to help you elevate customer experience, build agents for personal AI, education, new ways of immersive media. But all of it is underlined by that mission of solving how we can interact with technology on our terms in a better way.
You started the company in 2022?
That's right.
And you've had amazing, like, rocket-ship growth since then. I'm sure it's felt up and down in different ways. I wanna ask you about that. Can you give a sense of what the scale of the company is today?
So we've grown to three hundred fifty people globally. We started from Europe. We started as a remote company and are still remote-first, but have hubs around the world, with London being the biggest, New York the second biggest, then Warsaw, San Francisco, and now Tokyo, and one in Brazil. We are at $300 million in ARR, which is roughly fifty-fifty between self-serve, so a lot of subscriptions and creators using our creative platform, and then approaching 50% on the enterprise side using our agents platform, on the classic sales-led side. We serve more than 5 million monthly actives on the creative side of the work, and then on the enterprise side, we have a few thousand customers, from Fortune 500s to some of the fastest-growing AI startups.
I think this is such a... you're an amazing founder, but I also think this is such an interesting company, because it is very unintuitive to, I think, many people, and investors in particular. I don't know if you faced this at the beginning, but I remember, I was there in 2022. There's a class of companies that allow creation in some way, when we look at your, like, first business beyond the research itself. And I would put Eleven and Midjourney and Suno and HeyGen in this category. And I think there's, like, this overall sense of, like, who really wants to do this?
What was your initial read of, like, how many people want to make voices, or what made you believe that was gonna be much broader? Because if I look at dubbing, for example, it's not a huge market.
The first piece, which is, as you mentioned, it's very tricky to do both the product and the research. I'm in the lucky position that my co-founder and I have known each other for fifteen years. I think he's the smartest person I know, and he has been able to drive a lot of that research work, to create the foundation that then elevates the experience. But both of us are from Poland originally, and the original belief came from Poland. It's a very peculiar thing, but if you watch a foreign movie in the Polish language, all the voices, whether it's a male voice or a female voice, are narrated by one single character, so you have, like, a flat delivery for everything in the movie.
That's a terrible experience.
It is a terrible experience, and it still is. As you grow up, as soon as you learn English, you switch and you don't want to watch content in this way, and it's crazy that it still happens to this day for a majority of content. Combining that with the fact that I worked at Palantir and my co-founder worked at Google, we knew that this would change in the future and that all the information would be available globally. And then as we started digging further, we realized...
So, in every language, in a high-quality way.
That was kind of it. And the big thing was, like, instead of having it just translated, could you have the original voice, original emotions, original intonation carried across?
Mhmm.
So, like, imagine having this podcast, but say people could switch it over to Spanish, and they still hear Sarah, they still hear Mati, the same voice, the same delivery. Which is kind of exactly what we did with Lex back when he interviewed Narendra Modi, and you could immerse yourself in that story a lot better. So that was the original insight. We then started digging further, which is that just so much of the technology we interact with will change. Whether this is how you create, it's still relatively tricky to bring voice alive. You need to go through the expensive process of hiring a voice talent, having a studio space, having expensive tooling to then actually adjust it.
The tooling isn't intuitive enough to be able to do this, so all that creation process will and should change to make it easier for new people with keenness to bring things to life. Then a lot of the technology wasn't possible, for you to be able to recreate a specific voice or be able to create it in that high-quality way. And then, of course, as we dived in further and shifted away from the static piece, the whole interactive piece is still crazy in the way it functions, where most of us have seen this technological evolution over the last decades, but you still spend most of your time on the keyboard, you look at the screen, and that interface feels broken. It should be that you can communicate with devices through speech, through the most natural interface there is, one that started when humanity started, and we realized we want to solve that. And I think now, fast forward from 2022, I feel like many people carry that belief too, that voice is the interface of the future, as you think about the devices around us, whether it's smartphones, whether it's computers, whether it's robots.
Speech will be one of the key ones. But in 2022, I think it wasn't a common belief, and as we thought about the market for the creative side or for that interactive side, it was very clear it would be a huge, huge one.
So even when you think about just the research part of your business, and then you have products for at least two different markets, and then you have this larger mission. A lot has changed in the last five or ten years, but it used to be a very strongly held traditional belief that, like, one must do one thing well in a startup, and there's no other path. Like, you're treating this like an interaction company, a platform company. How did you think about sequencing, like, the research and the product effort? Does that make sense?
Or, like, thinking about new markets? And maybe wrapped up in that question too is just, like, well, where are we on quality in voice as well? Because I would sort of claim, like, if the models are not good enough for certain use cases at all, it kinda doesn't make sense to do product.
And I think that's right. It's almost exactly that. When we started originally, what we did was try to use existing models that were in the market and optimize them for our first use case, which was actually a combination of narration and dubbing on the creative side. And we realized pretty quickly that the models that existed just produced such robotic, poor speech that people didn't want to listen to it, and that's where my co-founder's genius came in, where he was able to assemble the team and do a lot of the research himself to actually create a new way of doing that work. But, like, to your question, the way we are organized internally, and how we thought about sequencing a lot of that, was looking at the first problem and then creating effectively a lab around that problem, which is, like, a combination of mighty researchers, engineers, and operators who go after that problem. And the first problem was the problem of voice: how can we recreate the voice? And like you say, it needs that research expertise to be done well.
So we started with effectively a voice lab, which had that mission of, can we narrate the work in a better way? There was a combination of roughly five people doing that work, and we sequenced the research first, then built a simple layer on top of that work to allow people to use it, and then expanded from there with a holistic suite for creating a full audiobook, and then creating a full movie narration, a movie dub. And then we moved to the next problem, which came from the realization that, okay, we have solved the voice, great for making content sound human.
Mhmm.
That was the first problem. For this to be useful for us to interact with technology, you need to solve how you bring knowledge on demand into it. So we then started the second team, which was a second lab, an agents lab effectively, a team that would combine researchers, engineers, and operators once more, and would try to figure out: okay, we have text to speech. How do we now combine this with LLMs and speech to text, and orchestrate all those components together, while integrating with other systems to make it easier? And then similarly, you expand from looking just at the voice layer into how those systems work together, and here too, you need the research expertise to do that in a low-latency, efficient, accurate way. But at the same time, there's that product layer that starts forming, where it's not only the orchestration that matters. It's also the integrations, how you link up to the legacy systems, how you build functions around it, or how you deploy that in production and test, monitor, and evaluate over time.
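The cascaded loop described above (speech to text, then an LLM, then text to speech) can be made concrete with a small sketch. This is a minimal illustration, not ElevenLabs' implementation: the `transcribe`, `chat`, and `synthesize` stubs stand in for whatever STT, LLM, and TTS services you wire up, and a production orchestrator would stream between stages instead of blocking on each one.

```python
# Minimal sketch of one cascaded voice-agent turn: STT -> LLM -> TTS.
# All three service calls are hypothetical stubs, not a real vendor API.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    history: list[dict] = field(default_factory=list)  # LLM chat messages

def transcribe(audio: bytes) -> str:
    """Speech-to-text stage (stub); swap in your STT provider."""
    raise NotImplementedError

def chat(history: list[dict]) -> str:
    """LLM stage (stub); swap in your model of choice."""
    raise NotImplementedError

def synthesize(text: str, voice_id: str) -> bytes:
    """Text-to-speech stage (stub); the voice choice is a product decision."""
    raise NotImplementedError

def agent_turn(conv: Conversation, user_audio: bytes, voice_id: str) -> bytes:
    """Run one user turn through the cascade and return reply audio."""
    user_text = transcribe(user_audio)
    conv.history.append({"role": "user", "content": user_text})
    reply_text = chat(conv.history)
    conv.history.append({"role": "assistant", "content": reply_text})
    return synthesize(reply_text, voice_id)
```

The product layer the conversation keeps returning to (integrations, monitoring, evaluation) wraps around a loop like this one.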
Do you feel like you were creating new use cases? When you built the tools, did people know that they wanted to do this already? Because one argument, like, that I remember hearing was, ah, you know, enterprises don't know what to do with voice. How many people really want to do it? And then you're serving essentially, like, perhaps the creator-publisher side of things, right?
It's definitely a combination of, like, initiatives that we believe will happen in the world, and then, like, responses to a lot of that. So as I think back, you know, of course, the internal voice lab and then the agents lab kind of kick-started so many of the other labs in response to problems. We started a music lab because people wanted to create music with ElevenLabs, so it's a fully licensed model; people wanted to use and create speech, but they wanted to add music in a simple way, and we wanted to deliver that. And then, of course, that kind of came together through: how do we combine music, audio, sounds?
We are now integrating partner models from image and video into that suite, so how could you combine all of that in one? And all of that was in response to the market telling us, hey, we would love this. And then you have completely different use cases even in that space, let's say dubbing. Dubbing is a use case where we didn't feel there was a big push, but we knew that in the ideal world of the future, you will be able to have content delivered naturally across languages, still carrying the original delivery. And I still think this market will be immense, because it's not going to be only the static delivery in movies; if you travel around the world and want to converse in real time, like the full Babel fish idea from The Hitchhiker's Guide to the Galaxy, this will happen. It will be, like, the biggest one: breaking down language barriers, the barriers to communication, to creation. All of that will break, and that will be founded on the real-time dubbing concept. So super excited about that part.
And similarly, on the agent side, there are some obvious things that, of course, customers that we work with or partners will want to integrate, which is, we want integrations with XYZ systems. But then there are other parts that might not be as easy to predict, where, as you interact with technology, you of course want to understand what's happening, but you also want to understand how the things are being said, and bring that in by default, which is something we try to prioritize on our side. So then, when people actually interact with the technology, they realize, oh, this expressive thing is actually so much more enjoyable and beneficial and helpful.
So, I wanna ask you a question about this, which relates to quality. You know, I work with a series of companies where we're selling a product to the buyers. They're generally not machine learning scientists.
Right? Right.
And even the scientific community does not have the full suite of evals and benchmarks to understand every domain well. It's a well-known problem. But I imagine for a lot of your customers, it's not like they know how to choose good voice. So how do you deal with that problem? Like, is it, like, hey, I make a clone, and, like, that sounds like me, and I believe it?
I'm gonna try all of these different options? Or, you know, actually, are you teaching people to do evals?
It's a great question, because I think there are, like, two big problems. One is, like, how do you benchmark the general space in audio, where, like you say, it's so dependent on the specific voice, let alone, like, if you are taking it to interactive, then it's even more tricky. And then the second piece, which is, as you are working on a specific use case, how do you select a voice? So, I'll take the second one first, which is, we have a voice sommelier, effectively, as we work with enterprises. We deploy that person to work with them and help them navigate.
That person is like a voice coach, has an incredible voice themselves, and now we have, like, a team under that person that will partner with you to help you find the right voice for your brand.
And now you have, like, the celebrity marketplace.
And now we have a celebrity marketplace to help you even get iconic talent in there, like Sir Michael Caine. That piece was important because, of course, the voice will depend on the use case that you are trying to build, the language. All of that has an impact on what's the right voice for your customer base, so we have effectively a voice person helping those companies. And some companies will be very opinionated on what they want, so they will sometimes select it themselves, sometimes give us a brief of, hey, we want a voice that sounds professional, neutral, welcoming. We recently had a company, one of the biggest European companies, that gave us a brief, which is very original, that they wanted as robotic a voice as possible. Which was counterintuitive.
It feels like we can't do that anymore.
Almost. But we were, like, trying to go backwards: how do we do that? I think we got a good result. But recently we had a company in Japan and Korea where they wanted to serve different voices depending on the customer that's calling in. They have an older population and a much younger population. For the younger one, they wanted one of the famous voices in the market that's very excitable and happy, and for the older one, they wanted a calm, slow-speaking one.
We help a lot with that, so that's on the voice piece, and I do think it's going to be big and important.
So, like, personalized choice, and then it can even be dynamic per customer.
Yes.
Okay.
Exactly, exactly. And then maybe in the future, it's going to be fully dynamic: depending on your interaction, you will have a voice created as we understand people's preferences. So let's say it's the evening and you are tired and you want a slightly different voice. Or maybe not, maybe that's your best focus time, so you have a voice that gives you energy, and probably it's different when you wake up and it gives you the morning news of what's happening, or the weather. So, all of those could be different. Yesterday, we had a dinner with some of our partners, and the first thing one of them said is, like, hey.
I have a new request for you. I want a New York voice with a Long Island accent. Which I never knew was a thing, and supposedly in that territory it is a thing. So we have that. And then on the first piece, I think it's still an unsolved problem. You have good benchmarks, of course, in LLMs. I think in the image space, they are pretty good.
In the voice space, you have, of course, the speech quality, but then so much of whether or not you like the speech depends on the voice, such that if you compare model A to model B and you serve them with different voices, even if the underlying quality is different, the voice itself can change the comparison so much. We've seen this. You know, the Artificial Analysis benchmarks, I think they're pretty good, and just switching the voice makes such a big difference.
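One practical way to control for the voice confound Mati describes, if you run your own listening tests, is to hold the voice and script fixed and vary only the model. A minimal sketch of a pairwise preference tally under that control; the data layout and the `ask_rater` callback are assumptions for illustration, not a standard benchmark harness:

```python
# Minimal pairwise preference tally for TTS listening tests.
# Key control, per the voice-confound point above: only compare clips that
# share the same voice and script, so raters judge the model, not the voice.

import itertools
import random
from collections import Counter

def preference_test(clips: dict[tuple[str, str], list[str]],
                    ask_rater) -> Counter:
    """clips maps (model, voice) -> clip paths, same scripts in same order.
    ask_rater(path_a, path_b) returns 'a' or 'b'. Returns per-model wins.
    Assumes every model has clips for every voice (a full grid)."""
    wins: Counter = Counter()
    models = {m for (m, _v) in clips}
    voices = {v for (_m, v) in clips}
    for voice in voices:  # fix the voice...
        for m1, m2 in itertools.combinations(sorted(models), 2):  # ...vary the model
            for clip_a, clip_b in zip(clips[(m1, voice)], clips[(m2, voice)]):
                # Randomize presentation order to avoid position bias.
                if random.random() < 0.5:
                    choice = ask_rater(clip_a, clip_b)
                    wins[m1 if choice == "a" else m2] += 1
                else:
                    choice = ask_rater(clip_b, clip_a)
                    wins[m2 if choice == "a" else m1] += 1
    return wins
```

Averaging wins per model across several voices then separates model quality from voice preference, which is exactly the confound described above.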
That's so interesting. Yeah. And I wonder if, as you said, this is a mode we've had for millennia of human history.

I'm biased and self-serving, but I think so.
We're just very sensitive to it. And I think people are gonna be very sensitive to their own personalization as well.
A hundred percent. I think there's also a third piece, which maybe doesn't relate directly to your note, but we've also realized: you have the benchmarks, you have, like, how do I find the right voice for my audience, but even the understanding of how you describe audio data is still lagging in the industry. Like, when we initially started, we of course went to the traditional players for them to help us label not only what was said, so, like, transcription, but also how it was said, like, what are the emotions, the accent. And most people just weren't able to do that work effectively, because you kind of need to hear it and have a bit of a skill set for how you would describe a specific delivery, so we needed to create that ourselves. So I think there's that piece as well, of how you effectively interpret audio data on a more qualitative basis.
That's, yeah, trickier.
Can you talk about what's happening on the agents platform side? Like, what is challenging for, you know, businesses or even creators that are trying to build agents, and what are maybe the surprising or high-traction use cases? I think everybody's kind of aware of the idea of, like, agent-based customer support, but I imagine you're doing many things beyond that.
Yeah, exactly. Customer support is probably the one that's, like, kicking off the quickest, and that's the one where we see it overtake so many use cases, whether it's our work with Cisco or Twilio or TELUS Digital; all of them are elevating that to a high extent. I think the second exciting piece happening within that domain is the shift from effectively reactive customer support, I have a problem, I'm reaching out to customer support, into more of a proactive part of the customer experience. So, to make it explicit, we work with the biggest e-commerce shop in India, Meesho, where they started on the customer support side, I want a refund, I wanna see the tracking of the package, and moved to actually having an agent be a front part of the experience.
So if you go to the website, you have the widget. You can engage it through voice, and you can ask it, hey, can you help me navigate to item X, item Y, or can you explain to me what's the right thing for me to give as a gift for this period of time? And then it will actually help you: based on your questions, based on what is on offer, it will show you those items, navigate to the right parts of the site, maybe go all the way through checkout. And I think this will be a phenomenal thing, like, elevating the full experience, where it's more of an assistant across the whole thing. We kicked off our work with Square, which enables other businesses to do that work.
Exactly the same pattern. It started with voice ordering; now, how can this be part of the full discovery experience too, where you get items shown to you and you can have a lot more explanation? Which I think will be a phenomenal piece, where it's effectively from the beginning to the end. So, that's one category. The second one is the wider shift from static to immersive media, where there are just so many incredible stories and IP that today exist in effectively one mode of delivery, and now you'll be able to interact with that content in a completely new way.
I think one of the incredible use cases was working with Epic Games. We worked with them on bringing the voice of Darth Vader into Fortnite, where millions of players could interact with Darth Vader live in the game, where you had, like, a full experience of Darth Vader in a new way. And I think this will be a theme, whether it's talking to a book or talking to the character that you like, the whole space is shifting. And then I think the one that I'm most excited about for the world is going to be education, where you will just be able to have effectively a personal tutor on your headphones, and you could actually study something in an amazing way. I'll give you two quick examples. One is we recently worked with chess.com, and I'm a huge fan of chess.
I'm a huge fan.

Okay. Great. So you can learn chess, but you can have Hikaru Nakamura or Magnus Carlsen be your teacher of how you play, which is amazing, or even the Botez sisters; there's, like, a whole plethora of different players that engaged with that, which I think is great. And then maybe a last one, which is MasterClass, which we worked with to shift from, you can of course have the content go step by step, to you can also have an interactive experience. And the best example of that was working with Chris Voss, the FBI negotiator, one of the top negotiators, who has a MasterClass lesson, but then you can actually call him and have a practice negotiation, which is crazy.
Yeah. Gotta get that hostage out. We'll definitely try it.
Yeah. Can I add one more? The last one, which combines all of them together, which I realized just recently, was crazy. So, recently, I went to Ukraine, where we are working with the Ministry of Digital Transformation, where they are effectively creating the first agentic government, and the crazy thing is they have all of those pieces.
Agentic government.
Agentic government. So they want to rethink how they run all the ministries. And it sounds like a big, ambitious goal, and a lot to do.
No, I think the baseline is here, so actually I'm encouraged by that.
And the crazy thing is, I think they are so ahead in actually doing it, and there are two concrete things there. One, they combine all those use cases, so we are looking into how they can have effectively customer support for government, whether it's asking about benefits or employment, or about the process of how you leave the country, all of that run through effectively a digital app. Then two, how you can have a proactive way of informing citizens of things that might be happening, and then an education system that also runs through this personal tutoring experience. All of that is happening, so that was incredible to see. And the second amazing thing was the way they've done it: they have the digital transformation piece, but they have engineering leaders in each of the ministries who lead those efforts and then bring them back to that one central piece. That is incredible to see, and I'm proud to be working with them on that shift. But despite everything that's happening, they're, like, so ahead.
That's really encouraging. Can I ask you a business model question here? Because looking at the strategic landscape... actually, I have many questions here. One of the observations I'd have is, if I look at one of these, like, rich voice-and-action agent experiences: there are a lot of, let's say, Fortune 500, Global 2000 leaders who listen to the pod. I think a lot of them are gonna buy the idea of, like, I want this amazing, automatic, real-time-available, 24/7, every-language experience for my customers that's consistent and high quality.
The ways I might get there include working with a Palantir or a large consulting firm; working with Eleven or a platform technology company, or, like, an OpenAI or something, right? Let's talk about that. Or working with a sort of more use-case-oriented company like Sierra. Right? How do you think about how people are making that decision, or how they should make that decision?
So my past is also at Palantir, so I started exactly from that side, and we do blend a lot of forward-deployed engineering inside the company too. As I think about our offering and the customers making that choice: if you're looking for just one pointed solution and only that one, then likely we aren't the best choice. If you are looking to deploy across a plethora of different experiences, so be it customer support, but then you also want internal training, then you might want to elevate your sales side and actually increase the top line with new experiences of how you engage customers beyond that reactive piece, then it's a great platform to build on. And then, as we engage with customers, we effectively combine that platform work with our engineering resources to help those companies deploy on it. Or, which we also see increasingly in Fortune 500s and G2000s, they will want to build parts of the things themselves, because they already have a lot of investment in the platform, while engaging us on some of the new ones, and combine those. And I think that our model, and the way it's different from a lot of the use-case-specific ones, is that our platform is relatively open, where you can use pieces of that platform, and not all of them, for those different use cases.
Palantir, of course, or some of the consulting companies, will have a lot more resources to go on the wider digital transformation journey. In our case, it's, like, very specifically conversational agents.
Mhmm.
It's like, if you are looking for a new interface with customers, that's the best way. And companies like Sierra are phenomenal, of course, in how they are thinking about the specific pointed use case. And maybe the other piece is, as we think about our work, it depends on what you're optimizing for. We have a lot of international partners. If you have, like, a wider geographic user base, great, that's what we optimize for.
Our voices, our languages, our support for integrations internationally are just so much broader. That's frequently a piece that you will look into. Depending on your exact scope, this will be a big factor. But I would summarize that if you are looking for a solution across a set of different use cases, and you want our engineering help to deploy it, then we are the right solution and probably the best solution.
I wanna talk a little bit about maybe OpenAI and the LLM foundation model companies. One of the reasons Elad and I called this podcast No Priors is because we're like, okay, people are making a lot of assumptions all the time about how the market is gonna work, and lo and behold, like, many of those assumptions end up being nonsense, actually. And you have to very much decide your own narrative at this point in time. I think, correct me if I'm wrong, in 2022 and '23, you probably heard a lot of people say, like, Google can do this and OpenAI can do this, and, like, why do you get to persist working on voice anyway as a general capability? What's the answer?
That also adds another element to a couple of the previous questions. Whether it's the agents work or the creative work, to deploy the value in that work, you need a very strong product layer. You need integrations. You need to help people deploy the work, which is the most common piece. But our superpower and our focus for a long time was building the foundational models to actually make that experience seamless. And as I think about the companies in the market, they will optimize for a lot of other things, and that will be the differentiator.
In our case, we will make the whole experience, especially with voice, seamless, human, controllable, in a much better way.
And so fundamentally, you would argue that the labs just aren't gonna focus on this and haven't.
Exactly. So I think for most of those companies, and that's the thing about the long term: it's going to be incredible research and an incredible product that meets customers where they are and works backwards from there. I don't think the labs will focus on building that product layer that's so important. But part of the question you're asking is how, or why, they haven't done even the research part to the quality that we've been able to. Here, I'm also biased, but we are happily beating them on benchmarks with text to speech or speech to text or the orchestration mechanisms, and credit to my co-founder and the team that they've been able to do it; it's just mighty researchers continuing their work. But I think the main part that is different in the audio space is that you don't need scale as much as you need the architectural breakthroughs, the model breakthroughs, to really make a dent. We've been able to do that a couple of times, and I think the number of people doesn't matter, but which people you have does. We think there are maybe 50 to 100 researchers in the audio space that could do it.
We think we have probably 10 of them in the company, some of the best ones. And I think this obsession of having those people work across the stack, and then the company giving them full focus to actually work on that and bring their work to production, seeing how the users interact back, was so important. So that's, I think, how we've been able to create models better than some of the top companies out there. But, you know, the truth is, why to a large extent they weren't able to do it is also an interesting question. We don't know. They have such incredible talent there too.
How do you think at the same time about open source models?
Anyone you ask in the company, I think, will say the same. And the second narrative we think about is that, in the long term, models will commoditize, or the differences between them will be negligible. For some use cases, they will still matter. For most, they won't.
And they'll be broadly available and totally...

Exactly. Agree with that.
And we don't know when that is, whether it's two years, three years, four years, but it's going to happen at some stage. Then, of course, you will have a fine-tuning layer that will matter a lot on top of those models, but the base models, I think, will get pretty good. And that's why, for us, the product piece is so important, from the company perspective but also from the value perspective: if you have a model, that's great, but to actually connect your business logic and knowledge, to have the right interface for creating an ad for your work or a completely new material, that's a very different exercise. But open source models are getting... if I split it into two: for that async content narration, open source is great, commercial models are great, the differences are getting smaller on out-of-the-box quality. What most of the models haven't figured out, and I think we have, is how to make them controllable.
So that's the narration piece. Then there's the whole interaction piece of how you orchestrate the components together, whether that's the cascaded speech to text, then LLM, then text to speech approach, or whether in the future it's a fused approach where you train them together. I think this is good for customer support or customer experience, but it's still away from a conversation like the one we're having, from passing that Turing test. I think that's still at least a year out, like, within a year, and then you will have the real-time dubbing variation, real-time translated conversation, and I think that's maybe more like two years away.
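A rough latency budget makes it clear why the cascaded approach needs careful orchestration to feel conversational. All figures below are illustrative assumptions, not measured numbers from any particular stack:

```python
# Back-of-envelope latency budget for one cascaded voice-agent turn.
# All numbers are illustrative assumptions, not vendor benchmarks.

BUDGET_MS = {
    "endpointing": 200,       # deciding the user has finished speaking
    "stt_final": 150,         # final transcript after end of speech
    "llm_first_token": 300,   # LLM time-to-first-token
    "tts_first_audio": 150,   # TTS time-to-first-audio on that token stream
    "network": 100,           # round trips between stages
}

total = sum(BUDGET_MS.values())
print(f"Estimated response gap: {total} ms")  # ~900 ms with these assumptions

# Human conversational turn gaps are commonly cited at roughly 200-500 ms,
# so a cascaded stack only feels natural if the stages overlap: the LLM
# starts on a partial transcript and TTS starts on the first tokens,
# rather than each stage waiting for the previous one to finish.
```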
You know, a belief that I feel comfortable having, but that I think is uncommon in the market right now, is that actually most advantages in technology, like, they could last you a year or they could last you ten, but they're not, like, infinitely defensible. And if you think about that from a model quality perspective or a product perspective, they allow you to, like, serve the customer better and build momentum and build scale for some period of time. And actually, that's really powerful over time, right? But it's not, like, a clean forever answer, and so I think that makes, I don't know, business people and investors uncomfortable.
And I mean, it's very true as well.
I see it the same way. The way we think about it, research is a head start. It means we can give an advantage to the customer earlier, and it's six, twelve months of advantage. There's also the work of building the right product layer for you to get the best of that research. Frequently, we do that in parallel, so the moment the research is out there, you have the product, because we know our initiatives, we know what the right product is; so you have research and product in parallel, which extends that. But the thing that will really give long-term value is the ecosystem that you create around it, whether that's the brand and distribution, the collection of voices you can have, the collection of integrations you can build, the workflows that you can build. That's the way we sequence it in our minds: research, product, ecosystem. And research, all it is, is a head start, being able to pull the future a little bit closer.
I think that's a really powerful insight, especially if the research team and the company team believe that as well internally.
I think the piece that was interesting for us, and I think this is the big question for all companies that do research and product, is: do you wait for research, or do you make a product change? Or even for companies that aren't research-and-product companies: do you wait for someone else to do the research? Because the timeline for that isn't clear. Is it three months, six months, twelve months? You don't know exactly what it will yield, which makes for the hard choice of, do I invest in the product layer, or do I just wait longer for the research?
So in our case, we internally let all the product teams do research initiatives so we can parallelize that work, but we don't hold them back: if a product team thinks we should deliver value to the customer by doing something different, they can. And the rough rule of thumb is, like, three months. If we think the research is going to take longer than three months, we will probably build the product layer; if it's less than that, we probably won't.
Can you talk about some of the research that you're doing now and then how you think about the cadence of delivery and what's worth working on?
We now have a number of different initiatives across the audio space, and there are two big buckets; roughly, they relate to the creative and agent sides. On the creative side, what this means is, we did text to speech models that are controllable. We then added a speech to text model that transcribes with high accuracy, including across low-resource languages, so covering almost 100 languages. Then we created a fully licensed music model. And as you think about the future, it's how those models will also interact with some of the visual space, so there's a lot of effort in how you can get the best of audio and then potentially combine it with existing video you have, to really get the best delivery.
And then on the agent side, it's of course how you optimize real-time speech to text and real-time text to speech. We just released our speech to text model, Scribe v2, which comes in under a hundred fifty milliseconds with 93.5% accuracy across the top 30 languages on FLEURS. And it's only the top 30 here because we serve so many others that most people don't; it's beating all the models on benchmarks. But as you think about the future, it's also the orchestration piece of how you bring together speech to text, the LLM, and text to speech. We'll be releasing over the next couple of months a new orchestration mechanism that will lower the end-to-end latency, we think, in a great way. The second thing, which is what is so hard, is that it's not only going to allow you to combine those pieces, but also to add the emotional context of the conversation, so the model can actually respond, we think, in a more expressive, better way. And in the future, something we're investing in, in parallel, is a speech to speech, more fused approach as well.
And of course it depends on the use case. If you have an enterprise, reliability-critical use case, the cascaded approach is the approach for the next year.

Has more structure, yeah.

More structure; you have more visibility into each of the steps. It's reliable. You can call tools. If you want something more expressive, and it's allowed to hallucinate, speech to speech might be the choice, and maybe over time you'll see one overtake the other depending on the industry. But that's a huge investment on our side, and it's the foundation of the whole platform, the main part that we are continually investing in: a plethora of different models that combine the best of audio with some of the best of the other modalities together.
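The "you can call tools" advantage of the cascaded approach follows from its middle stage being an ordinary text LLM, so standard JSON-schema function calling carries over unchanged. A hedged sketch; the tool name and dispatch below are hypothetical, not any vendor's actual API:

```python
# Illustrative function-calling tool for a cascaded voice agent. Because the
# middle stage is a text LLM, ordinary tool schemas work; names are made up.

ORDER_STATUS_TOOL = {
    "name": "get_order_status",
    "description": "Look up shipping status for a customer's order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order number"},
        },
        "required": ["order_id"],
    },
}

def handle_tool_call(name: str, args: dict) -> str:
    """Orchestrator-side dispatch: run the tool and return text to the LLM,
    which then phrases the answer before it is handed on to TTS."""
    if name == "get_order_status":
        # Placeholder lookup; a real agent would query the order system here.
        return f"Order {args['order_id']} is out for delivery."
    raise ValueError(f"Unknown tool: {name}")
```

A fused speech-to-speech model, by contrast, has no text stage to intercept, which is part of the structure-versus-expressiveness trade-off described above.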
I wanna take our last few minutes and ask you a few questions about the future that I think you'll have a really good point of view on, given you think about voice and audio all the time. What do you think of AI companions?
I think they will be a big thing and exist in a big way. Not something I'm personally excited about or something that we spend much time on, but I think the whole line of, let's say, assistant, companion, character that you enjoy as part of an experience will kind of blur and blend to a large extent.
I think it can be very common, but you're not enthusiastic personally about it.
I'm more excited about the Jarvis version of that, more like, I have a super assistant, a super copilot.
Versus the social version.
Versus the social version. I think it just would be, like, such an incredible unlock, and it's something blending into the personal life context. I would love to start the day with, like, someone that understands me and tells me what's relevant to me, and opens the blinds and then tells me about the weather and the sunshine, and plays music straight away.
It's gonna happen.
It's gonna happen. That's what I'm excited for. The companion use cases you mention, solving loneliness, that's part of it. I think that's one way. Maybe there are different ways of engaging people back.
I do think there will be an interesting future, even if you think about education, where you will have superpowers from learning with AI tutors. But I think on the flip side of that, and this is my personal take, you will have education where a good percent of time is spent with AI tutors, but then an explicit percent of time is spent without any technology, human to human, so you learn that part too.
Yeah. I think this is the correct model, both in terms of emotional guidance and coaching and guardrails, as well as peer to peer learning.
Yeah. Exactly.
What do you think about dictation, or what happens in terms of how we, like, control technology that isn't necessarily personified as well? Or does it all just become personified?
I think not all personified. I think, like, some, you know, communication with an oven at home will probably, like, stay pretty static.
Or code. I might just...
Yeah. Exactly. Like, you probably don't need that much of, like, additional emotional input there. But I think it's going to be a huge part where, in a way, what I hope will happen is you will have the ability to, like, stay more immersed in real life, with the devices going back into the pocket, back into some version of an attached element, assuming that's the right setting, and that kind of acts on your behalf. And in many ways, like, let's say dictation: as Karpathy says, it's the decade of agents.
Let's call it a decade. Then you'll have a decade of robots. If you are interacting with robots, of course voice will be the input and the output as one of the key interfaces, so you will need that dictation as a huge part. But similarly...
I think the robot's gonna be personified.
Yeah. A hundred percent.
No, like, yeah. I think most of the use cases will be personified.
Okay, last one. What's, like, one thing that you've seen already exist today, or that if you project out a few years will change, about how we interact with content? Maybe it's, like, personalized voice content, or just something people are gonna do with AI voice that they don't do today, or that not everybody knows about.
I think the biggest one that hasn't yet kicked into the system is how education will be done. I think learning with AI, with voice, where it's on your headphones or on a speaker, is just going to be such a big thing, where you have your own teacher on demand who understands you, very personified, and delivers the right content through your life. I think this will be one of the biggest use cases, and I don't think it has happened yet. We see, of course, some of the commercial partners, but schools, universities, how that's deployed in a safeguarded way, in a way that supports the other part of education, the social part of education, I think all of that will evolve. And maybe there's a cool version of that where you have Richard Feynman or Albert Einstein deliver those lecture notes, or other teachers that you love. It'll be sick.
That's a great note to end on. Thanks for doing this, Mati.
Sarah, thanks so much.
Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at nopriors.com.