From applied cryptography and offensive security in France’s defense industry to optimizing nuclear submarine workflows, then selling his e-signature startup to Docusign (https://www.docusign.com/comp...
We, as a human species, like, we started to write because we didn't have, like, enough storage for the stories that we were telling each other. So we had to write to store those stories. Now, like, all the content can be stored on YouTube, on TikTok, or whatever. It's like, what's even the need to write? What's the need?
Because everything can be vocal. And I see kids now, they don't read articles. They want a TikTok video talking about the article. Being a bit more grounded, what does it mean about, like, the future of the user experience for email and communication? Will people still type, or will they just talk to emails and want to hear an email?
And this is where it becomes interesting, because Rahul, as a CEO, maybe next year he doesn't want to write to you about the new feature. Maybe he wants to talk to you. And then the way you will have received our marketing campaign about the new features is you, in your car, commuting, listening to Rahul talking about that.
Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.
I just realized I have the tough job of always pronouncing names. And I know, man.
You gotta prep. You go on YouTube.
Namepronunciation.com.
Loïc Houssier, welcome.
Wow. I'm impressed.
Did I get it right?
I know. You got it right. You got it right. I'm surprised. I thought you were making a joke, like, yeah, you know what?
You can say it however you want and everything, but, like, you nailed it. So I'm impressed. Thanks for having me, guys.
Yeah. Of course.
Thanks for coming by. So you're CTO of Superhuman Mail, which is the new name for Superhuman. I've been using Superhuman for a long time. I think I was one of Rahul's personal onboardings back in the day. And, yeah, I mean, we're here to talk about all things AI engineering, but you also have a lot of history: Productboard, Firstbase, DocuSign, and nuclear submarines.
Yes. Yes. That's kind of, like, the fun icebreaker that I give to people sometimes. Like, two truths and a lie. Like, I went into a submarine, and people are like, no way.
But I did. I did. I spent one year working around submarines. And
The trajectory is a bit weird. You were an engineer, and then you were sort of chief of staff on some submarines thing. Yeah. And then you went back to engineering.
I started, like, studying math. So I'm a math graduate. I was about to do, like, a PhD in applied math, in cryptography. So, like, crypto before crypto, to some extent. It was cool for a moment, and then I was like, no way.
Spending, like, three years of my life on the same topic? But in the same lab, there was, like, a bunch of people doing, like, security, offensive security type of stuff, and I was like, that's what I wanna do. So I was basically an engineer, I would say a security researcher, in that lab. But I did that in a pretty big corp.
First in telco and then in the defense industry. And in the defense industry, they have this nice kind of, like, career framework: you're young, "high potential"-ish, in quotes. So they want you to do, like, different types of jobs and kind of, like, have a spiral career so that you, at some point, eventually reach the C-level. So they gave me the opportunity to be out of the tech industry for a year. And I went to a harbor, and I was there as, I mean, a financial controller, process-improvement type of person, basically helping people do a better job, which was interesting because I had no clue.
Torpedo systems, radar systems, like, even, like, the nuclear engine inside a submarine. But still, I had to help people take a step back from what they were doing and everything. And that was really fun because I came from Paris, came with my tie and my suit and my ego. I was used to driving people through my technical legitimacy in the security space, and all of a sudden, I didn't have any technical legitimacy at all. But I still had my ego.
So, like, it was a pretty fast ramp-up. I had to, like, put my ego in my pocket and basically drive by questioning people. Like, how does that work? Like, help me.
Like, I don't get it. And just by questioning, I kind of, like, built a new skill, which is, like, getting curious and understanding how people are working, and being comfortable facing people that are way smarter than me, who know their field better, but probably having a way to ask questions to help them, like, identify gaps or productivity gaps, for example. So that was cool, but I missed the tech. So I moved back to the tech industry after basically two years.
Yeah. What are some of the other maybe highlights or stories you haven't told about other experiences? I mean, DocuSign is another product that we all use. Yeah. Any others? No.
DocuSign was cool.
I mean, DocuSign was cool because it was
an acquisition. OpenTrust and DocuSign.
Yeah. Yeah. So I was the CTO of a small company in Paris. And we were, like, a typical, I would say, European company, Alessio. So, like, very focused on the tech, not very focused on the marketing.
And we were, like... we were one of the biggest signature companies in Europe, but it's a very fragmented market. So we were winning France, starting to expand, and DocuSign is coming. And we're like, hey, guys, we need to do a partnership and everything. And pretty soon, they understand that the European market is tough and, like, the technology behind DocuSign is not sufficient: lack of standards, lack of compliance, and everything.
So pretty soon, they were like, with us or against us. But the way they were explaining the value, I was like, holy cow. Like, we're not talking the same language. We're doing the same job. We're selling the same type of software.
But, like, we were talking to CIOs from a technical standpoint. They were talking to heads of HR, heads of functions, and selling them the value. So pretty fast, it was easy for us to understand: wow, wow, wow, that's not the way to sell a product, better to partner with them. So they did an acquisition, but it's not a full acquisition.
It was a security-oriented company with two business lines: one doing signature, which is, I would say, the one that DocuSign was interested in. The other piece was doing strong authentication. So PKI stuff, SSL certificates, those types of things. And we were working for the Department of Defense in France. So we had the Ministry of Finance in France basically saying, no way.
No go. You cannot sell. So we had to do a carve-out, which is, like, the funniest acquisition type you can do. So you have your team. You need to divide everything: your team, your systems, your source code, and all of that.
Even your data center: you have to replicate it and get rid of, like, all the shared systems and everything. So we did that for something like six months to be able to sell the new carved-out company to DocuSign. Crazy. Don't do that.
Are you still involved at all with, like, the French startup ecosystem? I'm curious, like, how you've seen things evolve since then.
Yeah. It's pretty interesting. Like, I've seen a I've seen a change. Now that I'm getting some gray hair and I have some experience, like, I try to give back to some extent, I spend more time helping, like, the ecosystem there. But it's funny to see, like, the difference.
Like, when you're here, we live in a small bubble, and it's crazy to see how even, like, other tech scenes are different. So, like, the grit, like, to get shit done and, like, to move forward and everything. They have great education... when I say "they"... sorry, like, "we", I don't know where I am now.
But so: great education, great engineers, and all of that, but not the mindset of, like, creating things. So not a lot of entrepreneurs, that much. It's changing. We've had, like, successes in Europe. Especially in AI, like, there's some cool stuff happening.
But still, like, the way to think about product-led growth, like, Superhuman nailed it, but ways to think about, like, how to structure your organization to scale fast, the level of ambition as well, how to, like, maybe not target France or target Italy to start with, but target English and the world from the get-go: that would be something to think about. So I'm doing that quite a bit, highly rewarding, and it's, yeah, it's pretty cool.
There's a common question that people have about DocuSign that I'm just gonna indulge. What do all those people do at DocuSign? I love it. You know this is a meme, right?
No. No. No. It's a meme. I'm sure inside DocuSign...
Why do you need, like, so many people?
You have signing. Why do you need 3,000 engineers?
It sounds crazy, but, like, you want to go to Europe? You need a different product. You need a different team to build your local data centers because of the compliance. You cannot just run your data centers from The US. So you need the local team there.
Oh, and by the way, the way to do digital signature in Europe, totally different. So like the stack itself is different. So like the way to make a digital signature is different, not the same standards and the same ways. So you need dedicated team to maintain that thing. The same way, some people want to have DocuSign on prem.
So you need a team building appliance to basically plug and play and, okay. You have your DocuSign appliance.
There's a DocuSign box?
There's a DocuSign box. Wow. An acquisition made in Tel Aviv at the time. Wonderful people building, like, security appliances where, like, you shake the box, the keys disappear. Like, if someone is, like, stealing your box, no one can sign in your name.
You're kidding me. Oh my god. I mean,
some banks.
What if there's an earthquake?
Yes. That's a good question. They are mounted, like, on some, like... so there's, like, earthquake mitigation, I would say, associated with this. So just that. But, like, the same applies to FedRAMP.
Yeah. Dedicated teams, dedicated data centers. And, like, oh, and we need, I would say, to have, like, DocuSign running in Canada because of data residency. Oh, we need the same in Australia. Okay.
Cool. And now you have, like, something even different. We want Japan as a market. Oh, but Japan is not signature. It's a hanko.
It's kind of like a stamp. So you need a team to understand how Japanese market is thinking about even processing an agreement. Totally different. And then you have, like, verticalization, like some different verticals and everything. I mean, it's a good business.
It's well run, and, like, people are not coasting there. So there's a lot of work. And it's very interesting to see it from the inside, because when you see those memes, you're like...
Right.
Yeah. I know. But, damn, I see.
I mean, people know... you were the VP Eng, so you know. You know.
You actually know. Yes. Yeah.
Yeah. Yeah. I just wanted to get that.
Obviously, hope it's providing some... This episode is not about DocuSign,
but we have to ask.
No. Yeah. No. Of course. Of course.
Totally legit. Let's talk about Superhuman. So you joined January 2025. Yes. Just give people a lay of the land of, like, Superhuman AI.
I think a lot of people that are listening are familiar with the email client. Yeah. I think the AI stuff is generally new. So just maybe you can get the canonical definition of what you wanna do with AI in Superhuman, and then we'll kinda
dig through. The main driver is how you can put AI in the product to accelerate people's productivity. It's not to just, like, do AI things and, like, sparkles and everything. We don't care that much about that. Our users have pretty high expectations, and they don't want to slow down.
So you cannot add latency. You cannot... everything that we do is done in a way to improve people's productivity. AI included. The first thing that we started to do is auto-label emails. Like, is it a pitch?
Is it marketing? It's kind of, like, typical classification that you could do. And so people can say, okay, anything that is a pitch, I will look at, like, on a Friday.
So, like, during my typical days, I don't look at it. So, like, that was one of the first things. Summaries. Like, you have a long thread: what is this thread about that someone, like, shared with me?
Okay. You have, like, a quick summary. So nothing that was very, like, groundbreaking, but, like, just well thought out. Just, like, adding things that make sense at the right time. Another example is now:
Like, we automatically detect if one of your emails requires an answer. And if there's no answer after two days, it pops up: hey, this one needs... I would say, you need to send another email to the person because you didn't get an answer. So that was the first step. Second step was, like, you know what? The draft is already ready.
You can just hit send. So it's very subtle, but it's like adding a, oh, damn. Shoot. Yes. I wanted to remind people to to give me an answer.
And the draft is already there. Pretty cool. Send. And now we have, like, more and more of that. Now it's detecting, oh, this is a request asking for your availability.
Oh, you have an executive admin that is doing that for you. Your draft is like, hey, let me cc the right person. And boom. So that it's ready. It's in.
It's done. And the typical chatbot, because more and more of the use case we see in people using AI inside Superhuman, is to query your emails. A good example, I would say, tech people, we receive like a bunch of Substack, like, bunch of newsletter. I would say some are great. Sometimes, like, the content is meh.
I probably have, like, I don't know, thirty, forty subscriptions, because everyone has, like, something interesting to say at some point on some matter. Now I don't read them. I auto-archive those. And, like, every week on the Friday, I just, like, ask AI, which is the name of the feature, Ask AI. I ask my email: tell me, like, the summary of all the Substacks that I received this week.
What should I pay attention to? Mhmm. And then I can deep dive on the pieces where I want to pay attention. So this is always thought of, you know, in a way to accelerate, I would say, the pace and try to not be in your way, hopefully. Feel free to ping me if that's not the case.
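The auto-labeling flow described here, classify each incoming email and route it by label, can be sketched roughly as below. The label set, the routing rules, and the keyword classifier standing in for the LLM call are all hypothetical, not Superhuman's actual implementation.

```python
# Sketch of auto-labeling: classify each email into a fixed label set, then
# route it (e.g. pitches and newsletters get deferred to a Friday review).
# `classify` stands in for an LLM call; it is injected so the routing logic
# can be tested without a model. All names here are made up.
from dataclasses import dataclass, field
from typing import Callable

LABELS = ["pitch", "marketing", "newsletter", "personal", "other"]

@dataclass
class Inbox:
    classify: Callable[[str], str]                  # email text -> one of LABELS
    deferred: dict = field(default_factory=dict)    # label -> emails for Friday
    now: list = field(default_factory=list)         # surfaced immediately

    def ingest(self, email: str) -> str:
        label = self.classify(email)
        if label not in LABELS:
            label = "other"                         # guard against a misbehaving model
        if label in ("pitch", "marketing", "newsletter"):
            self.deferred.setdefault(label, []).append(email)
        else:
            self.now.append(email)
        return label

# Trivial stand-in classifier; a real one would prompt an LLM with LABELS.
def keyword_classify(text: str) -> str:
    t = text.lower()
    if "invest" in t or "pitch" in t:
        return "pitch"
    if "unsubscribe" in t:
        return "marketing"
    return "personal"

inbox = Inbox(classify=keyword_classify)
print(inbox.ingest("Would love to pitch you our seed round"))  # -> pitch
print(inbox.ingest("Hey, lunch tomorrow?"))                    # -> personal
```

The point of injecting `classify` is that the deterministic routing stays testable while the model behind it can be swapped.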
I don't know if this is a recent change, but I feel like Ask AI, I've started using it a lot more. I've been a Superhuman user for many years. And you've had it a while, but somehow this year it kicked up a notch. And I don't know if it's because anything changed in the product, because I wasn't using it before, or is it just me trying it again?
Now, that's a good question. Yeah. That's a good question. I think people are more and more used to the muscle of querying things, because of ChatGPT and...
So the general consumer behavior...
Yes, exactly. So the user experience... people, I mean, now every single product has a chatbot where you can ask questions. So it's becoming, like, more and more natural to ask questions compared to managing, like, a to-do list of emails.
And agentic search as well. Like, previously, it was like, oh, you have to embed my documents, and then it's just gonna retrieve. And, like, that's not what I want. But agentic search, where you can actually figure out what I mean when my question, when I ask, is, like, half-formed: you expand it, and then you actually answer it. It's actually really good.
Yeah. And we spend a lot of time on the quality of the answers. So quality of the answers, and you've talked about the agentic framework. But one thing that is...
And this is a framework... it's not, like, LangChain, right? It's, like, your own framework?
Yes. I mean, we've done a lot of iteration. And there are a lot of subtleties in, like, multiple pieces there, and multiple different models based on what they're, like, really good at. But where we spent quite some time lately is, like, around quality, making sure, across different dimensions, that we are generally good for typical queries and really optimizing for them. And especially one thing we try to solve for is agent laziness.
So through this chatbot... one of my use cases is I receive a Slack, and it's like, hey, Loïc, can you review this document, please? Because whatever, it's a tech, I would say, tech strategy document, or I need to review the doc. I take the link. I go to Ask AI, and I basically paste it and say, hey.
Find me fifteen minutes tomorrow. I need to review this doc. And I don't need typically the agent to say, hey, I found this slot and this slot and this slot. Which one do you prefer? I just ask for fifteen minutes.
Find it. Do it. When I had an admin and I was asking her, like, on Slack, find me fifteen minutes, she's not asking me if I want it in the morning or the afternoon. She's just doing it.
So we worked on this agent laziness, because the handoff they were doing to the user was losing time. So, working on making things happen faster. We spent a lot of time on this. So that's why you might have felt that the overall quality is better.
Yeah. My old joke was, because the way that you trigger it is you actually type it in the search bar, and when I was trying to normally do search, it would sometimes accidentally trigger the Ask AI. And my joke was, like, most of my AI usage is just accidental, because I actually wanted to just search. But then I started just using it more.
And then the kind of questions that you ask changes.
Yeah. I use it to, like, find people's phone numbers, stuff like that. It's like, hey, what's...
I use it to find my contracts. I have so many contracts, right, from all my sponsors and, like, venue things, like yeah.
Yeah. Yeah. One of the use cases that, I would say, blew my mind: I was looking for, like... I was at a conference. They shared with me, like, a PowerPoint link, and it was, like, six months ago. And I couldn't find the deck, because I wanted to reuse some of the content and everything.
Couldn't find it for whatever reason. I was like, yeah, I'm pretty sure they shared with me, like, a PowerPoint link or something like this. Can you find it? And it found the context there and the link.
I saved, like, probably thirty minutes of searching through my emails. So it's pretty cool.
So it's... Yes. Because there's no way you can fit all your email into a context window. Yeah. Right?
No. Anything else that's more complicated than...
So we have to do some pagination. Because if you do, like... let's say I'm doing that: oh, I'm pretty sure I attended a conference where they shared, like, a link with me. In my case, I don't do, like, plenty of conferences, but someone like Rahul, my CEO, is basically doing a conference every three weeks or something. Not kidding.
But the use case...
That is his job.
That is his job,
and he's fantastic at it.
And it
damn, I'm learning so much from him. But, clearly, depending on the use case, I mean, of course, you can have more than forty, thirty, like, even hundreds of emails that can semantically be close to your answer. So you need to go through that. So we had to implement a paginated search.
So, like, semantic search for, like, the first, I would say, 40. Deep search those. Not that one? Okay. Next 40. Next 40.
So it's kind of, like, using this agentic loop. And while you haven't found the answer, continue, and even extend, like, the semantic search proximity until you find the right one, because it might be buried on page two of the search results, technically.
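The paginated agentic loop described here, pull semantic matches one page at a time and keep going until a judge finds the answer, can be sketched as follows. `search_page` and `judge` are hypothetical stand-ins for the vector search and the LLM relevance check.

```python
# Sketch of paginated agentic search: fetch candidates a page at a time,
# ask a judge whether the answer is in the page, and keep paging until it
# is found or the candidates run out. Page size of 40 mirrors the example
# in the conversation; everything else is made up.
from typing import Callable, Optional

def paginated_agentic_search(
    query: str,
    search_page: Callable[[str, int, int], list],   # (query, offset, limit) -> emails
    judge: Callable[[str, list], Optional[str]],    # returns the answer or None
    page_size: int = 40,
    max_pages: int = 10,
) -> Optional[str]:
    for page in range(max_pages):
        batch = search_page(query, page * page_size, page_size)
        if not batch:
            return None          # ran out of candidates
        answer = judge(query, batch)
        if answer is not None:
            return answer        # found the needle; stop paging
    return None

# Fake backends to exercise the loop: the "needle" lives on page three.
emails = [f"email {i}" for i in range(100)] + ["the PowerPoint link you asked for"]

def fake_search(q, offset, limit):
    return emails[offset:offset + limit]

def fake_judge(q, batch):
    hits = [e for e in batch if "PowerPoint" in e]
    return hits[0] if hits else None

print(paginated_agentic_search("find the deck", fake_search, fake_judge))
```

In a real system the judge call is where the "laziness" fight happens: the loop, not the user, decides whether to keep going.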
How did you design the tools you give to the agent? Just maybe give people an overview of, like, the framework, what it looks like. Like, how are you structuring these interactions? Is there just one Superhuman agent that does everything?
Or, like,
do you have separate ones?
We have separate tools, clearly. So even an agent, like, I would call it a tool... I would say tools. So there's a bunch of tools: a tool to detect your availability, a tool to understand who the people you interact with are, a tool to write an email, a tool to... like, every single action is very tool-specific. So it's not one magic tool that can do pretty much everything. It's a set of small tools that are used within the agentic framework.
Like, there's a first step that is, like, hey, what is the best tool to do this? Kind of, like, building a plan: for each step, what is the tool, and then making the calls.
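The plan-then-call structure described here, a registry of small single-purpose tools plus a planning step, might look like the sketch below. The tool names and the keyword planner standing in for the LLM are hypothetical.

```python
# Sketch of a tool-based agent loop: a registry of narrow tools, a planner
# that maps a request to (tool, argument) steps, then sequential execution.
# The planner here is a trivial keyword matcher standing in for an LLM.
from typing import Callable

TOOLS: dict = {
    "find_availability": lambda arg: f"free slots for: {arg}",
    "lookup_contact":    lambda arg: f"contact info for: {arg}",
    "draft_email":       lambda arg: f"draft written about: {arg}",
}

def plan(request: str) -> list:
    """Map a request to (tool, argument) steps; a real planner is an LLM."""
    steps = []
    if "fifteen minutes" in request or "availability" in request:
        steps.append(("find_availability", request))
    if "cc" in request:
        steps.append(("lookup_contact", request))
    steps.append(("draft_email", request))           # always finish with a draft
    return steps

def run(request: str) -> list:
    return [TOOLS[name](arg) for name, arg in plan(request)]

for out in run("find me fifteen minutes tomorrow and cc my admin"):
    print(out)
```

Keeping each tool narrow means the planner, not the tools, carries the intelligence, which is what makes model swaps cheap.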
Yeah. I think now the tools-versus-skills thing that Anthropic talked about is, like, the hottest question of how much you wanna put in, and there's, like, the MCP discussion. I'm curious how you evaluate the tools, too. Like, when you build them, do you think about how to name them? Like, how to give the description?
It's like how much work have you had to do to nail it?
I don't think we spent that much time on it, and, again, like, I will defer to my three engineers working on it. Which is interesting: we can talk about, like, the amount of people you need to work on those stacks when you want to be serious. And I have fantastic people, so I feel blessed. And most of the time was spent trying, like, the different agentic frameworks, trying to understand the different models, which ones solve which types of problems, because every single model is good at something. Sonnet was really great for, like, agent handoff.
Like, on the laziness, it was really great. OpenAI's version of it was not that good. Now we have Gemini coming into the room, like, last week. Like, okay, that one is cool as well.
So I think, I guess, everyone has, like, a way to, for one, switch easily from one model router to the others.
Like model routers.
Everyone has, like, an LLM proxy to some extent, and, like, an agent proxy to implement different stuff, which is becoming interesting because the way to tweak them and tune them is different. So it's still easy to switch from one agentic framework to the other. But at some point, I think it will be harder and harder, and the stickiness of them will be tricky. But to answer your question, we didn't spend that much time on the tools themselves, I believe.
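The LLM-proxy idea mentioned here, one call site behind a routing table so a model swap is a config change, can be sketched like this. The task names, model names, and provider callables are hypothetical stand-ins, not real SDK calls.

```python
# Sketch of an LLM proxy / model router: a table mapping task types to
# (provider, model), so re-routing "handoff" traffic from Sonnet to Gemini
# is a one-line table edit. Providers are fake callables for illustration.
from typing import Callable

# task -> (provider_name, model_name); edit this table to re-route traffic.
ROUTES = {
    "handoff":        ("anthropic", "claude-sonnet"),
    "classification": ("baseten",   "bert-classifier"),
    "deep_search":    ("google",    "gemini"),
}

PROVIDERS: dict = {
    "anthropic": lambda model, prompt: f"[{model}] {prompt[:20]}...",
    "baseten":   lambda model, prompt: f"[{model}] {prompt[:20]}...",
    "google":    lambda model, prompt: f"[{model}] {prompt[:20]}...",
}

def complete(task: str, prompt: str) -> str:
    provider, model = ROUTES[task]      # pick the model for this task type
    return PROVIDERS[provider](model, prompt)

print(complete("handoff", "find me fifteen minutes tomorrow"))
```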
How do you think about evals? Are you just eval-ing one email draft at a time? Are you eval-ing a longer workflow? Just run us through, like, yeah, when you're testing Gemini, how do you decide what it's good at, what it's not good at? What's, like, the eval structure?
At first, we had a relatively naive approach: query, answer, query, answer, with, like, a set of queries. Over time, we evolved into, like, thinking more about, like, the different dimensions that we want to target. Agent handoff is a very typical type of problem space that you want to make sure you select the right model for. So typically, getting a bunch of queries targeting hard handoffs that we've identified through dogfooding or whatever, trying to target a set of what we call canonical, I would say, queries along that dimension, that specific problem space of agent handoff. But, like, there's more.
Like, there's the deep search: like, a shit-ton of emails, and you want to find that needle in the haystack. That's a different type of category. So you need to have canonical queries that are targeting that type of dimension. Because every single user will have their own way to question their own dataset, and we cannot replicate every single dataset of people. The good thing is we have a bunch of users, like Rahul, like myself.
We receive, like, a shit-ton of emails. Pardon my French, by the way. I don't know if it's okay for the
show. But
he receives probably, like, 500 to 2,000 emails a day.
He's still part of the onboarding. He's like, I will send an email to Rahul and he will reply. I'm sure it's not actually him.
Sometimes it's him. He's reading, like, pretty much everything. I don't know how he's doing it, but he is really, really paying attention, especially to the tone and why something is, like, going sideways and everything. He really associates the brand and tone of, like, the people speaking for the company with himself, which is kind of, like, bringing us to the next level as well. So thinking about all those dimensions is really key.
So, like, even if you have, like, an eval tool, the way you structure your different queries to target those dimensions is important. And then we have those specific queries, like the Rahul queries, typically. The one we joke about, and the one that was one of the first that we used as a way to calibrate our quality: weird story, but he did, like, five years ago, some refurbishing in his house, and he had this table, a specific type of wood, and he was discussing it with the contractor. And he wanted to have Ask AI find that email and the type of wood that was discussed in the thread with that guy five years ago. And until we nailed that query, he was not satisfied with the deep search approach.
And this is when we were like, oh, damn. Okay. So that's a different set. But we're also talking about dates. Like, another one, I would say another dimension, is dates.
What is "last quarter" compared to today and everything? Large language models are not really good with dates. So, like, how do you manage that? So there are specific queries for that. So we're like, oh, okay.
So there are dimensions that we need to take care of. So now we structure the evals. And as you were asking: end to end. What is the query? Whatever happens in between?
There's, like, an answer. Was there, like, a good agent handoff? The dates, I would say, were they nailed or not, etcetera, etcetera, etcetera. So it's pretty intensive in terms of brain power put into the quality. Again, because Superhuman is a high-perceived-quality type of product, we had to invest that amount of time there.
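The eval structure described here, canonical queries tagged by dimension (handoff, deep search, dates) and scored end to end, might be sketched as below. The runner, the queries, and the pass/fail checks are hypothetical stand-ins; a real harness would score with an LLM judge rather than substring checks.

```python
# Sketch of dimension-tagged evals: each canonical query carries a
# dimension label and a check; scores are aggregated per dimension so a
# model swap shows exactly where quality moved. All cases are made up.
from collections import defaultdict
from typing import Callable

CANONICAL = [
    {"query": "find me 15 minutes tomorrow",           "dimension": "handoff",
     "check": lambda a: "booked" in a},
    {"query": "what wood did the contractor suggest?", "dimension": "deep_search",
     "check": lambda a: "oak" in a},
    {"query": "summarize last quarter's invoices",     "dimension": "dates",
     "check": lambda a: "Q3" in a},
]

def run_evals(agent: Callable[[str], str]) -> dict:
    scores, counts = defaultdict(int), defaultdict(int)
    for case in CANONICAL:
        answer = agent(case["query"])
        counts[case["dimension"]] += 1
        scores[case["dimension"]] += int(case["check"](answer))
    return {d: scores[d] / counts[d] for d in counts}   # pass rate per dimension

# A toy agent that handles handoff and deep search but fumbles dates.
def toy_agent(q: str) -> str:
    if "minutes" in q:
        return "booked 15 minutes at 10am"
    if "wood" in q:
        return "the contractor suggested oak"
    return "here are some invoices"

print(run_evals(toy_agent))  # -> {'handoff': 1.0, 'deep_search': 1.0, 'dates': 0.0}
```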
Yeah. High real quality. It's not just perceived.
No. But I I think this is this is important because what is quality?
Don't know.
The feeling. Like, if I buy a car that is a Toyota, it's good quality, and I get the quality for my buck. If I buy an Audi or a Porsche, I expect a different grade. So maybe it's grade. Like, the grade is different, and it's high grade, but high expectations, or a high amount of time spent on quality.
Yeah. In PM-ing, there's this concept of the high-expectation user. And Rahul is one example of those. And I was just wondering, like, who are the most outlier, extreme people? How are they using AI in their email?
You know, just just in general, like, the most extreme examples that you've come across, obviously, because that's how you work.
Oh, that's a good question.
For example, you had: how much time did I spend in Waymos last month? Right? Which basically turns your email into, like, an accounting system, because it's kind of a source of truth. I don't know if I would do that in Superhuman. Is it reliable?
It is reliable. Wow.
And when you think about, like, the amount of work... we're working right now with Anthropic to basically build, on the fly, small kind of, like, Lambdas that will build the code to do the aggregation. This is an easy example.
This is like a code execution thing.
Yes. It's a code execution piece. But this one is relatively simple, because you just have to have the agent extract from the emails. So select the emails from Waymo. From the Waymos, extract the time, the duration of the trip, and then do the aggregation.
But that's not easy. Like, data aggregation is not easy, and LLMs are not good at math. So, like, there was some subtlety to it. And right now, we're discussing, like, extending this approach to more use cases.
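The extract-then-aggregate pattern described here, where the model pulls structured fields out of emails but plain code does the arithmetic, can be sketched as follows. The regex extractor stands in for an LLM extraction step, and the email bodies are made up.

```python
# Sketch of extract-then-aggregate: pull ride durations out of matching
# emails, then sum them in code, since LLMs are weak at math. The regex
# stands in for an LLM extraction step; all email bodies are fabricated.
import re

emails = [
    "Your Waymo ride is complete. Duration: 18 min.",
    "Receipt from your coffee shop, thanks for visiting!",
    "Your Waymo ride is complete. Duration: 32 min.",
    "Your Waymo ride is complete. Duration: 11 min.",
]

def extract_waymo_minutes(body: str):
    """Stand-in for LLM extraction: pull the duration from a Waymo receipt."""
    if "Waymo" not in body:
        return None
    m = re.search(r"Duration: (\d+) min", body)
    return int(m.group(1)) if m else None

# The aggregation runs as plain code, so the total is exact.
durations = [d for d in map(extract_waymo_minutes, emails) if d is not None]
total = sum(durations)
print(f"{len(durations)} rides, {total} minutes")  # -> 3 rides, 61 minutes
```

Generating this kind of throwaway aggregation code on the fly is roughly what the on-the-fly "Lambdas" idea above describes.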
Are you operating on the email file itself, or is it, like, a row in a database that you're just writing a SQL query against?
No. The aggregation is... so we don't extract that data on the... so when we ingest it, you know, like...
So you ingest the data.
Yeah. Okay. We ingest the data.
So we rely on Gmail and Outlook, of course, because they are doing, like, some great stuff that we don't wanna do: spam detection.
And Superhuman will never do it.
Probably. Probably never. Probably.
Which is being an IMAP server
or Exactly.
Like, do I wanna do that? Probably not.
Probably not. Maybe. Hey... HEY did it?
Yeah. They have one, but, like, is it something where we want to spend time? Is it valuable for our end users? Really? Not sure.
They live in an ecosystem. They live in a different company's ecosystem: Outlook. Yeah. Yeah.
So, like, they have Outlook, and they have Gmail. It's already there. So, like, if we can just plug in and make that better, I mean, it's good there.
I mean, in some sense, Superhuman was the original wrapper company. If you think about GPT wrappers, this is the Gmail wrapper, the Gmail wrapper. Well, at first, it was a LinkedIn wrapper, not a Gmail wrapper.
I don't know if...
It's faster than Gmail itself.
It's very true. It's very true. That said, you can question, like, what is an SMTP server, for real? Like, it's...
It's a server that conforms to a spec. Yeah. With some database? Maybe not even.
Maybe not even. Yeah. Maybe not even. I mean, they are doing, like, way more stuff. Like, they have, like, crazy... especially Gmail, like, the search capabilities, of course.
Like Yeah. I would say crazy good and all of that. But
To do what you do, you need a server side clone of my Gmail, and then you need also a local cache.
We need the local cache. We work offline. That was one of the things that we did initially, besides the UX, besides the speed. We have everything local. One of the reasons is we want to be fast: like, every interaction should be under one hundred milliseconds.
Yeah. I mean, with the network, you just can't. So everything needs to be local. So, yes, we have, like, a copy of emails locally on device, and it works in the enterprise world because...
In SQLite, right?
Interestingly, for mobile, it used to be Realm.
Yeah. RealmDB. Yeah. Yeah. Yeah.
Is it Facebook tech?
Mongo. Mongo. It's been acquired by Mongo. Yeah. Yeah.
But now it's, like, somewhat sunsetted, so we need to find a different way to do things. Might be SQLite. But yeah. So: on-device storage. But that was, like, the old search, where we had basically, like, a database with rows of the emails.
But everything that is AI, like, we have all the embeddings and all of that. So we have a hybrid search, and we use... I don't know if we can name brands, but we use turbopuffer on the back end to store, like... Yeah. I think turbopuffer
is relatively public with their customer list, so I don't know.
No. Yeah. No. I think we're We'll
leave that to the PR department. They talked
they talked about it anyway, but but it's a, I mean, stable infrastructure. They do things pretty well. It's fast. Yeah.
So I'll briefly comment that I know any number of local-first database companies that would love to work with you. If you're saying that you're in the market for a Realm replacement, they will come and talk to you.
I mean, I'm more than happy. I'm more than happy. My mobile team, like, they're really looking for something different.
Everyone wants to be Superhuman's database. Okay. I wanna just, like, focus on the AI side. Right? Sure.
So people want to know: where is their inference running? What are you sending over? What can the provider see?
Like, it depends. Yeah. It depends. Depends on the use case, depends on the type of model we wanna use. So there's some stuff we run on inference companies with open models.
There's some stuff that we run with OpenAI, with Anthropic. So it's pretty diverse. It's changed, also based on the quality of the models. We're a GCP shop.
So lots of credits for Gemini?
Yes. So we have an incentive to probably, like, spend some dollars there.
I mean, it's nice that they're also a leading model anyway. So, like, you're not actually compromising
Some pretty good stuff there. But we use Baseten to run some, I would say, some Llama, some BERT models for classification. And we're doing some discovery discussions with some YC companies about models on device as well because Yes. They work offline.
Yes. And interestingly, those companies started to do on-device mostly for cost reduction. That was their pitch: we'll reduce your cost. I mean, we don't care that much.
Our people, our users, they want quality, and they are okay to pay for that quality. But we want to solve for offline. Like, if you're offline, semantic search doesn't work as well. So we are discussing with the
What are your design constraints for offline inference? For example, right, like, DeepSeek V3.1 would be, like, 600 billion parameters. I don't think you wanna take out 600 gigs.
So we and people are somewhat complaining about, like, our footprint Yeah. on the device. Like,
two gigs already.
Both in memory and on device, because we store local emails. Like, when you install Superhuman, we download the last thirty days of emails so that we can do search when you're offline, at least for the last thirty days, but we keep that history. So it starts at thirty days. And if you've been a customer for, like, two years, technically, we optimize for two years of email on your device. So that's interesting.
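As a back-of-the-envelope for the footprint constraint being discussed: a model's raw memory cost is roughly parameter count times bytes per parameter, which is why a frontier-scale model cannot ship on device while a small quantized one plausibly can. The specific model sizes and quantization levels below are illustrative, not anything Superhuman has committed to.

```python
def model_footprint_gb(params_billion, bytes_per_param):
    """Rough on-device footprint: parameters x bytes per parameter, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 600B-parameter model even at 4-bit (0.5 bytes/param) needs ~300 GB.
# A 3B model at 4-bit is ~1.5 GB, which is at least in the same ballpark
# as an app that already uses ~2 GB for its local email store.
big = model_footprint_gb(600, 0.5)    # 300.0 GB
small = model_footprint_gb(3, 0.5)    # 1.5 GB
```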
On the local model, any thoughts on like every app is gonna have its own model versus you're going to have a device model that people
run? I mean, it's a
lot of space. What would you prefer? I'm curious. Would you rather have the user just take care of the inference and rely on that, or do you want to own the whole experience?
Superhuman will want to own the full experience. Like, we're pretty picky in the way things are, I would say, happening. But at the same time, like, if we talk about mobile, you want the mobile experience to feel like your device. So we are basically not doing React Native. We are doing Swift.
We are doing Kotlin, because we want the app to feel like the native user experience on iOS or on Android. But for the models, that's a good question. I would love the device provider to be better.
Right.
I mean, we can question, like, local device models. Right. iOS has done some work there, but it was underwhelming so far. They're still working at it, and that's why you have, like, YC companies that are spending time there and doing some cool stuff.
Yeah. Amazing. Interesting question on Baseten. They're a very different cloud inference provider for open models compared to, let's say, the Fireworks and the Together AIs. The general pitch is that they don't charge by token.
They charge by box effectively. Anything else that's interesting working with them versus the other inference providers that you buy?
They're easy to work with. Yeah. I mean, that's when you're a startup, you want to move fast. They are really easy to work with. They know what
they're doing. Priority is, like, what? Cost? Speed?
For us Yeah. It's quality.
But So it's quality. It's open models.
It's all
the same quality.
We would always start with the highest and most expensive model Yeah. to get the right quality. And when the quality is nailed, then we can spend time trying to optimize.
Right. But all these providers, Baseten, Fireworks Yep. Together, all of them have access to the same models. Fair. So unless they quantize heavily, which all of them say they don't.
So in that case, the fact that it's a box means you control your cost way better.
Yeah. Yeah. So So it's like fixed capacity.
It's fixed capacity. So, you know, when I discuss with my CFO, I'm like, hey, when it's token-based, the exercise is way more about trying to understand what adoption will be and all of that. Because that's serverless.
That's serverless. Sure. You're saying in that case, serverless, it scales up, scales down?
Fair. But, like, the cost control is becoming a thing. It was a thing before the acquisition. Now that we are part of a bigger umbrella, understanding your cost structure and being able to make projections that are closer to reality is more important. Like all pre-IPO-ish companies, you want to really understand where you will be in three months, six months from a cost standpoint.
So Baseten for that is pretty cool, because you have more latitude to stay within the bracket of, like, a box, basically.
I was thinking about this. I you know, a lot of people think about cost in terms of dollars per million tokens. Sure. Right? And I think that that is actually amateur thinking.
This is only the kind of pricing you care about if you're a solo developer. But once you're at a large scale like you guys, and also something I learned at Cognition, you should actually care about price per trillion tokens, because we spend multiple trillions per month. And when you unlock that scale, you unlock different ways to spend that are not serverless token-based pricing. So, basically, I think Baseten makes a lot of sense on the price per trillion.
Yeah. I didn't look at it that way. It's it's pretty interesting. But no. No.
No. That's fair. And, I mean, we built, like, so many different models trying to understand the cost per million tokens, and then you have to infer: what is the average number of tokens? Because we treat every single email, and there are really short emails, very long emails.
It's like you have to understand your data, like what the median is and all of that, to make your projection. And there's always some magic. The reality is, you don't have the time to I mean, I'm an advocate of, like, let's move fast. And if it's successful, it's great, even if it's expensive. So rather than trying to optimize the cost too early, just go with something fast that you control, and you'll have time.
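The projection exercise being described, estimating per-token spend from email volume and token distribution versus a flat fixed-capacity bill, can be sketched in a few lines. Every number below (volumes, median tokens, prices, box counts) is a made-up illustration, not Superhuman's actual economics.

```python
def monthly_cost_per_token(emails_per_month, median_tokens, price_per_million):
    """Serverless-style estimate: cost scales with volume and token length,
    so you must model adoption and your token distribution to project it."""
    total_tokens = emails_per_month * median_tokens
    return total_tokens / 1e6 * price_per_million

def monthly_cost_fixed(boxes, price_per_box):
    """Fixed-capacity estimate: flat regardless of adoption, which makes
    the CFO-facing projection trivial (at the risk of idle capacity)."""
    return boxes * price_per_box

# Hypothetical numbers: 50M emails/month, 800 median tokens, $0.50/M tokens.
token_cost = monthly_cost_per_token(50_000_000, 800, 0.50)  # 20000.0
fixed_cost = monthly_cost_fixed(4, 6000)                    # 24000
```

The trade-off in the conversation falls out directly: the per-token number moves with every adoption assumption, while the fixed number is a known bracket you can defend in a budget.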
I mean, it's a good problem to have. Yeah. Success is a good problem.
When do you think it's gonna break from, like, a cost perspective? Say you were to, like, draft every single email that I get, I'm sure you would lose money on the $40 a month.
Yes and no. I think it's a matter of how much more productive we make you. We have some customers that told us, initially when we were talking about the different models and everything, they're like, take the better model. Like, I'm ready to pay, like, $200 a month, but get the best model. Like, I don't want half crap because it's less expensive.
So, like, always give me the best.
Because these are all, like, high value CEOs
and VCs. I mean, one hour of their time Yeah. is worth, like, ten times the amount of the subscription.
So why isn't there a $200-a-month subscription? That's a
good question. Yeah. I'm not in charge of the pricing and packaging.
Maybe okay. Maybe one example would be, like, well, what's one thing that you would like to do that you cannot do with today's models, even though you try pushing quality? Then your customers are telling you, actually, we really want this, or maybe Rahul is telling you that he really wants this.
I don't know. Yeah. I don't know. I think I think we have the means. We have the means to do, like, pretty much everything that we wanna do.
Like, it's a matter of executing
Yeah.
And doing it right.
The way I'll put it is, like, if you can articulate what you cannot do today that you think you should be able to do, and your customers would pay you for it, the models will make it happen. But the problem that you have, and the problem that I have with Gong, is we cannot articulate what it is. We will know if it's better, but only once it exists. No. That's a
good framing. And the other piece that I think is pretty tricky is that there's a transformation happening in the user experience. Like, even the way we think about the user interface right now is totally shifting. The way we think about emails right now, it's still some sort of a to-do list. It's a table, to some extent, with rows.
What would it be like in a year? Because people would be more and more interacting with their systems through a conversational aspect. Like I see my kids. My kids, they don't type on their phone. They talk.
Ah. I mean, all my kids. I have three kids. They all talk with their phones. They're in college and middle school.
Okay. On WhatsApp?
WhatsApp, because they're European and they need to talk with the family. But the reality is, like, Snapchat, TikTok, whatever, Instagram, they communicate over Instagram. I'm like, that's an image tool or something.
I feel like a boomer.
Yeah, I am. I definitely am. But what is interesting, and we can debate about this, is that we, as a human species, started to write because we didn't have, like, enough storage for the stories that we were telling each other. So we had to write to store those stories. Now, like, all the content can be stored in YouTube, in TikTok, or whatever.
It's like, what's even the need to write? What's the need? Because everything can be vocal. And I see kids now. Everything is vocal.
They don't read articles. They want a TikTok video talking about the article. So coming back, and I'm sorry, like, I'm getting pretty high-level here, but being a bit more grounded: what does that mean about, like, the future of the user experience for email and communication? Will people still type, or will they just talk to emails, and will they want to hear an email? And this is where it becomes interesting, because Rahul, as a CEO, maybe next year he doesn't want to write to you about the new feature.
Maybe he wants to talk to you. And then the way you will have received our marketing campaign about the new features will be Rahul discussing with you, talking to you with his voice. Not just voice and tone in terms of, like, writing, but, like, really you in your car commuting, listening to Rahul talking about that. So coming back to what cannot be done right now, I think, like, the main problem is nailing the new user experience. I mean, OpenAI, now you can do stuff with emails. They're trying to do some stuff there.
Like, all those chatbot, they try to be like this basically, the new OS to some extent. So how do you interact with those new apps? So what is an app even in this new world? So that's what is, like, really interesting. And that's why I'm glad to work with Rahul because the guy is so freaking visionary.
And there's not a lot of companies that could nail it, and I believe, like, Superhuman is one of them.
Yeah. I think the inbox is, like, the ultimate private data source. I feel like even when I see all these companies that are, like, you know, talk to, like, your AI clone to get advice or things like that, I feel like so many times, man, I'm just writing the same thing over and over. How many founders email me asking about help for X, Y, Z task? And the answer is almost always the same.
And there should be a way almost for superhuman to be the adviser on my behalf in a way. It's like, you should be able to predict
what I will respond to this email. It's called auto-draft replies. We're still testing it internally. Because especially, sorry to cut you off, but same for me. Like, how many companies are reaching out to me to pitch whatever, like, AI frameworks or AI tooling or whatever? And my answer, although I usually don't answer because I receive a lot of them, is, honestly, like, thank you.
I don't have the time, and everything that you do is cool, but, like, because I want to be polite. Like, right now, it's automatically generated for me, because it learned that I usually don't care. Yeah. And that's my answer. Or if it's someone that is pitching me, like, hey, I want to work with you guys and everything, someone that is applying, my answer is usually, oh, please reach out to HR.
I'm CCing HR and everything. So now it has understood how I typically reply. But it's always like, if it's covering only 80% of your use cases and you need to discard 20%, where is the cost-benefit value? Is it annoying to have, like, the 20% where you're like, oh, discard, I want to write it myself?
Is it good? Like, what is the limit? Ninety ten? Eighty twenty?
I think it's like AI plus the snippets that you have. I think that's kinda like like, I have snippets for a bunch of things like vendors. I have this, like, super long snippet. Thank you so much for reaching out about your company. Sounds like a great product.
We're not currently in the market, blah blah blah blah blah. It goes on. And then the response is like, thank you so much for your thoughtful response. And I'm like, great. Get it out of the way.
But I feel like if you could use that plus AI to do the small Yes. kinda like last-mile Yeah. thing, I think that would be enough. You don't really need AGI.
I'm excited for Q2, something like this.
Dude, I pay $200 a month to OpenAI, to Anthropic. Like, I'll give you $200 a month if you, like, make me not write the same thing
over and over. Deal. I think more generally, what he's trying to get at, and where Superhuman is starting from a very good basis but is not there yet, it's kind of like an AI EA. I don't know if this comes up a lot. Well, I have people I work with who do read my emails and respond for me.
Yep. And they have memory, and they know my normal preferences. They have human judgment, which LLMs don't have. Is that something that you would want to build, or is that something you wanna leave to others?
That's the goal. When we kicked off, really, the revamp of our AI world and what AI means for Superhuman, Rahul did a pretty good pitch on it, and there was a pretty nice video. I think it was in March, for the launch of the new AI. That's the vision. Like, the vision is, you have an EA.
And most of the people who are using Superhuman, CEOs, VCs, founders, and all of that, they move pretty fast, and they need someone to help them with their emails. And we want to do, like, most of that job. So we're getting there. We're getting there, but that's the goal. That's the goal.
Like, the first thing: answering your availability. Right now, we can do it. I mean, right now, it's in beta. Mhmm. But right now, my emails, like, internally, when someone is asking, hey, Loïc, can we meet next week for lunch?
Automatically, I will have, like, three slots proposed in a draft, and I can just, like, send the draft that is prepared for me. Yeah. It's still up to you to decide whether or not you want to send the draft.
That's the thing. I I don't want to be involved.
And this is where your EA will always be better than an LLM, because she knows the type of people you are okay to have lunch with. Or maybe they have the context because
Yeah. Sometimes you're busy, but you're like, oh, VIP, I will move this. Exactly. You know what I mean?
And your calendar is not gonna know.
I mean, we're getting closer, because we know how much you've interacted with that person. But how much you've interacted doesn't tell you everything: maybe last week you had, like, a bad discussion with them, and now you're not friends anymore for whatever reason. But your EA would know. So there will always be limitations to this, and that's why we want people to always be in the loop. And maybe it's your EA that is
in the loop. It's so helpful when I'm not in the loop. Yeah. We can batch it, and, like, I have my once-a-day call with the EA. But, yeah, obviously, that will happen.
You know, some ways that other people are pursuing this: like, Notion's trying to go after it. Right? And they have Notion Mail, Notion Calendar, and, obviously, they really care about AI. Some other people are doing this interesting thing where they buy an EA company, a company that already does virtual assistants, and then just monitor what they do. So Superhuman could provide me an EA that is a human and then slowly replace parts of it with AI. I'm curious what you think about that.
That's a more aggressive approach if you really wanna
I mean, that's probably the best way to understand how an EA is working and, like, the type of work that they are doing and everything.
Your own data.
Yeah. I mean I mean, that's that's intense. That's intense. But, like, sure. You have the money.
And you you pretty fast understand what are the type of workflows you want to automate first. So, like, having that data would be like Yeah. I would say pretty pretty interesting.
One of his portfolio companies, they bought a law firm. Yeah. Do you think that's an accurate description, or am I glorifying it too much?
No. It's an accurate description. It's like it just behaves as a law firm though.
Right. Just treat it as a law firm and then internally start to optimize.
I mean, you have so many customers now that you might need a lot of VAs to do it for everybody. But I'm curious. I think, like, the memory is kinda the killer feature of the EA. It's, like, understanding in real time.
I'm curious, like, now that you're within Superhuman the company, not Superhuman Mail.
Yep.
Do you feel like there's, like, a lot of advantages of being email plus documents plus being embedded in everything? Like, do you feel like that helps closing some of these gaps?
Yeah. So for example, Coda is an enterprise piece of software. Coda is like a Notion equivalent.
Yeah. We used it at Amazon.
Yep. It's a pretty good one. And a lot of enterprise companies are starting to use Coda more and more because of the flexibility and everything. And Coda has this concept of Coda Packs, which are integrations, glorified integrations if I can say it that way, but they're ingesting the data. So the data is there.
So with Coda, we technically have an ingestion pipeline that can aggregate all the knowledge about you in the company, which is great. And now if you add Grammarly: Grammarly is ubiquitous. For users of Grammarly, Grammarly knows that you're in Google Docs. Grammarly knows that you're, I would say, crafting, like, a post on LinkedIn. Grammarly knows, technically, they can know.
It doesn't mean that they use the data, but they're everywhere. So when you have this I'm-everywhere presence: oh, you're getting into your email, but I know that you were just on Jira, with that context. So all of a sudden, I can pop up some of the context. I know that you're writing to that person. Oh, it's about this.
I can expand and, like, augment your email, because I know where you are coming from. So the data will be there through Coda. Grammarly knows basically where you are: you're switching from Google Docs to Salesforce to LinkedIn, and now you're writing an email. So we have this augmented context, so much more precise compared to something like ChatGPT, for example. They don't know where you are, because you're switching windows.
You're coming to ChatGPT from Salesforce; they don't know where you were. They wait for you to paste the content to get the context. If you're Grammarly, you know where the user is coming from. So when everything converges, and we've been acquired only, like, three months ago, but when everything converges from a contextualization standpoint and a knowledge standpoint, we know way more.
So we'll be, like, way more accurate in the way to help you.
Might be predicting fourth acquisition, but wouldn't it make sense to have your own browser?
That's a good question. I think there's much more to be done in the productivity space before, I would say, solving a browser, and everyone is trying to do a browser. Yeah. Atlassian, Perplexity, OpenAI. I'm still sad that Arc is not in development anymore because of Dia.
But Dia has been stopped.
They're rebuilding ARC in Dia.
Yeah. But, like, it feels very unstable now. So more and more people are basically saying, like, okay, let's go back to Firefox. I mean, more and more people are doing that because, like, there are so many browsers.
Like, you're like, you want to wait for the war to be done and to have, like, the clear winner.
No, no, no. Disagree. I disagree.
You should go all in. What are you using?
I use Atlas.
Yeah. Atlas. Yeah. I'm also on Atlas now. Oh, no.
Interesting. I'm still on Arc.
It doesn't have profiles still. Yeah. That's the biggest issue.
I based on the different emails I have, logins I have, I switch between Atlas and Chrome and
ARC. Interesting.
Yeah. Yeah. My personal one is on Chrome.
But I'm just saying, like, okay, if that context matters to you, right? You've got Coda and all those things, then Grammarly and others. You might as well have your own browser. This is the season: no one will get upset at you for saying, oh, we have a browser. They'll be like, yeah, it makes sense.
Or it will be like, oh, no. One more.
But it's the Superhuman one, and that's a good brand.
That's interesting. I foresee, like, browsers disappearing completely.
Like, I I'm like Oh, okay. That's the title.
I mean, my main, central, I would say, piece of software, my productivity tool, is Raycast. Yeah. I mean, I'm a Mac user, so I use Raycast. For the people that don't know Raycast, it's basically, like, a way better Spotlight on Mac. And I don't need bookmarks in my browser anymore.
What is a browser doing beside providing you a view on the website? Nothing. So even, to some extent, Raycast could be just a web view, because what I do with Raycast Then you're
turning Raycast into a browser.
Is it a browser if it's just rendering HTML? Yeah. Okay. So, like Right. Everything is a browser.
So, yeah, if it's only like a rendering HTML
What else do you want? You want JavaScript? Do you want I don't know.
Local storage? You want Like, local storage is one. Extensions. Like, you need a browser to have, like, a local extension. But no.
To have, like, local storage that is pretty massive, like Superhuman's. But, I mean, what's left? Everything that was making a browser a browser before, which was, like, bookmarks, basically the history that you had, maybe cookies: if you get rid of that, it's just a view, a web view, to some extent.
Yeah. It's a clean application platform. There's a Marc Andreessen line: well, the operating system is just a poorly debugged set of device drivers for the browser. The browser is the actual application interface.
From the person that made the browser.
Yeah. Yeah. Course. Yeah.
I think the browser will be more and more thin. I believe they will be thinner and thinner, and then they will disappear. Or they will be, like, just Yeah. embedded in the OS eventually.
Yeah. So One more technical sort of thing, and then we can go to sort of organizational things. You mentioned understanding the person. You know, one part of memory is just like the knowledge graph. And one part of knowledge graph that really matters is the entities that I deal with.
Right? Like, I've dealt with him for four years, and we have that context. Basically, what exists today in Superhuman, and maybe what is possible in the future? For example, do you use a graph database or something like that?
Not yet. And it's interesting, because you are mentioning what's missing right now. I think with this knowledge-graph-oriented database, we're not there yet, to some extent.
But have you actually tried, or are just saying that?
No. We didn't try.
Yeah. That's the thing. It's not fair to say they're not there yet
if you haven't tried. Correct. Even from a taxonomy standpoint, when you think about those entities, what are those? If you are verticalized People, companies. Yes.
But, like, then you start talking about projects. But is a project a task? Is it an initiative? Is there a hierarchical aspect to those? Like, how deep is the tree?
These are all valid questions.
I think it's very Like,
you know, Superhuman's history is person-oriented, where, like, the person is the core of the Correct. the universe.
No. No. But there are some obvious entities. Yeah. But, like, if you want things to be really personalized, these entities are very, very subjective.
Like, I'm a user of Obsidian. Yeah. So I'm a note taking nerd. And for the people that use Obsidian, it's Another local first app? Yeah.
It's another, like, local app in which you build your own workflows and where you, basically through templates, define your own entities that make sense for you. And there are no two graphs that are similar, even if you're using the note app for, say, the same thing. So trying to infer a generic knowledge graph that can be reused with, like, dedicated entities, people, tasks, projects, and everything, is harder than it seems. Interestingly, like, we were thinking about it when I was at Productboard. At Productboard, we had, like, the road maps of so many tools.
Based on that, you can probably infer some taxonomy about what a SaaS product is. But even trying to generalize this into, like, a tree that is repeatable across companies is hard. There's some common stuff: authentication, authorization, billing, user management, dashboards, whatever. Every SaaS company has this. But then when you enter, like, the domain of the company, it's totally different, because their features, their, like, surface area, are very different.
So, like, even there, trying to abstract, from the knowledge that you have, the entities that would be the same for everyone is not easy. So it means that for each user, you need an unoptimized graph that is subjective and dependent on the person. So you need to build the graph based on just the data, and you don't have, like, a real way to optimize for it. But you're fair. Like, you're right.
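A minimal sketch of the "build the graph from just the data" idea described here: no fixed taxonomy, nodes are whatever entities show up in a user's mail, and edge weights simply count co-occurrence. Entity extraction is assumed to happen upstream, and the entity names below are hypothetical examples.

```python
from collections import defaultdict
from itertools import combinations

def build_user_graph(emails):
    """Per-user graph with no predefined schema: each email is a set of
    extracted entity names, and an edge's weight counts how often two
    entities appear together. The taxonomy emerges from the data."""
    edges = defaultdict(int)
    for entities in emails:
        for a, b in combinations(sorted(entities), 2):
            edges[(a, b)] += 1
    return dict(edges)

# Hypothetical extracted entities from two emails.
graph = build_user_graph([
    {"Rahul", "pricing"},
    {"Rahul", "pricing", "board deck"},
])
```

This captures the subjectivity point: two users' graphs share no structure beyond what their own mail contains, which is exactly why a universal entity tree is hard to impose.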
We didn't try. But also because Many people have failed. Right? It's fine. And I don't even foresee a path where that can be surfaced into, like, more productivity gain.
At the end of the day, what is the problem you're trying to solve? It's super nice from a technology standpoint, and even from a thinking-process standpoint: what is the ultimate data model for productivity nerds and all that? But what are you improving from an experience standpoint? Is it, like, the accuracy of the draft that
it's preparing for you? I want an AI EA to remember everything I've done, every conversation I've had with everyone, you know.
Yeah. But then it's Jarvis, and it's like almost AGI to some extent. So
You have context that no one else has.
Yeah. But, like, the amount of compute, because you need to recompute, like, your graph every time you receive new stuff and everything. So it's an interesting space. I think, to your point, we, as an endpoint solution, probably won't be the one solving for that. I think there are companies that should focus on this and be like, hey.
I'm the engine that will ingest everything that you're doing, and we build a graph. And the graph would be, like, the best graph ever, and for each account or each tenant, we'll build the graph for you. That would be great. But is it something for TurboPuffer? Is it something for, like, those vector database companies to solve for?
Maybe. I don't
know. So, for what it's worth, I'm actually dating someone who's doing Upside, and they are mining emails for, basically, CRM population and building a knowledge graph from emails.
Interesting.
So they basically, they're happy that you're not doing it.
Because I'd love to have an intro. Because, obviously, if you
do it, then you you you are a very serious competitor.
No. But I think it's not easy. Yeah. So I would love to discuss. Sure. Like, I think we would probably be more a consumer of the outcome rather than the builder of that layer.
Yeah. I think the other big consumer, obviously, would be OpenAI. Of course. They clearly want to eat everything inside of ChatGPT.
I mean, this is a cool exit strategy for such a company.
For them. Yeah. I mean, like, do you want to build a Superhuman app inside of ChatGPT? Or I feel like the answer is no. Right?
Oh, the answer was: ChatGPT, like, OpenAI and Superhuman are competitors. Okay. Like, this is what we fight against, to some extent. We have a different approach, I think, and especially this ubiquitous Grammarly presence: we are everywhere, in everything. I think we want to be more proactive. Because we are where you work, we can be more proactive, compared to ChatGPT, which is waiting for you to do things, to help you do the thing.
So there's reactive versus proactive. I think we're more on the proactive side. But that's the competition. Like, just as a note: when Rahul is questioning the quality of our, I would say, AI queries in Superhuman, he's comparing us to Gemini, he's comparing us to OpenAI. So that's the competition we are fighting against.
Yeah.
I mean and and speaking of which, Gemini, the chat app obviously has privileged access to all of Google. They can always access to
us, and, like, the search engine is crazy good.
Break them up. Rahul, break them up. Alright.
Yeah. Yeah.
Awesome. On a broader note: you mentioned you only have three people working on AI. What's the AI coding adoption like at Superhuman on the engineering team?
Yeah. Interestingly, our path was: we started to really think about it in Q1. A bunch of people were using some stuff and everything, but we didn't have any data, just anecdotal feedback and all of that. The first thing we did was cut the red tape.
We're like, hey, folks, free-for-all. I will approve the budget, like, in one hour. You can try anything you want, and we'll deal with the security team, twenty-four-hour turnaround, to get things approved from a security standpoint, because you don't want to do some crazy things. So Q1 was everyone trying everything.
It was really interesting to see how things were working super well on the front end, a bit less on the back end. We're a Go shop on the back end. And everyone working on iOS and Swift was like, not that good at the time. But huge adoption in terms of tooling, also on the product side. A lot of v0, I would say.
For Next.js?
No, v0 is, it's kind of like a prototyping tool,
yeah. Those like Because they build Next.js sites, right, or apps.
Yes. But they just use it for
We just use it for, like, prototyping. Ah. To be, like, as close as possible, because we have a founder that is very picky and wants to review the design. And, like, a design in Figma is great, but when you can click and do, like, real stuff, it's so much better. And Figma is not there just yet.
Figma has Figma
Make. We interviewed them. Yeah.
Sure. It's getting better.
It's getting better. But I ask a PM, and they use v0 or whatever, like, a tooling like this, because it's
Not Lovable?
Superhuman is, like, v0. v0 is the standout. And again, it was like free-for-all.
Try whatever you want and whatever.
Free market.
Right? So free market. And free market
v0 won.
v0 won. Always winning. It's still a free market. Q2 was more about: okay, let's try to understand where this is working, where this is not working.
So we compiled a huge list of wins and areas where it was not good. Wow. To onboard into a new, I would say, code area: amazing. I used to spend, like, a full day to understand all the entry points, the dependencies on a code stack that I didn't know. Now I need, like, thirty minutes with Claude Code, and I understand how things are working.
Even for me, like, I'm not in the code anymore. But instead of, like, asking my engineers: how are we managing, like, the refresh tokens with Gmail? Now I just, like, use Claude Code, and I'm using Warp. Warp? Warp.
Yeah. Warp is good. But, anyway, Warp, Claude Code, like: how is this shit working? And boom, boom, boom. It's providing, like, the links to the right files, explaining, like, the high-level concepts and everything.
And I don't waste my engineers' time to just answer a question. So pretty cool. So that was Q2, and we started measuring. So on every PR, we have to put a label: I used AI, or I didn't use AI.
And if I used AI: it was productive, or it was not. So trying to understand the lay of the land. Roughly, I think we have, like, 80% of people that are already flagging the PR. Out of that 80%, probably 90%, I would say, of AI usage. So it's all declarative.
We're not, like, plugging in any tool to measure, like, the real number of tokens and everything. And out of those 90%, again, 90% of positive impact. But it's not always in the code. It might be, like, just the discovery, understanding, like, the lay of the land or, like, stuff like this.
So 81%.
Like, 90 times 90. So technically, like, yes, it's, like, ninety of ninety of 80. But by inference, if I caricature, I would say 80% of usage, and happy usage.
So, like, roughly 80% of lines of code written at Superhuman. But probably not
line-of-code-wise. Probably more
than that. Yes. Yeah. Because of the hours.
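A quick sketch of the arithmetic behind those declarative numbers (the 80/90/90 figures are the rough, self-reported ones from the conversation, not instrumented data):

```python
# Self-reported PR-label stats, as quoted in the conversation (rough figures)
flagged = 0.80    # share of PRs that get the AI / no-AI label at all
ai_used = 0.90    # of the flagged PRs, share that used AI
positive = 0.90   # of the AI-assisted PRs, share reporting positive impact

# "So 81%": positive AI usage among the flagged PRs
among_flagged = ai_used * positive            # 0.81
# "ninety of ninety of 80": the same figure across all PRs, flagged or not
among_all = flagged * ai_used * positive      # 0.648

print(f"{among_flagged:.0%} of flagged PRs, {among_all:.1%} of all PRs")
# → 81% of flagged PRs, 64.8% of all PRs
```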
It's the discovery. And, like, most of the time you spend is not writing code; it's trying to understand what you need to solve for, and this is the part that has been reduced. In terms of real KPIs, and AI is not the only reason why we have accelerated, but in Q1, we were roughly at four PRs per engineer per week.
In Q2, we were closer to five PRs per engineer per week, and in Q3, we're closer to six. So the global throughput... and again, PRs per engineer per week as a metric, we can debate. Yeah. Yeah. Yeah.
But that's a throughput measure, and it increased quite a lot. But, again, AI is only a piece of it. Technical strategy, clarity of what you want to do, organization: there's a lot associated with that. So we feel pretty good.
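Taking the quoted figures at face value, the throughput growth over the two quarters works out to 50%; a trivial sketch:

```python
# PRs per engineer per week, per quarter, as stated in the conversation
throughput = {"Q1": 4, "Q2": 5, "Q3": 6}

# Relative growth from Q1 to Q3: (6 - 4) / 4 = 0.5
growth = (throughput["Q3"] - throughput["Q1"]) / throughput["Q1"]
print(f"Q1 -> Q3 throughput growth: {growth:.0%}")
# → Q1 -> Q3 throughput growth: 50%
```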
One question that a lot of the AI leadership people I talk to have is: am I supposed to ask more of my engineering team now? Am I supposed to, like, hire fewer people? Should we ship more as a company? I think the thing about AI is, like, you can do a lot more, but most companies are not built to do a lot more. You know?
Especially, like, if you ship a hundred more features, you don't really have marketing to market a hundred more features. You don't have support to support a hundred more features. Like, how do you think about structuring teams and, like, the expectations
of it? That's interesting because Superhuman historically was very lean in terms of organization. So, like, Superhuman Mail, like, we have 50
Crazy.
50 engineers.
And your user base is roughly... how many million?
Yeah. Like, less than that. Okay. Like, paying users probably 100,000 Okay. Something like this.
So it's still, like, a relatively small team supporting a lot. But yeah, it's, I would say, a small team, pretty senior. And the average tenure is probably four years. So I come to New York.
Fully remote as well, which is interesting. So my AI team is distributed between Patagonia and Canada. So we have access to a different pool of, I would say, the right people, not trying to compete in the Bay, because people want to go to Anthropic, they want to go to OpenAI. And, like, those guys, like, they have, like, the
They pay too much money.
Yeah. I mean, it's not the same competition, obviously. So we find the people where they are, people that don't want to move to the Bay and all of that. And there are some great people there. Anyway, long story short: relatively small teams, and we increase the capacity.
We try not to move too fast, because we're qualitative. It's kind of like a vicious circle: oh, we can do more, let's do more. But all of a sudden, the number of bugs coming in is also growing and everything.
So we try to be conscious of it. Now we're working with Grammarly, the new Superhuman. So there's also an incentive to invest a bit more, because it's a product that is working. And Shishir is really willing to implement a model that is called the compound startup. We are still a startup within Grammarly.
So we have our own P&L. We still have Rahul as a founder. The only difference between now and before is that our board is Shishir and the exec team at Grammarly slash Superhuman. But we want more people. We want, I would say, Superhuman to have more reach and to do a bit more.
So now we are kind of, like, scaling that, and we are adding more capacity. So AI is helping, of course, but it's also helping, like, with the onboarding. It's helping with a lot of that. But we are adding some capacity.
Yeah. Yeah. I think, like, you know, the mainstream pushback on it is like: hey, you used to pay me X to do four PRs a week.
So am I getting paid 50% more if I ship six PRs a week? I think that's why there's a lot of pushback around AI as well from people. It's like: hey, look, I'm using this and you're getting more out of it, but I'm not getting more out of it.
I think it's like the usual, you
know... I would disagree with that. I would disagree too. Yeah. I'd like
I disagree too.
I mean
I'm saying, like, when you listen to people outside of our bubble Yeah. There's, like, a lot of, like, this discussion around, you know, where the value is accruing and,
like. So, basically, if you only look at it as: you're paying for output. Was the previous payment wrong, or is the current payment wrong? One of them is wrong. Exactly.
No, no, no. That's an interesting point. The way I see it is, like, engineers are well paid.
Like, we are a very fortunate, I would say, part of the population. Our salaries are pretty good and part of, like, the top, whatever, 5% in the country, or even in the world. I think that when we talk about, like, the Maslow pyramid: engineers, at some point when they're pretty senior, they don't rush for, like, 10 more k or 20 more k. I mean, if we talk about millions and everything, sure, but that's, like, the 1% of the 1%. For the rest of the population like us, I think that the joy and the dopamine come from what you ship.
So, like, having this ability to ship more value and have more customers being happy with what you do: you end your day and you feel like, damn, that was a good day. So I think that the discussion is not about, like, the money itself, but like: oh damn, I am in an environment where I ship fast, I can have all the tools that I request within twenty-four hours, I can basically be the best version of myself, and I have fun in a good team. You don't have a lot of attrition when, I would say, you have an environment like this.
So, like, sure, money: you need to pay people in a fair way. I'm not saying a huge amount, but if you're just fair, people tend to stay if you have the right environment. And, like, helping them to go from four PRs a week to six, they're like: shoot, I'm so much better than at the beginning of the year.
That's so cool. And you don't have that everywhere.
Yeah, I'm with you. I'm curious to see how the scores evolve. Awesome. Any parting thoughts?
Just generally,
your take on AI and the software industry. You've been in this for two decades. Do you think people should still learn to code? Do you think the junior developer is screwed? Any of those opinions that are common topics?
Yes. Of course. Of course you need to learn to code. Like, I see this as kind of like the switch from assembly to C.
Like
Yeah. It's a higher level.
It's just another level of abstraction. But at the end of the day, you still need to understand how a computer is working. You need to understand how memory is working, like, swaps and all of these things happening on the server, how a server is working. Like, serverless, between quotes: it's always someone's server. You need to understand the fundamentals to be good with AI.
I do believe that AI will do only one thing: it will separate the good engineers from the bad engineers faster. If you're a good engineer and you're using AI well, you will be an amazing engineer. If you're a poor, lazy engineer and you don't want to understand the things that you're doing, AI will make you even worse, because you will have the feeling that you get it, but you're not going behind the magic, behind the curtain, behind things and how they work. So I think AI is a blessing for our job.
Any final calls to action? Hiring, things you want people to do in trying the product and giving you feedback?
Of course, try the product. Of course, complain to me if things are not, I would say, great and they are not great. Yes, we're hiring. So we're hiring product engineers, people that have a strong appetite for the user experience. Because I do believe we're in a world where the technical moat is not that much of a moat anymore, because, like, startups in two weeks can build something that is close to what you're building.
The difference is how you think about the user, the flow, and all of that. So people that have this appetite for nice interfaces, beautiful products that people love: this is the type of engineer we want. Good engineers, that's the baseline, of course. But with this spike into the user experience, even if you're a back-end engineer. A back-end engineer, but you care about latency because it's having an impact on the end user and all of that: this is the type of engineer we're looking for.
And we don't care where you are. So you can be in Patagonia, as I said, or you can be like up north in Canada. We try to limit things to like Americas, basically. Yeah, just looking for bright, gritty people that want to have fun. We're seriously fun.
Cool.
Thanks for joining us, man. This was fun.
Thanks. That was cool. Thanks for having me.
The Future of Email: Superhuman CTO on Your Inbox As the Real AI Agent (Not ChatGPT) — Loïc Houssier