EB-9: HF0 - Transcript
[00:01.965] 👩🎤 Ate-A-Pi: Hey Evan, what is HF0?
[00:06.25] 👕 Evan SC: HF0 is a monastery for hackers. A monastery.
[00:11.582] 👩🎤 Ate-A-Pi: Monastery.
Monastery, maybe you can go into that. Are there meditation practices? What do hackers do there?
[00:18.902] 👕 Evan SC: Yeah.
[00:23.854] 👕 Evan SC: So it's not so much that there are meditation practices, but it's that if you were to go to a monastery, what you would experience is that an entire system has been set up for you to be able to drop into a state of extreme focus. So for example, right, like at a monastery, you go, there's like a simple bed for you to sleep in. There's meals that you can eat. And then all of the rest of your time.
you're expected to just be meditating. And so what we do is we provide that type of environment for hackers so they can just come, drop in, and they can just get into their flow state, into their most creative coding state, in the same sense where every aspect of their life is taken care of.
[01:01.552] 👩🎤 Ate-A-Pi: and
[01:14.969] 👩🎤 Ate-A-Pi: So a hacker or a coder can basically come to HF0. And it's an eight-week program, 10-week program.
[01:28.85] 👕 Evan SC: Yeah, so I guess more logistically, it is a 12 week residency. It's also, we are also a fund. So we, you know, we, we take equity in the companies, we give half a million dollars to these companies and then they come and they participate in this residency and so we have founders living in our mansion and that's where we create this monastery like environment so that they can drop it to the most 12 most productive weeks of their life where they're in as much of a state of flow as possible.
[01:33.625] 👩🎤 Ate-A-Pi: It's a 12 week residence.
[01:57.473] 👩🎤 Ate-A-Pi: So the mansion is in Alamo Park in San Francisco? Is that correct?
[02:03.05] 👕 Evan SC: Yeah, the mansion, it's on Alamo Square Park, right down the street from the Painted Ladies. It's a historic building. Previously was the Archbishop's Mansion and has gone through many different transformations over the years. There was a group called Factory that was in there for a while that did some really cool stuff with like custom furniture. The wallpaper's in there crazy. They just turned it into like more of a tech forward space. And we've continued to iterate on that by building it out to be this perfect environment for people to come and code in San Francisco.
[02:34.195] 👩🎤 Ate-A-Pi: And there are three batches every year, is that right? Two, three.
[02:38.45] 👕 Evan SC: That's right. So we do three batches a year. Right now our schedule is that we have two before summer and then we have one in the fall. And each one is 12 weeks. We usually recruit for the next one during the current one. So in a couple of weeks I'll start recruiting. But we are always interested in meeting great teams. So yeah, always happy to have chats with people who think they might want to join.
[03:05.881] 👩🎤 Ate-A-Pi: It's known as one of the most competitive programs. I think I've heard some numbers, something like 2,000 applicants going to 700 Zoom interviews, going to 100 kind of in-person. So you bring everyone in-person to SF, you fly them in for interviews in-person, and then that goes down to 10. Was it 10 teams that end up in the residency? Is that right?
[03:31.862] 👕 Evan SC: Yeah, exactly. Exactly. So it's there, there are around 2000 applications that we'll get. And that ends up boiling down to these 10 that actually do the program. But actually, out of the applications, we really are only taking about five. A lot of a lot of our teams come from referrals, a lot of them come from other founders, referring in teams from who they know. And so it is it is extremely selective. It's it's very hard to get in. And yeah.
[04:00.359] 👩🎤 Ate-A-Pi: Um,
[04:05.037] 👩🎤 Ate-A-Pi: So and what is typically what is the age range? Like do you get kind of like, you know, I think Y Combinator tends to, you know, clump up towards the much younger side of the scale. Like, do you have like a typical, you know, age range?
[04:24.586] 👕 Evan SC: We actually, you know, we like to say that it's the best place for people to come take the biggest swing of their life before they have kids. That being said, we've had teams who had families as well in the last cohort. I think four of the founders had families. So there are some folks who will come and have families, but on average, we're looking at people in their late 20s, early to mid 30s, and some into their early 40s as well. It's definitely not just all the young hackers, but we do have some.
You know, we have some of some of the college dropout types as well. But yeah, pretty, pretty wide ranging in terms of the age, but definitely, um, not just the earlier end of the spectrum. I think, um, we're, we're certainly, and we're especially focused on repeat founders. So, um, for example, in this cohort right now, for the founders who are out of the, for the teams out of the 10, the founder previously started a billion plus dollar company. So we have like four repeat.
unicorn founders and that's something that we focus on a lot in terms of who we end up working with.
[05:27.649] 👩🎤 Ate-A-Pi: Amazing. And you also have like something like a quarter of them are international, 25% something like that. Is that right?
[05:35.542] 👕 Evan SC: Gosh, it's got to be higher than that. I mean, in this, in this cohort, I would say it's probably, I'm having a hard time even thinking of the ones that are fully American. Um, I think there might be two or three companies that are, that are American right now. Now that being said, a lot of them are Canadian. We ended up working with a lot of Canadian companies. Um, and, uh, but yeah, we have a strong international focus. Like our whole team are all bilingual Mandarin English, and then also, um,
[05:55.181] 👩🎤 Ate-A-Pi: Right on.
[06:04.886] 👕 Evan SC: Two of us speak Spanish, we work with a lot of Latin American companies. So we have, we have like a pretty strong global base.
[06:14.305] 👩🎤 Ate-A-Pi: Right on. And so how many companies would you say in this batch are kind of AI or AI focused? Or is it just like pervasive? Everyone has an AI component.
[06:32.364] 👕 Evan SC: So everyone has at least some AI component, but I do think for some companies it might be limited to like they're just applying AI tooling in their builds of their company. If you're not using AI at all, like, you know, it's, you're probably going to lose. So there's just so many shortcuts to be had with AI at this point. But yeah, but no, but let's see. In terms of the companies that we have right now. It's
There are...
[07:03.254] 👕 Evan SC: There are two crypto teams, I think maybe even three, but actually of those crypto teams, two of them are also using AI.
[07:12.845] 👩🎤 Ate-A-Pi: right on. So and you know, H of zero at one point, I think like 2021 when it was first founded, there was a lot of crypto companies at that point. I think, you know, in that, I remember in the previous era, like around that time, it was, I think, I think there was quite a number of crypto companies, but the mix has changed over time.
[07:36.462] 👕 Evan SC: That's right. It is, we really follow the founders. I mean, we don't, we're not oriented, even though I'm really interested in a lot of AI topics, our orientation isn't to chase a particular market. It's more just like, what are the smartest founders? What are the most dynamic founders working on right now? And how can we work with them? And it just so happens like AI is a big why now, there's a lot of new stuff that can be built with AI. And so a lot of the best founders are understandably.
building with AI, but if we have an amazing founder who comes through our doors and there's nothing to do with AI, that's not going to be a reason other than that they could be getting, you know, maybe there's a disadvantage to not using AI, but if they have a good reason for it, that's not something that we're going to hold against them.
[08:20.985] 👩🎤 Ate-A-Pi: So I mean, when you're probably looking at like 6,000 applications a year. So when you review those applications, what do you notice in terms of like, you get a very good market kind of feel for what are teams thinking about, right? Like not only the ones that you end up selecting, but the ones in the, because even the ones that you don't end up selecting, they might go on to actually build tools or build something else.
[08:38.126] 👕 Evan SC: Hmm.
[08:51.802] 👩🎤 Ate-A-Pi: from that pool of applicants, what do you see in terms of ideas about AI? What do you think about that? And how does that match with your own interest in AI?
[09:07.266] 👕 Evan SC: Well, I do see a lot of different patterns in terms of the applications that we get. And I see it evolve over time. You know, there's different trends. There was previously a huge trend, it still continues, but like an example would be like semantic search for enterprise applications, right? There's just like a lot of things in that dimension. Or another one would be, let me think.
Yeah, different types of voice-based AI. Before, I was really seeking out voice-based AI because I couldn't find a lot of companies working on it, but over the past couple of months and the last couple of cycles there's been a big proliferation of teams working on different voice-based AI stuff, which I'm super excited about. But if I see too many teams working in similar spaces, it does make me less bullish; I'm looking for teams with a particular angle that's a bit different from what everyone else is doing. For sure there are companies that, if I had seen them a year ago, would have stood out a ton in the application pool, but now people have caught up and there's just more mindshare toward whatever that thing is. There are certain areas I'm always interested in: anything related to role-playing, art generation, or character-based experiences. For me, those are some of the things that push the cutting edge of what these generative technologies can do. Even things like this avatar that I'm doing the interview with, that stuff I always get interested in. A lot of it, when I dig a little deeper, is not super interesting; I mean, it just looks like other stuff, it doesn't feel that different. But some of them have really stood out, and there have been some really cool companies that I've gotten a chance to take a look at.
[11:00.909] 👩🎤 Ate-A-Pi: So how about AI infrastructure? There was an AI infrastructure trend where people were helping to deploy APIs on GPUs. That wave is kind of like maturing. Companies are now going on to raise their series A, series B. And now that trend is no longer something that is that promising, because there's a lot of companies in that space already. So have you seen multiple? Yeah.
[11:27.23] 👕 Evan SC: Yeah, that's actually that. Yeah. No, I'm actually glad you asked. That was the thing when I was trying to think of, and I ended up saying voice. I was thinking about AI, AI infrastructure. Yeah. It's, it's evolved a lot. Um, in the beginning we were like, anyone who can build anything AI infrastructure. Like we were actually really interested in that. Now it's like a ton of companies are doing different flavors of AI infrastructure or different types of agentic platform does, you know, tools to build agents. Um,
And I've started to feel like in some cases it's actually maybe a bit premature. A bunch of folks jumped into "we need infrastructure to build out this next generation," but now we're starting to see: what actual applications are people using? What actual applications are ready for enterprise? It's evolving a little slower than you'd like to see in terms of how the rubber is meeting the road on real enterprise use cases.
So a semantic search, or a chatbot that you talk to about stuff in your business, things like that? Yeah, for sure, no problem. But then where's the gap between those very obvious applications that don't necessarily need heavy-duty infrastructure, or maybe need something more like a letter opener, a very specific use case, versus the more interesting stuff, where a lot of these folks end up building their own infrastructure solutions? And there are different levels of abstraction. I think the layer of the Pinecones and the Weaviates, that's kind of important. But then, do you need a software framework that's going to help you organize your LLM calls and stuff? I don't know. It's sort of unclear to me where the winners will be. I've personally backed off a little from placing too many bets in that direction, just because I think we still need to see how fast the use cases will actually evolve.
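[The "letter opener" end of the spectrum, a narrow semantic-search use case, really can be a few dozen lines with no framework at all. This is a toy sketch: the bag-of-words `embed` below is a stand-in for a real embedding model, which a production system would replace with a hosted model and a vector database.]

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a sparse
    # bag-of-words vector keyed by lowercased tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    # Rank documents by similarity to the query, return the best matches.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "How to reset your account password",
    "Quarterly revenue report for the sales team",
    "Office coffee machine maintenance schedule",
]
print(semantic_search("I forgot my password", docs))
# → ['How to reset your account password']
```

[Swapping `embed` for real model calls and `sorted` for a vector-store query changes the quality, not the shape, of the program, which is the point about whether a heavier orchestration layer is needed at all.]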
[13:25.721] 👩🎤 Ate-A-Pi: Indeed. Let me segue a little bit here. You had an experience of founding Teespring, which was a startup, I think, from 2011 onwards. I think it was like Y Combinator, 2013 batch, something like that. And it was a t-shirt printing startup. You can just jump in and correct me any time.
[13:51.857] 👕 Evan SC: Yeah.
[13:53.861] 👩🎤 Ate-A-Pi: And it was huge. It was a massive company. I think it became like a billion, billion dollars in revenue or something like that. Um, and, um, let's say, let, let me, let me do a counterfactual here. Let's say you are founding Teespring today, right? We have access to all these AI tools. Um, how would things be different? Um, you know, in terms of like from, from day one, you have access to all of these AI tools, what were the major differences from, you know, when you envision back to your experience then?
[14:07.31] 👕 Evan SC: Mm-hmm.
[14:23.762] 👕 Evan SC: Yeah, it's interesting to think about that. I actually don't know that the product would be all that different. I mean, my co-founder has gone on, Walker, who started Teespring, he's gone on to build Forthwall, which is in a similar vein, trying to help creators build their merchandising platforms. And so he's been thinking about, well, how do we actually integrate AI into the product experience? And it's been sort of like, okay, well, we could maybe have like a co-pilot type of thing that helps you get your campaign started and stuff like that. And I do think there's an element of that. Like...
A company that comes to mind for me is Coframe. That's one of the HF0 companies, doing this idea of the living internet: constantly evolving web pages that are dynamically shifting. So you can imagine that the end product experience of Teespring would be more of a living website that's just kind of evolving and doing all this automatic optimization of itself. But for myself as an individual...
Oh man, do I wish I'd had access to the tools that exist today. The way I program now is so much more streamlined. I use Cursor mostly; just being able to hit Command+K and say, oh, I just need a regex that does this. Some of the parts of programming that were the hardest and most annoying things are now the most trivial things. It's crazy. It's the equivalent of a calculator, right? Previously, if someone told you to multiply 972 by 43, you'd be like, okay, shit, I'm going to be sitting here for a few minutes working this out. Now that's the easiest question. But if you want to figure out a basic algebra thing, you still maybe need to do it yourself. So I think that would have sped us up a lot. Also, you know what I wish existed back then? Next.js and Vercel. That also would have been so dope. Huge fan of those platforms. All of the different infrastructure that we built, we totally could have just deployed that way. And React, even. React was just kind of nascent at that time, so we weren't on React in the beginning. We were like, how do we build this dynamic web app? The thing that actually lets you put the graphic on the t-shirt and rotate it.
[16:45.27] 👕 Evan SC: There wasn't like a, we were like, to give you a sense at that time, we were, we were considering using flash. Like I think we did use flash as, you know, for like the first version of it. So like that would have been amazing to have like today's stack. Um, but like, yeah, definitely the AI tools, I think that probably would have saved me at least half of my time. Um, that I just, just from a development perspective.
[16:52.853] 👩🎤 Ate-A-Pi: Wow. Ha ha ha.
[17:06.497] 👩🎤 Ate-A-Pi: Right. Especially because many of those, a lot of questions back then, even Stack Overflow would not have answers to. And you would have to perhaps figure it out. Trial and error. Yeah.
[17:25.182] 👕 Evan SC: You would figure it out. Hours. You're just, yeah, like, you know, you just keep like trying different stuff, looking at the terminal. Okay. That didn't work. Okay. That didn't work. Maybe try to look at the source code and try to understand what was going on there. I mean, that stuff still, there's still going to be moments like that in software development, but like a lot less so, and then just being able to ask chat GPT for literally like an analysis of anything that you, that you might be wrong on, it's also just, yeah, it's just, it's just so helpful.
[17:53.461] 👩🎤 Ate-A-Pi: Yeah, I think back and like, does that also mean though, that the amount of competition for those ideas would have increased, right? Because the pool of people attacking those problems would be commensurately larger because you have more people with access to kind of tools which were blocking them off.
[18:22.542] 👕 Evan SC: Absolutely. Yes. I think that we're entering the era of more competition and more of the winners is going to come down to your individual performance as an entrepreneur, as a developer. It's like, how much can you just build? How far ahead of the competition can you sprint in this world where a lot of things have been unblocked? And having a clear go-to-market strategy, having a sales...
a sales strategy or a way of getting your product out there is also going to be more important than ever. Because that you can't just solve that with AI. You can't just snap your fingers and suddenly have, you know, can't chat GPT, can't send you a thousand users yet. I don't know, maybe someday you'll just be able to pay some agent to go out and farm users for you or something. But for now, no. So I think that it is more competitive than ever. It's more important than ever that like as an individual, your craft and your...
daily routine is as you're making as much progress as possible. And that's a lot of the reason why we're building HF Zero is to help, help people like basically perform as almost like elite athletes in this realm of entrepreneurship and, uh, and software engineering and, and seeing how far they can get.
[19:44.121] 👩🎤 Ate-A-Pi: H of zero takes in founders with sometimes just an idea, sometimes a team, sometimes with a product, right? Like you have all variations or you apply with an idea in mind, but then you can change it, you can pivot during the 12 weeks.
[20:02.694] 👕 Evan SC: Yeah, so as far as like the sort of stage of the companies that we work with, we're either working with teams that have a repeat founder who's previously built something really substantial like a unicorn company or a team that has a lot of traction and is already off to the races with their idea. So in the case where a founder doesn't have as much traction, it's usually because they built a billion dollar company in the past.
And in that case, yeah, we'll sometimes help them. But a lot of times these people already have a pretty clear plan and a pretty clear idea. Sometimes a little bit of pivoting happens during the program, but most of the time it's not that much. It's more like: hey, we're coming in to work on this thing, we're going to be sprinting on this thing, and there's already a fairly clear idea of what that looks like.
[20:53.269] 👩🎤 Ate-A-Pi: Right on. I think you've alluded in the past to the importance of building a sticky product to retain customers. How do you think the kind of newer AI tools can be used to personalize user experiences and foster deeper engagement?
[20:54.776] 👕 Evan SC: Listen.
[21:02.517] 👕 Evan SC: Mm-hmm.
[21:14.158] 👕 Evan SC: That's a great question. I think that retention is going to look different as our application of AI advances. And as it gets closer and closer to actually being able to deliver us this sort of personified experience, I would say. It's almost like, I'm talking to eight right now. So like, if I start to develop a relationship with eight, and I actually think of eight as a person,
person that has, you know, software running them, then that's a different relationship than I have with like Insight Meditation Timer, which is an app that I like frequently use, but like it's different from feeling like there's a person in my life. And so I think that it would be wise for founders today to imagine a world where what it takes to actively be a part of someone's life starts to shift from just being.
an app, a button (on your iPhone there are so many apps, so many buttons, so many different URLs) to actually feeling like a person that you know, and having that be the primary interface. And the reason I say that is not because I think chat interfaces are the best interface for most kinds of interactions. I think Garry Tan posted about this recently: UX is still here to stay. Totally, a UX is often still a better interface. But can it provide the same level of stickiness and personalization? Can it make me want to interact with it as much? I think there's a possibility that as AI avatar technology continues to improve, it may get to the point where I don't prefer to interact with an app; I prefer to talk to a person, one that isn't a real person, but that I know and trust and have more of that feeling with. And the kind of
straw man, this imaginary scenario that I paint for this, is: if you have an assistant who can do a task, would you rather ask the assistant to do it, or would you rather just go to an interface and do it yourself? And I think for those of us who work with EAs, we know there's this weird line where, even for stuff we could probably get done faster by going to the website, logging in quickly, and doing the thing ourselves, we still just tell our EA: hey, can you do this for me?
[23:38.562] 👕 Evan SC: Cause we trust that they'll do it. It's easier to have everything in one place. And yeah, anyway, maybe not gonna keep riffing on this, but it's something I'm pretty excited to see. I don't think it's happened yet though.
[23:50.825] 👩🎤 Ate-A-Pi: Yeah, I think one of the things for me is, as you said, there is this boundary condition that needs to get passed on when you hand off the task. And I think the hurdle is still not being met for most AI applications, where you can just hand off and trust that it's done. And so you still have this human in the loop experience. And you are the human in the loop.
[24:14.367] 👕 Evan SC: Anyway, yeah.
[24:20.557] 👩🎤 Ate-A-Pi: The user is the human in the loop. You know? You are the human in the loop. Yeah. And so you have to be the one verifying. And then it creates this thing where I think it makes it, it makes the tools only usable by actually experts. Because what happens is,
[24:26.866] 👕 Evan SC: Take me out of the loop!
[24:48.245] 👩🎤 Ate-A-Pi: If you're a mediocre or low effort coder, and you try to use the tool for something, you will get stuck somewhere because it doesn't give you the exact answer. It gives you an approximate answer. And once you get stuck somewhere on a more complex task, and you don't know how to decompose the task into individual sections that you can verify one by one, and then you're sunk. Because then you don't know. And then once it introduces problems,
introduces bugs in other places, runs out of context window, and then this cascade of issues starts to happen. And then you're kind of like, okay, I'm stuck here. I'm out of this. And I think it's like, and it's almost like there's an intersection between your intelligence and the intelligence of the tool, because GPT 3.5 would really send you off into tangents.
[25:24.462] 👕 Evan SC: Mm-hmm.
[25:45.769] 👩🎤 Ate-A-Pi: If you're a Python coder and you're trying to code in Rust, and then GPT-355 would give you something, and you're like, oh my god, it's not working. And so you have this kind of intersection between your expertise and the intelligence of the agent or the AI. And you need to have sufficient expertise to actually verify and be the human in the loop, or else the tool is actually not
[26:03.987] 👕 Evan SC: Mm-hmm.
[26:14.893] 👩🎤 Ate-A-Pi: not that useful for those tasks. And it might actually, a lot of people have, there's also, I notice people churn from using GPTs because at some point they realize that they kind of default to using it for everything. And then they realize that the tasks they are actually good at, it actually takes longer. Right, like if you try to run a GPT-4 on something that you're good at, you'd be like, hey, this is okay, but you're still.
[26:16.707] 👕 Evan SC: Mm-hmm.
[26:33.966] 👕 Evan SC: Mm-hmm.
[26:41.345] 👩🎤 Ate-A-Pi: you're actually creating more work for me because I have to verify now. And verification is often more difficult than actual generation. You can generate crap, but then to verify and make sure that something's correct, that actually takes quite a bit of effort. So yeah. So maybe we cross that at some point.
[26:45.398] 👕 Evan SC: Ready.
[26:58.99] 👕 Evan SC: That is a good point, yes.
[27:07.797] 👩🎤 Ate-A-Pi: You mentioned one of your, I think you mentioned one of the firms that you've been working with has this kind of like system that kind of adjusts to the user. So one of the things that I've been wondering about is this concept of automated A-B testing, kind of like your website, UI, the human-computer interaction that is generated by your application, automatically modifying itself to the user, right? So...
[27:18.571] 👕 Evan SC: Mm-hmm.
[27:36.429] 👩🎤 Ate-A-Pi: Have you seen kind of that kind of initial stages of those ideas being presented anywhere?
[27:44.098] 👕 Evan SC: So just to confirm, you're asking about CoFrame, which is the living interfaces company, and whether or not I've seen those things being implemented out in the wild or just whether or not the companies are building this technology.
[27:50.238] 👩🎤 Ate-A-Pi: Yeah, yeah, yeah.
[27:59.205] 👩🎤 Ate-A-Pi: Either one.
[28:00.598] 👕 Evan SC: Sure. So I mean, there's a bunch of, there's a few companies I've worked with. I think this is a really fruitful area. Like Coframe, there's another one called Amers that was building like a e-commerce website builder where you just type in, like, you plug in your Amazon listings and then it just like generates the website. And this whole dimension of like content that kind of just creates itself or regenerates itself.
I think has been happening. I actually have a very insightful friend named Hikari. He started a company called Omnikey, which is like a ad tech company that uses AI to generate ads. He started this company like probably, I wanna say five years ago, six years ago, when he started, people said he was crazy. To be honest, they were like, oh, like AI will never be able to generate like a good ad or like better copy than a person. You know, now that you would almost just assume that it's the default. But I think the truth is it still isn't.
default. I think it's still mostly people doing these repetitive processes and it's only companies like Omnikey and CoFrame and Amers that are actually rolling out this new wave of personalization to those that are forward thinking enough to adopt it and that actually have the expectation. Also, the capabilities of these agentic operations are still limited.
Here's an example. Coframe generates all these different taglines and copy for your website, but it might not be thinking about the intangible things; it's completely optimizing for the best conversion rate. So what if it creates a tagline that converts really well, but your brand department would look at it and say, ah, that's not right for us? This is the kind of thing that I think will be solved in maybe six months to a year: better LLMs that we can ask, hey, does this match the vibe of this website? But maybe even GPT-4 wasn't quite there with being able to match the vibe. So I still...
[29:59.411] 👩🎤 Ate-A-Pi: Yeah, you don't want the bot offering a 90% discount on its own because it converts really well. But then it's not really the intent. It's like, wow, look at the sales today.
[30:05.635] 👕 Evan SC: Oh, this is pretty great. Yeah, yeah. Yeah, exactly. I think that's a great, great point. Yeah. So basically, I would say there, this is definitely coming. It's definitely starting to roll out. But yeah, it's still in the early stage where these customers are just kind of getting in there.
[30:33.478] 👕 Evan SC: ...a world where we both go to the same website and we just see very different websites. Unless we're very similar people, and then maybe we see almost exactly the same website. And it'll be interesting to see how that tracks as some of the internet privacy regulations continue to get ramped up. It's sort of this cat-and-mouse thing: oh, they might not be able to know as much about me, but then maybe there are websites sharing data with each other and eventually getting it to where it's very personalized for us.
[31:06.909] 👩🎤 Ate-A-Pi: Yeah, there was the Air Canada case. I'm not sure if you're familiar with that one. But Air Canada implemented a customer service chat bot. And the chat bot had a conversation with a customer. And that customer had to go to his mom had passed. And he needed to go visit her. But
there was something wrong with the flight. And then he said, if I book on my own, can I get a refund on this? And the chat bot said, yeah, sure, fine. Go book on your own, and then send us the ticket a couple of weeks later, and it's fine. And he did that. And Arkhanda then said, no, we require you to get permission from us before you buy the ticket. So we can't do this full refund in arrears. It's not going to happen. And then he took them to court, and they lost. So yeah.
[31:57.61] 👕 Evan SC: I mean, the bots had a good, so yeah.
[32:00.285] 👩🎤 Ate-A-Pi: Yeah, so he took them to court and they had the records of the bot and Eric and Eric and at that point claimed that the bot was its own entity and that Eric and I could not be responsible for the bot and the judge then said like, no, the body, the body is not an independent entity with the, with the sentient rights to do whatever it is to say, whatever it wants to say. So Eric and I, you shall, you shall honor this, you know, refund.
[32:07.456] 👕 Evan SC: Mm-hmm.
[32:11.004] 👕 Evan SC: I'm sorry.
[32:15.842] 👕 Evan SC: Hehehehe
[32:22.726] 👕 Evan SC: Great.
[32:30.389] 👩🎤 Ate-A-Pi: Right, so, you know, that's a very small example, but I think, to some extent, it's what companies face. As soon as you put out some agent, whether human or otherwise, the company is responsible for whatever that agent says, pretty much, and you can't simply get away from
not honoring whatever the agent says. And so there's this constraint: it can only say exactly what it should say. And then you notice all of the enterprise use cases narrow considerably, because it has to be 100% fact-based. So then they go down to retrieval augmented generation. And let's not have it give creative answers. And then let's just have it exit out into, hey, I'm sorry, I can't help you with that.
[33:19.972] 👕 Evan SC: Mm-hmm.
[33:28.649] 👩🎤 Ate-A-Pi: do a refusal on anything else.
[33:33.27] 👕 Evan SC: Yeah, it's sort of like this split of there's the applications where you can just go full send and let the AI run. This is for me why I'm so interested in persona AIs and character and role playing because that's a dimension where it's like almost like the crazier it is, the better, right? They're just like, let it go. And then you have the stuff where it's the opposite. We actually need to be super thoughtful.
But there also could be, I think, a middle ground. You could have a company like an Air Canada that adopts the policy of: we're going to try to train this bot to not do these types of things, and if it does, then we're just going to honor those. If people trick it into getting free tickets, then we're just going to give a few free tickets. And maybe we haven't quite gotten there in a lot of cases, but yeah, I think you're absolutely right. Basically what happens is almost like a retreat into traditional dialogue systems, more like tree-based dialogue systems, which happens when you need to end up giving these consistent replies. And that's still something that I'm interested in seeing solved in a better way, where you have this sort of perfect fusion of rule-based systems plus LLM-driven creativity slash flexibility, but still with clear rules.
I'm sure there's probably a lot of startups working on that right now. And at the same time, I don't know that I think it's been fully solved.
[35:08.317] 👩🎤 Ate-A-Pi: Yeah, I think some of the major issues are the hallucination. The hallucination is also what provides you creativity, but you need to have certain boundaries. And I think one of the weird things about LLMs is there, it's like, it's kind of a mishmash of neurons. You never know what you're gonna get.
[35:17.205] 👕 Evan SC: Mm-hmm.
[35:36.925] 👩🎤 Ate-A-Pi: It's like a Forrest Gump of intelligence. You know? So how about the, let's talk about the HF0 process, right? You have to review a lot of material, I guess, like applications from founders. Do you use AI in any of that? Do you run any of that through a summarization process or?
[35:40.459] 👕 Evan SC: Mm-hmm.
[35:52.12] 👕 Evan SC: Mm-hmm.
[35:57.207] 👕 Evan SC: Yeah.
[36:03.835] 👩🎤 Ate-A-Pi: in order to get something very quickly and then very quickly review your first couple of rounds or how do you use your AI in that process, right?
[36:14.538] 👕 Evan SC: You know, interestingly, we haven't really started using it extensively. I do use it occasionally for analyses of our application pool. What I actually use is Numerous, which is one of the HF0 companies that made a GPT plugin that works directly in Google Sheets. So you can go into Google Sheets and run different kinds of
repeated queries, and a lot of times it comes down to just asking the same prompt across a bunch of rows. So that's been super useful for analysis. For the actual application process, I actually read them all by hand. It's one of those areas where I thought about using GPT to try to extract some kind of information, but when I really
thought about it, at least at our current order of magnitude, in the 1,000 to 3,000 range, it just makes more sense for me to actually read them each and take a look. Now, I want to be totally transparent, in case anyone listening to this is applying to HF0, about how I actually evaluate these applications. There are certain questions that are extremely important and other questions that are less important. I don't give the same weight to every question.
[37:36.19] 👕 Evan SC: The two big ones are: what's the most impressive thing you've done in the past, and traction. If you have a really good answer to either of those questions, I start taking a deeper look at the application. There's not really a lot you can do to stand out on just the product idea, for example. And also, I think a lot of people end up writing way too long about the product, like they think they need to write a whole essay. I'm reading these all by hand, going through them very quickly. I'm basically just looking: do you have meaningful traction?
Is there something really impressive that you did in the past? And another thing we do is check out people's GitHubs and Twitter accounts. If somebody has a GitHub with tons of actions and stars and stuff, that's something we weigh very strongly. And also, if you have a Twitter that's popping off, I'll definitely take a second look at pretty much any team that has at least some amount of Twitter following going on, because it means that people are interested in you for some reason. Now, it may not mean that you actually end up getting an interview, but
it's an indicator that it's worth taking a closer look. So yeah, right now I do these all by hand, and it hasn't gotten to the point where that doesn't scale, yet. I think if we started to get to 10K, then absolutely, I probably would switch over to a process of running a GPT screen for:
does this team have enough traction to actually be a viable candidate, or did they do something impressive in the past? And when I do those, it's a process of basically running samples, trying out the prompts, and seeing what results you get. You pick a sample, and then you say, what would I want it to respond with for all of these? And then you keep adjusting the prompt until it actually matches. It's not going to be right out of the gate, at least in my experiments so far. It took me quite a bit of tuning
of just the exact wording of the prompt before it was actually giving an exact match for the way I would respond to those questions. So that's how I would do it if I needed to read one order of magnitude more applications.
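The calibration loop Evan describes, label a small sample by hand, run the same prompt over every row, and measure agreement before trusting it, can be sketched roughly as below. The `screen` function here is a stand-in keyword heuristic, not his actual prompt or any real GPT call; the data and field names are invented for illustration.

```python
# Sketch of the screening loop: hand-label a sample, run the screen over it,
# and measure agreement before rolling it out over the full pool.
# `screen` is a stand-in for a tuned GPT prompt.

def screen(application: dict) -> bool:
    """Stand-in for a GPT screen: does this team look worth a closer read?"""
    traction = application.get("traction", "")
    impressive = application.get("impressive", "")
    # Toy proxy: any mention of users/downloads/revenue, or a non-empty
    # "most impressive thing" answer, counts as worth a closer look.
    signals = ("users", "downloads", "revenue", "arr")
    return any(s in traction.lower() for s in signals) or len(impressive) > 0

def agreement(sample: list[dict], hand_labels: list[bool]) -> float:
    """Fraction of the hand-labeled sample where the screen matches."""
    hits = sum(screen(app) == label for app, label in zip(sample, hand_labels))
    return hits / len(sample)

sample = [
    {"traction": "5M downloads", "impressive": ""},
    {"traction": "", "impressive": ""},
    {"traction": "", "impressive": "Sold my last startup"},
]
hand_labels = [True, False, True]
print(agreement(sample, hand_labels))  # 1.0 on this toy sample
```

In practice the "keep adjusting the prompt" step means editing the prompt (here, the heuristic) until `agreement` on the hand-labeled sample is high enough to trust.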
[39:45.261] 👩🎤 Ate-A-Pi: Have you noticed obvious AI generated applications? Like people using AI to write those.
[39:57.246] 👕 Evan SC: I mean, some of these people, when I say in three sentences or less describe what your product does, I feel like some of those might be ChatGPT generated. And the reason I say that is because they have a short paragraph followed by a one, two, three bullet-point format. And you just know that GPT-4 just can't resist that format. It's so deeply embedded in its psyche of what it thinks a response should look like.
So yeah, if I see that, that's a sign. And I don't think that people should be using ChatGPT. I don't know, our application is short. YC, they ask more questions, so maybe there you need to give longer answers. For us, just tell me the most impressive thing you've done in the past. Ideally, it's that you built a company to X amount of revenue, or did something else cool like that, that's easy to understand in one sentence. And similarly, your traction should also be short.
It could literally be two words. 5M downloads would be a great answer to that question. And so yeah, ChatGPT, probably not super useful.
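The format tell Evan mentions, a short paragraph followed by numbered bullets, could be flagged mechanically with something like the toy heuristic below. This is a hypothetical illustration, not anything HF0 actually runs, and the example answers are invented.

```python
import re

# Toy heuristic for the ChatGPT answer shape described above:
# a prose opener followed by a "1. ... 2. ... 3. ..." bullet run.
def looks_gpt_formatted(answer: str) -> bool:
    lines = [l.strip() for l in answer.strip().splitlines() if l.strip()]
    numbered = [l for l in lines if re.match(r"^\d+[.)]\s", l)]
    # Flag: opens with prose, then three or more numbered bullets.
    return len(numbered) >= 3 and bool(lines) and not re.match(r"^\d", lines[0])

human = "We sell an API that turns PDFs into spreadsheets. 5M downloads."
botty = """Our product revolutionizes workflows.
1. Seamless integration
2. AI-powered insights
3. Scalable architecture"""
print(looks_gpt_formatted(human), looks_gpt_formatted(botty))  # False True
```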
[41:01.39] 👩🎤 Ate-A-Pi: Ha ha.
[41:09.437] 👩🎤 Ate-A-Pi: It's funny because it's kind of a game, right? Because I think in every interface, you see this kind of meeting between the old algo and the new algo. And the old algo could be search engine optimization. And the new algo is LLM-generated content. And the new algo is overwhelming.
the old algo. And the old algos now have to be upgraded to semantic search, right? Semantic search, because it's not enough just to have the keywords; you need to have both the semantic search and the deduplication. And we're not even at the deduplication stage yet. Automated semantic deduplication is still a little bit further away.
But once we get there, then the world changes again, because then duplicative content becomes useless. And then you have these newswire services: there's Reuters on one side, and the Reuters article gets reprinted in the New York Times, and Associated Press gets reprinted in multiple places. And then all of those kind of compress, because all of them become useless beyond the original content. Everything else becomes semantically deduplicated,
and only the original content actually makes sense. And then you figure out that the number of original content producers in the world is actually very small. Right, like the number of foreign correspondents actually reporting live from some place is only maybe three or four people, and that's it. And they supply the raw material that goes out to hundreds and hundreds of newspapers worldwide, but the actual numbers are very small. And once you de-duplicate all of that, you're left with a very small number of people.
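The semantic deduplication step being described can be sketched as: embed each article, then drop any article too similar to one already kept. Real systems would use learned sentence embeddings; the bag-of-words cosine below is just a stand-in, and the articles are invented.

```python
# Minimal sketch of semantic deduplication: keep an article only if it is
# not too similar to anything already kept. Bag-of-words cosine stands in
# for a real embedding model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dedupe(articles: list[str], threshold: float = 0.9) -> list[str]:
    kept, vecs = [], []
    for text in articles:
        v = embed(text)
        if all(cosine(v, kv) < threshold for kv in vecs):
            kept.append(text)
            vecs.append(v)
    return kept

wire = "Reuters reports flooding in the capital after record rainfall"
reprint = "Reuters reports flooding in the capital after record rainfall today"
original = "Local bakery wins national award for its sourdough"
print(len(dedupe([wire, reprint, original])))  # 2: the reprint collapses into the wire copy
```

Once the reprint collapses into the wire copy, only the wire service and the genuinely original story survive, which is exactly the compression of sources described above.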
[42:39.796] 👕 Evan SC: Mm-hmm.
[42:59.393] 👩🎤 Ate-A-Pi: So I wonder actually to what extent our current world is a world of false abundance, right? You get this false abundance of, oh, there's so many newspapers in the world, but actually they all print the same thing and it's all coming from three people, right? So I wonder to what extent like getting rid of that false abundance would basically mean that for some time the world would look worse, right? The world would look like, oh, you know, we used to have so many newspapers.
But now we only have the New York Times, Reuters, and the Associated Press. There's only three news sources in the world. You're like, oh, CNN, it was just AP. I wonder to what extent we face this kind of feeling that the world actually loses abundance rather than gains abundance through AI. So.
[43:30.484] 👕 Evan SC: Mm-hmm.
[43:36.026] 👕 Evan SC: the
[43:42.64] 👕 Evan SC: Mm-hmm.
[43:55.758] 👕 Evan SC: I mean, the world is still gonna be abundant. There's like, there's lots of data. There's data, there's way more data out there than there is in there still, for now, at least like just the, you know, the world's like, like I'm looking out the window right now and there's like clouds in the sky and like there's trees in front of me and there's just like all of this stuff that is just there and that isn't, there's no digital mirror of all of these things yet.
[43:56.856] 👩🎤 Ate-A-Pi: Anyway.
[44:24.714] 👕 Evan SC: But yeah, I guess what I'm thinking about is like, maybe all of those sources end up collapsing, but then also is there like proliferation of interfaces that we engage with them with because we each kind of have like our own personalized interface to all of that information that really suits us. And so then does it feel less abundant or does it still just actually feel like the
it feels good because it's getting to me in a way that's designed to make me feel a certain type of way about it. And the way that you receive that same information might be different than the way that I receive it. But I guess, like you're saying, if we all end up clicking through and get to, no, that was the source, then we all realize, oh, we were all reading the same thing. We thought these were different kinds of.
[45:17.145] 👩🎤 Ate-A-Pi: Yeah, I wonder sometimes, because, you know, that's kind of why Twitter and Elon are doing this whole individual journalism thing or whatever. Because you notice, for example, now a lot of times someone on Twitter or TikTok picks up something first, and then the journalists kind of follow a day or two later,
reporting on something that happened on Twitter or happened on TikTok. So they're actually following behind; the primary source is being generated elsewhere. And then they're adding a layer of spin and analysis on top of that, and then they're presenting it to an audience. Which is, you know, good, new primary sources, but you know.
[45:51.874] 👕 Evan SC: Mm-hmm.
[45:59.099] 👕 Evan SC: Mm-hmm.
[46:03.894] 👕 Evan SC: Mm-hmm. Mm.
[46:13.337] 👩🎤 Ate-A-Pi: It's basically coming from a small number of sources. It's not really coming from many, many sources. Right, so. Anyway, so I also noted that you have some experience with lucid dreaming. Like you had a lucid dreaming startup. So what was that? Tell me how that was like.
[46:20.014] 👕 Evan SC: Mm-hmm.
[46:33.934] 👕 Evan SC: Oh yeah, I've been really interested in lucid dreaming for such a long time. I find it to be one of the most fascinating experiences that a person can have and it's like so accessible. I mean, this is like something that can be as different as, or more different than even like a, some drug experience that you might have and yet you don't need drugs to have this experience. You can just do this naturally. And, and I think compared to meditation, the accessibility is actually quite a bit.
easier. I mean, I've done my fair share of meditation retreats, and I've had some really powerful experiences, but I've had those experiences after 10 days of complete silence, just meditating literally dawn till dusk the entire day for 10 days. Versus with lucid dreaming, if you follow the steps and do the process, you could do it that same day. And so a lot of times with the workshops that I do with people, I'm able to get them lucid
the very same day that they have the workshop, which I think is also because I've worked a lot on the techniques. Now, maybe I should just give a quick explanation of what lucid dreaming is. Basically, it is a dream in which you are aware that you are dreaming. So right now, let's say I decided, okay, I'm going to ask: am I dreaming right now? Maybe this is a dream.
[47:43.119] 👩🎤 Ate-A-Pi: Yeah, I was about to ask that, exactly.
[48:00.094] 👕 Evan SC: I don't know, I'm talking to like a AI avatar thing. Like this is weird. So maybe this is weird because not because I'm just in a weird situation, but maybe it's weird because I'm actually asleep and my brain is creating some weird situation. So I'm gonna do a reality check. And the reality check that I always love to do that I think most of the people who I teach lucid dreaming to, this is how they have their first lucid moment is breathing in through your nose and then kind of feeling like.
a sensation of the air flowing into your lungs. And then you do the same thing, but pinching your nose. So like.
[48:36.002] 👕 Evan SC: So let's say I was to do that. And in spite of the fact that I was pinching my nose shut, completely shut, I was still able to fully breathe in. Then I would know that it's because I'm in a dream. Because in a dream, you try to pinch your nose and you aren't actually pinching your nose. So then when you breathe in, you still inhale and you can feel that because one of the few things that you can still control when you're in a dream is actually your breath. So, you know, when you're in a dream, if you intentionally breathe, you're
physical body will also intentionally breathe in that same rhythm. Same thing with the other one, your eyes. If you look side to side in a dream, your actual pupils are going to move side to side in reality, especially if you're lucid. So anyway, you have this moment: oh wow, I'm in a dream. And then from there, it's just this crazy experience. Because now you realize, oh, everything that I'm seeing around me is actually just this simulated, generated world that my brain is putting on for me at night, and you kind of
start to have unlimited power. But it's not unlimited, actually; there are still limits. It's kind of like in The Matrix, right? When Neo tries to jump over the building on the first jump, he knows that it's the Matrix, but he still falls, still hits the ground, because his brain just can't fully unlock. And I think we all still have our limits, and then we also hit the limits of our computational horsepower. One experience that I often have when I'm lucid dreaming is I get lucid and then I start flying,
and it's so cool, and I'm seeing all these cityscapes. But then I notice that at the horizon, there's kind of an edge to what my brain has actually rendered. It starts to look more pixelated, more like just these empty polygons. And I'm like, ah, I'm flying off the edge of my brain's ability to continue to render more content. So anyway, there are limits to it, but it's cool. It's a really fun experience, and I think it can be deeply spiritual as well.
[50:22.893] 👩🎤 Ate-A-Pi: That's hilarious.
[50:30.593] 👩🎤 Ate-A-Pi: So one thing I wonder is, does your perception of that world, is that a independent perception or is that a perception driven by your understanding of your experience? So what I'm saying is, if I were a complete noob and a liberal arts major and I went in there, I wouldn't perceive the pixelization because I wouldn't know what it was, right?
while you know about this concept of rendering, you know about pixelization, so to you, is that something that is created by your understanding of your reality, which would not be created by someone who has a different perceptual understanding of their reality?
[51:19.066] 👕 Evan SC: Very likely, you know. Yeah, yeah. Like for example, like I've played video games and so I've seen when you go too fast and then the video game like can't buffer and like, I see, you know, it starts to just be like these like rough shaped polygons. So that's probably where my brain is drawing from. But maybe for someone else that there is still, I actually believe there are.
[51:19.451] 👩🎤 Ate-A-Pi: Do you get what I mean? Yeah.
[51:41.154] 👕 Evan SC: phenomena in dreams, like that it's not just a complete, like free canvas where anything can happen, that there are phenomena. So for example of this would be, when you first fall asleep, there is a sensation of falling. And so a lot of people talk about having dreams of flying or of having dreams of falling into a pit. And my theory is that this is actually, it's not always because there's a, you know,
Oh, you had some memory or some theme. Yeah, exactly. It's like the kick. But actually, this is just what it feels like to fall asleep. This is actually the sensation of the beginning of the dream. And so then our brain takes that, whatever the data is, and reinterprets it in symbolic terms that are unique to our experience of the world. And that's exactly like you're saying. Like, what if you see a whale in your dream?
[52:10.942] 👩🎤 Ate-A-Pi: It's the kick in inception. In inception, they wake up with the kick. Yeah.
[52:40.106] 👕 Evan SC: What does a whale mean to you? That's the question to be asking. It's not what, I don't believe in like any kind of like dream interpretation guide. It's like, it's specific to the person. It's not about like, what does a whale symbolize to this person that wrote it. It's like, what have whales showed up in your life?
[53:01.082] 👩🎤 Ate-A-Pi: There's actually a startup that I did a pod with a few weeks ago called Prophetic, which has a machine learning based ultrasound device that you put on your head. And it's basically an EEG reading and then it...
[53:18.962] 👕 Evan SC: Yeah, it's like EEG plus ultrasound and it's like an input and output system type of thing. Yeah.
[53:25.65] 👩🎤 Ate-A-Pi: Yeah, it's an input-output system. And they're in the prototyping stage now. They raised a seed round, and they're in the prototyping stage. And it's interesting for me because I think scientifically it kind of makes sense. All the pieces are there. But no one's put it together yet. No one's put it together in a nice package that's usable. And
[53:44.979] 👕 Evan SC: Mm-hmm. Mm.
[53:52.201] 👩🎤 Ate-A-Pi: It actually seems, because, you know, to a lot of, I think, Western science, the whole lucid dreaming thing, while there are people who have studied it, it's still a little bit woo, right? It's still not there yet. It's still a little bit woo. And it will be interesting if it turns out that machine learning has basically made it easy, and many people can experience this now.
[54:03.82] 👕 Evan SC: Yeah, totally.
[54:19.332] 👕 Evan SC: I've already made it easy. If you want to get lucid... this is where I went through a whole journey with thinking about building hardware for this. I spent time with Stephen LaBerge, who was the researcher at Stanford that was the original one to investigate this a bunch. I think what Prophetic is doing is awesome, and I think that for a bunch of people, hardware could be the thing that unlocks it.
I figured out how to get people lucid like 75% of the time on their first try. And the key is actually, if they're willing to use a supplement, there is a supplement. It is not a drug. It is something you can buy on Amazon, and it basically just works. If you wake up at the right time, if you do all of the other pieces correctly, and you take this supplement, the chances that you'll get lucid are extremely high.
So I think that this already exists. And then you kind of ask yourself, well, are people flocking to stores to buy this supplement because they want to have lucid dreaming experiences? When I post my workshop online, yeah, there's definitely people who are interested. But the truth of the matter is, you kind of have to wake up. Now, maybe this is where Prophetic can take it to the next level, because right now the big UX problem with lucid dreaming is you have to wake up at five in the morning,
[55:38.294] 👕 Evan SC: be awake for about 10 or 15 minutes, for real awake, not just in your bed about to roll over and go back to sleep. And if you're able to do that, if you're willing to wake up at five in the morning, especially if you're willing to take this supplement, galantamine, you could lucid dream tonight. You could lucid dream tomorrow morning. All you really need to do is wake up at five in the morning, take galantamine, go back to sleep. It's not actually that complicated, but still, the galantamine's not flying off the shelves.
It's just a very interesting, niche, cool experience that some people want to have, but a lot of people are just happy in their lives and don't necessarily even want to unlock the doors of perception.
[56:23.129] 👩🎤 Ate-A-Pi: It's interesting because I think, you know, messing around with your sleep is a tough ask for a lot of people, right? Like the 5AM wake up would wake... Yeah.
[56:33.706] 👕 Evan SC: It's a tough ask. No, I mean, after you do it, you'll be happy. Because the thing is, the day after you do this, you feel great, you feel amazing. But yeah, then the next day, you're tired. I mean, that is the trade-off of lucid dreaming. That is also the trade-off of a lot of meditation practices. If you go to a Zen monastery, they're gonna wake you up at 4 a.m. and then they're gonna make you go to bed at like 10 p.m. And then you're not, there's some sleep.
[56:40.075] 👩🎤 Ate-A-Pi: Uh... Yeah.
[57:03.446] 👕 Evan SC: deprivation that comes along with some of these like spiritual practices.
[57:09.841] 👩🎤 Ate-A-Pi: I think I've read some Japanese Zen experience books, where they have to sit on their knees for like, you know, 24 hours, until they're crying in pain. And the monks are just brutal. They're like, no, you have to experience it, you have to let go of all of the pain in your body. Yeah, right on, right on. So
[57:32.974] 👕 Evan SC: That's a fast track to enlightenment right there.
[57:40.385] 👩🎤 Ate-A-Pi: So one of the questions that I face, and I've noticed this across people in AI, which is that there is a sensation that you are starting to explore areas of human consciousness when you deal with these models. There's this kind of like, hey, if the model is seeing this, am I also seeing it in the same way?
[57:43.886] 👕 Evan SC: Mm-hmm.
[57:48.54] 👕 Evan SC: Mm-hmm.
[58:00.323] 👕 Evan SC: Mm-hmm.
[58:06.696] 👕 Evan SC: Mm. Mm-hmm.
[58:11.233] 👩🎤 Ate-A-Pi: you know, if the model is able to basically put out this kind of messaging or express this kind of consciousness, even though you know it, you know it's a machine, but it's expressing something. And am I also expressing it in the same way? So for example, you know, I've come across people with
[58:26.991] 👕 Evan SC: Mm. Mm-hmm.
[58:38.169] 👩🎤 Ate-A-Pi: declare themselves like classic San Francisco autism Asperger's types. And they've started to refer to their, yeah, basically, basically every single person here, every single person who doesn't make eye contact with NSF, right? And I've started to see people kind of referring to like, oh, you know, when I read something, I'm making an embedding of it.
[58:44.126] 👕 Evan SC: Oh, you mean everyone we hang out with here? Half of San Francisco.
[58:55.039] 👕 Evan SC: Okay.
[59:05.217] 👩🎤 Ate-A-Pi: right, because some people, when they're skimming something, they kind of do this half remembering thing. And the half remembering thing is almost like an embedding where they don't quite remember the exact thing, but they know where to find it, right? And so a lot of people, especially when you deal with webpages, because a lot of us deal with a large amount of text content when you're scrolling through bug reports or et cetera, and you just flip very quickly. You're just scrolling through a code, code.
[59:06.499] 👕 Evan SC: Mm-hmm.
[59:18.562] 👕 Evan SC: Mm-hmm. Right.
[59:35.569] 👩🎤 Ate-A-Pi: You don't actually fully understand all of it, but you know enough of it so that if you have a sense of where things would be. And that's almost like running an embeddings vector search. And I'm sorry to see people kind of refer to their experiences in this language of, hey, maybe this is what we're doing or we're referring to it. So I wonder to what extent you have this feeling of that perceptual sense that
[59:39.669] 👕 Evan SC: Mm-hmm.
[59:44.725] 👕 Evan SC: Mm-hmm.
[01:00:04.597] 👩🎤 Ate-A-Pi: the AI models are doing something similar to what you imagine is happening internally.
[01:00:12.782] 👕 Evan SC: Absolutely. That's a really cool point. I love that. It's so interesting because I think cognitive science and computer science for a long time have sort of been toying with this merger of like, oh, are we thinking in terms of algorithms? But there's always been a little bit of a disconnect between the way that we know we experience the world and then what you see on the screen and code. It's not just if this, then that. Maybe at a deeper level we have some.
formulae like that, but that's not really it. It's much more like there's these other patterns, and I feel like with neural nets and with vector space, we're starting to get to something that's a way, way better metaphor for how we actually experience the world and what human thought actually feels like. I guess a lens that I'll bring to this is I've been rereading Thinking, Fast and Slow,
which I think is just required reading for everyone. There are so many interesting points in there about how we think. If you haven't read it, the basic thing is there's system one and system two: system one is all of the kind of automatic thinking that we do, system two being the stuff where we have to put in effort and think step by step. And I think that's an interesting lens to apply here: what are some of the things that can just instantly happen, like a vector search, right?
That's almost like a system one type thought, whereas where you need to actually run more of an algorithm, or code, that's kind of like system two. So it is kind of this combination of RAG plus then having that get input into an algorithm. And then it's interesting to think that, yeah, for us, certain kinds of lookups are just easier. It would be hard for us to answer questions like:
how many dentists do you know? Right? Whereas, who's a dentist that you know? Very easy to answer; you think of your dentist. But how many dentists do you know? Extremely difficult, probably impossible even, to answer that question. So that is similar to the challenges that you'd have with a neural net.
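The dentist example maps neatly onto retrieval: "who's a dentist you know" is a single nearest-neighbor lookup over a memory store, while "how many dentists do you know" forces an exhaustive scan. The sketch below uses hand-made 2-D vectors as stand-ins for learned embeddings; the names and coordinates are invented for illustration.

```python
# Toy illustration of the system-one / system-two split: nearest-neighbor
# lookup is one cheap operation, while counting requires deciding
# membership for every stored memory.
from math import dist

# Hypothetical memory store: name -> 2-D embedding stand-in.
memories = {
    "Dr. Lee":  (0.9, 0.1),  # dentist-ish region of the toy space
    "Sam":      (0.1, 0.9),  # friend from climbing
    "Dr. Cruz": (0.8, 0.2),  # also dentist-ish
}
query = (1.0, 0.0)  # "dentist"

# System one: a single nearest-neighbor lookup.
nearest = min(memories, key=lambda name: dist(memories[name], query))

# System two: an exhaustive scan over everything in memory.
count = sum(1 for v in memories.values() if dist(v, query) < 0.5)

print(nearest, count)  # Dr. Lee 2
```

The lookup surfaces one dentist instantly; the count only falls out of visiting every vector, which is the part that feels effortful for both humans and retrieval systems.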
[01:02:22.533] 👩🎤 Ate-A-Pi: Yeah, yeah, yeah. And yeah.
[01:02:29.745] 👩🎤 Ate-A-Pi: Indeed, indeed. And so some questions are difficult, and some questions are impossible, and some questions are just easy. And there isn't really that much logic behind that sometimes, and you wonder why. In the same way that neural nets sometimes can solve some things very easily, like recall, like facts very easily, but then have
[01:02:36.075] 👕 Evan SC: Mm-hmm.
[01:02:57.317] 👩🎤 Ate-A-Pi: much a lot of difficulty with arithmetic in the earlier days in the last year, although that seems to be getting solved.
[01:03:03.284] 👕 Evan SC: Oh yeah.
And another place where I think this is kind of interesting is when you think about diffusion models and how hard it was for them to get hands right, but faces were so easy. Like, day one, you've got a good face, but a hand... And if you think about humans' progression through their own artistic journey; like, I don't know if you're an artist, but I'm really into drawing, and something that's still really hard for me is hands, right? It's actually just really hard to draw hands. And
Same exact thing in diffusion. Like, why would that be? Why would it be so hard for them to draw a hand and for me to also draw a hand? So yeah, there's definitely closer resonance. And yet I also wonder if, yeah, go ahead.
[01:03:49.009] 👩🎤 Ate-A-Pi: For me, all-
[01:03:54.477] 👩🎤 Ate-A-Pi: No, for me, as you said, the hands indicated to me that in the training process, the rewards had very high rewards for the faces. But for the rest of the picture, as long as it was kind of there, it was fine. And the results of that training process were hands.
Words, right, the words in the earlier days, until three months ago, three, four months ago, there was no model that could actually output words properly, right? And then all of a sudden, within like three months, all the models can do it now, right? And so you have this kind of thing, yeah.
[01:04:34.065] 👕 Evan SC: Mm-hmm.
[01:04:37.887] 👕 Evan SC: Well, here's something that actually.
[01:04:43.062] 👕 Evan SC: No, I was gonna say here's something that actually brings this all together because if you go into lucid dreaming, if you go into a dream, what you'll find is that faces look normal, words don't look normal. In fact, if you look at text in a dream, the way it looks if you look at it up close is very similar to how it looks when AI tries to generate it. And similarly, when you look at a hand in a dream, the problems with...
dream hands are very similar to the problems that happen in diffusion models. They often don't have the right number of fingers. They're kind of like slightly skewed. And so that's why people always say, look at your hands and your dream. If they look weird, you know, it's a dream. And so I think one piece of it might be that it's like the training data, but I also think there's something about this, about these kind of sequential logic, like memory requiring things. And when I say that, I mean like
A face is basically just symmetrical. You don't really have a sequence involved there. It's just kind of like, face, I don't know. Whereas with a hand, you know, which finger are you on? Are you on the thumb, or are you on the first finger? Are you on the second finger? And same thing with writing, right? Which line of the word is this? Are we at the bottom curve of the S or the top curve of the S? So I think it's just a lot harder to identify where you are at.
But it is also interesting that our dreaming brain seems to similarly struggle to render these same things.
[01:06:13.265] 👩🎤 Ate-A-Pi: And perhaps every diffusion model is a dreamer. So.
[01:06:19.336] 👕 Evan SC: That part I don't know, maybe. Maybe they're just dreaming it away.
[01:06:25.149] 👩🎤 Ate-A-Pi: Maybe they're just dreaming it away. Awesome. So I just want to thank you for your time. HF0 sounds like an amazing program. So thank you for your time with me today. And yeah, I hope to see some great teams come out from the program.
[01:06:48.782] 👕 Evan SC: No, you will. You definitely will. It was great talking to you today.
[01:06:53.545] 👩🎤 Ate-A-Pi: All right.