Our first fireside is between Sam Altman and Patrick Collison. I first met Patrick back in 2019 when he spoke on the Sohn stage. Back then, his payment company Stripe was processing around $200 billion a year.
Over the next twelve months, they'll process more than a trillion dollars. In addition to running an amazing company, Patrick recently set up Arc Institute, which provides multi-year, no-strings-attached funding to world-class scientists. And that's a cause that's near and dear to our hearts.
At Sohn, we funded Rockefeller University with a very similar mandate. And last year Patrick helped me organize Sohn, and all the funds we raised went to Arc Institute. Sam Altman is the co-founder and CEO of OpenAI.
He and Paul Graham were actually the first two checks into Stripe, and so he and Patrick have known each other for a long time. Sam said recently that AI is a rare example of an extremely hyped thing whose impact almost everyone still underestimates, even in the medium term, and that in the literal sense, most people are short innovation and long stasis. I'm hoping Patrick and Sam will get into the investment implications of AI today and help us avoid being long stasis.
Patrick, over to you. Thank you, Greg, and thank you, Sam, for being with us. Last year I actually interviewed Sam Bankman-Fried, which was clearly the wrong Sam to be interviewing, so it's good to correct it.
This year, with the right Sam, we'll start out with the topic on everyone's mind. So when will we all get our Worldcoin? I think if you're not in the US, you can get one in a few weeks. If you're in the US, maybe never, I don't know. It depends how truly dedicated the US government is to banning crypto. So Worldcoin launched around a year ago or so.
Actually, it's been in beta for maybe a year, but it will go live relatively soon outside of the US. And in the US, you just won't be able to do it, maybe ever, I don't know. All right. Which is a crazy thing to think about.
Think whatever you want about crypto and the ups and downs, but the fact that the US is the worst country in the world to have a crypto company in, or you just can't offer it at all is sort of a big statement. Like historically big statement. Yes.
It's hard to think of the last technology for which that was the case. Maybe the Europeans are supposed to do this, not us. Yes.
Supersonic air travel or something. Or like us. All right.
So I presume almost everyone in the audience is a ChatGPT user. What is your most common ChatGPT use case? Like, not when you're testing something, just when you actually want to get something done, where ChatGPT is purely an instrumental tool for you. Summarization, by far. I don't know how I would still keep up with email and Slack without it, just pasting a bunch of email or Slack messages into it.
Hopefully we'll build some better plugins for this over time. But even doing it the manual way works pretty well. Have any plugins become part of your workflow yet? Browsing and the code interpreter once in a while, but honestly, for me personally, they have not yet tipped into a daily habit.
Obviously, it seems very plausible that we're on a trend of superlinear realized returns in terms of the capabilities of these models. But who knows? Maybe we'll asymptote soon.
Not that likely, but it's at least a possibility. If we end up in the world where we asymptote soon, what do you think, kind of ex post, we will look back on as the reason? Having been too little data, not enough compute? What's the most likely problem? Look, I really don't think it's going to happen, but if it does, I think it'd be that there's something fundamental about our current architectures that limits us in a way that is not obvious today.
So maybe we can never get the systems to be very robust and thus we can never get them to reliably stay on track and reason and understand when they're making mistakes and thus they can't really figure out new knowledge very well at scale. But I don't have any reason to believe that's the case. And some people have made the case that we're now training on kind of order of all of the internet's tokens and you can't grow that another two orders of magnitude.
I guess you could counter with the synthetic data generation. Do you think data bottlenecks matter at all? I think you just touched on it. As long as you can get over the synthetic data event horizon where the model is smart enough to make good synthetic data, I think it should be all right.
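To make the "synthetic data event horizon" idea concrete, here is a minimal sketch, under assumptions of my own, of what a generate-filter-retrain loop could look like. Every name in it (generate_candidates, quality_score, fine_tune) is a hypothetical placeholder, not a real API or anything OpenAI has described.

```python
# Minimal sketch of a synthetic-data bootstrapping loop.
# All model methods here are hypothetical placeholders, not a real API.

def bootstrap_on_synthetic_data(model, prompts, rounds=3, threshold=0.8):
    for _ in range(rounds):
        # 1. The current model generates candidate training examples.
        candidates = [model.generate_candidates(p) for p in prompts]

        # 2. Keep only examples scored above a quality bar; the "event
        #    horizon" is the point where enough survive this filter to be
        #    worth training on.
        kept = [c for batch in candidates for c in batch
                if model.quality_score(c) >= threshold]

        # 3. Train the next model on the filtered synthetic examples.
        model = model.fine_tune(kept)
    return model
```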
We will need new techniques for sure. I don't want to pretend otherwise in any way. The naive plan of just scaling up a transformer with pretraining tokens from the internet, that will run out, but that's not the plan. So one of the big breakthroughs in, I guess, GPT-3.5
and 4 is RLHF. If you, Sam, personally sat down and did all of the RLHF, would the model be significantly smarter? Like, does it matter who's giving the feedback? I think we are getting to the phase where you really do want smart experts giving the feedback in certain areas to get the model to be as generally smart as possible. So will this create a crazy battle for the smartest grad students? I think so.
I don't know how crazy of a battle it'll be, because there are a lot of smart grad students in the world, but smart grad students I think will be very important. And how should one think about the question of how many smart grad students one needs? Like, is one enough, or do you need 10,000? We're studying this right now. We really don't know how much leverage you can get out of one really smart person where the model can help and the model can do some of its own RL. We're deeply interested in this, but it's a very open question. Should nuclear secrets be classified? Probably yes. I don't know how effective we've been there.
I think the reason that we have avoided nuclear disaster is not solely attributable to the fact that we classified the secrets, but that we did something. We did a number of smart things and we got lucky. The amount of energy needed, at least for a long time, was huge and sort of required the power of nations and we made the IAEA, which I think was a good decision on the whole and a whole bunch of other things too.
So yeah, I think probably anything you can do there to increase probability of a good outcome is worth doing. Classification of nuclear secrets probably helps. Doesn't seem to make a lot of sense to not classify it.
On the other hand, I don't think it'd be a complete solution. What's the biggest lesson we should take from our experience with nuclear nonproliferation, in the broader sense, as we think about all the AI safety considerations that are now central? So first of all, I think it is always a mistake to draw too much inspiration from a previous technology.
Everybody wants the analogy. Everybody wants to say oh it's like this or it's like that, or we did it like this, so we're going to do it like that again. And the shape of every technology is just different.
However, I think nuclear materials and AI supercomputers do have some similarities, and this is a place where we can draw more parallels and inspiration than usual. But I would caution people against overlearning the lessons of the last thing. I think something like an IAEA for AI, and I realize how naive this sounds and how difficult it is to do, but getting a global regulatory agency that everybody signs up for, for extremely powerful AI training systems, seems to me like a very important thing to do.
So I think that's one lesson we could learn. And if it's established and exists tomorrow, what's the first thing it should do? The easiest way to implement this would be a compute threshold. The best way to implement it would be a capabilities threshold, but that's harder to measure. Any system over that threshold, I think, should submit to audits, give full visibility to that organization, and be required to pass certain safety evals before releasing systems. That would be the first thing.
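As a rough illustration of how a compute threshold could be operationalized, here is a back-of-the-envelope sketch using the common approximation that training a dense transformer costs roughly 6 FLOPs per parameter per token. The threshold value and run size below are illustrative assumptions, not figures from the conversation or from any actual regulation.

```python
# Back-of-the-envelope check of a training run against a compute threshold.
# Uses the rough ~6 * parameters * tokens FLOPs heuristic for dense
# transformers; the threshold and run size are purely illustrative.

def training_flops(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

THRESHOLD_FLOPS = 1e26  # illustrative reporting threshold, not a real rule

run = training_flops(parameters=1e12, tokens=2e13)  # hypothetical large run
if run >= THRESHOLD_FLOPS:
    print(f"{run:.2e} FLOPs >= threshold: run would require audits and safety evals")
else:
    print(f"{run:.2e} FLOPs < threshold: below the reporting line")
```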
That would be the first thing. And some people on the I don't know how one would characterize the side, but let's say the more pugilistic side would say that all sounds great, but China is not going to do that and therefore we'll just be handicapping ourselves and consequently it's a less good idea than it seems on the surface. There are a lot of people who make incredibly strong statements about what China will or won't do that have never been to China, never spoken to someone who has worked on diplomacy with China in the past, really kind of know nothing about complex, high stakes international relations.
I think it is obviously super hard. But also I think no one wants to destroy the whole world and there is reason to at least try here. Also, I think there's like a bunch of unusual things about and this is why it's dangerous to learn from any technological analogy of the past.
There's a bunch of unusual things here. There's, of course, the energy signature and the amount of energy needed, but there aren't that many people that are making the most capable GPUs. And you could require them all to put in some sort of monitoring thing that, say, if you're talking to more than 10,000 other GPUs, like, you got it, whatever, there's options.
Yeah. So one of the big surprises for me this year has been the progress in the open source models. And it's been this kind of frenzied pace over the last 60 days or something.
How good do you think the open source models will be in a year, say? Well, actually, I'll just ask that first. Yeah, good. I think there's going to be two thrusts to development here.
There will be the hyperscalers' best closed-source models, and there will be the progress that the open source community makes. And it'll be a few years behind or whatever. A couple of years behind, maybe.
But I think we're going to be in a world where there's very capable open source models and people use them for all sorts of things, and the creative power of the whole community is going to impress all of us. And then there will be the frontier of what people with the giant clusters can do and that will be fairly far ahead. And I think that's good because we get more time to figure out how to deal with some of the scarier things.
David Luan made the case to me that the set of economically useful activities is only a subset of all possible activities, and that pretty good models might be sufficient to address most of that first set. And so maybe the super large models will be very scientifically interesting, and maybe you'll need them to do things like generate further AI progress or something. But for most practical day-to-day cases, maybe an open source model will be sufficient.
How likely do you think that future is? I think for many super economically valuable things, yes, the smaller open source model will be sufficient. But you actually just touched on the one thing I would say, which is like, help us invent super intelligence. That's a pretty economically valuable activity.
So is cure all cancer or discover new physics or whatever else. And that will happen with the biggest models first. Should Facebook open source LLaMA at this point? Probably, they should.
Should they adopt a strategy of open sourcing their foundation LLMs, or just LLaMA in particular? I think Facebook's AI strategy has been confused at best for some time, but I think they're now getting really serious, and they have extremely competent people, and I expect a more cohesive strategy from them soon. I think they'll be a surprising new real player here. Is there any new discovery that could be made that would meaningfully change your P(doom), either by elevating it or by decreasing it? Yeah, I mean, a lot. I think most of the new work between here and superintelligence will move that probability up or down.
Okay. Is there anything you're particularly paying attention to? Any kind of contingent fact you'd love to know? First of all, I don't think RLHF is the right long-term solution. I don't think we can rely on that.
I think it's helpful. It certainly makes these models easier to use. But what you really want is to understand what's happening in the internals of the models and be able to align that, say, like, exactly here is the circuit or the set of artificial neurons where something is happening and tweak that in a way that then gives a robust change to the performance of the model.
And if we can get the mechanistic interpretability stuff to work well, that, and then there's a whole bunch of things beyond it, but that direction, if we can get it to reliably work, I think everybody's P(doom) would go down a lot. And do you think sufficient interpretability work is happening? No. Why not? A lot of people say they're very worried about AI safety, so it seems superficially surprising.
Most of the people who say they're really worried about AI safety just seem to spend their days on Twitter saying they're really worried about AI safety, or any number of other things. There are people who are very worried about AI safety and doing great technical work there, but we need a lot more of them. We're certainly shifting a lot more effort, a lot more technical people, inside OpenAI to work on that.
But what the world needs is not more AI safety people who post on Twitter and write long philosophical diatribes. It needs more people who are going to do the technical work to make these systems safe and reliably aligned. And I think that's happening.
It'll be a combination of good ML researchers shifting their focus and new people coming into the field. A lot of people on this call are active philanthropists, and most of them don't post very much on Twitter. They hear this exchange and think, oh, maybe I should help fund something in the interpretability space.
If they're having that thought, what's the next step? One strategy that I think has not happened enough is grants. Like grants to single people or small groups of people that are very technical, that want to push forward a technical solution and are maybe in grad school or just out or undergrad or whatever. I think that is well worth trying.
They need access to fairly powerful models, and OpenAI is trying to figure out programs to support independent alignment researchers. But I think giving those people financial support is a very good step. To what degree, in addition to being somewhat capital bottlenecked, is the field skill bottlenecked, where there are people who maybe have the intrinsic characteristics required but don't have the four years of learning or whatever else is a prerequisite for their being effective?
I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to take, say, a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this, and I'm astonished how well it works.
It seems that pretty soon we'll have agents that you can converse with in very natural form. Low latency, full duplex. You can interrupt them, the whole thing.
And obviously we're already seeing with things like Character.AI and Replika that even nascent products in this direction are getting pretty remarkable traction. It seems to me that these are likely to be a huge deal, and maybe we're substantially underestimating it again, especially once you can converse through voice. (a) Do you think that's right, and then (b) if that's right, what do you think the likely consequences are? Yeah, I do think it's right, for sure.
A thing someone said to me recently that has stuck with me is that they're pretty sure their kids are going to have more AI friends than human friends. And I don't know what the consequences are going to be. One thing that I think is important is that we establish a societal norm soon that you know if you're talking to an AI or a human or some sort of weird AI-assisted-human situation.
But people seem to have a hard time differentiating in their head, even with these very early, weak systems like Replika that you mentioned. Whatever the circuits in our brain are that crave social interaction seem satisfiable, for some people, in some cases, with an AI friend. And so figuring out how to handle that, I think, is tricky. Someone recently told me that a frequent topic of discussion on the Replika subreddit is how to handle the emotional challenges and trauma of upgrades to the Replika models.
Because suddenly your friend becomes somewhat lobotomized, or at least a somewhat different person. And presumably these interlocutors all know that Replika is in fact an AI. But somehow, to your point, our emotional response doesn't necessarily seem all that different.
I think what most people assume we're heading to is a society with one sort of supreme-being superintelligence floating in the sky or whatever. And I think what we're actually heading to, which is sort of less scary but in some senses still as weird, is a society that just has a lot of AIs integrated along with humans. And there have been movies about this for a long time: there's C-3PO or whoever you want in Star Wars.
People know it's an AI, but it's still useful. They still interact with it. It's kind of cute and person-like, although you know it's not a person.
And in that world, where we just have a lot of AIs contributing to the societal infrastructure we all build up together, that feels manageable and less scary to me than the single big superintelligence. Well, this is a financial event. There's kind of a debate in economics as to whether changes in the working-age population push real interest rates up or down, because you have a whole bunch of countervailing effects.
And yeah, they're more productive, but you also need capital investment to kind of make them productive and so forth. How will AI change real interest rates? I try not to make macro predictions. I'll say I think they're going to change a lot.
Okay, well, how will it change measured economic growth? I think it should lead to a massive increase in real economic growth, and I presume we'll be able to measure that reasonably well. And will at least the early stages of that be an incredibly capital-intensive period, because we now know which cancer-curing factories or pharma companies we should build and what exactly the right reactor designs are and so forth? I would take the other side of that. Again, we don't know.
But I would say that human capital allocation is so horrible that if we knew exactly what to do, even if it's expensive... You mean the present-day capital allocation done by humans? Yeah. Or do you mean the allocation of the actual people themselves across society into different roles? No, I meant the way that we allocate capital, allocation done by humans. Yeah, done by humans.
How much do you think we spend on cancer research today? How much we spend on cancer research a year? I don't know. Well, it depends if you count the pharma companies, but it's probably about eight or nine billion from the NIH. And then I don't know how much the drug companies spend, but probably some small multiple of that again, so it's likely under 50 billion. Okay, I was going to guess, total guess, between 50 and 100 billion per year.
And if an AI could tell us exactly what to do and we spent, say, $500 million a year on one single project, which would be huge for a single project, but it was the right answer, that would be a great efficiency gain. Okay, so we will actually become significantly more capital efficient once this technology arrives? That's my guess. Interesting. For OpenAI, obviously you guys want to be and are a preeminent research organization.
But with respect to commercialization, is it more important to be a consumer company or an infrastructure company? I am a believer as a business strategy in platform plus killer app. I think that's like worked for a bunch of businesses over time for good reason. I think the fact that we're doing a consumer product is helping us make our platform much better and I hope over time that we figure out how to have the platform make the consumer app much better too.
So I think it's a good, cohesive strategy to do them together. But as you pointed out, really what we're about is that we'd like to do the best research in the world, and that is more important to us than any productization. And building the organization that can make these repeated breakthroughs: they don't all work. We've gone down some bad paths, but we have figured out more than our fair share of the paradigm shifts, and I think the next big ones will come from here too. And that's really what is important to us to build. Which breakthrough are you most proud of OpenAI having made? The whole GPT paradigm, I think. I think that was the kind of thing that has been transformative and an important contribution back to the world, and it comes from the sort of work, the multiple kinds of work, that OpenAI is good at combining. Google I/O is tomorrow, I think, or starts tomorrow. If you were CEO of Google, what would you do? I think Google is doing a good job.
I think they have had quite a lot of focus and intensity recently and are really trying to figure out how they can move to remake a lot of the company for this new technology. So I've been impressed. Are these models and their attendant capabilities actually a threat to search, or is that just a superficial response that is a bit too hasty? I suspect they mean search is going to change in some big ways, but they're not a threat to the existence of search.
So I think it would be a threat to Google if Google did nothing. But Google is clearly not going to do nothing. How much important AI/ML research comes out of China? Sorry, go ahead.
I would love to know the answer to that question. How much comes out of China that we get to see? Not very much, yes. I mean, from the published literature, nonzero, but not a giant amount. Do you have any sense as to why? Because the number of published papers is very large, and there are a lot of Chinese researchers in the US who do fantastic work.
And so why is the per-paper impact from the Chinese stuff relatively low? I mean, what a lot of people suspect is they're just not publishing the stuff that is most important. Do you think that's likely to be true? I don't trust my intuitions here. I just feel confused.
Would you prefer OpenAI to figure out a 10x improvement to training efficiency or to inference efficiency? It's a good question. It sort of depends on how important synthetic data turns out to be. I mean, I guess if forced to choose, I would choose inference efficiency, but I think the right metric is to think about all the compute that will ever be spent on a model, training plus all inference, and try to optimize that, right.
And you say inference efficiency because that is likely the dominant term in that equation. Probably. I mean, if we're doing our jobs right.
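A quick worked example of why inference tends to dominate the lifetime-compute metric described above: total compute is the one-time training cost plus the per-query cost times the number of queries served, and for a heavily used model the second term usually swamps the first. All numbers below are illustrative assumptions, not actual figures for any real model.

```python
# Lifetime compute of a model: one-time training cost plus all inference.
# All numbers are illustrative assumptions only.

train_flops = 2e25            # one-time training cost
flops_per_query = 1e12        # cost of serving a single request
queries_served = 1e14         # total requests over the model's lifetime

inference_flops = flops_per_query * queries_served   # 1e26
lifetime_flops = train_flops + inference_flops

print(f"training share of lifetime compute: {train_flops / lifetime_flops:.1%}")
# With these assumptions inference is ~5x the training cost, so a 10x
# inference-efficiency gain saves far more total compute than a 10x
# training-efficiency gain would.
```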
When GPT-2 came out, only a very small number of people noticed that that had happened and really understood what it signified. To your point about the importance of the breakthrough, is there a GPT-2 moment happening now? There are a lot of things we're working on that I think will be GPT-2-like moments if they come together, but there's nothing released that I could point to yet and say with high confidence, this is the GPT-2 of 2023. But I hope by the end of this year, or by next year, that will change. What's the best non-OpenAI AI product that you use? Honestly, I don't use a lot of things.
I kind of have a very narrow view of the world, but ChatGPT is the only AI product I use daily. Is there an AI product that you wish existed, that you think our current capabilities make possible or will very soon make possible, that you're looking forward to? I would like a Copilot-like product that controls my entire computer, so it can look at my Slack and my email and Zoom and iMessages and my massive to-do list documents and just do most of my work.
Some kind of three-plus-plus sort of thing. You mentioned curing cancer. Is there an obvious application of these techniques and technologies to science that, again, you think we have or will soon have the capabilities for, that you don't see people obviously pursuing today? There's a boring one and an exciting one.
The boring answer is that if you can just make really good tools like that one I just mentioned, and accelerate individual scientists, each by a factor of three or five or ten or whatever, that probably increases the rate of scientific discovery by a lot, even though it's not directly doing science itself. The more exciting one is: I do think that same or a similar system could go off and start to read all of the literature, think of new ideas, do some limited tests in simulation, email a scientist and say, hey, can you run this for me in the wet lab? And probably make real progress.
I don't know exactly how the ontology works here, but you can imagine building these better general-purpose models that work kind of like a human: they go read a lot of literature, et cetera, maybe smarter than a human, better memory, who knows what. And then you can imagine models trained on certain data sets that are doing something nothing like what a human does: you're mapping CRISPRs to editing accuracies or something like that, really a special-purpose model in some particular domain.
Do you think that the scientifically useful applications of these models will come more from the first category, where we're kind of creating better humans, or from the second category, where we're creating these predictive architectures for problem domains that are not currently easy to work with? I really don't know. In most areas I am willing to give some rough opinion; in this one I never have. I don't feel like I have a deep enough understanding of the process of science and how great scientists actually work to say.
I guess I would say if we can figure out someday how to build models that are really great at reasoning, then I think they should be able to make some scientific leaps by themselves. But that requires more work. OpenAI has done a super impressive job of fundraising and has a very unusual capital structure, with the nonprofit and the Microsoft deal.
And all the things are weird. Are capital structures underrated? Like, should organizations and companies and founders be thinking more expansively about this? People default to, all right, we're just a Delaware C Corp. OpenAI, as you pointed out, broke all the rules.
Should people be breaking more corporate structure rules? I suspect not. I suspect it's a horrible thing to innovate on. You should innovate on products and science, not corporate structures.
The shape of our problem is just so weird that despite our best efforts, we had to do something strange. But it's been an unpleasant experience and a time suck on the whole. And the other efforts I'm involved in have always had normal capital structures, and I think that's better.
Do we underestimate the extent... So a lot of companies you're involved with are very capital intensive. Maybe OpenAI is the most capital intensive, although who knows, maybe Helion will retake that or something. But do we underestimate the extent to which capital is a bottleneck, the bottleneck, on realized innovation? Is that some kind of common theme running through the various efforts you're involved with? Yes, there are four companies that I would say I'm involved with, other than just having written a check as an investor.
And all of them are super capital intensive. Do you want to enumerate those for the audience? OpenAI and Helion are the things I spend the most time on, and then also Retro and Worldcoin. But all of them raised a minimum of nine digits before any product at all, and in OpenAI's case much more than that. And all have raised in the nine digits as either a first round or before releasing a product.
And they all take a long time, many years, to get to a release of a product. And I think there's just a lot of value in being willing to do stuff like this. And it fell out of favor in Silicon Valley at some point.
And I understand why. It's also great for companies that only ever have to raise a few hundred thousand, a million dollars and get to profitability. But I think we overpivoted in that direction and we have forgotten collectively how to do the high risk, high reward, hugely capital and time intensive bets.
And those are also valuable. We should be able to support both. And this touches on the question of why there aren't more Elons, in that, I guess, the two most successful hardware companies, in the broader sense, started in the last 20 years were both started by the same person.
That seems like a pretty surprising fact. And obviously Elon is singular in many respects. But what's your answer to that question? Do we lack, you know, people with his particular set of circumstances? Is it actually a capital story along the lines of what you're saying? If it was your job to cause there to be more SpaceXes and Teslas in the world, and maybe you're trying to do some of that yourself, but if you had to push in that direction systematically, what would you be trying to change? I have never met another Elon.
I have never met another person who I think can be developed easily into another Elon. He is sort of this strange n-of-one character. I'm happy he exists in the world, of course, but he's also a complex person.
I don't know how you get more people like that. I don't know what you think about how to make more. I'm curious.
I don't know. I suspect there's something in the culture on both the founder and the capital side, the kinds of companies the founders want to create, and then the disposition, and to some extent, though maybe to a lesser extent, the fund structure of the sources of capital. A surprise for me, as I've learned more about this space over the last 15 years, is the extent to which there's a finite or essentially finite set of funding models in the world.
And each has a particular set of incentives and, for the most part, a particular sociology, and that's evolved with time. Venture capital was itself an invention. PE in its modern form was essentially an invention too.
I doubt we are done with that process of funding model invention, and I suspect there are models at least somewhat different from those that prevail today that are somewhat more amenable to this kind of innovation. Okay, so one thing I'm excited about is, and you're a great example of this, that most of the people who became tech billionaires in the last cycle are pretty interested in putting serious capital into long-term projects. And the availability of significant blocks of capital upfront for high-risk, high-reward, long-duration projects that rely on fundamental science and innovation is going to change, already has changed, dramatically.
So I think there's going to be a lot more capital available for this. You still need the Elon-type people to do it. One project I've always been tempted to do is to say, okay, we're going to identify the, let's say, 100 most talented people we can find who want to work on these sorts of projects.
We're going to give them, like, $250K a year, so enough money for ten years or something. So it's like giving a 20-year-old tenure, or something that feels like tenure.
Let them go off, without the kind of pressure that most people feel, with the certainty to explore for a long period of time and not feel the very understandable pressure to make a ton of money first, and put them together with great mentors and a great peer group. And then the financial model would be: if you don't start a company, that's fine, you'll be a writer or a politician or whatever.
If you start a company, the vehicle gets to invest on predefined terms. I think that would pay off and someone should do it. That's kind of the university model, I guess.
And I don't mean that as, this already exists, you're just reinventing the bus or something. I mean that it may suggest evidence that it can work. And universities are usually not that good at supporting their spin-outs, but it happens to at least some extent.
And yes, one of the theses for Arc, in fact, is that by maybe formalizing this somewhat more than it is, by encouraging it somewhat more than it tends to be, that might actually be a pretty effective model. So Silvana, my co-founder at Arc, I've known her since we were teenagers, more than half our lives. And Patrick Hsu, the other co-founder, she did her PhD with, and so she'd known him for a long time. And so, to your point about the long-term investment, part of how I was comfortable with it is I'd known this person for, again, a really extended period. As you think of something like, you mentioned Retro or some of these other companies, where you didn't.
How do you decide whether the person is the kind of person you can undertake this super long term expedition with? Actually, I had known Joe for a long time. It's a bad example. I guess that's the question.
Do you need to have known the person for a long time? It's super important. It doesn't always work, but I try to work with people that I've known for, like, a decade-plus at this point. You don't want to only do that; you want some new energy and volatility in the mix.
But having a significant proportion of people that you've known for a long time, worked with for a long time, I think that's really valuable. In the case of OpenAI, I had known Greg Brockman for a long time. I had met Ilya only maybe a year before, even a little bit less, before we started the company, but spent a lot of time together with him, and that was a really good combination.
But I derive great pleasure from having working relationships with people over decades, through multiple projects, and it's a lot of fun to feel like you're building together towards something that has a very long arc. Agreed. Which company that is not thought of as an AI company will benefit the most from AI over the next five years? I think some sort of investing vehicle is going to figure out how to use AI to be an unbelievable investor and just have crazy outperformance, like RenTech with these new technologies.
Is there an operating company that you look at? Well, do you think of Microsoft as an AI company? Let's say no for the purpose of this question. Okay. I think Microsoft will transform themselves across almost every axis with AI. And is that because they're just taking it more seriously, or because there's something about the nature of Microsoft that makes them particularly suited to this? They understood it sooner than others and have been taking it more seriously than others. What do you think the likelihood is that we will come to realize that GPT-4 is somehow significantly overfit on the problems in the domains that it was trained on? Or how would we know if it was? Or do you even think about overfitting as a kind of concern? Again, it's all about the Codeforces problems before 2021 versus after 2021, where it does better on the earlier ones, et cetera.
I think the base model is not significantly overfit, but we don't understand the RLHF process as well, and we may be doing more brain damage to the model in that than we even realize. Do you think that g, the generalized measure of intelligence, exists in humans as anything other than a statistical artifact? And if the answer to that is yes, do you think there exists an analogous sort of common factor in models? I think it's a very imprecise notion, but there's clearly something real that it's getting at, in humans and for models as well. So I think people use way too many significant figures when they try to talk about it.
But it's definitely my experience that very smart people can learn, I won't say arbitrary things, but a lot of things very quickly. There's also some people who are just much better at one kind of thing than another. And I don't want to debate the details too much here, but I'll say it's a general thing.
I believe that model intelligence will also be somewhat fungible. Based on your experience thinking about all this AI safety stuff, how, if at all, do you think synthetic biology should be regulated? I mean, I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn't a great experience, though it wasn't that bad compared to what it could have been.
But I'm surprised there has not been more global coordination after that, and I think we should have more. What do we actually do? Because I think some of the same challenges apply as in AI, actually. I think the production apparatus for synthetic pathogens is not necessarily that large, and the observability and telemetry is difficult. No, I think this one's a lot harder than the AI challenge, where we do have some of these characteristics, like tremendous amounts of energy and lots of GPUs. I haven't thought about this as much.
I would ask you what we should do. I think that if someone told me, this is a problem, what should we do, I would call you. So what should we do? I don't know that I have a ready prescription. The easy thing to say, and I'm not sure how much it helps, is that we need a lot more general observability, wastewater sequencing, things like that. We should do that regardless. It doesn't help us with synthetic biology attacks specifically, and the fact that we don't have a giant correlational data set of the pathogens that people are infected with and then sort of longitudinal health outcomes is just a crazy fact in general. And then obviously there's a somewhat enumerable set of infectious diseases, like classes of infectious diseases, that people tend to be most susceptible to, COVID itself being an example of this. And so I think we could make a lot more progress on pan-variant treatments and vaccines than we have.
And so for the particular case, like, if it is true that COVID was engineered, for instances of that sort, slight modifications to already existing infectious diseases, we can probably significantly improve our protections too. Obviously the concerning category would be completely novel pathogens, and that's presumably sort of an infinite search space. Then you start to get into, how do you... I mean, there's again a finite set of ways to enter cells, receptors and so forth, so maybe you can use that to kind of tile the space of possible treatments. Invest in a lot more surplus manufacturing capacity than we have for novel vaccines, and hopefully mRNA platforms and similar make it easier to have general-purpose manufacturing capabilities there.
But as you can tell from this kind of long answer, I don't think there's a silver bullet, and it's, I think, plausible that even if you did everything I just said well, you still would not have enough. So I think it's hard. Getting way better at rapid-response treatment, vaccination, whatever, that all seems like an obvious thing to do that I would have, again, hoped for more progress on by now.
Yeah, I very much agree with that. And clinical trials, that was the limiting step in COVID. I think it's at this point been widely reported and remarked upon that we had the vaccine candidate in January and, you know, everything after that... I mean, some of what happened after that was obviously manufacturing scale-up, but much of what happened after that was just how long it took us to tell that this actually works and is sufficiently safe.
And that seems among the lowest-hanging fruit in the entire biomedical ecosystem to me. 100%. But I guess your investment in TrialSpark is consistent with that observation. So Ezra Klein and Derek Thompson are writing a book about the idea of an abundance agenda, and the idea that so much of the left, of the liberal sensibility, is about sort of forbearance and some kind of quasi-neopuritanism, et cetera.
And they believe, and I guess have been making the case in some of their respective public writings thus far, and for the purpose of this book, the argument that actually, for a society to be equal and prosperous and environmentally friendly and so forth, to actually realize many of these values we care about, we will need just a lot more stuff in many, many different domains.
More, kind of, the Henry Adams curve realized. And they frequently observe that permitting in the broadest sense, all sorts of well-intentioned but self-imposed restrictions, are the rate-limiting factor in making this happen, maybe most obviously with the energy transition. Across all the different things that you're involved with, to what degree do you think this dynamic of self-imposed restrictions and strictures is the relevant variable in the progress that actually ensues? It definitely seems huge, but I think also there are a lot of people who like to say, well, this is the only problem, and if we could just resolve, like, permitting writ large, we'd all be happy.
And I don't think it's quite that simple either. I do think that the current system... so I totally agree that we need much more abundance, and my personal belief is that abundant energy and abundant intelligence are going to be two super important factors there, but there are many others. Certainly, as we start to get closer to being able to deliver a lot of fusion to the world, understanding just how painful the process to get these things out is, is disheartening to say the least.
And it's pushing us to look at all sorts of very strange things that we can do sooner rather than wait for all of the permitting processes that will need to happen to connect these to the grid. It's like much easier to go desalinate water in some country that just has their own nuclear authority or whatever. I think it is a real problem, and I think we don't have that much societal will to fix it, which makes it even worse.
But I don't think it's the only problem. If Ezra and Derek interviewed you, and I guess they should, for this book, and asked you for your number one diagnosis as to what is limiting the abundance agenda, what would you nominate? Societal collective belief that we can actually make the future better, and the level of effort we put toward it. Every additional sort of gate you put on something, when these things are fragile anyway, I think makes them tremendously less likely to happen. And so it's really hard to start a new company.
It's really hard to convince people it's a good thing to do. Right now in particular, there's just like a lot of skepticism of that. Then you have this regulatory thing and you know it's going to take a long time, so maybe you don't even try that and then you know it's going to be way more expensive.
So it's just like there's too much friction and doubt at every stage of the process of idea to mass deployment in the world. And I think it makes people just try less than they used to or believe less. When we first met, whatever was 15 or so years ago, mark Zuckerberg was preeminent in the technology industry and in his twenty s, and not that long before then, mark Andreessen was preeminent in the industry and in his twenty s and not that long before then, bill Gates and Steve Jobs and so forth.
Generally speaking, for most of the history of the software sector, one of the top three people has been in their twenties. And it doesn't seem to me that that's true today. I mean, there are some great people in their twenties, but I'm not sure. Is that a problem?
Yeah, it's not good. Something has really gone wrong, and there's a lot of discussion about what this is. But where are the great founders in their twenties? It's not so obvious.
There are definitely some, and I hope we'll see a bunch. I hope this was just a weird accident of history, but maybe something's really gone wrong in our educational system or our society, or just how we think about companies and what people aspire to. But I think it is worth significant consideration and study.
On that note, I think we're at time. Thank you so much for doing this interview. Thank you very much.
And thank you to the folks at Sohn and to Greg for hosting.