Bret Speaks with Amy and Devon James of the Web3 Working Group. They discuss internet history, the future of our internet, and the pitfalls we should avoid.
Find the Web3 Working Group at their website: https://www.web3wg.org/
Find the Web3 Working Group on YouTube: https://www.youtube.com/@web3wg
Find the Web3 Working Group on Twitter: https://twitter.com/web3wg
*****
Our Sponsor: MUD\WTR is a coffee alternative with mushrooms and herbs (and cacao!) and is delicious, with ...
I have the pleasure of sitting with my good friends Amy and Devin James, who are co-executive directors of the Web3 Working Group, which is a 501c3.
They know things.
I've been talking to them for years about questions surrounding the security and functioning of the web, and I'm going to persuade them to speak as much in English as possible in talking about where we are headed and what internet it is we will face once we get there.
Amy and Devin, welcome to Dark Horse.
Hi, Bret.
It's so good to see you.
Thank you so much for having us.
Thank you.
Great to see you, Bret.
Yeah, it's great to see you guys.
It has been a long journey behind the scenes where we have checked in regularly about issues of safety and security on the web, issues of the, wow, I can't use the term equitable anymore.
But the way that the web distributes content and pays creators, all sorts of issues that are increasingly central to this mechanism we all use to interface.
So what I'm hoping you guys will do is talk to us about the current state of the web, about Web 3, I'm hoping you will define that term, and we can explore questions about how this interfaces with the various hazards and opportunities provided by artificial intelligence.
Anyway, let's start with that.
What can you tell us about Web 3?
What is it, and what should we do to make it better?
Okay, well, to get started, let's talk about how we got to Web 3.
So Web 1 began when Sir Tim Berners-Lee introduced the web protocol in 1989, 1990.
And before that, we had access to the internet via proprietary stacks.
So you could get on with CompuServe, you could get on with AOL.
But if you were on CompuServe and I was on AOL, we didn't see the same content and we couldn't interact with each other because these were closed proprietary stacks.
And the web was this way that all of the information could get connected and it gave us this open space.
And it was super exciting and it's obviously grown hugely in the last 20, 30 years.
It was actually intended to be, well, it was decentralized right at the beginning because literally everyone would run their own web server.
Right.
And the code was all in plain text English so anyone that wanted to learn about it could just check the source of one website that they liked and write their own and put up a site on GeoCities.
So it was very decentralized.
Yes, exactly.
One of the big changes when it came to Web 2.0 is that the infrastructure became incredibly centralized.
So Web 1.0 is often considered the read-only web because you needed sort of special skills to be able to write content to it, like you needed to be able to host your own server and to write your own web page.
There weren't these kind of like easy tools that we have today for user-generated content.
So Web 2 began in, let's say, the mid-aughts when YouTube, Facebook, MySpace, those kinds of platforms were creating the tools that users could write their own user-generated content to the web.
And so that is thought of by people as the read-write web.
And then Web 3, I don't know when they're going to pinpoint the exact starting point of Web 3, whether it will be with the launch of Bitcoin, although I think that's probably too early, or if it's somewhere now, because this is really when the flurry of building activity is happening. But we're in the midst of it.
And Web3, people call it the read-write-own-web or read-write-verify.
The idea there is that there's this level of transparency and sovereignty that Web2 didn't allow because it was controlled by what we now call Big Tech.
And so Web3 gives us, it changes the shape of the network back to a decentralized shape.
It's pushing the control back out to the ends of the network to the user.
And that's because of things like blockchain technology and some other things, but that's the main one.
This episode is sponsored by Mudwater.
That's M-U-D slash W-T-R.
One of our favorites.
Mudwater makes a fantastic drink.
It's spicy and delicious and chock full of adaptogenic mushrooms and ayurvedic herbs.
With a seventh the caffeine of a cup of coffee, you get the energy without the anxiety, jitters, or crash.
If you like the routine of making something warm to drink in the morning but don't drink coffee or are trying to cut down, try Mudwater.
Each ingredient in Mudwater was added with intention.
It has cacao and chai for just a hint of caffeine, lion's mane mushrooms to support focus, cordyceps, the good kind, to help support physical performance, chaga and reishi to support your immune system, and cinnamon, which is a potent antioxidant in addition to tasting great.
Mudwater also makes a non-dairy creamer out of coconut milk and MCT.
They also make a sweetener out of coconut palm sugar and lucuma, a fruit of an Andean tree used by the Inca, to add if you prefer those options.
Or you can mix and match.
Add a bit of their coconut milk and MCT creamer with some honey from bees, real bees.
Or use Mudwater's lucuma and coconut palm sugar sweetener and skip the bees entirely.
Mudwater is also 100% USDA organic, non-GMO, gluten-free, vegan, and kosher certified.
Mudwater's flavor is warm and spicy with a hint of chocolate plus masala chai, which includes ginger and cardamom, nutmeg, and cloves.
It's also delicious blended into a smoothie.
Try it with banana and ice, milk or milk-like substance, mint, and cacao nibs. It's just the thing in the morning.
To get 15% off, go to mudwtr.com slash Dark Horse Pod to support the show.
Use Dark Horse Pod for 15% off at checkout.
All right, so let me see if I understand this.
And I should just say, personally, I remember the first time I became aware of the World Wide Web.
A postdoc at Michigan, where I was a graduate student, was very excited.
He had just spent the afternoon looking at the World Wide Web and he pulled me into the, I don't know what, it was a technology closet.
It was a room with a couple of Sun workstations.
Do you remember about what year it was?
It would have been '92, something like that.
Nice.
Okay.
Okay.
It was pretty early.
He pulls me into this tech closet and he sits me down in front of the Sun workstation.
He's like, check this out.
You can go anywhere.
All 10 of these webpages.
And there was nowhere really.
You know, there were a couple of pages and it was, I couldn't, let's just put it this way, it did not immediately dawn on me that I was looking at a revolution in the way information would be distributed.
And I did go through a phase where I put together a few crude webpages.
Heather became more advanced at putting together webpages.
But, you know, it was, as you say, a read-only web for most people.
You had to know a little bit of HTML to even get anything to show up on the web.
And if you didn't know a fair amount of it, it would show up badly paginated and spread out to the edge of the screen and all that.
But, you know, OK, so it was a place that you could deposit things and people could read it.
And it was decentralized because anybody, if you could get somebody to host it, you could put anything up.
And then there was the era of, you're calling it Web 2, where you had services that provided mechanisms whereby you would be interfacing with their HTML generator so that you could put things up. But it required you to interface with these services.
And now in Web 3, you're talking about a decentralizing force.
And what's confusing, I suspect, for many people is why the authenticity question and the decentralization question appear to be the same, right?
So a lot of people will be vague on what blockchain exactly means.
Maybe you guys should start by explaining what blockchain is and then we can talk about how it interfaces with the web component to make what you're calling Web3.
Funny enough, Amy was saying, you know, it's hard to really say exactly when Web 3 began, because Bitcoin came out in 2009, but blockchain was actually invented the same year that the web was.
1991 I think.
Around that you're right.
And it was actually a couple of guys named Haber and Stornetta who wanted to solve a problem: they ran into some fraudulent biology papers.
And really what it was is they saw that some changes had been made.
Something had been published, and then they made changes to it, and then they republished it.
And it wasn't transparent that the changes had been made.
And so they wanted to have a database that was replicated enough times where each copy of it was identical to each other so that you could actually see if anyone's trying to make any alterations to things.
In their case, it still had a dependency on a central point of failure.
There was no way to, because your computer's clock and my computer's clock and various web servers' clocks might try to stay in sync with each other, but there's no way to autonomously stay in sync without trusting some central source.
So there was no way to timestamp them without depending on a central source.
They solved that ultimately by literally publishing some aspects of the data in the newspaper once a week.
And so that would be the timestamp of it.
But it just kind of kept on running for years and years and years and years, but it always had that central point of failure.
So what it did have was it had full replication, where you would add a new block and it would contain a bunch of pieces of data and the block would reference the previous block.
Which means in order to add a new block you need to make sure that you have the exact same history as every other node on the network.
And so you can't make any changes to previous blocks without it being both transparent and rejected by everyone else because you've made a change.
To just say it super simply, a blockchain is a public ledger.
When it comes to Bitcoin, it's a public ledger of money.
For those of you that remember the register in the back of your checkbook, think of it like that, where it's just a log of the transactions that have happened.
The group of computers that are verifying those transactions all agree on all of those transactions, and they're publicly available.
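To make the chaining idea concrete, here is a minimal Python sketch (an illustration only, nothing like Bitcoin's actual data structures, consensus rules, or networking): each block stores the hash of the block before it, so editing any earlier entry breaks every later link and the tampering is immediately detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block records the hash of the block before it.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})

def chain_is_valid(chain: list) -> bool:
    # Recompute every link; any edit to an old block breaks all later links.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
add_block(ledger, "Carol pays Dan 1")
print(chain_is_valid(ledger))              # True

ledger[0]["data"] = "Alice pays Bob 500"   # try to rewrite history
print(chain_is_valid(ledger))              # False: the tampering is self-evident
```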
And then your other question was about, did you use the word autonomy or?
Authenticity.
Authenticity.
Yes, the authenticity of that comes from public key cryptography.
And we can get into that kind of as a second part of the conversation.
But essentially, that's just some math.
Well, but the other important thing about what was revolutionary about it was about that ledger.
Like, if you have your own checkbook, you have a single entry ledger.
You're making your own entries.
The Medicis 500 years ago or so popularized the idea of a double entry ledger, where the lender and the borrower both have a copy of the ledger that has two columns for both sides of it, and it prevented fraud from happening.
And it's important to note that 600 years ago, when the Medicis popularized double-entry accounting, that was a dramatic change in business, allowing businesses to grow.
Before that, there was this general ledger system that was more prone to fraud.
Businesses could really only grow to the size of a circle of trust, usually inside of a family.
So that all of the people that were participating just were worthy of trust.
You knew who they were.
But businesses were allowed to become big businesses through the double entry accounting change.
And that's really why blockchain is so revolutionary and people aren't totally getting yet how giant it is.
Because it makes it into a triple entry ledger.
That's right.
Where there's the borrower, the receiver, and then a copy that everyone else has, and that copy is the third entry.
The public is the third entry.
So no changes can be made in the ledger.
Previous to this, triple entry accounting had been theorized, but it hadn't been able to be executed in a way that was trustworthy, and that's what blockchain brings us.
So, I think from the point of view of the audience, it makes sense to hear these explanations, know that you're not going to retain them.
You should understand them as well as possible first time through, but you can sort of store them as the punchline.
That's the way I would go about it.
So, let's take the Medicis and double entry bookkeeping, or double-ledger bookkeeping, into the evolutionary landscape. We would say that prior to this mechanism, the reason that business interactions had to be retained at the family level or something similar was because that reduced the incentive to cheat. When you cheat your family member, you also lose, and so the incentive to cheat is lower. But also, it's iterated interactions.
Your family doesn't change chaotically.
So the point is there's an incentive to maintain a correct record so that those interactions continue, you know; if you burn your credibility with all the people with whom you can trade, then there's nobody to trade with.
So it's a foolish thing to do.
So everybody behaves more honorably.
Yep.
What you're describing is that double entry bookkeeping allows people who don't necessarily have a permanent relationship to trust the accounting enough to proceed with a finite number of transactions, which basically allows you to go from this very small scale to an indefinitely large scale with a high degree of safety.
But you still had to know who the other party was because you still had to have some degree of trackable trust with them.
If they've made lies in the past you want to keep track of who that was so that you don't necessarily interact with them in the future.
And the major change with triple entry accounting is you don't need to know who the other party is.
Because everyone keeps the ledger in sync with each other, and thus it's trustworthy.
The security of the ledger is so high that who the other person is is absolutely immaterial.
If the math is right and the reporting of the ongoing updating ledger is therefore accurate, then there's nothing to fear in the anonymity of the other party.
Exactly.
It's not even a question of... On the Bitcoin blockchain, there's never been a fraudulent transaction.
No one has ever double spent an amount of coins.
No one has ever been able to steal someone else's coins on the blockchain itself.
There's other aspects of the stack that can be weaker.
Right.
There are shenanigans.
There are the weaker aspects of the stack.
But within the blockchain itself, there's never been fraud, which is enormous.
It's been running for 11 years.
Yeah, which is an amazing technological feat.
I will say one of the, maybe the fly in the ointment with respect to blockchain generally, Bitcoin specifically, is that the degree of understanding that is necessary to effectively use it is high, and I would say maybe even more to the point,
There is an absolutely perilous stage of utilizing it, at which you know enough to do it when it works, but you are in danger of putting money into this form and losing access to it, and there's literally nobody on earth who can help you, right?
Agreed.
It feels like we're at the IP address phase of the web where the nerds can use it and they're super excited about it and they can see the future of what it can do and everything like that.
But everyone else looks at it and goes, do you want me to remember the 198 dot dot dot?
Like, no, that's absurd.
We still aren't necessarily there.
It's not like naming alone is the solution.
It's a handful of little UX solutions in order to solve what you're describing: how to track your keys, how to protect them well, how to make sure that when you're sending a transaction, it's going to the intended party and not just off into the ether, etc.
Agreed.
That's part of it, but it's also a question of tradeoffs, as you would say, right?
Because in our current paradigm, we have no sovereignty.
We have no control, right?
Like Google could cut off access to my Gmail or my Google Docs at any time.
And the trade-offs that you're making with blockchain technology is that, yes, I do have to take on some personal responsibility to know what my keys are, but I also cannot be separated from my data.
There is no third party, there is no intermediary who can come in and do something to cut off my access.
And so, it's that question of trade-offs and those user experience elements that Devin is talking about are currently being created, where there are going to be solutions to mitigate that incredible risk that people feel of like, OK, well, I'm not used to basically being my own banker.
I'm used to having the bank provide me this level of safety.
And then I'm also used to even the FDIC insurance providing me this level of safety.
And I have none of that in this other world.
And there are currently tools being enabled for that as well.
Right, and I would just, as long as, I mean, I really think it makes sense.
If the blockchain world is ever going to solve its problem, it needs to understand, you know, this principle in software design, you are not the user, right?
These people who have become expert in blockchain do not understand the high activation energy, and that you're trading one kind of risk for another.
With great power comes great responsibility.
Hopefully with great currency comes great organizational skills.
Agreed.
Because becoming a risk to your own wealth because you lose track of your access to it is a significant obstacle to people adopting.
And because people, to the extent they have understood this, have generated tools that then place you at some intermediate space in the tradeoff or subtly reverse it.
So if you're buying Bitcoin on an exchange, you think you own Bitcoin, but you don't.
You own an IOU.
And that IOU means that, although it may be more robust, there's a reason for a crypto exchange not to want to betray its user base in a way that Google may not care about.
We've also seen some serious issues with crypto exchanges as well, obviously, right?
And so what you're talking about right now is the difference between custodial and non-custodial holding of your funds.
And I'll just say right up top here, we definitely recommend people to engage in non-custodial forms of holding their Bitcoin, which basically means that they're holding it themselves.
And there are in fact solutions, though not many people are fully aware of them yet, to what you're speaking about with the non-custodial part, where you're fully responsible. A lot of people don't have very good password management skills. And if they don't have good password management skills, and they've spent the last 10 to 20 years being trained by the web that you don't have to, that you can always click the password recovery button, then they've trained themselves to not be able to handle this responsibility.
Fortunately, there are in fact solutions for that.
If you have four or five friends that you actually trust, not so much that you trust them individually to have control of your money, but you can collectively trust them, there are technological solutions where you can basically give them a part of something that you can say, hey, I lost access to my keys.
If three of five of you all agree that I'm in fact me and I'm talking to you personally, you can recover my keys for me.
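Devin doesn't name a specific scheme, but one common way a 3-of-5 recovery like this is built is Shamir's Secret Sharing: the key becomes the constant term of a random polynomial over a finite field, each friend holds one point on that polynomial, and any three points reconstruct it while fewer reveal nothing. A toy sketch, assuming the key can be treated as a large integer (real wallets use audited libraries for this):

```python
import secrets

PRIME = 2**127 - 1   # a Mersenne prime, large enough for this toy example

def make_shares(secret: int, threshold: int, num_shares: int):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 gives back the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

wallet_key = secrets.randbelow(PRIME)        # stand-in for a real private key
shares = make_shares(wallet_key, threshold=3, num_shares=5)

print(recover(shares[:3]) == wallet_key)                           # True
print(recover([shares[0], shares[2], shares[4]]) == wallet_key)    # any 3 of 5 work: True
```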
Yep, I see it.
On the other hand, I mean, let's just take one example of a problem.
I used an exchange.
I used a major exchange in order to get into the crypto market, right?
I had a slightly diverse portfolio.
I had some Bitcoin and I had some Ethereum.
At the point that issues of self-custody became important, as we watched ricketiness in the exchanges, as we watched draconian measures following COVID where people were unable to access their money, for example, in Canada.
The Canada trucker thing, yeah.
The exchange opens that hazard.
So, what does one do?
Well, one takes their crypto out of the exchange.
And the problem is that if you convert your currency from one currency to another, even though you're exchanging one for one, suddenly you get taxed, right?
Because you've effectively converted an asset that varies in value, you have momentarily established its value and the government takes an interest in that money at a level of taxation.
Do you mean when you moved it, they considered it a taxable event?
Yep.
The exchange may, in the automatic reporting that they generate for you, they may make that assumption.
It isn't actually true, though.
You moving it from you to you is still not a taxable event.
It wasn't the you to you.
It was the Ethereum.
It was trading from one to another.
Yes.
Agreed.
And that's frustrating.
I totally agree.
But that's not a technology issue.
It's a regulation issue.
That's a taxation issue.
Exactly.
It's neither.
The point is it is an obstacle to normies getting into this type of currency.
As good an idea as it may be, the various uncertainties and unknowns.
When am I going to get hit by a tax I don't see coming?
When am I going to discover that although I think I own a whole bunch of Bitcoin, I don't because actually I bought it through an exchange that gave me an IOU and that's buried somewhere in the fine print.
All of these things are obstacles. People who know full well that their dollars are not secure in the way that they once were, and might like to at least diversify, if not move out of dollars entirely, can't do it.
And it's why these currencies can't be used as a normal mode of exchange, right?
The burden of the technology is so high.
But of course, I'm not telling you anything you don't know.
Well, exactly.
To a certain degree, that's exactly why our organization exists, because that problem gets solved on two sides.
One, there's been no new regulations written for the sake of crypto.
And some people look at that and say, well, why would you want more regulations?
Crypto is about freedom.
Why would you want... Because there's no new regulations, agencies like the SEC and the IRS are assuming that it's something old, something that they've already figured out how to regulate, whether it be a security, a stock, whatever else.
Some of them might be.
But because all those things were written into law 100 years ago, they certainly didn't imagine the existence of this technology.
And there's a handful of different ways in which this technology actually solves the problem that those regulations were intending to solve for.
So double solving that can actually create new problems.
So we're looking forward to new regulation.
And part of our job is to help educate regulators, or not regulate, well, regulators to some degree, but lawmakers themselves as best as possible so that they understand the technology and they don't throw the baby out with the bathwater.
One of the great examples that we like to use is in the mid-1800s in England, they were leading the way on the development of what would eventually become the automobile, you know, turning the steam engine into just putting it on four wheels, driving it through a town.
And they were way ahead of the rest of the world.
And lawmakers got scared of it.
I assume there was some accident and people got hurt, something like that.
But lawmakers got scared of it and they passed these things called the Red Flag Laws, where everyone, if you enter a city, you have to slow down to, I think it was two miles an hour.
You have to have someone walk 40 yards out ahead waving these red flags and warning everyone that's coming.
And a lot of other countries considered them too.
In the U.S.
there was one that was considered where if you were going to pass a horse and carriage, you need to literally park your car behind some bushes and disassemble the car so you don't scare the horses, right?
And so they actually implemented these in the U.K.
and it destroyed the nascent industry that was trying to develop this stuff.
And Germany saw the opportunity and said, Full steam ahead, no pun intended, and they got the credit for inventing the automobile.
Very quickly after that, England realized their mistake and repealed all the laws, and they tried to catch up again.
This is a global technology.
Some parts of the U.S. are acting like, well, we just own the tech industry, so whatever regulations we implement are going to be worldwide.
And that's just not true.
Other countries see the value of this.
And so it's our job just to help American lawmakers really understand the actual value of this so that they can go about fixing it or building the right set of rules for it.
But then the other side is educating the public because part of these problems are going to be solved by the next wave of people coming into it.
Either the users that are submitting complaints and saying them in just the right way for the engineers to go, aha, I finally get it.
Or the next wave of Web 2 engineers moving into Web 3 and saying, I have an idea for how to solve this.
I think a lot of the solutions to these things already exist in the market.
They're just still being kind of hashed out.
And then the next wave of growth for it is going to give the opportunity for them to actually be realized in a widespread way.
Well, I certainly hope that you're right.
The vulnerability of fiat currency, or the multiple vulnerabilities that come from fiat currency, are so great that I can only hope that Web3 does solve the many different obstacles and hazards that accompany crypto.
That said, I'm stuck in the same place that I am with, let's say, Pharma and public health.
Which is to say, I would certainly like to live in a world where these things are well regulated.
I certainly don't trust the people in the position of regulating them to do more good than harm.
In fact, again and again and again we've seen the opposite, and so I'm sort of a...
I'm forced into a libertarian position that I do not personally agree with because no government is better than malignant government.
And I wonder if we are not in the same place, especially in light of the obvious desire of the worst elements of our governance structure to force centralized bank digital currencies on us.
They obviously can't have a thriving crypto sector, because if they do, there will be no reason to embrace centralized bank digital currencies, and, you know, in the worst case scenario, we might all end up actually free and happy.
You know what I mean?
We focus mostly on the financial applications of this technology so far, but there are applications that are well beyond that, that are actually, I think, much more exciting in terms of how they are going to affect us on a day-to-day basis, just like average internet users like you and me.
And the same principles apply here in terms of transparency and authenticity.
And the entire way that this technology can remake the internet, re-decentralize its structure is so profound that it can bring back the kind of trust that you're looking for in these systems, whether or not they are regulated.
It's kind of like, the access to information that they will provide, the transparency of that information, the provenance of that information, etc., will then allow us all to look at that and say, well, this is how we think that things should be done.
So, regulation could use that information in order to better regulate, whereas most of the regulation right now is trying to say there's some sort of, what's that word?
Asymmetrical information, right?
That's why stocks have to, you know, have these public disclosures, right?
And that's just sort of not necessary when it comes to blockchain technology because... By their nature, they're public.
And by their nature, they're transparent and have this record of, you know, when things happened and who was involved with them.
A record that's more indelible than anything else on the web.
Alright, I want to pause you there because I think there is a trichotomy here that people need to be, they're sort of dimly aware of it, but they need to be pretty well versed in it to understand what you might be talking about.
Okay?
So you tell me if I've got this wrong.
But what you have is you have blockchain technology.
And what you are now talking about is the use of blockchain technology for something other than cryptocurrency, right?
And so, blockchain is this ledger mechanism you were describing.
And the Bitcoin ledger, what it tracks is just debits and credits, that's it.
Other ones track other stuff, other types of data.
Ah, so people need to separate these things in their mind.
Because the first use case for, or the first major use case for this technology is cryptocurrency, people have them synonymized in their mind.
And it's really important for much of what you guys are talking about that they be able to separate these things.
And understand that that ledger technology can be used for all sorts of things.
As you're pointing out, and it's news to me, but as you're pointing out, the original use of blockchain was apparently an authenticity record.
It was not about currency at all.
Exactly.
So, the currency at the point of the Bitcoin white paper becomes this use case that then has the potential to replace fiat currencies.
And so it captured the public imagination, but it has unfortunately synonymized the two things.
So for the point of view, from the perspective of trying to maintain an understanding of these things, blockchain is separate from crypto and Bitcoin is the largest blockchain utilizing cryptocurrency there is.
And it was the original one.
It's far from the only one.
But anyway, keeping those things, Bitcoin is not crypto and crypto is not blockchain.
That is an important distinction to maintain.
Yeah.
So the Bitcoin blockchain, what it did was it combined some previously existing technologies to create Bitcoin.
It combined blockchain with public-key cryptography and it used proof-of-work for its consensus.
Those three pieces had all been done previously.
Haber and Stornetta had done blockchain, Adam Back had done proof-of-work, and public-key cryptography was just, like, it's how TLS works, it's how we run it.
It's just like how anything is encrypted on the web.
And so combining those was like the big innovation that Bitcoin brought forward.
And because the proof of work mechanism is incentivized, because a sum of blockchains is released every time a new block is created.
A sum of bitcoins.
A sum of bitcoin, what did I say?
A sum of blockchains.
Then that's what incentivized the use of the network.
And so when you say like they're different, they are different in terms of their application, but a lot of them still have that incentive mechanism.
And that incentive mechanism is really, really important.
I would say it is what allows Web3 to be a realistic vision, whereas before it wasn't.
Because we've had certain decentralized technologies for more than 20 years.
You know, we've had BitTorrent forever, and it's just become known as a... And do you remember folding at home and SETI at home, for example?
Okay, like they've been around for I think five years, ten years longer than Bitcoin.
And Bitcoin is dramatically more powerful of a network.
And it's because of the financial incentive.
Because all of the workers that are participating, with SETI at home and Folding at home, it was just, hey, use your idle computer time to do these important things.
You know, cure cancer and find whether there's aliens in the universe.
These are really important things.
But it was all voluntary.
And so you weren't going to get commercial activity.
You weren't going to get people dedicating real resources to it.
Proof-of-work was the solution to the central point of failure that Stornetta and Haber had.
And part of that was because it gave the workers that were creating each new block an incentive to do so.
They were getting paid.
And almost every blockchain I can think of has that in common.
So there is a financial aspect to every single one of them, but the financial aspect just incentivizes the continuous operation of the rest of the network's operations.
It incentivizes or orchestrates the network to work properly.
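To make the "work" in proof-of-work concrete, here is a toy sketch (real mining uses a specific block header format, an adjusting difficulty target, and a coin reward that this ignores): the miner grinds through nonces until the block's hash clears a difficulty threshold, which is expensive to find but trivial for anyone else to verify.

```python
import hashlib

def mine(block_data: str, difficulty: int):
    # Grind nonces until the hash starts with `difficulty` zero hex digits.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

block = "prev_hash=00ab...; tx=Alice pays Bob 5"
nonce, digest = mine(block, difficulty=4)
print(nonce, digest)

# Verifying costs one hash; producing the nonce cost tens of thousands of them.
# That asymmetry is what lets strangers trust that real resources were spent.
check = hashlib.sha256(f"{block}|{nonce}".encode()).hexdigest()
assert check.startswith("0000")
```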
We've had BitTorrent for all of these years, but it's just become known as a piracy thing because the people who are involved in maintaining that network do it just because they care about it.
You can't count on a file to be on the BitTorrent network for you.
I mean, now they have created a token, so, you know, I'm talking about before that, but you can't count on that file to be there for you.
Even though the dynamics of a network like BitTorrent compared to the CDNs that we use now are wildly more efficient.
They produce wildly better results.
They're wildly less expensive to operate.
And yet, the structure of the CDNs that we have right now, the centralized ones where we have this funneling problem where the more popular something gets, the lower quality it is, the slower it gets to you, the more expensive it is for the platform that's providing it to you.
That doesn't work when you compare it to a decentralized network and all of what it can provide to you.
But you couldn't count on those decentralized networks until we had that incentive piece.
You can't dismiss it.
Yeah, let's translate this into English and then biology.
So, SETI at Home is a great example, right?
SETI at Home, you could use your spare computer time to look for patterns in the universe that might indicate aliens.
Cool, but why would you?
Well, maybe you'll find aliens and people will interview you about what it was like to be the first person to spot an alien pattern.
But the chances are pretty damn low.
And so it was pretty limited.
You had to be kind of enthusiastic about it and maybe a little altruistic.
And neither of those things are a payout in and of themselves.
They're not sustainable, right?
They're not sustainable.
And in fact, the altruism piece is a penalty.
Really.
And you have something like BitTorrent.
Why did BitTorrent work better than SETI at home?
Well, because you did have an incentive to log into the network, right?
You had an incentive to log into the network, which is that you wanted to download something.
Get some free music.
Right.
And as long as you are logged in for your own selfish purposes, then maybe you feel a little shame about not doing your part.
Maybe you'd notice that it's really free to you to allow the extra bandwidth that you've paid for to be used in this way, and so maybe you're sticking it to the man.
Whatever it is, there's some reason once you're logged in for your own purposes for you to do a little bit of the heavy lifting that doesn't cost you anything.
But it's still not a payoff, right?
It's like holding a door for somebody.
It's a kind of altruism that is so cheap that why wouldn't you?
You're not going to dedicate enough computer resources that it'll increase your electricity bill, or you're not going to, you know, buy a bunch of extra hardware to do it, because that's an outlay of cost that you're not going to get back from the network.
Right.
But you might throw a little, you know, it's like throwing cardboard in the recycling bin rather than the trash, right?
Exactly.
Exactly.
A tiny bit of effort, you might as well.
And then there's a question about, well, what if you were actually, you know, what if you were paid?
And, you know, we've seen silly examples of this, or somewhat silly examples of this, like bounties on glass and aluminum that do cause people who are really in desperate straits to try to accumulate enough aluminum to actually go buy a sandwich or whatever.
But if you actually got paid serious money, then you'd do it, because why wouldn't you?
And in fact, we've seen that this is so successful that it has now actually driven the price of computer hardware that's good at this sort of thing through the roof and created scarcity, right?
Because the amount you can get paid by doing the work that is the central part of proof of work is substantial.
In fact, you can make a lot of money.
Mm-hmm.
All that correct?
Yes.
Okay, so what you are saying is incentives matter.
This, of course, shouldn't be news to anyone.
Of course, incentives matter.
It's why we do everything and it's why we're actually built the way we are, right?
Right, yeah.
Why does sex feel good?
It has something to do with the fact that your ancestors passed their genes on through this behavior and therefore a system was built to create an internal reward so that you would be driven to this thing, right?
So that kind of logic is of course how you would build a system that actually worked unlike something like SETI.
All right, given that, you have barriers to entry for the Web3 space, you have incentives to participate at a high level, and you have trade-offs for those of us who are not likely to end up doing the work of the network, but maybe would like the security of a currency that is immune to fraud or immune to governmental interference, right?
Barriers to entry that keep us from utilizing it.
So, okay, all of that is true for the currency side.
And then there's the question about if we take the currency out of this and we look at the technologies, you're talking about a mechanism for ensuring authenticity.
Well, suddenly authenticity is on a lot of people's minds because we recognize that we're like hours away from deep fakes that are so good that you can't detect them by looking at, you know, the direction of the shadows and the stuff that we used to be able to do to figure out whether a photograph was for real.
So yeah.
So Devin used to be a compositor before we got into this industry.
He was working in the entertainment industry.
And so I like to say like, he's the reason that photos no longer hold up in court, right?
Because he could create images that were photorealistic.
You would totally believe them. And we would look at things like which direction the shadows are coming from.
That kind of stuff was really important and getting it right means you can go to a movie and actually have the suspension of disbelief and that same exact person or technology can do it to make, you know, fakes on the web.
So it's always, it's been around for 20 or 30 years.
It's just computers can do it faster now, way faster.
It took an enormous team of compositors to make images that didn't really exist to look believable, right?
You had to have a tremendous amount of resources.
And now you don't have to anymore, or you won't have to very soon.
You know, the AI is not quite there yet.
The six finger and stuff is usually a pretty good giveaway, right?
It's getting better and better.
I think they just solved for the hands recently.
It's getting better and better.
And when we first saw this, I think it was in like 2020, 2019, Devin was like, wow, I cannot believe it.
This is so exciting.
The kind of filmmaking that's going to come out of this is just, he was just like giddy.
I was horrified because I had, because I'm a woman, and I had just an innate understanding of the way that this technology could be used to hurt me.
Where, you know, you've seen celebrities kind of have these deepfaked images of them naked, doing disgusting things, whatever.
I know Anita Sarkeesian from college, you know.
And so I just was like, oh, this is going to put this down onto a level where it's going to apply to everyday people.
It's not just going to apply to celebrities, that something like this could happen to you.
And then all of a sudden, people are going to be looking at you sideways as if those images-- you had participated in the creation of those images.
It gets me kind of flustered still.
So we actually made a video about this to illustrate this problem, and I got a friend of mine to shoot some sexy bikini footage, and Devin comped my face onto her body just to give people a visual image of, this is a threat to all of us.
It's a threat that's very real at this point now with where we are in the progression of things.
And so that takes us to the cryptography element, the authenticity element.
Before you get there, I want to say, first of all, you guys showed me that video years ago.
I guess this would have been pre-COVID, am I right?
Yeah, right before COVID, yeah.
I think it was right before COVID.
I'm still slightly disturbed by it.
I am too, I had to make it.
Yeah, even worse for you.
But yeah, and I think the point is, you know, my conscious mind understands that this was done to prove a point, right?
But some part of me, like, has this disturbing image of my friend Amy, and it's like, you know, that hazard is going to become ubiquitous because the tool that is going to allow you to generate this is going to be so low effort that people are going to create every video that serves to hurt somebody they don't like or that they get off on or whatever they're going to do.
And the point is, that's a frightening world to live in.
Right.
And to impact geopolitical events, you know, the fake Pentagon explosion image a couple of weeks ago that caused half a trillion dollars of market cap to be briefly lost.
Like, that's dramatic.
I mean, in that case, it's absurd that anyone reacted that way because it wasn't even in the shape of a Pentagon.
It had all those artifacting things that people talk about.
So, with a few seconds of looking at it, you could tell that this is fake, but it spooked the heck out of the market.
And I think mostly it spooked the market because they were like, OK, we figured it out pretty quickly, but pretty soon we're not going to be able to.
And mainly they were just scared by the implications of this, I think.
It was a risk aversion.
They don't know what this means, even if you know it isn't what it appears to be.
We're never going to be able to trust anything we see again.
Exactly, exactly.
And when these fakes get really, really good, I mean, basically we're going to be in an arms race where the detectors are going to be pitted against the generators and we're going to get to a level where no human is able to detect.
Fortunately, there's a much better way to actually distinguish between them.
Except for...
Hold on, hold on.
Before we get there, I want to complete this dystopian picture of the world that's coming if we don't do something.
Certainly.
And I think it can be summed up in the following phrase: "Pix and it didn't happen." Yeah.
You said something else.
Everything is a Psy-Op.
Assume everything is a Psy-Op.
Everything is a Psy-Op until proven otherwise.
Until proven otherwise.
Exactly.
Until proven otherwise.
And actually... Right.
Oh, go ahead.
Well, no, but PIX or it didn't happen, like, how do you even... No, no.
PIX and it didn't happen.
I changed it to PIX and it didn't happen.
Right.
You have vivid pics that prove it happened, and yet it didn't.
It didn't happen.
Right.
It didn't happen.
Yeah.
Right.
But, you know, okay, so then if you teleport yourself into that world before you guys ride to the rescue with a technology that might save us from it, the problem is, okay, you can adapt to that world.
How?
Total cynicism, right?
It's the, Psy-Op until proven otherwise.
And the point is, do you really want to live in a world in which your eyes are incapable of telling you what happened unless it happened immediately in front of you, right?
No, because the internet is the greatest library that's ever existed.
And I don't, we just had a son, and I don't want my son to grow up without the resource of the greatest library that's ever existed.
Right, and in which basically it is a cesspool of nefarious claims backed up with evidence that even if your conscious mind can dismiss it, your subconscious mind will make whatever of it that it will make.
Exactly.
And it's like you said, it's like once you see it, you have forever seen it.
You can't unsee it.
And so wouldn't it be great if you could just make sure that those things never ended up in your feed to begin with, but you could also know that those filters weren't being used nefariously to suppress information you wanted access to.
Wouldn't that be amazing?
Hey, is there a technological solution to this that might deliver us there, perhaps?
What's so funny about this is that to everyone inside the crypto industry, this answer is plain as day, super obvious.
I feel almost silly talking about it.
But to everyone on the outside of the industry, it's like, what are we going to do?
We need detectors because we're not going to be able to tell the difference between these kinds of images.
Are the detectors even going to be able to keep up?
Blah, blah, blah.
The very simple answer to this problem is public key cryptography.
Fred Wilson wrote an article called Sign Everything that's extremely well written and just very straightforward.
If people want to read that, I recommend it.
But basically what it means is that public-private key cryptography uses key pairs.
So the keys have a mathematical relationship.
Well, hold on.
Before we get to that, before public key cryptography, we only had what's called symmetric cryptography.
And this is what's been used since, you know, kind of Roman times kind of thing, where the key that is used to encrypt something is also the key that is used to decrypt something.
So you need to have a distribution mechanism for those keys so that whoever is on the other side is able to receive it and no one intercepts it in the middle.
And if you and I sit down at a table and we take the alphabet and we agree on what the key is, etc.
And then you and I go our separate ways.
I can send you a letter and you take your copy of the key and you decrypt it.
Decoder rings, exactly.
And this was used, this was even used through World War II.
And one of the major examples: if you ever saw the movie The Imitation Game, what it was basically talking about is that they were trying to build a general computer whose purpose was to decrypt the Axis messages, because the Axis were using cryptography.
And they figured it out.
They were able to decrypt them and they didn't want to let the Germans know.
So they couldn't, like, you know, give all these tells and stuff like that.
But it enabled them to make strategic decisions to shorten the length of the war and to win the war sooner.
And it was because they broke their cryptography, because they were dependent on symmetric cryptography.
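As a concrete version of the shared-key scheme Amy describes, here is a minimal sketch using Fernet from the Python cryptography package (one symmetric cipher among many; the choice here is just for illustration): the single key does both jobs, which is exactly why getting it to the other party without interception is the whole problem.

```python
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()     # the "decoder ring" both parties must hold
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"meet at dawn")   # sender encrypts with the key...
print(cipher.decrypt(ciphertext))              # ...receiver decrypts with the same key

# Anyone who intercepts shared_key in transit can read, and forge, every message.
# Removing that key-distribution problem is what public-key cryptography is for.
```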
So one of the reasons that the web became, like, one of the things that made Web 1 go into Web 2 was the introduction of TLS and SSL.
That's why you have that little lock on your URL; it used to be just when you'd visit your bank, now it's on every web page.
It uses a combination of symmetric and asymmetric cryptography.
The difference there is that, like Amy was describing, it involves a key pair, and we probably shouldn't go into all the details of this.
Amy actually made a video about it.
That's one of, you know, obviously we make a lot of videos to teach this stuff, so if anyone wants to really, really get into this stuff, they should watch that particular video.
But it depends on a key pair, where you have one that is private and that you maintain, and one that is public, and they are mathematically related to each other.
And so anyone can, using just math, nothing else, be able to prove that if I put something out in the world that is derived from my private key and I've also shared my public key, they can know with certainty that that thing was, in fact, derived from my private key.
There is a one-to-one relationship that can't be faked by anyone else, et cetera.
And you can't reverse engineer the public key back to the private key.
Exactly.
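Here is what that key-pair relationship looks like in practice, sketched with Ed25519 signatures from the Python cryptography package (one of several possible schemes; the transcript doesn't tie any particular system to a particular algorithm): the private key signs, the public key verifies, and a verified signature proves both where the bytes came from and that they haven't been altered.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the owner
public_key = private_key.public_key()        # published for the whole world

message = b"This post really did come from me."
signature = private_key.sign(message)

# Anyone holding only the public key can check the signature.
public_key.verify(signature, message)        # returns silently: authentic, unaltered

try:
    public_key.verify(signature, b"This post really did come from me!!!")
except InvalidSignature:
    print("Contents changed after signing: the signature no longer verifies.")
```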
People will have some experience with this, right?
There was somebody in everybody's circle who used PGP.
PGP.
And there would be a large sequence of ASCII or extended ASCII characters at the end of it.
And it would say PGP block begin, PGP block end.
And if you weren't encrypting, you ignored it.
But if you wanted to send this person a message that was truly private, you could use that thing.
There was no risk of it getting to anybody.
It was useless on its own.
You could use it to encrypt the message that they had the decoder ring for on the other side.
And those two things were generated in tandem at the beginning, right?
The computer, you would give it some random inputs, move your mouse or something like that to give it some numbers that were not predictable from anything.
And it would generate this pair of keys.
And then your private version, you had to guard.
And it was also guarded by a password that presumably you could remember.
It's really hard not to write down anywhere.
Yeah.
Right?
And so anyway, that password allowed you to access your private key, which didn't mean anything to you either.
It was only useful if you got a message that had been encrypted with your public one, and voila, a message could arrive.
It didn't matter if it was intercepted because nobody could read it.
And when you got it, you could open it up and decrypt it with your half of this thing.
And it was not impossible to break, but at foreseeable levels of computing power, the mathematical, the computing horsepower necessary to break it put it, what was it?
Tens of thousands of years in the future if you set a computer to the task of guessing and it just kept... Yep, exactly.
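A sketch of the mechanics Bret is recalling, using RSA-OAEP from the Python cryptography package (PGP itself layers more on top, such as hybrid encryption of the message body and web-of-trust key management, so this is only the core idea): anyone can encrypt to the public key, and only the matching private key can decrypt.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generated once, in tandem: the public half is shared, the private half is guarded.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone who has the public key can encrypt a message to its owner...
ciphertext = public_key.encrypt(b"this is just between us", oaep)

# ...and interception is harmless, because only the private key can decrypt it.
print(private_key.decrypt(ciphertext, oaep))
```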
But like you said, it never caught on, right?
There was like one person in your circle that used it because it was this kind of like onerous thing to do, to encrypt your email.
It wasn't that hard, but it was just this extra step.
It was super annoying.
But then ProtonMail built it in to its software.
And so now you and I can exchange emails just as we would on any other email software, only it's fully encrypted end to end.
Automatically, without you having to do anything extra.
And that can be applied to this as well.
Now, again, I mean, I really think, I hope that the crypto folks will turn the lasers off in their eyes for a little bit and just tune into this podcast and hear about the problem of all of these technologies.
But the reason that PGP never caught on was not just that it was a little weird, right?
Once you went through the process of generating the keys, it was easy enough, but the point is, okay, so I have, let's say I have your key, Devin, and I send you a message.
Very likely, the next thing that happens is you're going to send me another key and say, could you encrypt that again?
I no longer have the private key to decrypt that message you just sent me, right?
That happens all the time because just like with cryptocurrency, you can lose access to stuff that you're entitled to have.
And, you know, so ProtonMail, I use it myself.
It's a great example of what can be done.
On the other hand, it now introduces the problem of, like, the crypto exchange, right?
You are now dependent on ProtonMail being what it appears to be, right?
And that's an additional problem.
So... Right.
And you're dependent on... Because ProtonMail is maintaining the servers where your information is, you know, is stored, essentially, right?
And most people didn't download a copy of their own private key.
It's just stored... Exactly.
And so while we all just take the leap of faith and trust that ProtonMail is encrypting everything end-to-end and they don't have any sort of back doors into that and it is on their servers, and so if somebody were to subpoena the information on the servers they wouldn't necessarily be able to access our information, that's all a leap of faith that we're making.
And what we're talking about with Web3 is decentralizing the full protocol stack so that it's decentralized all the way down to the layer of the server infrastructure itself.
There's a protocol called Akash that you can use to rent server infrastructure so that you're not necessarily using a single provider.
And it's literally just decentralizing the server infrastructure itself.
We're in this weird time right now where most of us are still on Web 2, Web 3 is sort of being built, and the bridges from Web 2 to Web 3 aren't obvious and are being experimented on to see what makes it easy, what people like, what makes it, and even really what's necessary.
So a lot of people are calling this like Web 2.5 because there are some apps that are out there right now where a piece of their app is decentralized, right?
They're built on Arweave or they're built on IPFS, so they have this aspect of decentralization because they're decentralized file storage, but they also have a client that's hosted on a server so that you can just go to www.whatever and get to it directly.
And that piece is still centralized because they're running it on Amazon Web Services, because they haven't felt the pressure to decentralize all the way down to that server layer and have it hosted instead on Akash, for example.
Because building a new protocol stack where the app is decentralized all the way down is a lot more work than what you're currently used to doing, just because these protocols are still being built, essentially, right?
Let me just add one more thing before we move fully away from the sign everything.
The extra layer to that, we were only talking about how to encrypt things so that they stay private.
You know, I'm sending it to you and only you can decrypt it.
The other thing that you can do with this is I could take some message that I'm going to put out into the world publicly.
And I can create a signature based on that message using my private key.
And because my public key is available to anyone, anyone can use the public key to decrypt that signature.
And what that tells them is the signature was generated based on the thing that I'm sharing and it was generated by my private key and only my private key.
So it's a way that I could put up a blog post or a video or anything else I want and attach a signature to it and that signature is mathematical proof that the thing hasn't been altered since I signed it, that thing that I'm sharing the blog post or the video, and that I'm the one that actually shared it.
And so that's the extra layer that we use in order to figure out that authenticity aspect of the stuff being shared on the internet. Especially as this fear is growing, and especially as actual market impact is growing, there will become more of an awareness that this is the way to solve it. And thus, we will adopt this new perspective of sign everything.
If you want it to be perceived as authentic from your followers, from the public, from the government, anyone else that you want something to be perceived as authentically from you, you attach a signature to it.
It's incredibly simple for you to do so.
And anyone in the world can mathematically prove that it came from you.
It didn't come from anyone else and it hasn't been altered.
Well, I'm really glad that you brought that up.
And the thing about that is that he's making it sound like it would be this manual process that you would have to engage in.
And really what it is, is that you could use an app that has integrated that.
And so that's what I was talking about, where wouldn't it be great if your feed could automatically be filtered so that you're not seeing any of the stuff, any of the garbage.
But you can also audit those filters to know that they are not being used to suppress information.
Okay, I need help understanding some aspects of this.
Okay.
Something shows up in my feed.
It appears to be a video of some event that would matter if it happened, but would be trivial to, with the help of AI, generate something that made it look like it happened and it didn't.
What does this system do for me?
I can look...
Suppose there's no signature.
Right.
Don't trust it then.
I can filter it out on the basis that if it were genuine, it would have one.
Exactly.
Suppose it has a signature that's unknown to me.
Exactly.
That's only been used to sign one thing before.
So again, what you know is that the party that released this doesn't have any track record of trust in the past.
So you can treat it as you want.
Like some people might say, well, at least they signed it.
They're starting to establish a track record.
But I know that it's less trustworthy than the thousand messages that I know were published by Amy and all used the same key.
The same public key, so I know all those came from her, and she has a whole history of them.
Whereas this one, that's a really huge event, it's signed, but by a key that's only been used once.
So the person that signed it had no interest in establishing their past track record of trust.
So we can go back to your "pics, or it didn't happen."
Assume nothing is true.
Assume everything is a PSYOP, unless it's proven otherwise.
The signature proves otherwise.
It proves something.
It gives you some piece of information.
Right.
You're getting to the issue of trust.
And just because something is signed and just because something is on a blockchain doesn't mean that it's trustworthy.
And I think that that gets conflated.
So I'm really glad that you're bringing that up.
I get that it's not.
You could put garbage on a blockchain.
Not only is there garbage on the blockchain, but it'll be there forever.
But it does allow me, if something's never been signed, to set a filter that says, I only want to see things from sources that have signed 100 things or more that I can then go look at and evaluate. And then I know, oh, that one doesn't strike me as likely to be worth trusting.
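As a rough sketch of the kind of track-record filter being described here, assuming a hypothetical feed item structure and a per-key count of previously verified posts:

```python
# Illustrative sketch of a client-side feed filter based on a signer's track record.
# The FeedItem fields and the prior-post counts are hypothetical; a real client
# would verify each signature and count prior verified posts per public key.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedItem:
    content: str
    signer_public_key: Optional[str]   # None means the item arrived unsigned
    prior_verified_posts: int          # earlier items already verified against this key

def passes_filter(item: FeedItem, min_track_record: int = 100) -> bool:
    if item.signer_public_key is None:
        return False                   # unsigned: filter it out
    return item.prior_verified_posts >= min_track_record

feed = [
    FeedItem("Huge breaking event!", None, 0),
    FeedItem("Huge breaking event!", "key-used-once", 1),
    FeedItem("Routine post", "amys-key", 1000),
]
visible = [item for item in feed if passes_filter(item)]
print([item.content for item in visible])   # only the well-established signer remains
```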
Much proof, right.
Or it could be somebody who has, you know, documented some thing that I can otherwise verify.
You know, I don't know what the pattern would be that would suggest that this was a source.
I would say putting a signature onto your social media profile.
Then as long as you trust the tweets that are coming out of someone, if you've interacted with them on Twitter and you trust that it's them, and they have created a loop between their public key and their Twitter profile or their Instagram profile or whatever, that attaches that key to them.
Like the person that has those keys also has the login information for that Twitter account.
Yep, this gets us some fraction of the distance.
It's that bridge, right, between Web 2 and Web 3.
It's like, OK, well, what we have right now are these profiles on social media, and we assume that they are, you know, who they say that they are.
For the most part, it used to be that that blue check told us that somehow the platform was involved in verifying who they were.
It told you that you were dealing with a very high quality person.
That's no longer a filter because you can just buy that now.
So, you know, it is tricky.
We're in a tricky landscape here, right?
Are you going to put up, you know, your ID along with your public key?
Like, wow, that would be a lot to put out there.
But you're right that we need these sort of steps along the way to say, to give it that verification.
There are other things that have been played with where it was kind of like what Devin was saying, where you can share your keys with friends and they can help you to recover them.
You can also do that with attestation of like, this person is who they say that they are.
That was being done with land records at one point, in something we were working on.
It was like key signing parties where people get together and they show their ID to the other people that are having pizza with them and whatnot, and then they sign something.
And so now if you have, if I have trust for all the statements you're making online and you attest that these 10 other people that you met at this, at this particular event are who they say they are, then that increases my level of trust of them as well.
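A toy sketch of how that kind of attestation could raise trust in keys you have never interacted with; the trust scores, the discount factor, and the data structures are illustrative assumptions:

```python
# Toy sketch of the key-signing-party idea: if I trust Amy's key, and Amy attests
# to other keys she checked in person, my trust in those keys goes up a little.
# The attestation map, trust scores, and discount factor are illustrative assumptions.
attestations = {
    "amys-key": ["key-1", "key-2", "key-3"],   # keys Amy signed after checking IDs
}
my_direct_trust = {"amys-key": 0.9}            # trust built up from her track record

def derived_trust(key: str, discount: float = 0.5) -> float:
    """Trust assigned to a key I've never interacted with, via people I do trust."""
    best = my_direct_trust.get(key, 0.0)
    for attester, signed_keys in attestations.items():
        if key in signed_keys:
            best = max(best, my_direct_trust.get(attester, 0.0) * discount)
    return best

print(derived_trust("key-2"))   # 0.45: some trust, inherited from Amy's attestation
print(derived_trust("key-9"))   # 0.0: nobody I trust has vouched for it
```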
Okay.
I see what's getting obscured in this: it is a giant leap away from the zero-trust future that is coming almost immediately if we don't have this.
Right.
A world where everybody will either be a permanent rube, being suckered left and right, or a cynic who can't be convinced of anything.
Right.
Those are the two futures we have if we don't have a technological solution.
Neither of those are good for us.
Exactly.
Both of them are disastrous failure modes and a mixture is an even worse one.
But this provides a mechanism to solve the cynicism problem.
Right.
It's.
It gives you a basis on which you can establish trust of a certain amount of stuff.
Yep, the world will still be full of untrustworthy stuff and you won't necessarily know what to think of it, right?
If I see that a particular image that suggests a particular fact belongs to a Twitter profile that has 5,000 followers and follows 300 people and has posted on a variety of topics and is followed by three people I know, right?
That is not a garbage account, but it's not a highly trustworthy account from my perspective.
It's just somebody who probably exists, right?
So anyway, the point is that's an intermediate level of trust.
This doesn't give me a high degree of confidence in that image.
And it also, I hesitate to say, opens a frightening landscape of possibilities with respect to what the value of a highly trusted source signing a false piece of information would be.
And then what happens in a world where somebody has spent a lifetime building up credibility and then somebody puts a gun to their head to borrow it to establish a fact that isn't a fact, right?
That's a frightening prospect.
Yeah, for sure.
Agreed.
But, you know, I mean, again, I hate to introduce that here because I do think there's a frightening landscape of potential abuses here that we will only learn about after such a thing exists.
But I do think such a thing has to exist because the cynicism failure mode is, you know, it's next week, right?
It's catastrophic.
Yeah.
Yeah.
So I wouldn't necessarily agree, though, that that is a newly introduced issue, because without that, all we have to trust the people we see on the internet is the fact that they have the ability to log into an account.
And if they've built up a large following and they've built up a lot of credibility, someone coming at them with a gun to their head is the exact same thing.
They're not asking for them to sign something with their private keys.
They're asking for their Twitter credentials. Same exact threat.
Yeah, you're right about this. And I do wonder how much of our worldview is shaped by people who have established credibility and have some vulnerability being, you know, manipulated.
Agreed.
When I look at Bill Gates and I see what he's doing with respect to public health, it looks diabolical to me, but what I don't know is whether his connection to Epstein caused him effectively to be a robot doing somebody else's bidding that isn't really what he would be dedicating his time to if he was a free person.
I don't know.
So for that reason, a healthy degree of cynicism about what other people say is going to be important no matter what version of this we're in.
Because we don't know how someone's being manipulated.
There's healthy skepticism.
Skepticism.
Thank you.
Much better.
I mean, this is unfortunate because the healthy people are going to go to cynicism.
Let's hope not.
And this is the only thing I've heard that at least provides the first stepping stone to a world in which we can again establish some basis for trusting stuff.
Right.
I mean, what you're talking about does sound like just a very dark, sad world, if that is where we proceed.
And I think that, you know, I'm not naturally a person drawn to technology by any stretch.
I, in fact, find a lot of the things that we work on quite boring.
But This technology is the hope.
It is the way out, in my opinion.
And once you understand it enough to understand what it can provide, it's like, OK, yeah, let's start running toward the light.
Because there is an option here.
And so it becomes really exciting that we could build this sort of decentralized library of Alexandria.
That's what our project was actually first called when we first got started.
You know, that's immune from the kind of destruction that the first one suffered because it has this decentralized aspect.
So as long as a single copy exists, it still exists.
And it has all of these tools to make that information more and more, I don't know, something that you have more resources to evaluate and analyze.
Well, and just to add the controversial element, one of those tools is going to be personal AI, because AI is good at recognizing patterns.
And if you've got that person who has a history of posting somewhat trustworthy things, and they've grown a following, and then suddenly they start talking about other things, and those other things are also being shared by other accounts that aren't very trustworthy, you might not be able to detect that as easily as your personal AI assistant can.
Well, actually, I don't know if you want to jump to AI, but this might be the moment to do that in this conversation, because AI and Web3, they really need each other.
Web3 solves a lot of the problems that are concerns about AI.
AI won't be able to get to the market without Web3, and so that's going to bring Web3 along.
There have been a lot of us that are like, what is going on?
Why has Web3 taken so long to catch on?
Why is it still so complex?
Why is it still so difficult to access?
Why isn't it more integrated into everyday apps for end users, not for developers?
You know, it took Web 1 less time to catch on than it has taken Web 3. What is going on?
And the AI issue is really, I think, going to be a breakout moment for Web 3 because you can't get access to some of the things that you need to build AI, and Web 3 can provide that, like access to the GPUs.
Fill that in for me a little bit.
I'm trying to understand. You know, my sense is that Web 3 taking so long is a matter of all of the various complexities, the ones that you solve only at the expense of robbing it of its value, right?
You know, sort of like the crypto exchanges.
You want some crypto?
You know, as an investment?
Oh, you can get it on an exchange very easily.
Just use your credit card.
But what did you give up?
You know, you are now exposed to the risk that the government is going to decide that you're a bad person who's not entitled to spend their money.
Or that the exchange is going to go bankrupt and you're low on the list of creditors or whatever it is that might happen.
The difficulty of the technological obstacles, and the knowledge necessary to properly interact with it, is robbing producers of the incentive to build the mode of interaction, because the audience is limited.
Okay, yeah, that makes sense.
I think that because it's been so focused on the financial applications up until now, it adds this degree of danger that's been off-putting.
I think that's what I mean about Web3 and AI having this breakout moment together, because AI is going to be using that Web3 infrastructure that we've talked about.
When I talk about Web3 infrastructure, I'm referring to what I like to call the plumbing of the web.
It's the protocols that are responsible for the things that we do on the web day in and day out that we never even think about.
So the kinds of things that are allowing us to be on this video call right now, you know, decentralized file storage, video transcoding, content indexing.
All the things that a computer can do that you need to kind of put into various data centers in order for the web to exist have, I think all of them at this point, a decentralized analog that can do that thing with a higher degree of trust, where you're not having to depend entirely on, say, AWS or YouTube to do it for you.
You're using a decentralized network that is transparent.
You can see it and it's actually running on some individual's computer somewhere.
Yeah.
I want to ask you about that though.
My impression, you know, you, the three of us have talked about this many times over the last however many years.
And my sense is that all of this stuff is now doable in some decentralized form, and much of it is quite good.
But there's nothing that lacks a central point of failure.
And so my sense is, no matter how much you decentralize, there's a point at which you're going to run up against one of the protocols at the bottom of this stack.
Right.
And somebody has ultimate control over it, whether it's ISP or DNS.
Exactly.
That's a totally reasonable worry because if you leave one central point of failure in there, then that's the point of leverage.
And so you don't want to.
And you're actually incorrect about that.
The whole stack has now been decentralized, including ISP access and DNS access.
So you can do storage, you can do video processing, you can do compute, you can do everything you need to on the web in a fully decentralized way, including, in fact, decentralized wireless networks or decentralized cell phone networks, where someone literally puts a little kind of small personal cell tower on their house, on their roof or whatever,
and they earn cryptocurrency in exchange for doing that, for letting it use their broadband and keeping the hardware up and running.
And what's really smart about the way this works, the one that I'm thinking of in particular is called Helium, is that it doesn't do you any good in, like, a major city to have 10 of them that are directly overlapping and not actually increasing the amount of bandwidth available.
So it's biased toward filling in the holes in the network.
So wherever there's physical gaps, you get paid a higher amount if you fill that gap.
Cool.
All right.
So I would say that where we are, though, in terms of the development of these things, is that the individual protocols for these things exist.
And now we need developers to come along and bundle them up.
Build the killer app.
Build that app for end users that allows you and me easy access to engage in these systems with these protocols in a fully decentralized way.
Because right now, you just don't have, like, a real thing that you and I can actually fully interact with yet, but we're super close to that.
And we're getting there more and more every day.
So are we-- I don't want to force it to this metaphor if it's really not right.
But there was an era, a computer era.
I had an Apple II when I was a kid.
And it was a command line interaction.
It was doable, right?
You needed to know a little bit to operate it.
And the more you knew, the more you could do with it.
But it was, you know, the command line was an obstacle, right?
Because most people did not know what to do with the flashing cursor.
And the fact that you couldn't talk to it in English was a deal killer.
And then Apple, borrowing from other people, including Xerox, realized that running it required familiar metaphors, right?
And that a trash can, a virtual trash can, that you could drag stuff you no longer wanted to, was good.
And that a folder was equally good.
And that copy and paste and cut, right?
That these metaphors were going to be the thing that allowed normal people to interact with the computer.
And then at some point you realize, There's no reason that that file has to live in a folder.
It could live in all relevant folders simultaneously, that it's not limited to the physics of a folder.
And so you get this mixture of the constraints that come with metaphors and then people who get used to those metaphors can then be induced to break the mental constraints and to free themselves from some understanding.
But really the point is what made computers viable for most people?
But what are the metaphors doing?
They're actually obscuring the complexity behind things that aren't complex.
So again, I know you're not necessarily a fan of this, but that's what AI is going to do and is already doing.
We were just looking at two articles about web3 platforms that are introducing AI, one of them that is just a chatbot sitting on top of their documentation so that any web2 developer who wants to implement this stuff can just chat with a chatbot, get all the answers they need.
And then another one is a whole web 3 search engine so that anyone can come along and ask any question about the entirety of all the web because there's thousands of protocols at this point.
It's hard to track them all.
It's hard, especially because they're constantly all evolving.
And so that's one of the ways that the complexity can be obscured behind asking simple questions and getting direct, simple answers.
Oh, look, I'm all for that.
And in fact, every computer language is such a thing, right?
It's an intermediate level of complexity.
And the fact is, essentially, nobody speaks machine language.
And so, you know, it's always been the mechanism to get these things to work.
And frankly, even in technological realms where we think we're talking about the real substance of the matter, a whole lot of what we say, even inside of the hard sciences, is metaphor, you know, for the parts that one does not have to be specific about, so you can get to the particular question about what you're trying to be very, very concrete.
So this is how the human mind functions, and in some sense my message in this podcast is: maybe you just need a Steve Jobs, right?
You need a Steve Jobs who spent his time in, what was it, uh, calligraphy class that caused him to realize that fonts mattered, right?
You need, you need the font people and you need the metaphor people.
And you know, in some sense, you guys are the bridge because I know you guys think in both ways.
Until Web 3 has that component where one sits down with it and you have the pleasure of exploring, of discovering what is possible, rather than the, gee, I guess I need to spend another couple hours with the manual, right?
Because I wouldn't want to operate this saw without doing that, you know?
It's got to work more like a computer game that you can sit down with and start feeling empowered right away, rather than a dangerous tool where you'd better not, you know, skip a paragraph or it's going to hurt you.
A hundred percent.
Yep, exactly.
And I would say, like, my message in this podcast would be, Web 2 developers, your time is now.
Like, make the leap to Web 3, because the tools are there for you now to be able to quickly build apps for end users.
And those apps can be 10x more powerful than what you're building on Web 2.
And by doing so, it's going to make your app more successful.
And let's be honest, Web 2 is where a great number of the people that have the skills you're talking about are still employed.
And they've yet to make that leap because there's this perception that it's all financial services and there's some amount of overlap between the interest of tech and finance, but it's not incredibly high.
And so most of the first wave of this is finance-related people, finance-focused people.
Just a few years ago, you did need to have sort of special skills.
You needed to be a blockchain developer to be able to participate.
And that's just no longer the case.
Now you just need to be a web developer to build something on top of it.
And it can still run completely decentralized because web is just code.
It can be running locally in your own environment.
It can be running a decentralized network or it can be running on AWS.
And all we really need is we need the alternative to AWS to be less expensive and more trustworthy.
And that already exists.
So now there's good enough reason for you, as a web developer, to run your web server on something like Akash.
But where Amy was going earlier is that not only is that true for AI, but it in fact even solves a constraint issue that AI has, one that the Web 2 era doesn't really have a solution for.
There's a handful of new GPUs that are coming out, called the A100 and the H100, that are so powerful they're $20,000 apiece.
They're in incredibly constrained supply.
And so, like, one thing we've heard is that a lot of VCs are trying to buy them as fast as possible, because then they can add them as a little kicker to their deal flow.
A kicker on top if they try to offer someone a deal.
If they want to invest in some AI startup and they're competing with someone else, they can say, well, I've got access to the A100 and you get access to that kind of thing.
So Akash, which we were talking about earlier, is like the decentralized AWS.
It's general compute, where anyone can offer some hardware up to the network and anyone else can rent it from the other side.
It's basically just a marketplace for compute, right?
In addition to just general compute like computers.
Compute power.
So you're talking about, you know, you have a computer sitting on your desk and most of the time it's doing very little.
Exactly.
And you can, you could pool those resources.
Right, right.
I could, I can rent.
There's also a lot of unused space in data centers, like more than you would guess.
And that's primarily what Akash is offering is that unused space to people.
Because that way they can maximize the value of what they've already invested in the data center itself and all the hardware, all the space, etc.
And if it's not being actively used by a client that they have a contract with, and they've got 50% of it available, they can make it available to the network, and for limited periods of time it can be rented constantly, and they can maximize the income that they get from their hardware.
It's like a company builds a building and only inhabits 30% or 40% of the floors and rents out the others.
And as it grows, it takes over those floors.
So there's lots of space that can be built for future use that could be brought online now.
And that same thing applies when it comes to training AI networks, these large language model networks.
They require, they have a phase where you need to have some...
Well, sorry, let me just finish your thought.
So instead of needing, you know, your VC to get you access to the GPUs that you need or having to figure out how to buy them, you know, let's say that there's some developers who just want to like bootstrap themselves.
They have the skills to build.
They don't want to go the route of investment.
They just want to build something cool.
They can get access to the GPUs that they need at a much more affordable price through these rental markets that we're talking about.
And they can use it just for the period of time that they need to do their training and then not have that, you know, the financial overlay of having bought the machines themselves and then sitting on a shelf after they're done with the training.
They can just rent it for the period that they need for the training and then it moves on to the next customer.
So they're renting it remotely.
That's the part I didn't know.
So both the person who put in the money to buy it is able to maximize the value of having bought it because they're constantly getting rental income from doing so and the person who just wants access to the training doesn't have to buy the hardware to do so.
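A back-of-the-envelope comparison of buying versus renting for a single training run; the $20,000 card price comes from the conversation, while the GPU count, hourly rental rate, and two-week duration are assumptions made up for illustration:

```python
# Rough sketch of buying vs. renting training GPUs.
# The $20,000 purchase price comes from the discussion; the hourly rental rate,
# GPU count, and training duration are assumed purely for illustration.
gpus = 8
purchase_price = 20_000          # dollars per A100/H100-class card (from the discussion)
rental_rate = 2.50               # assumed dollars per GPU-hour on a rental marketplace
training_hours = 14 * 24         # assumed two-week training run

buy_cost = gpus * purchase_price
rent_cost = gpus * rental_rate * training_hours

print(f"Buy outright: ${buy_cost:,}")           # $160,000 of hardware idle after training
print(f"Rent for the run: ${rent_cost:,.0f}")   # about $6,720, paid only while training
```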
Yeah, this is what markets are great at, right?
Figuring out how to take the residual and turn it into a profit.
And make efficiency.
Exactly.
Exactly.
But the other way that it makes AI safer is by being able to verify the training models that are used by keeping a kind of like transparent record of those.
Right.
In two different ways.
There's something called ZKML, Zero-Knowledge Machine Learning, and I don't know the details of it yet.
I've just kind of glanced over it a little bit.
But Zero Knowledge itself has been around for a little while.
There's Zcash and a handful of other cryptocurrencies that are basically just trying to maintain privacy, so that not all of your transactions are public to everyone, but they're still verifiable.
So that's really huge.
And what ZKML lets you do is say, look at the output of something and you can verify that it came from a particular training model and a particular set of training data.
And then the other thing that you can do to increase that value is make sure that the training data itself is public.
Most of the training data that these models are trained on is proprietary, and it's part of the walls around their business, you know, it's their moat.
It's an important part of their business.
It's like one of the real dangers with AI is if your model gets out, right?
Like that's your proprietary information.
Exactly.
But the whole collective has a lot more incentive to share access to that data than those few companies do.
And there's no actual moat around it, aside from the fact that you need data, and then you need to tag that data and collect it all together and then apply it to a training model.
So storing that data on a permanent public file storage system like Arweave, for example, especially Arweave because it has the ability to tag every single piece of data that goes into it,
makes it a perfect repository for training data, because you can say, at this point in time, this is the training data.
Everyone can audit it.
Everyone can look at it.
And then we apply ZKML on the other side.
We know what went in.
We know the model that was used to train.
And then we can trust the actual output.
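A minimal sketch of what an auditable training-data snapshot could look like, using plain hashing in Python; the file names and tags are hypothetical, and publishing the resulting manifest to a permanent store like Arweave would be a separate step with that system's own tools:

```python
# Sketch of an auditable training-data snapshot: hash every file, tag it, and hash
# the manifest itself. The file names, contents, and tags are hypothetical.
import hashlib, json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

training_files = {
    "articles-2023.jsonl": b"...raw training text...",
    "code-samples.jsonl": b"...raw training code...",
}

manifest = {
    "snapshot": "2023-06-01",
    "items": [
        {"name": name, "sha256": sha256(data), "tags": ["training-data", "v1"]}
        for name, data in sorted(training_files.items())
    ],
}
manifest_hash = sha256(json.dumps(manifest, sort_keys=True).encode())
print("Manifest hash to publish:", manifest_hash)
# Anyone holding the same files can recompute this hash and confirm the dataset
# matches what the model's builders claimed to have trained on.
```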
It is apparent to me that these models are going to have to...
Presumably every model, everybody who owns an LLM that is trained on a data set has a complete record of the data set.
Possibly.
At some point, the problem is the data set is going to become, and I know this is already partially an issue, but is going to become a live phenomenon.
In other words, an LLM that can go read up on a topic upon being queried is a different object.
It seems to me an absolute requirement that a record of everything that was processed be maintained and an ability for a court, for example, let's say that an LLM commits a serious crime.
We need to be able to figure out what, you know, if there's a serious crime that has been committed by several LLMs, what is it that they processed that caused them to engage in this?
That's an important question.
And as long as it's proprietary, we don't know.
Which we don't know.
And right, it's like the hidden stuff inside pharma.
Yeah, but one of the things that's really exciting here is, you know, Web 2, proprietary stacks really had the advantage.
They were able to make it so sleek and easy to use and, you know, kind of like Apple products where, like, you can't, you know, you have to jailbreak them to get into them because they want it to be so easy to use, right?
That proprietary aspect was an advantage.
And open source is, like, slow and clunky and, you know, awkward, and it's just difficult, kind of like herding cats, right?
Open source software development up until this point.
What's exciting here is that with AI, open source really has the advantage because each model can kind of train the next, right?
And you can use it as this kind of virtuous spiral, positively, to make the models better and better and better.
Well, that and also one of the ways that closed source was able to maintain its moat is that you write your code, and then you compile it.
And what's available is the binary, the compiled binary.
And like you pointed out earlier, people can't read that.
LLMs can.
They can reverse engineer compiled binary back to the original source.
So it's almost pointless to try to close your source.
You're not protecting anything.
There's licenses, but that just requires legal enforcement, which has always been the case.
But practically and realistically, you can't protect your code anymore just by compiling it.
These machines will be able to, and already can for most code, you know, take it apart and reconstruct it.
Eventually, it's going to be able to do all of it.
Yes, so we are once again running up against the failure of those who wrote the laws and the Constitution being able to anticipate what technology would do with any of these things, right?
The fact is, you know, you can see the frightening predicament of people who have spent a lifetime learning to generate software code.
If an LLM can instantly decode whatever you've done from the binary and learn from it and improve on it, then, you know, all of your investment just trains an LLM in seconds.
On the other hand, the LLMs are reading all of us, you know, all the stuff that we went to graduate school to figure out how to think our way through, and it's going to be able to reverse engineer that very quickly.
And, you know, are we entitled to the benefit of what we figured out if an LLM then reads it and repeats it in language that doesn't sound like us?
You know, that's...
So anyway, we're in a brave world.
That's exactly why I said earlier that you should have your own personal AI agent because then you do get the benefit of all those things.
If you're only thinking of AI as adversarial, that someone else is running it and you have to be on the lookout for it.
Yes, that's a lot more dystopian, you know, future.
If you have your own agent that you know is running on hardware that you trust, that was trained by models that you have some amount of awareness of or understanding of, and its instructions are to assist you, and it learns your patterns and it knows what you need and everything like that, it can help you detect that kind of thing. It can help you, you know, in a number of important ways.
Well, I mean, as I think you know, my concern is the number, you know, yes, it could definitely be very, very helpful.
And it can also be really destructive.
A world of people who have personal LLMs doing their bidding online is a world none of us have ever seen, and I guarantee you the horrors that we have not yet contemplated will arise from that world. It is going to be a very surprising place to find ourselves.
I agree, but I also think that the beauties and the inventions and the creativity and the incredible advancements that we'll make in that world will outstrip it.
And there are more good people than there are bad people.
And these are just tools, just like fire.
Bicycle for the mind.
It's a bicycle for the mind.
It can be used for good, it can be used for bad.
And that is the most compelling reason, in my opinion, that these things need to be open source, is that we all need access to them.
And when Devin says, you know, you can trust it, you can verify it, I often hear when he says that, I don't have the tools to do that.
I don't have the technical capability to do that.
And what you need to just put in your mind is, well, I can proxy that to someone I do trust, whether that's an institution that stands up to say, we are verifying these things and giving them our stamp of approval, or it's your partner who does have those tools, or your friend, or however it is you go about trying to determine that for yourself. The open source aspect is what makes that possible.
Every single time a technology of this degree of power and this much of an increase comes around, it's scary.
The internet itself, the computer before that, electricity before that, and, going way back, the harnessing of fire: they're dangerous.
They have the potential to harm.
They're very scary.
But every single one of them has leveled us up enormously.
The ability to harness fire meant we could eat meat, meant our brains grew, meant we, I think you definitely would know this, that it took us from the last form of ourselves into our current form of ourselves.
And that's rather significant.
Electricity enormously lifted us up.
The internet lifted us up.
All those things.
So they were always scary.
And yet they didn't end the world necessarily.
Not a single one of them did, in fact.
And they, in fact, always increased the health, wealth, wellness, length of life, the productivity, the creativity, etc.
of humankind.
And I think this is no different.
Yeah, unfortunately, there's a sort of, a bit of magical thinking in there.
You know, I don't disagree that if you had an infinitely large planet and, you know, therefore indefinitely large segments of it could go extinct and would be repopulated by the others, we could guarantee that that's where we would end up.
Unfortunately, what we've got is An asymmetry between the utility of the power that will come from AGI to the unscrupulous versus those of us who are normally constrained by morals, right?
And the problem is that even if you have an equally powerful LLM as some shameless person, that person can do more with it, right?
Because there are things they will do that you will not.
So what has to be true, and you're of course right, Amy, there are more decent people than there are terrible people, but the disproportionate power that terrible people will get from this leverageable technology is something that we have to be concerned about. And this is, of course, why the law exists, right?
What you need to do is take the advantage that unscrupulous people have and neutralize it by delivering a penalty that exceeds the benefit that they can get, right?
That's the fundamental element of how we can have society.
Punishment.
Exactly.
And the problem is we don't have laws that are adequate to this moment.
We don't have, you know, we have corrupt people governing us, so they're not in a position to deliver us the enlightened laws that we might need.
Yes and no, but anything that they could use it to harm you with is almost certainly already illegal.
So it's a matter of enforcement.
It's a matter of detection.
Not really.
Good old criminology.
If an LLM reads content generated by thoughtful people who've invested in understanding the world, generates a model of how their mind works, and then delivers a book to its owner,
As if written by some other person who did not consent to being exploited in this way, and the owner of this LLM publishes this book, and because it's full of enlightened things, becomes wealthy and renowned from it.
It's not illegal, as far as I know.
It's certainly immoral.
You could make a case that it's wire fraud.
Like, we've got a lot of laws.
And so you find the right one that falls just right into it and you can, in fact, prosecute some behavior like that that isn't explicitly defined.
But I think that there is undoubtedly danger that looms ahead of us, you know, especially as we're making the transition.
Right.
We talked about how Devin was a compositor and he could make images that didn't exist 20 years ago.
Well, we've developed that skepticism over this period of time, right?
To be aware, like, is this image real?
Is it not real?
Was it made by, you know, how was it made?
And so, as this technology rolls out, that period of time during the rollout, I think people are much more vulnerable than they are on the other side.
And so, you know, being, you know, skeptical is important during that period of time for sure.
Agreed.
Agreed.
And I also think that part of what you're pointing at is that what you don't want to be is lagging behind, right?
Right.
A hundred percent.
Your folks are going to be leveraging this stuff full bore.
And what you want to do is make sure that you're well protected until we figure out how to manage this stuff.
And the way you will do that is by leveraging technologies such as a blockchain ledger to establish the authenticity or lack of authenticity for pieces of evidence, for example.
Yep.
Exactly.
And on that note, I think it's important to say that that's a big part of why it's important for the U.S. to make a clear legal path for entrepreneurs to be building these systems.
Because right now those waters are very murky.
And I can say from my own experience that entrepreneurship is fraught enough as it is without the looming danger of the SEC coming after you or something along those lines through a kind of regulation through enforcement action years after you've built something and people are using it.
You're familiar with what happened to LBRY, I'm sure.
Because we are in this competitive landscape where we need these tools to be used for good.
We need lots of people building them so that we ultimately end up with the best option and so that they are used for good.
And we know already that in some ways China is ahead of us in building a system that would be used for much more authoritarian control of what can be said, social credit scores, all of those kinds of things.
And so if they are allowed to get ahead of us, if we're not allowed to build and they're already ahead of us, that to me is a very, very real danger that we're looking at right now.
Because you have to think of the example I was giving earlier about England and Germany and the car: England and Germany got into two world wars within the next 50 years after that.
And you have to wonder, at least to some degree, did that 30-year head start that England gave Germany have any impact on that?
Like, this stuff isn't just about technology.
Geopolitical events spring forth from who has a technological advantage over someone else.
So if we all put the brakes on on Web3 and AI development, and the Chinese Communist Party doesn't because they see ways that they can use this stuff to increase the degree of control they have over their own population, that's going to harm us dramatically.
Oh, I'm no fan of the idea of a pause.
The horses are already out of the barn and a pause will make things worse and not better.
There's no logical argument.
I've not heard a logical argument for the pause.
I'm definitely against it, though.
I have no shortage of concerns about the AI era and what it's going to look like.
I did want to suggest a couple things, get your take on them.
One is, I don't trust governmental authorities to be able to regulate this environment without making it worse.
I think they're almost certain to make it worse, both due to ineptitude and corruption.
And so I'm not eager for that solution, even though I would with a quality government.
But what about the idea that enlightened
members of this community, whatever it is, could generate a checklist of sanctioned behaviors and objectives and forbidden behaviors and objectives and that those involved could either sign on to this
And then be monitored by others in that community or not sign on to it and that we could look then as members, you know, it's a little bit like the idea of, I don't really need the food I eat to be certified organic by the government, but it's nice to be able to go talk to the farmer and say, you know, is this uncertified organic?
Did you spray it?
Right?
Right.
Or are you doing informally what I need you to be doing? And I'm not so worried about the surcharge that comes from the governmental certification.
There's a fantastic example of exactly what you're talking about.
I think it's hydroponic.
Hydroponic can't be labeled as organic because it's not in soil and the way the law was specifically written is it has to be in soil.
And hydroponic can make much healthier food.
But farmers that want to make the healthiest food are disincentivized from growing it that way because they cannot label it as organic.
And so you're totally right that even with best of intention, Having too strict of rules based around words that can change their meaning over time can definitely be dangerous.
And honestly, the problem we're up against right now is we're already in that boat.
That the securities laws written 100 years ago are just loose enough, and then the way that the courts have interpreted them are just loose enough, that some of these things overtly look like securities for sure, but some of them don't.
Especially the kind of stuff that we're talking about, decentralized; there's a subsector that's called Decentralized Physical Infrastructure Networks, or DePIN.
Those really have to do with the fact that there is some utility that you're getting from it.
It's not just a financial service.
Yeah, you have to pay for it, but I have to pay for AWS and that doesn't make it a financial service.
So, taking away the fact that you have to pay for it, the point is it creates some utility, it runs some thing.
Those should not be treated like financial services.
And unfortunately, because of the lack of newly written legislation since this stuff has come out, what Amy mentioned earlier is that the SEC has had a policy of regulation by enforcement, where they dictate what the new rules are just by putting out enforcement actions and going after people, and we're just supposed to back-interpret from there.
So I think especially in the U.S. and in a number of other countries, it's necessary to have some amount of revision just to define the lines a little bit more clearly.
Like some of these things are commodities, some of these things are securities, and some of these things might be a third category that has never been previously imagined and should be treated differently.
And without those things, an honest entrepreneur that has a great idea for a new protocol for how to solve a particular problem and wants to bootstrap that by raising funds from the audience that's already interested in that, it's dangerous to do so in the United States.
There's a high degree of risk, even though we have something, you know, the JOBS Act, passed, I think, 10 years ago, whose purpose was to say crowdfunding systems should be able to crowdfund based on just a share of something and not just an object that's being released.
So if there's a million tokens that get released, 10% of them get set aside for the crowdfunders that help bootstrap it for me, help me actually raise the money to build the thing.
That's a good way to go about developing a new thing, but it's almost certainly illegal in the U.S.,
despite the JOBS Act and despite that a lot of lawmakers want this stuff to be legal in the U.S.
So fortunately I feel a little bit more optimistic about it, because of the people that are currently in charge: the current chair of the House Financial Services Committee is one of Bitcoin's biggest fans.
He said, like five years ago, he said, Bitcoin is an unstoppable force.
Everyone that's tried to stop it has failed.
Like, he gets it.
We've had a number of, half of our job is talking to these lawmakers and especially their staffs.
They get it.
A lot of people in Congress right now really do deeply understand it and thank goodness for that because they are crafting things that we think could actually solve this problem.
But the truth is, even if they mess it up, it's just like the German and English thing.
This is a worldwide technology.
There's a number of other countries in the world, England, throughout the EU, they're passing all kinds of laws that make it safe to actually build a business on this technology because they want to pull in some of that financial activity and stuff like that.
And so, if we're out-competed, we're going to catch up.
So I hope that we don't go through a period of them messing it up and we end up with you can't grow hydroponics and call it organic.
And someone else does and proves the model and shows just how beneficial it can be.
And so we have to then catch up.
I hope we don't go that route.
It's possible.
But even in worst case scenario, we can go that route by learning after the fact.
Well, but you're talking about the legal side, and Brett, you're really asking about kind of like a self-regulation and how can the industry engage in that.
And that is in the midst of being worked out, I would say.
So, you know, the W3C, Sir Tim Berners-Lee started it in the 90s alongside the World Wide Web in order to provide recommendations for how to use the web standard and how the web standard should grow and change over time so that the different parties that were using it could weigh in about what their needs are.
And we're looking at that same kind of thing today where, you know, there's a specification being created for decentralized identity.
So that that issue of who I am across platforms can be worked out so that it's interoperable rather than platform specific.
And those standards are kind of being hashed out.
And there's different protocols that are gathering around certain standards.
There's protocols that are inventing their own standards and trying, you know, they're kind of out there by themselves.
And so that's something that we're kind of just in the midst of working out right now.
But when it comes to that self-enforcement of those ideas, or that ethos that you create, that's something that is different because of that level of transparency that we've talked about, because of the authenticity that these networks provide. It can be analyzed differently, it can be sort of more available as these problems are being solved. But the answers aren't fully there yet, is the honest truth.
It's something that the industry, the Web3 industry, is very aware of and is like an active kind of conversation about how do we sort of self-regulate to make sure that things are going in the best direction.
The absence of regulation at the federal level is, like, keeping that conversation from proceeding, because we're not fully sure what we even can do yet in terms of building these protocols in some ways.
Like a decentralized autonomous organization might be a way to actually solve that where all the stakeholders can prove that their vote came from them using cryptographic security and a blockchain to record all the votes.
But launching one could be treated as a security and thus it's not safe to do so.
And how do you even launch one if a single person or some sort of entity has to be what that organization is connected to?
It's a difficult road right now.
And one of the big issues that we will be facing as we walk that road is governance.
Because these protocols may change over time as needs change and how those changes are decided is going to be really, really important.
So we're going to see a lot of different kinds of experiments in governance.
And from that, hopefully, you know, hopefully we get it right.
Right.
Yeah.
Yeah.
I mean, I guess I was.
We're sort of blurring the distinction here between Web3 and artificial intelligence.
Obviously, there's a relationship here.
They are going to emerge in tandem.
But I'm concerned about the asymmetry that comes from artificial intelligence, the ability to leverage it more powerfully if you have all options open rather than a moral constraint, and that one solution is for those who are developing these things to self-certify adherence to a set of rules and values.
And what I don't know is what would be on that list.
Is it plausible to you that a, you know, if I go to the farmer and I say, you know, did you spray these things?
And they say, no, then I know something.
Right?
Is it possible to spell out what the guidelines are that we would want honorable developers to adhere to so that there could then be, you know, if somebody self-certified and then violates the parameters, that that would be understood to be a serious problem?
Is it possible to generate that list and for it to carry any weight?
I think so.
I think that's one of the things that one of the technologies you mentioned earlier, ZKML, or Zero Knowledge Machine Learning, can be a part of.
Because right now it's kind of a black box.
The output you get, you don't necessarily know, one, what the training model was, or two, what the training data was that fed into giving you that output.
And if your output is some way to manipulate someone, then very likely either The training data itself, the training model itself, or the prompt that was used to generate the output includes some instructions to do illicit behavior in some way.
And without the transparency of knowing, you don't know.
ZKML and publicly stored training data give you the ability to actually have that transparency.
And so I think it creates the opportunity because There isn't necessarily an incentive to do these things in public unless you want that for the scalability, you want the collective action to it, and also you want to be able to be a signatory onto that letter saying we're only going to use this for good.
We're never going to intentionally use it to harm other people.
One of the ways that you can validate that you're a signatory on that is that you're using public training data and that your models are transparent, using technologies like ZKML.
So I I'm beginning to see a connection to some other disasters that we have encountered in recent history.
Where if we look at the financial collapse of 2008, the Fukushima disaster, the Deepwater Horizon explosion, the Aliso Canyon disaster, COVID.
If we look at all of these things, we see the same pattern where Competition produced the ability to do something technologically that outpaced our ability to undo it, right?
And that in each of these cases, What you needed was the pace of development to be slowed down so that the corrective action was of the same magnitude, right?
If you can't plug a hole at the bottom of the ocean, you shouldn't be drilling a well at that depth, right?
That sort of thing, right?
The problem that I think I'm now pretty sure we're going to face in the land of AGI is we are going to be driven by competition to develop AGI as rapidly as possible, and the ability to control AGI will lag.
There's a disaster that we don't need to face if we could agree to a symmetry between the power of the AGI and the power to control the AGI, but of course we can't for the game-theoretic reasons.
Because it's a global technology.
If we limit our own AGIs, then the CCP will generate them faster, etc.
So that's really the question that has to be addressed.
Which is why I continue to believe that the answer to that is that it needs to be as open as possible.
Right, me too.
So that everyone has access to the latest developments and that the developments are being done in public because then I tend to believe that that asymmetry is defeated by the collectiveness that if everyone has access to their own and there's one bad actor that's trying to abuse their own proprietary one, It's going to be defeated by the collective action of everyone having access to their own of equal power.
And if you put a thousand of an equal power thing with one, well, that thousand collective is definitely more powerful than the one.
Yeah, although let's take an example of where a rational limit might be imposed.
Okay.
It seems to me that an LLM, relative to the sum total of the data that it read, is actually a lossy kind of compression.
And what you really want to be able to do is recreate the state of the LLM upon each new ingestion of data, right?
For everything it reads, it changes a little.
And you want to be able to say, well, what if I had asked this query and then pick any moment, sort of the way the Wayback Machine lets you go and say, well, what was the state of the web on this hour of this day, right?
You can do that.
You can do that because a trained model is a state.
And if those states are stored in something like Arweave, which is a permanent file storage system.
It's hashed and timestamped and you know exactly when it existed at that point.
With each article it reads, you get an update?
No, because you're not going to rerun all the training because of the cost of the compute power to rerun the training.
That's what I mean.
So you're only going to rerun the training every so often.
That's not quite the same as when it's ingesting a new article.
There's different variations of that.
But right now, for example, GPT-4 has not just the training data that it got up until 2021, but it has a connection to Bing so it can search the web.
But that's not new training data.
That's part of a prompt.
So it takes its overall training data, and it's kind of a twisted version of a prompt: it takes my prompt and it also says, do these reproducible things, i.e.
browse the web, look for this search term, and then put those two sets of data together and then give me an output.
And all of that could be stored.
Yes.
So that every query is stored in a way that's reproducible.
Sure.
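A sketch of what storing a reproducible query record might look like, assuming hypothetical field names; the idea is to capture the prompt, whatever the web search returned, the identifier of the model state, and the output, then hash the whole record:

```python
# Sketch of logging a retrieval-augmented query so the exchange is reproducible.
# All field names here are illustrative, not any particular vendor's format.
import hashlib, json, time

def record_query(prompt: str, retrieved_docs: list, model_state: str, output: str) -> dict:
    entry = {
        "timestamp": time.time(),
        "model_state": model_state,        # e.g. hash of the training snapshot used
        "prompt": prompt,
        "retrieved_docs": retrieved_docs,  # exactly what the web search returned
        "output": output,
    }
    # Hash the record so it can later be proven unaltered (e.g. by anchoring the
    # hash on a permanent store or a blockchain).
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

log = record_query("What happened today?", ["doc A text", "doc B text"],
                   "state-2023-06-01", "Model's answer...")
print(log["entry_hash"])
```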
No, I get that.
But it seems to me that you want to have, you know, and I understand, I mean, this is exactly why I raise it this way, it would slow down training quite a bit if you had to maintain each of these individual states rather than say, go read all these things and come back and be an LLM, you know?
Yes and no.
I mean, it depends on how you're doing it.
Again, if it's an open collective thing, then I might retrain the model that I'm using once a month because there's a cost to training.
You have to get access to those incredibly expensive GPUs to do it.
And so because there's a cost, that's a constraint.
You're not going to do it all that often.
You're going to do it at whatever rate you want to.
The collective can do it more often because the collective has access to more GPUs and more resources.
And so if there's an open model where everyone's paying a subscription to have access to it of $10 worth of cryptocurrency that goes into a decentralized autonomous organization and trains whenever that pot has raised high enough to pay for the rental of the GPUs needed to spend a week doing the training, Then it's going to do it at that rate and there'll be a hash of the output at that point and we're going to always have a provable version of the state.
And you can only do that with an open model.
I mean, you can do that most reliably and most effectively and most efficiently with an open model.
Yeah, I mean, I still see a lossy kind of compression, but it's not obvious to me.
I don't know how much it would slow things down if you did have to store all of these intermediate states, but the scientist in me knows that there would be a tremendous amount of value in being able to understand how it is that a particular state of the ultimate model was generated, if you could go back and say, this is the point at which it began to make the change.
Yes, yes. Well, you don't necessarily even need to store the entire state.
If you're constantly adding your training data to this permanently stored thing, what you can do is just store a hash of the state. So it's not very large, it's tiny, but it's provable to that particular version of the state. And so it can evolve over time, and you can say, at these different timestamps we recorded a version of it and hashed it. And so now we've got cryptographic proof, or really mathematical proof, that it is what we claim it to be, without having to store a new set of it each time.
But could you go back to any of those hashes?
Yes, because a blockchain has history.
Yes, yes.
Then that is the potentially lossy version.
That's what you're describing?
I think so, yeah.
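A small sketch of the hash-the-state idea, with placeholder weight bytes and a made-up retraining schedule; each snapshot hash is chained to the previous one so the whole history stays provable:

```python
# Sketch of "hash the state, not the whole state": after each retraining, hash the
# model weights together with the data manifest and the previous snapshot hash.
# The weight bytes, manifest names, and schedule are hypothetical placeholders.
import hashlib

def snapshot_hash(weights: bytes, training_data_hash: str, prev_hash: str) -> str:
    h = hashlib.sha256()
    h.update(prev_hash.encode())           # links this snapshot to the one before it
    h.update(training_data_hash.encode())  # the dataset manifest this run trained on
    h.update(weights)                      # the resulting model state
    return h.hexdigest()

chain = ["genesis"]
for run in range(3):                       # three hypothetical monthly retrainings
    weights = f"model-weights-after-run-{run}".encode()
    chain.append(snapshot_hash(weights, f"data-manifest-{run}", chain[-1]))

# Publish each hash (e.g. to a permanent store); anyone holding the weights for a
# given run can recompute its hash and verify where it sits in the history.
print(chain[1:])
```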
And there would be a lot of value in that, but it's exactly the kind of thing that the market will force us not to do.
I mean, because of the API version of the market, where OpenAI is getting 20 bucks a month from everyone that wants to be a power user of this thing.
They have no incentive to do it.
I totally agree.
Anyone that is working on an open-source version of this... there are so many more entities that have something to gain by sharing their resources on an open version of it than the one entity, OpenAI, with Microsoft behind them, or even, you know, Google, which is obviously doing this.
There's a handful of companies that are trying to do this, but they know there's actually been internal notes from Google leaked.
They know they don't have a moat.
They can't actually prevent open source from beating them.
And they know that open versions of this are going to be more powerful.
So there's only a limited period of time here.
We're really beholden to a few proprietary instances of this.
But the problem is there's a perverse incentive, I think. Let's say that you are going to have your LLM read a bunch of technical writing in a particular area, and then it was going to do some inventing on your behalf, and you were going to monetize those inventions.
You're going to patent them.
And somebody wants to sue you and say, actually what you're doing is you're taking my IP and you're sanitizing it with your LLM, and then you're going to go profit from what in fact I was entitled to.
And the point is you don't- Laws around private ownership on this stuff.
Yeah, they're insufficient.
I totally agree.
Someone actually just pointed out that maybe Section 230 doesn't actually cover the outputs of these things because it's not the content generated by their users necessarily.
It's the content generated by a chatbot that's run by that company itself.
So I totally agree.
The laws around private ownership and how that impacts this stuff are insufficient currently.
They're insufficient, but if you had a, you know, uncertified-organic type of arrangement, right? We agree to store these states so that you can recover the intermediates and discover that, you know, reading this thing enabled it to invent X. So you could self-enforce such a thing, but I think, absent a recognition that one needs such an agreement, the market will drive people away from it.
I agree some part of it will, but there's also opportunities to monetize your writing.
Like in the sense that I write a blog post that has some insights into it, and just reading that blog post isn't worth monetizing.
I'm not going to charge people money for it, but I can attach a license to it.
And attach that license in an atomic way so that it can't be separated from it.
So if you were then to look at the output of the AI and figure out, oh, it was specifically reading this blog that gave it the insight necessary to come up with this invention?
Well, the license attached to that blog was that I'll share 50/50 with anyone that uses this data to come up with something new.
I'll share 50 percent of the value of that, or my license might say I won't share it.
I own it, which might tell those models: avoid that one, because that would be bad.
Because right now, the problem you're talking about comes from the fact that something like 97% of the web is license-ambiguous, just because the web doesn't have a format to attach a license to content, whether that's videos or music or whatever else.
And so Walled Gardens had to build up around it in order to protect that distribution and enforce those licenses.
But it obviously isn't working in the current Web 2.0 world.
It's definitely not going to work in the generative AI world.
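A rough sketch of what attaching a license atomically to content could look like: the license record is bound to the hash of the exact bytes it covers, so it can't be silently separated. The field names and the 50/50 share are illustrative; this isn't any particular protocol's format.

```python
import hashlib
import json

def content_id(content: bytes) -> str:
    """Identify content by its hash, so the license can't be detached silently."""
    return hashlib.sha256(content).hexdigest()

def attach_license(content: bytes, terms: dict) -> dict:
    """Bind license terms to the exact bytes they cover."""
    record = {"content_id": content_id(content), "terms": terms}
    # In a real system this record would go to permanent storage
    # (a chain or an Arweave-style store); here we just return it.
    return record

blog_post = b"Some insight about protein folding..."
license_record = attach_license(blog_post, {
    "training_allowed": True,          # may be read by models
    "derivative_revenue_share": 0.5,   # 50/50 split on anything it enables
})
print(json.dumps(license_record, indent=2))
```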
And, like, what about the model, the woman whose photo is being used so often in a lot of these generative AI outputs?
Shouldn't she get paid extra for that?
Again, that's about licensing.
And so there's a handful of entities that are working on solving this.
What's the name of the open licensing?
What is it called?
Oh, Creative Commons.
Creative Commons has been incentivized to try to solve these kind of problems for a long time.
And they are working with Web3 protocols and Web3 startups to come up with new licenses that have never been envisioned before and then put them on permanent information storage.
And now you can attach the piece of creative content that I generated to a license that envisions it being reused by someone else in the future.
And so you have, for example, what's her name?
Elon's baby mama.
Grimes.
Grimes, exactly.
She's openly declared there is no license for this, but her just saying it on Twitter becomes a license, where she says, use my voice, use the whole history of my music, and invent new things, and I'll split it with you.
It's brilliant, because that's going to be successful, and new money is going to be made by that, and it's going to incentivize other people to say, I like this idea, I'm going to try this idea.
Yeah, but we need some new architecture. Because let's say that I put an "I'll split it with you 50-50" license on some piece of writing.
A, I could put that on a trivial piece of writing, an LLM could read it, it could have no effect on its state, and I could claim 50% ownership of something.
So we have to be able to go back and prove that it didn't have a material impact on what was invented.
Agreed.
But, so, okay, if you do go back and you store those intermediate states so that we can prove that my thing actually enabled you, what if 25 things which offered to share 50%, you know, 50-50 with you had a material impact on what you ultimately produced, right?
And then, you know, so that's... The licenses are going to have to take that into account.
I agree.
That's a challenge.
The licenses are going to have to be, you know, smart and we're going to have to have a legal structure that allows you to navigate these things.
And I wonder if a smart working group couldn't put together a formalizable, you know, prospectus on these kinds of things, and people could either sign up for it or amend it, propose alternative options. Yeah, that's a good idea.
Um, one of the things that got me just into this space to begin with was how profound of an effect this is going to have for content creators to be able to monetize their content and have access to their audiences.
And so, you know, the element of how AI will affect that isn't honestly something I've thought very deeply about.
But the way that Web3 is going to change the game for content creators is incredible and will be extremely helpful to content creators like you who are concerned or have experienced, you know, being demonetized, are concerned about censorship, have been deplatformed.
that kind of thing, because you have a direct relationship with your audience, and they can pay in a frictionless way that feels like it does right now, where there's a subscription and yet micropayments are flowing behind the scenes to you and to other people whose content you might recommend, and that kind of thing.
So I want you to flesh that out a bit, because this is in fact the conversation that our friendship started around, and I was very struck by the picture you painted of the world we were headed towards.
I'm a little surprised that it has not emerged.
I am disappointed that it has not emerged yet, too.
Exactly.
Yeah.
But do you want to describe, if you were, you know, building the protocols into the proper stack, what world would we be living in and how would it work for creators and influencers and, you know?
Yeah, absolutely.
We've actually been working on a project that Ian Crossland started, you know, Tim Pool, Ian Crossland.
We called it Speakeasy, because, you know... I think it has a new name since then.
I don't remember.
OK.
But the way that it works. So, you know, Tim Pool's content was, you know, what we were building it around.
So I'll just use that as an example.
Right.
So he has different guests on his show on a regular basis who also create content.
So let's say that he's putting up his content and he's attaching a license to it that says, you know, it costs whatever amount for a single play.
And that you and I have a browser extension that we've installed, that we just pay a subscription to on a regular basis.
Like once a month we pay $5.
And then behind the scenes, every time we click play, that browser extension pays Tim Pool for his content.
Well, he can also have a section where he's recommending other creators.
And those creators, in their terms, could say, well, any platform that's distributing this for me and it results in a sale, they can get a cut.
It could be a flat fee.
It could be a percentage.
It could be something else entirely.
The terms of the contract are completely, you know, able to be customized, such that they could even accommodate music contracts like the ones in existence now, or written a hundred years ago.
Those are really complex and crazy, but it could even accommodate that.
So anyway, the point being, let's say there's a guest, you go on his podcast and he wants to recommend your content, and you have your content published in the same way, so that it says, you know, your content is, let's just use some easy numbers, a penny per play and...
Or a buck to download.
So if I want to watch it a hundred times, I pay a buck and I've just got it.
But if someone's just watching it once, they pay a penny.
Sure.
But if they're viewing it through Tim's website because he's recommended it on his website or he puts it in a social channel and he's recommended it that way, a portion of that sale will go to Tim and a portion of it will go to you.
And let's say you have a producer even that you are also sharing those revenues with, that would automatically get sent to them as well.
And it wouldn't happen after the fact, like with music distribution, with royalties and stuff like that.
It would happen when the transaction happens.
That's one of the reasons that smart contracts are great, because you build smart contracts on top of a payment platform.
And it means I pay that one penny and it looks at the smart contract.
The smart contract says, okay, I'm supposed to look at this particular license and the license says 30% to this party, 30% to the person that recommended it and the rest to this.
And it splits it right away and everyone just gets the deposit right into their wallet right away before the person is even done watching the content.
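As a toy illustration of that split, here is roughly what the settlement logic inside such a smart contract would compute. The party names and percentages mirror the example above; this is a sketch, not a real contract.

```python
from decimal import Decimal

def split_payment(amount: Decimal, splits: dict[str, Decimal]) -> dict[str, Decimal]:
    """Divide a micropayment per the license's revenue shares; remainder to the creator."""
    assert sum(splits.values()) <= 1, "shares cannot exceed 100%"
    payouts = {party: amount * share for party, share in splits.items()}
    payouts["creator"] = amount - sum(payouts.values())
    return payouts

# One cent, split 30% to the recommender, 30% to the producer, rest to the
# creator, settled at the moment the play happens.
print(split_payment(Decimal("0.01"), {"recommender": Decimal("0.3"),
                                      "producer": Decimal("0.3")}))
```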
And it also, again, has all of the same properties we've been talking about this whole time, like transparency, so that it can be audited really easily.
You and I don't have to be worried about a third party handling that split for us.
As you know, as somebody who distributes on YouTube, the mechanism is very opaque: how you're paid, how much you're paid, why you're paid.
All of that is extremely opaque.
Why you're suddenly demonetized.
They're obviously making money from the views that they're getting because of your content, but they've decided you are not allowed to.
But then let's take this a step further, because a lot of creators are like, OK, that sounds cool.
But the friction here is that I put out my content for free and I rely on ads.
So what happens then?
And then also, the kind of difficulty in that is, let's say you do put your content on a platform like YouTube, where you don't control the ads that are being shown on your content.
So, you know, you're having this very deep conversation about the origin of life and, you know, evolution and all of that kind of stuff, and you have an ad play that's about, you know, how evolution isn't real, or whatever.
There's that friction there, that things are incompatible.
Or you're somebody who's talking about, I don't know, intuitive eating, and then you have a weight-loss ad play on your content, and that kind of contrast is a bad fit for your audience.
Well, the layer that this also opens up is for sort of creators to opt in to the kind of advertising aggregators that would make sense for them, but this infrastructure still all has to be built.
The pieces are there.
The top layer hasn't been assembled.
The tools are there.
Yeah, there are definitely protocols working on ad aggregator type services that would be token based, that would be blockchain based in some sort of way.
But it's not built out to the point where you could use it as a viable alternative to using something like YouTube.
And so that's definitely disappointing that it's taken so long.
But it's definitely still something that I think is potentially going to happen.
The other aspect of it that you and I have talked a lot about is the game theory dynamics of how that gets going.
Because in order for me to want to pay into a $5-a-month subscription, I need access to the content that I want, right?
And if there aren't enough of the creators that are using that same system, then I can't get that access to the content that I want.
And so that chicken and egg problem is something that hasn't been figured out yet.
Yeah, you have to jump into a network somewhere that sees the potential in this.
But I mean, the potential is absolutely huge.
Yes.
Much bigger than the current market.
And the potential, this is why I got excited about it in the first place, because I have a background in the arts and I saw the crunch that creators were facing in the digital distribution mechanism. And I like creating sort of weird, edgy art, and it made me super nervous because, you know, essentially you're sharecropping on these platforms.
Like you don't own the land, you don't own your audience, you don't own anything about it.
And you can be just erased at any moment, you know, and have invested years of work.
It's so tragic to have invested years of work and to be making barely a subsistence living, you know. The amount of money that they're sharing with the creators is so out of balance with the amount of money that they're making.
We did a basic calculation when we did an experiment with Imogen Heap, almost 10 years ago.
She's a musician.
So the average user listens to about 500 songs per month on Spotify.
And if they were paying two cents per play for those songs, that becomes the equivalent of the Spotify $10 a month.
Yeah.
Okay.
Well, that's what it is.
At a penny per play, the audience pays 50%, so the audience pays $5 instead of $10, and the creator would make 7 to 8 times what they're currently making right now.
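Working through those numbers in a quick sketch (the per-stream payout assumed in the last comment is a commonly cited ballpark, not a figure from the conversation):

```python
plays_per_month = 500            # average listener, per the figure above
spotify_sub = 10.00              # Spotify's monthly price in dollars

print(plays_per_month * 0.02)    # 10.0 -> two cents a play matches the $10 subscription
print(plays_per_month * 0.01)    # 5.0  -> at a penny a play the listener pays half

# Spotify's per-stream payout to rights holders is commonly reported as a
# fraction of a cent (assumed ~$0.003-0.004 here), so a full penny per play,
# going directly to the creator, lands in the ballpark of the "7 to 8 times"
# figure quoted above once label and platform cuts are considered.
```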
So both sides would benefit from this, obviously.
The economics for the creators is like, it makes it so that you can have like a real sustainable living as a creator.
You don't have to be a rock star or starve.
And frankly, even rock stars now have to tour to make money because it's the only way left for them to be making money because the way that digital distribution has changed the dynamics and the economics of it has harmed artists so much at this point.
Yeah, it's frozen them out.
Am I correct that it would also be possible with this architecture to do something like, let's say you generate a photo manipulation program, you know, something that does what Photoshop does or thereabouts.
And you could use, you know, micropayments.
Instead of selling somebody a subscription to it, they could pay by the hour, for example.
Yes, or by the use.
Like newspapers, like I hate the paywall, the subscription wall, where it says like, oh, you've read the first few lines of this.
You don't get to read the rest unless you sign up for a monthly subscription for $10 or $15 a month.
No, I would rather pay one penny for that one use.
Same exact thing.
Yeah.
Yes, you totally could.
It can be fractions of pennies because it's cryptocurrency.
It goes down to ten decimals.
Yeah.
And it also takes out the payment processor costs that you incur in the current system.
That payment processor cost is so high that you end up having to have a minimum payment amount, and then you're like, well, I'm not going to pay a dollar just to watch this video.
You know, anyway, yeah.
It just solves a lot of those problems.
Micropayments are something that was theorized about at the very beginning of the web.
There was even, you know, 404 is the error code for when something can't be found.
402, I think, was reserved as the code for payment required, you know, and it's still basically empty.
It just never made it.
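For reference, HTTP really does reserve status code 402, "Payment Required." Here is a toy sketch of a server using it; the payment-receipt header name is made up for illustration.

```python
from http import HTTPStatus

def serve_article(request_headers: dict, article_html: str):
    """Return the article if a payment receipt accompanies the request,
    otherwise answer with 402 Payment Required (reserved in HTTP since the
    early days, still essentially unused by browsers)."""
    if "x-payment-receipt" in request_headers:        # hypothetical header name
        return HTTPStatus.OK, article_html
    return HTTPStatus.PAYMENT_REQUIRED, "Pay 1 cent to read this article."

status, body = serve_article({}, "<p>Full story...</p>")
print(status, status.phrase, "->", body)   # 402 Payment Required -> Pay 1 cent...
```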
There were attempts at micropayments in the kind of the early aughts.
There were several big attempts at it.
There were people that thought that it was imminent.
But the problem that all of those offerings at that time faced was that there was some sort of centralized entity that had to administer it.
There was some point in the chain where it wasn't fully transparent, and so it involved trust.
And that is the big innovation that Bitcoin brought to the table, which just changes everything.
So a couple things.
One, it'd be cool.
I don't see any obstacle to it.
You could also, if you put together a Photoshop-like program, say: use is free, but I get 20% of any profit made on anything you create with it.
And so lots of people would use it, never make a profit, and they would get the use for free, but it would be lucrative from the point of view of the people who invested in building the program itself.
Yep.
But it also strikes me that this might be the use case that gets the difficulty of crypto low enough to get people used to both using crypto and collecting it.
Right?
Because if the idea was, let's say Twitter included a micropayment mechanism: you can make funny videos and put them on your Twitter, and people can tip you, or you can charge them five cents if they want to look at it based on your description or whatever.
And, you know, in order to make it tractable from Twitter's perspective, you load 20 bucks in at a minimum and then you spend it over whatever period of time it takes to spend it.
And then, you know, creators have to have some minimum amount before they can extract it.
But that anyway, I guess the question is, would Twitter do it in dollars or would Twitter require you to use crypto?
But if Twitter required you to use crypto, then getting in on that economy would be a good incentive for a lot of people.
And, you know, it's the minimum complexity: how do I get some crypto, and then I spend it effortlessly with a click.
And if I make some crypto, how do I get it into something I can spend?
Get it back out.
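A toy sketch of the hypothetical flow being described: a minimum top-up on the way in, tiny tips in between, and a minimum balance before a creator can withdraw. The class, amounts, and thresholds are invented for illustration.

```python
class TippingWallet:
    """Toy ledger for the hypothetical platform flow above."""
    MIN_LOAD = 20.00       # minimum top-up, per the example above
    MIN_WITHDRAW = 5.00    # assumed payout threshold for creators

    def __init__(self):
        self.balances: dict[str, float] = {}

    def load(self, user: str, amount: float):
        assert amount >= self.MIN_LOAD, "platform requires a minimum top-up"
        self.balances[user] = self.balances.get(user, 0) + amount

    def tip(self, sender: str, creator: str, amount: float):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[creator] = self.balances.get(creator, 0) + amount

    def withdraw(self, creator: str) -> float:
        amount = self.balances.get(creator, 0)
        assert amount >= self.MIN_WITHDRAW, "below the payout threshold"
        self.balances[creator] = 0
        return amount

w = TippingWallet()
w.load("viewer", 20.00)
w.tip("viewer", "creator", 0.05)   # a five-cent micropayment for one view
```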
That takes us full circle back to the regulation conversation.
The reason that platforms can't sell tokens directly to users right now is because of ambiguity around things like money service business laws and blue sky laws.
If Twitter were to sell you some tokens and then those tokens went up in value because the price of that crypto is changing over time, they could then be liable to have to do some reporting on you, basically, and that you would have to be doing reporting as well.
And that just makes the...
And they would be treated as a money transmitter, which has a whole lot of extra laws.
That makes the danger too high for them, that it could be something they would face enforcement over.
And so then that's why you have to go to those crypto exchanges that you were talking about that aren't the best experience.
And so it just adds this layer of friction.
Any number of these companies can sell their service directly for credit cards.
So it's just like we have carve-outs for that.
And then we have carve-outs for exchanges being able to sell money with money.
But we don't have the ability for that end point where it would be most convenient for the end user to just use a debit card to buy five bucks worth.
and then use that constantly.
It's just a problem in terms of where we want to be and the obstacles to getting there.
But we're hoping that they're working on that right now, actually.
Patrick McHenry, like Devin was saying, has advanced two bills, the market structure bill and the stablecoins bill, that are potentially moving forward right now.
Things are hopefully moving in a positive direction on that front.
Alright, so before we close this discussion out, I fear I have cluttered the discussion a bit because of my own fears about AI and, you know, experiences with crypto and stuff like that.
I'm wondering if you don't want to paint the picture as you see it.
The web architecture that you're hoping to foster with your organization and, you know, what problems it solves, what things remain after this.
You know, what's the stack of protocols and what do they do?
Sure.
OK.
I would say, for one thing, it's less reliance on these giant corporations.
Like, one of the things that Web2 did was it made the web easy to use by centralizing around five or six big companies.
And it was because they had enough resources to build a really nice interface and have enough resources behind the scenes to make things fast.
We can solve both those problems in a decentralized way, especially if we also lean into AI for the right interface. You know, YouTube's interface is great in general for most people, but it's maybe not quite right for everyone.
So one of the great things that you can do is you've already solved for the distribution problem that you've got.
The storage is on permanent file storage solutions or even temporary file storage solutions.
So you've got Arweave for permanent file storage that's fully decentralized.
And then IPFS for temporary file storage.
So like chats and stuff like that that don't need to be preserved forever.
But it's also decentralized.
So you've got the back end able to be solved for.
And then you've got, you know, Akash, which is a way that you can actually have processes running in the back end and stuff like that.
And then you can actually have the front end generated on the fly for you.
You can say, I want a bigger, you know, play screen and less suggestions on the side.
Or I only want the suggestions that come from this algorithm that I select.
Or no algorithm whatsoever.
I only want it to show me subscriptions.
So it comes down to the client and the end user themselves to set their preferences for how those algorithms should behave.
And then it becomes a market for algorithms.
Because people can actually license their algorithms and say, This one's really great, but I've figured out how to innovate on it a little bit so certain types of people will enjoy this one more.
And so you can subscribe to mine for two cents a month and you'll always get access to your feed in such a way that really works well for you.
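A minimal sketch of that "market for algorithms" idea, where the client's preferences decide which ranking runs over the content. The algorithm names and preference fields are made up for illustration.

```python
from typing import Callable

Video = dict
Algorithm = Callable[[list[Video]], list[Video]]

# A marketplace of feed algorithms the client can subscribe to; names are invented.
chronological: Algorithm = lambda vids: sorted(vids, key=lambda v: v["published"], reverse=True)
subscriptions_only: Algorithm = lambda vids: [v for v in vids if v["subscribed"]]

algorithm_market: dict[str, Algorithm] = {
    "chronological": chronological,
    "subscriptions_only": subscriptions_only,
}

user_prefs = {"algorithm": "subscriptions_only", "big_player": True, "show_suggestions": False}

def build_feed(candidates: list[Video], prefs: dict) -> list[Video]:
    """The client, not the platform, decides which ranking runs over the content."""
    return algorithm_market[prefs["algorithm"]](candidates)

feed = build_feed(
    [{"title": "A", "published": 2, "subscribed": True},
     {"title": "B", "published": 3, "subscribed": False}],
    user_prefs,
)
print([v["title"] for v in feed])   # ['A']
```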
You're able to do private communities. You're able to do video communication like this without having to depend on Zoom, which might be storing this stuff; as far as we know, we have no idea. You could actually do it in a private way using decentralized video transcoding protocols. Livepeer is one, where primarily Livepeer is used kind of as the same back end that YouTube provides. We actually ran into this issue when we were building Alexandria,
where you could just upload a video and it would play back right as it is. Most people, because what they would upload is something they had downloaded from YouTube, it was already transcoded and ready to be played. But sometimes someone would take a raw output. Like, we came across a filmmaker that wanted to put trailers up for his movies, and whatever you export directly out of Final Cut Pro or Premiere is generally larger than the size of the file that YouTube is serving up to you.
Because one of the most important things that YouTube's doing behind the scenes is it's transcoding into multiple different formats.
So that if you're watching it on a mobile, it's going to get a smaller one.
If you're watching it on your 4K TV, it's going to be a larger one.
But you've got the bandwidth to handle that.
And if you don't have that, it's going to be an awful experience.
And we hadn't yet implemented that into Alexandria.
So this guy uploaded videos that were really high bitrate.
And for the most part, you couldn't watch them without sitting around and waiting.
So Livepeer is a solution to that, where it's using a whole bunch of people's GPUs in a different way than training AIs and LLMs.
It can be used for the actual transcoding.
So it turns it into a bunch of different formats, and then it serves up the right format for the right user at each different point.
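A simple sketch of the rendition-selection step being described; the bitrates, names, and data shape are illustrative and not Livepeer's actual API.

```python
# Renditions a transcoding network might produce from one high-bitrate master upload.
renditions = [
    {"name": "240p",  "bitrate_kbps": 400},
    {"name": "720p",  "bitrate_kbps": 2500},
    {"name": "2160p", "bitrate_kbps": 16000},
]

def pick_rendition(viewer_bandwidth_kbps: int) -> dict:
    """Serve the best quality the viewer's connection can sustain."""
    playable = [r for r in renditions if r["bitrate_kbps"] <= viewer_bandwidth_kbps]
    return max(playable, key=lambda r: r["bitrate_kbps"]) if playable else renditions[0]

print(pick_rendition(3000)["name"])    # phone on LTE   -> 720p
print(pick_rendition(50000)["name"])   # 4K TV on fiber -> 2160p
```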
Everyone that's creating content can put it basically everywhere, and not with a whole bunch of extra effort, whereas right now, you know, we create content and we have to put it onto every different social media network.
And we also, you know, if we want to put it into the decentralized space, we have to go to that individual one and upload it there.
And we upload it to Twitter and we upload it to YouTube and Instagram.
That's a lot of extra labor.
It's valuable to us because it reaches a larger audience.
We want to do it, but we don't want the extra labor of it.
Whereas instead, if it was just you put it into a single permanent file storage place and you attach a license to it that says any front end that wants to distribute this has the right to do so as long as I get paid X amount and these are my terms.
And so then you can have a variety of front ends, both from the centralized ones like YouTube or even say Apple, for example, might have a very specific threshold of saying we need to get paid 30%.
Otherwise, we won't do it.
So they get to filter on their side of saying, fine, anyone that doesn't want to offer up 30% gets excluded.
Anyone that does gets included.
And then the individual front ends that you have your AI generate for you can include everything according to the preferences that you set.
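A small sketch of how different front ends could filter the same published catalog by its license terms, along the lines of the 30% example above; the catalog shape and field names are invented for illustration.

```python
# Content records published once, each carrying its own distribution terms.
catalog = [
    {"title": "Podcast ep. 1", "distributor_share": 0.30},
    {"title": "Indie film",    "distributor_share": 0.10},
]

def storefront(min_share: float) -> list[dict]:
    """A front end includes only content whose license meets its revenue threshold."""
    return [c for c in catalog if c["distributor_share"] >= min_share]

print([c["title"] for c in storefront(0.30)])  # a 30%-minimum store: ['Podcast ep. 1']
print([c["title"] for c in storefront(0.0)])   # a personal front end: everything
```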
So now we're enabling content creators to maximize their possible audience because they don't have to pick just the right vertical to kind of publish into.
They can just publish to the internet itself, and discovery can be increased, because it's not going to be the algorithm that was come up with for the sake of benefiting YouTube itself.
Like, YouTube's algorithms aren't necessarily designed to benefit individual content creators.
They're designed to benefit YouTube.
Like I just read recently, they wanted to compete with Netflix.
So they were trying to specifically lift up 20 minute and longer content because they wanted longer session times.
Other people came along and figured out, well, it turns out if you make really short three or four minute long videos and they're compelling enough, someone will stick to it longer.
And so even though the algorithm worked against them initially, the algorithm learned from them that maybe that might be better.
But the point is, it was designed by one company for that one company's benefit, not necessarily for the content creators' benefit.
If you have a market of competing algorithms that anyone can select for themselves, which one serves either their audience, their front end, or their particular content best, then it's going to get it to the right people.
Like Amy was talking about, there's this disconnect between your content and the ad that runs in front of it.
That doesn't benefit you or the advertiser.
The advertiser is less likely to be clicked on, so that doesn't make any sense.
It should, in fact, be matched better, so that it doesn't just benefit whoever pays the most, but whoever would be the best fit. So just to sum up what Devin said, Web3 re-decentralizes the web.
It pushes the power to the ends of the network.
And in so doing, it empowers individuals to have the experience that they want.
It allows them to set their preferences.
It empowers creators to reach their audiences and to make more money.
It empowers developers to build tools that they weren't able to previously build.
And it makes the tools that they build more resilient to attack, less expensive to operate.
And the reason that they're less expensive is that they're less resource intensive.
And so, therefore, they're also more efficient.
They're greener, basically, right?
They're more environmentally friendly.
And so the kinds of benefits that Web3 gives us, it's that generational upgrade, where it's just better, faster, cheaper in every possible way.
And most importantly, it restores the sovereignty, the transparency, and potentially then the trust that's been lost in the web that we have today.
That's great.
I see an analogy.
You know, back when I was a kid, there were things like hardware stores and hardware stores were owned by people and, you know, a family running a hardware store might send their kids to college on that income.
And then, you know, over time, it's all turned into Home Depot and Ace, and, you know, people are working there and probably not sending their kids to college, and the person at the top, you know, has a very impressive yacht, no doubt.
But anyway, the idea is that there is some process afoot in our economy that over time freezes people out, because it finds a hundred ways to do that, and the people find themselves with less and less of a stake, basically just treading water.
That's clearly happened on the web, and I love the idea that it was decentralized.
It went through this period of centralization, and Web 3.0 is what decentralizes it again.
I would love to see that happen.
The fact that it has a benefit with respect to AI is a huge bonus, and maybe that will be the fire that gets lit under people, that inspires them to build.
So that's quite a vision.
Now, I think I understood you to say all the technologies necessary to do this exist.
The architecture that integrates them is what's necessary.
Is that fair?
Not architecture, just the killer apps.
The apps.
Yeah, I mean, you could say the architecture to integrate them.
It's really just, you know, there are a number of different compute protocols.
There are a number of different storage protocols, video transcoding protocols, indexing protocols, etc.
And so what we need now are developers to go and experiment with them, put them together in interesting ways, and also to build for those end users, you know, what is that seamless experience that can onboard a billion people?
Mm-hmm.
Mm-hmm.
Yep.
Yep.
All right.
Well, yeah, I'm hopeful that that world can emerge, that it will be a facilitator of increased trust and free exchange of ideas, which is obviously on the ropes, that it will sound the death knell for central bank digital currencies, and that it will enable content creators to get paid for what they're delivering to all of us.
All of those things are worthy objectives in isolation, and together it would be a much better place.
So I thank you for laying out that vision and for articulating how we might get there, and I look forward to seeing where this goes.
Where can people find you?
We're on every social network at Web3WG, and our website is at Web3WG, and our YouTube channel is Web3WG. So definitely, we think they should come to our website and sign up for our mailing list, so that if there are kind of collective actions that we can all take together, we have access to reach out to them to let them know that those things are happening. And then we put out most of the kind of news that we follow on Twitter, and all the educational videos that we make, we put out on YouTube.
It's web3wg.org, just to mention that.
And I'm personally at Amy of Alexandria.
And I'm Devin Noah James at Twitter.
Awesome.
And your organization, as we said at the top, is a 501c3, so presumably people could donate to the cause.
They would find that through your website?
Yeah, they can get in touch with us there.
Great.
Alright, well, I hope that some people see this and are inspired to help you guys out and maybe people who have vision and own important properties that could leverage this vision will reach out as well.
Yeah, I was gonna say it would be amazing to hear from creators who are interested in, you know, using these tools too.
Exactly.
Alright, great.
Anything else people should know?
No, thanks so much, Bret.
Thank you, this has been a delight.
Yeah, really fun to talk with you.
It's been great.
I look forward to checking in with you as things develop.