October 24, 2022

Hot-Fusion Technologies

The press completely ignores alternative hot-fusion approaches, even though there are configurations that are physically much better. And all of this is hot fusion, not cold fusion or LENR.

It is all about fusing atomic nuclei by bringing them close together often enough and at the right velocities, hence "hot".

TLDR: What does "better" mean here? In a hot plasma, the particles have a spread of velocities, and only a few percent at the upper end have the right velocity for fusion. If you make sure that most particles have the right velocity/energy instead, you get less waste, a smaller device, and a much cheaper machine.

Fusion does not only work with the tokamak or with lasers. But the press only ever covers these three:

  1. Tokamak, i.e. ITER
  2. Laser fusion, i.e. the US National Ignition Facility
  3. In German media also Wendelstein 7-X in Greifswald, because we are proud of it. We claim it is never meant to produce energy and serves research only, but secretly we consider it possible that ITER fails and Germany ends up ahead with the stellarator.

Fundamentally, you really only want approaches that enable p-B11 fusion, because p-B11 runs neutron-free, unlike D-T or He3 processes. p-B11 produces three alpha particles that can be converted into electricity directly, with no neutrons at all. D-T has to slow down neutrons to raise steam, and in the process everything becomes radioactive. He3 is better than D-T (at least you get an alpha), but D-D side reactions in the fuel still produce neutrons, including 14 MeV ones. That is unpleasant, requires shielding, and is not great for spaceflight either. p-B11, in contrast, needs hardly any shielding, just a good vacuum. Perfect for spaceflight.
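
For reference, the textbook reaction energies (the neutrons in D-He3 fuel come from D-D side reactions, not from the main branch):

```latex
\begin{aligned}
p + {}^{11}\mathrm{B} &\rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV} && \text{(aneutronic)} \\
D + T &\rightarrow {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}) \\
D + {}^{3}\mathrm{He} &\rightarrow {}^{4}\mathrm{He}\,(3.6\ \mathrm{MeV}) + p\,(14.7\ \mathrm{MeV})
\end{aligned}
```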

However: p-B11 needs more particle energy. Not a problem if you accelerate protons and boron ions through a voltage gradient. But difficult with thermalized plasmas (a billion degrees instead of 100 million), because thermalization produces an unfavorable velocity distribution in which only a few percent of the plasma are fast enough to fuse. All the other particles are ballast that causes radiation losses and makes heating harder. That effectively disqualifies any approach based on heat in magnetically compressed plasma. Which is why I think the tokamak is the wrong path, even with He3. And with that, the whole He3-strip-mining-on-the-Moon idea goes away, too.
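
To see how small the useful tail of a thermal plasma is, here is a minimal Python sketch (the numbers are illustrative): the kinetic-energy distribution of a thermalized plasma is a gamma distribution with shape 3/2 and scale kT, so the fast fraction can be read off directly.

```python
# How much of a thermal plasma is fast enough to fuse?
# The Maxwell-Boltzmann kinetic-energy distribution is a gamma
# distribution with shape 3/2 and scale kT.
from scipy.stats import gamma

def fast_fraction(kT_keV: float, threshold_keV: float) -> float:
    """Fraction of particles with kinetic energy above the threshold."""
    return gamma.sf(threshold_keV, a=1.5, scale=kT_keV)

# Illustrative values: a ~1e9 K plasma has kT of roughly 86 keV,
# while p-B11 wants ions around 600 keV.
print(f"{fast_fraction(86.0, 600.0):.2%}")  # prints a fraction well below 1%
```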

Other, physically better approaches:

- Dense Plasma Focus, roughly 8 million USD in funding. A promising approach that exploits the instability of magnetic fields in plasma instead of fighting it. Goal: a 5 MW reactor the size of a shipping container, which would be great for decentralized power supply. Somewhat fringe, because it is drastically underfunded and because the founder looks like a mad scientist. But he knows what he is doing. http://en.wikipedia.org/wiki/Dense_plasma_focus

- Inertial Electrostatic Confinement: accelerate protons to fusion velocity through a voltage difference. Currently unfunded, but there is a nice prototype that would need about 200 million USD to scale up. Not cheap, but 100x cheaper than ITER. Goal: a 100 MW reactor the size of a house, one per city district rather than today's 1000 MW utility-scale plants. https://en.wikipedia.org/wiki/Polywell

- Field-Reversed Configuration: tens of millions USD in private funding. Protons are accelerated by magnetic fields, in principle plasma cannons fired at each other. Also roughly container-sized. On FRC in general: http://en.wikipedia.org/wiki/Field-reversed_configuration and in practice e.g. https://en.wikipedia.org/wiki/TAE_Technologies

- General Fusion: tens of millions in private funding. A spectacular concept: steam-driven rams create shock waves in a rotating sphere of molten lead, with plasma and fusion somewhere in the middle. https://en.wikipedia.org/wiki/General_Fusion

Besides these, there are always companies claiming they can do the tokamak better than ITER. The Chinese have one because ITER takes too long for them. Lockheed Martin keeps showing up in the press. It is not clear why they get attention while other fusion projects do not. Many believe Lockheed is merely using the reputation of Lockheed Martin's Skunk Works to push the stock price.

On the other hand, one can assume that some of ITER's technologies will be obsolete by the time it runs, and that a tokamak started now could go online at the same time as ITER at 1/20 of the cost. Still, the tokamak is and remains expensive utility-scale technology, with a steam turbine and neutron activation of the inner structure.

Heat, and with it thermalization, is what you must avoid. So one should not think in millions of degrees Kelvin at all, but rather produce ions with exactly the right energy and velocity, e.g. 600 keV boron ions for p-B11 fusion. 600 keV is demanding, but not new. Boron has 5 protons, so a fully ionized boron ion only needs a 120 kV voltage gradient. That is less than long-distance power lines and a bit more than a typical X-ray tube. Known technology, in other words.
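
The arithmetic, written out:

```latex
E = qV \quad\Rightarrow\quad V = \frac{E}{q} = \frac{600\ \mathrm{keV}}{5\,e} = 120\ \mathrm{kV}
```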

We just have to finally move away from the tokamak, and then fusion will no longer be perpetually 30 years in the future.

_happy_fusing()

April 11, 2022

The Sigmoid Hypothesis


1920: The first Atlantic crossing by plane, a biplane with propellers. Commercial radio has just started broadcasting. Nuclear power is not even thought of.

1970: Nuclear power stations are common. Broadcast color TV is the norm. The Jumbo Jet takes commercial air travel to a new level, and humans have just landed on the moon.

2020: Still nuclear fission. Color TVs are now flat. Jumbo-class jets are still the largest commercial airplanes, and humanity is not able to land on the moon but is determined to regain the capability in a few years. There are smartphones and ubiquitous information, though.

That does not look exponential. The jump from 1970 to 2020 should have been even more impressive than the 50 years before. While information technology developed exponentially, almost everything else merely improved. Admittedly, the reference dates are carefully chosen: technological developments that started in two major wars had fully played out by 1970. But still, a person from 1820 would have found 1870 interesting. A person from 1870 would have been amazed by the state of the art in 1920 (radioactivity and airplanes). A time traveler from 1920 peeking into 1970 would not have believed her eyes (nuclear power, moon landings, ubiquitous electricity and lights). That is exponential. Not just change, but an increasing rate of change. Accelerating progress.

Compare that to someone watching 2020 with 1970 eyes. The media and information landscape has changed beyond imagination, but other than that the world has not changed a lot. It is bigger. There are other topics in politics, and billions of people have been lifted out of absolute poverty. Things have improved: rockets are now reusable, electrical light is basically free thanks to LEDs, cars need only half as much gas, and the tallest building is twice as high. Still, everyday life looks like an improved 1970 with smartphones.

Humanity should be on Mars and beyond. After 50 years between the first transatlantic flight and the moon, the next 50 years should have given us more than just a flight to Mars. That would be linear. Exponential growth would mean something like a million people on Mars and the first woman setting foot on Saturn's moon Titan. And while Moore's law still holds, continuing the exponential growth of transistor counts, there are physical limits on the horizon, and improvements come at increasing costs in terms of prices and energy consumption. AI made progress but turned out to be more difficult than thought in 1970. And fusion power is still 50 years away.

There are lots of improvements going on. The tech level is growing. But the rate of growth does not feel exponential. On the other hand, the capabilities of information technology still grow exponentially. The amount of information available to researchers grows exponentially. Counting patents, the number of inventions per year still increases, which means the total is growing at least faster than linearly. And while population growth seems to deviate from the exponential curve, the number of scientists and engineers entering the workforce is still somewhat exponential.

The resources put into technology still seem to grow exponentially, but the outcome appears linear. There is a worrying discrepancy between engineering resources, scientists, information, and processing capabilities on one hand and the resulting technological progress on the other. It looks like improvements are more difficult to achieve now than before. Every year we put in more effort in terms of money, thought, and knowledge. We might even get more individual improvements each year. But the aggregate of all technological improvements, something we might call the technological level, seems to crawl upwards slowly. It does not appear to accelerate. It seems rather steady, more like linear progress. Still improving broadly, but not exponentially.

The question: is progress really getting more difficult? Does the difficulty increase exponentially, eating up exponential investment to yield merely linear progress? Are we at a turning point where progress might even slow down despite increased efforts?

Maybe technological progress has never been exponential. Maybe it is sigmoidal. A sigmoid starts slowly, then accelerates, appearing exponential. But it has an inflection point: a point where the gradient maxes out and starts to fall. In other words, there is a fast period after which progress slows down. Later it might even saturate. That does not mean that technology falls back. On the contrary: the tech level still increases. Products still get better. But more slowly, because at a high level it is more difficult to make improvements. There are still improvements. Only they cost more. They need more investment, more research, more computer simulations, more data, more money, more time.
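
To make that concrete, take the logistic function as the model sigmoid:

```latex
S(t) = \frac{K}{1 + e^{-r(t - t_0)}}, \qquad
S'(t) = \frac{r}{K}\,S(t)\,\bigl(K - S(t)\bigr)
```

Well before t_0 this behaves like K e^{r(t - t_0)}, indistinguishable from exponential growth. The gradient peaks at the inflection point t = t_0 with value rK/4 and falls off afterwards, even though S(t) itself keeps rising.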

That's where we are now. The exponential technological progress we were used to has slowed to linear progress. It looks like the inflection point. The point where things still get better, but technical revolutions become increasingly rare. Maybe the year 2070 will look like a slightly improved 2020. Self-driving cars will be common, and fusion power will be only 20 years away. There will be permanent stations on the moon. Several billion more people will have joined the well-off global middle class. And movie recommendations will be as spot-on as music recommendations today. That would not be so bad. There will be no singularity, though. No runaway AI, no nanites dismantling the Earth. That would be good, after all.

_happy_saturating()

January 31, 2022

More Accelerando Than Snow Crash

The Mental Model of the Web3 Future Is Not Snow Crash. It's Accelerando. The inevitable path to a new economic model made possible by web3.


There is a lot of emphasis lately on the metaverse and virtual worlds. We believe that web3 helps to share stuff between virtual worlds, to tear down the walls between online worlds. Ready Player One shows a unified metaverse, where avatars from different virtual worlds meet. That's nice. Maybe even useful someday. There is also business and money to be made. Actually, a lot of business will be enabled or improved: entertainment, marketing, customer support, and more.

However, the real impact on the economy of the future comes from the automation of business processes. In a web3 world, software, scripts, and AI can make deals. Specifically, business-executing AI will have a large impact. Ultimately, AI will be able to act with tangible effect through the web3 we are currently building. It's the real economy that counts. That's why we should look to Charles Stross' Accelerando rather than Neal Stephenson's Snow Crash.

We have been learning from many examples in different fields that AI is good at finding new ways to do things. When AI optimizes a task, it often finds more efficient ways than experienced humans in the same field. For example, self-learning AI invents unconventional strategies in games. It explores strategies that the best human players would have disapproved of, until they were defeated by those strategies. AlphaStar, DeepMind's StarCraft II AI, once produced overwhelmingly many Oracles, a Protoss unit, a strategy no professional player tried because it has disadvantages in the late game. Still, the AI beat top human players with it until they learned to counter the strategy as soon as they detected it.

In another experiment, self-learning AIs that needed to communicate to solve a task quickly developed a more effective way of communicating: they invented their own language, a protocol more efficient than the ones they were given as a starting point. The language was not easily understood by humans. It was analyzed, but that took time while the AIs moved on. Understanding the AI's ways is a moving target. Ultimately, humans will use these optimizations without fully understanding them.

We are now at a point where software-driven business processes emerge. Web3 enables software to post offers, to negotiate, to close deals, and to check fulfilment. Software is already doing significant business at stock exchanges. Software can react more quickly than humans, which is important in times of high-speed trading. Some of these agents are driven by deep learning and genetic algorithms. While there are many details and nuances, trading stocks is basically rather simple. There are sell offers, buy offers, and real-time information. The task is to optimize profit over time. A difficult task, considering volatile information, erratic market behavior, and feedback loops. But the trading model is simple: buying and selling securities.

Now, web3 promises to pull all other business into software's reach. While theoretically everything could be wrapped into a security, not everything in the real world is suited for securitization. Partially because it is irrelevant, like selling my own house: it is not accessible to software because nobody has made it a security, and nobody will.

Patents and other intellectual property rights are usually not freely tradable because there are too many barriers. IP has fundamentals that are difficult to assess automatically. Trading IP goes beyond comparing market prices. Assessing the value of IP is the domain of human experts. IP deals also need notaries, attorneys, and registers, in other words: legacy real-world mechanisms.

Car manufacturers deal with thousands of suppliers, each with detailed part specifications, negotiated quality expectations, technical standards, and individual considerations. They are far from being securitized, out of reach of trading software. Until now.

Smart contracts can replace government registers like commercial and land registers. If a land register is secured by a blockchain instead of a government or an attorney, then this not only makes trading cheaper by removing the middleman. It also makes trading the goods accessible to software.

Physical properties of car parts can be measured and compared with specifications. A smart contract checks whether negotiated standards are met. It decides to what extent deliveries deviate from expectations. Pricing is fixed and made transparent to all parties as a smart contract. Money flows reproducibly and reliably based on measured and negotiated contract parameters. In the beginning, humans will create these contracts, negotiate their parameters, and set up real-world measuring equipment. Humans will also approve payments. But that is still a lot of work. After some time of waving payments through, smart contracts will be allowed to pay without human approval on small lots. Then, once there have been no major glitches for a while, checking part deliveries and payments will be automated.
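
As an illustration of that logic in plain Python (not actual on-chain code; the field names are made up):

```python
# Sketch: pay only for parts whose measured values are within the
# negotiated tolerance. A real smart contract would run this logic
# on-chain, fed by measuring equipment.
from dataclasses import dataclass

@dataclass
class Spec:
    nominal: float      # negotiated target value, e.g. a dimension in mm
    tolerance: float    # accepted deviation
    unit_price: float   # agreed price per accepted part

def settle(spec: Spec, measurements: list[float]) -> float:
    """Return the payout for a delivery, given measured values."""
    accepted = [m for m in measurements
                if abs(m - spec.nominal) <= spec.tolerance]
    return len(accepted) * spec.unit_price

# A delivery of four parts against a 10.0 +/- 0.1 mm spec at 2.50 each:
print(settle(Spec(10.0, 0.1, 2.50), [10.02, 9.95, 10.30, 10.08]))  # 7.5
```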

Still, finding and negotiating thousands of parts is a lot of work waiting to be automated. And it will be automated. Suppliers will offer their parts through smart contracts that manage specifications and tolerances. Smart contracts will also offer variations, with logic, scripting, or AI to estimate the production cost of variants. That makes sifting through all these variants, specs, and tolerances for countless parts easier; humans just approve selections, confirm deals, or intervene when the AI does stupid things. And again, after some time without major glitches, the industry will let software make the deals unsupervised, demanding only after-the-fact reporting.

There is one more step required to completely automate the industry: planning and building factories. This will take more time. But individual manufacturing through 3D printing accelerates the process. Tesla already knows how to build gigafactories for certain products on demand. There is now so much institutional know-how that these facilities can be built in months instead of years. Factory projects are increasingly data-driven, and all this data will finally be used to train AI.

Software simulation of production processes also helps AI train itself. A game of building factories and negotiating parts and resources to win market share against a competitor is not fundamentally different from managing resources and combat in StarCraft. AI will optimize itself in simulated competitions. Then AI will plan and build factories. As always, after some time without major glitches, some players will let AI react to market demand automatically. Even some goof-ups can be tolerated. Human decision making when estimating future demand, planning products, and executing business plans is far from perfect. If the financial impact of AI mistakes is on the same level as that of human mistakes, the AI wins. Eventually, the AI will win. And the first humans to adopt this way of doing business will become rich.

Then AI optimizes the business. The AIs will optimize communication by inventing new protocols, negotiation protocols more efficient than the ones inherited from humans. There are many ways to optimize in a software-driven world. Maybe they dispense with checking individual deliveries. Maybe they don't put up tenders anymore. Suppliers might deliver parts without prior negotiations based on information from crypto oracles. After all, the financial output of the entire operation is what counts. They might omit payments for supplies and just share the revenue. A smart contract takes key performance indicators and generates a pay-out scheme for all involved entities in a transparent fashion. There are hard short-term facts like revenue, time-delayed measurements like product reliability, and long-term soft information sources like polls about buyer's remorse. All this data can be used to optimize the business. At some point, data is available from millions of products, markets, and processes over many product cycles. This data is then employed by the executing AI to find new ways.

Would humans base a car business on revenue sharing and common long-term benefits? Probably not. Human experts would reject this way of doing business for many reasons. Humans are good at coming up with reasons not to change things. Until they are outperformed.

Humans are also good at inventing possible improvements based on their experience. We can imagine countless optimizations and process changes. Science fiction authors are especially good at that. But we largely fail to predict developments beyond our experience. That's where AI excels. It finds categorizations that escape us. It finds optimizations we won't think of.

AI will change the way business is done so much that humans will not understand what's going on. At first, we will. We will be surprised by AI's inventions. We will marvel at the ingenuity and frivolity of its ways. For as long as we can analyze and understand what is happening. Later we will fail to understand and just embrace the benefits.

This is what Charles Stross calls Economics 2.0: a business model more efficient than ours. Let's call it Economy3, to be in line with Web3. Economy3 is made of economic processes that outperform the ones we know. It consists of interactions and rules we do not understand, rules that can only be executed by AI. Not because of the required speed of decision making, but because the rules will not be known. They will not be codified. They are decentralized in neural network weights, or whatever AI is made of in the future. The new rules will not be programmed into AI. Rather, AI will develop the rules because they work better than the inherited ones.

This sounds as if we humans have no say in the process. But we do. The key phrase is "work better". We define what "better" means. If "better" means more profit, then average people might be screwed in the way described in Accelerando. In this future the so-called Vile Offspring, basically untamed rogue AI, dominate the inner solar system and even dismantle the Earth to put its resources to "better" use. Earth's resources not meaning oil and ore, but the iron of the core, hence the dismantling.

A development that ends in the dissolution of our planet does not sound "better". And that's the key point. We will have to define the term "better" so that it serves people. We need more performance indicators than profit. We need performance indicators that represent the wellbeing of people and the environment for that matter. AI optimizes along fitness functions and training data. AI designers define these fitness functions and select the training data. We decide how AI optimizes. We have a say. A society that really tries will have a deciding influence. Realistically the result will be somewhere between utopia and the planet's pulverization into smart matter. We must make sure that besides profit and wealth for some people, there is also well-being for as many people as possible. Maybe the fitness function just needs as much Gross National Happiness as Gross Domestic Product.
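
As a toy illustration of that last thought (indicator names and weights are entirely made up):

```python
# Sketch: a fitness function that weighs wellbeing next to profit.
def fitness(gdp_growth: float, gnh_index: float,
            w_gdp: float = 0.5, w_gnh: float = 0.5) -> float:
    """Score an economic strategy on profit AND wellbeing, equally."""
    return w_gdp * gdp_growth + w_gnh * gnh_index
```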

Coming back to web3: this development path is almost inevitable because it is possible. The path is obvious. There are no unknowns, no new technologies to be developed, no new principles to be discovered. The paradigms are already in place. The rest is engineering.

There is one more thing: the smart-contractification of the real world. Paper contracts will be replaced by smart contracts. Business entities will learn that blockchains tell the truth. Companies will sue each other to honor agreements that are codified in smart contracts. Finally, courts will begin to refer to the blockchain truth in their decisions. Then the real world is smart-contractified. It will take some time to get there. But the path is clear.

Once the real economy (the one that builds smartphones, not just non-fungible images) takes web3 seriously, we are bound to end up with Economy3. An automated future in which it is not necessary to work hard to pay the rent. That's where we want to go.

We are currently building the tools: web3 and AI. Then we'll get the real world to use the tools while making sure that the beast we're unleashing does not deviate too far from a good path. It is our responsibility to educate our societies about the risks and empower them to set the rules.

We must shape the future economy, not just virtual worlds. It's the real world that matters and the real economy. In this sense the mental model to guide our path is better characterized by Charles Stross' Accelerando than Neal Stephenson's Snow Crash. Read Accelerando, enjoy it, fear it, and learn from it.


Raph Koster’s Future of Online Worlds Applied to weblin.io

Raph Koster talked about the steep path to a unified metaverse. He raises many interesting points that touch on key aspects of weblin.io's architecture and design principles.

A virtual discussion.

At the 2nd Annual GamesBeat Summit: Into the Metaverse 2, Raph Koster gave a speech about the future of the metaverse, about connecting virtual worlds, and about the steep path to a unified metaverse. He raises many interesting points.

The weblin.io project regards the web as a metaverse, if not the starting point for "The Metaverse". I would like to review the speech and comment on its central messages with respect to weblin.io and the web metaverse. In other words: how they apply to the web as a metaverse.

Raph Koster talks about a high-tech metaverse with 3D, AR, and VR running on advanced engines. Even beyond the engine, these worlds need sophisticated coding and modelling. Contrast that with the web metaverse, which runs on a browser engine. This conventional approach gets away with much less complexity, which creates lower barriers to interoperability. It turns out: things are much easier. We are lucky.

It is very interesting to apply the central messages of the talk to weblin.io because they address important features, the architecture, and design principles. Let's discuss:

Raph says: "The idea of taking multiple online worlds and cross connecting them with basically hyperlink connections, and […] hop freely between them with one client"

weblin.io comments: With weblin.io we are hopping freely between spaces with one client. The spaces are web sites, the one client is a web browser, and hopping freely means clicking a web link. It's not 3D, not virtual worlds, not fancy. But the web metaverse is the biggest world in terms of content. It's the biggest world in terms of people. And it is most easily accessible: all it needs is a web browser and some rather small client software, a graphical chat client with animated avatars, as a browser extension or a native program that projects a social layer above all web pages.

Raph says: "Ongoing challenges include crappy voluminous user-generated content"

weblin.io comments: In the case of weblin.io, everything is user-generated. It's the Web. It is often great and sometimes it is crappy. Speaking about "crappy voluminous" specifically: the web metaverse has a built-in check for content quality. Web content is produced to be used on the web, not specifically for the web metaverse. Hence, if it is good enough for the web, then it is good enough to make up a place in the web metaverse. Voluminous user-generated content will never drag down the web metaverse the way it easily can in a virtual world that lacks such a built-in safeguard.

Raph says: "Play-to-earn have always had the risk of […] economy crashes due to […] mudflation"

weblin.io comments: Simulated economies with artificial money sources and sinks are difficult to balance. Play-to-earn needs a real economy, not a simulated one. It must be driven by real money that flows into the economy from the real world. Only real value creates a real economy, because real money from the outside world is hard to get: it must provide an ROI for the outside world. That's the weblin model.

Raph says: "Players have not been that interested in item portability"

weblin.io comments: That is true in general. You won't take your WoW Hunter Bow to EVE Online. Different engine technologies, game mechanics, and balancing are strong barriers, though they might be overcome someday. The real point is importing NFTs, which have fixed real-world attributes, as in-world items. This needs a suitable mapping of NFT attributes to in-world features. If the mapping is transparent and stable, then real-world NFTs gain value and utility in-world.
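
A minimal sketch of such a mapping (hypothetical attribute and feature names, not a real weblin.io interface):

```python
# Sketch: map fixed NFT metadata attributes deterministically to
# in-world item features. Transparent and stable by construction.
def nft_to_item(nft_attributes: dict) -> dict:
    return {
        "damage":  10 + 2 * nft_attributes.get("strength", 0),
        "rarity":  nft_attributes.get("rarity", "common"),
        "skin_id": nft_attributes.get("image_hash", "")[:8],
    }

print(nft_to_item({"strength": 7, "rarity": "epic", "image_hash": "a1b2c3d4e5"}))
```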

Raph says: "The open web is a model for the kind of standard for decentralized creativity"

weblin.io comments: The existing standardization of web technologies makes the web a perfect model of a decentralized, easy-to-access metaverse. The places are already there. Content is there. weblin.io adds people, and voila, the web becomes a metaverse. "Decentralized creativity": that's what the web is about.

Raph says: "An enormous amount of the metaverse needs are going to be flat"

weblin.io comments: Often 2D is easier to navigate and a lower barrier. Navigating the web just needs a browser and a pointing device. That's an easy virtual world. No need to navigate in 3D to get to a document. Just a click and the document is full screen. And it is populated by the people who happen to be reading the same document at the same time.

Raph says: "The art we see needs to break away from the notion that it is something baked into a client"

weblin.io comments: In the #Webaverse the content always comes from the server. The client fetches the content and projects a social layer on top where people meet. Check.

Raph says: "If we want a decentralized metaverse — one that is open and not controlled by one party — we obviously need to decentralize control"

weblin.io comments: Virtual worlds are usually controlled by one party. The web, on the other hand, is a decentralized metaverse, always was, and probably always will be. The social layer above the content, which makes the web a metaverse in the first place, is also decentralized. Every web content provider can host the social layer for their content by running a chat server. Once they operate the chat, they can enforce rules and moderate. In other words, they can exercise property rights. That's how weblin.io is built.

Raph says: "The biggest barrier to item portability is actually that every […] world implements that functionality in completely different ways […] There are zero shared data structures"

weblin.io comments: A common denominator of data formats might be a start. Viewed from a 3D perspective, common denominators lack the functionality required for a good user experience. But for our case, the web metaverse, typical web standards work perfectly as common denominators. For example, it is easy to make an in-world avatar available to the web metaverse. Inhabitants of virtual worlds can use an (animated) rendering of their in-world avatar on web pages to meet other people, even people from different games. From the point of view of the web metaverse, all these virtual worlds are just sophisticated avatar creators. Avatars are designed in-world with all the means of the virtual world, including the need to earn equipment or buy vanity items. Then the avatar's appearance is transferred to the web, where people can present themselves as their game avatars.

Raph says: "[We might] take a cue from […] WordPress [the] plugin architecture [which] allows different platforms to implement the same applications programming interface (API)"

weblin.io comments: The underlying content of the web metaverse is already decentralized, being provided by countless servers. Even the social content, users and game items, is decentralized. Users can connect through their own messaging server. They can use an open-source client with a small set of interfaces. The reference implementation by weblin.io shows how pluggable item providers allow for decentralized game content on the social layer.

Raph says: "Just the coordination challenge of building that API is likely to be a multi-decade process of arriving at agreement on standards"

weblin.io comments: That's a consequence of the complexity of 3D worlds. The weblin.io project shows how small a set of APIs really needs to be to make the web a decentralized metaverse.

Raph says: "The need to coordinate and share multiple standards pushes towards a single platform owner that can force [necessary] standards into existence. But we know that isn’t the dream we all ultimately want"

weblin.io comments: No, it's not. The web is decentralized and relies on open standards. And the social layer that makes up the web metaverse is also built on (few) open standards. In direct analogy to the content part of the web, where HTTP(S) provides data as HTML, JSON, and JavaScript, the social layer that makes the web a metaverse is driven by XMPP, a distributed and standardized messaging protocol. The data formats on top of XMPP are the same as the ones that encode web content. The standards of the web metaverse are already available. They are widely used and highly accessible. That's a perfect foundation for keeping the web metaverse decentralized going into the future.
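
The core convention can be sketched in a few lines (hypothetical naming; weblin.io's actual scheme may differ): everyone visiting the same page lands in the same XMPP multi-user chat room, derived deterministically from the normalized page URL.

```python
# Sketch: derive a chat room address from a web page URL, so that all
# visitors of the same page meet in the same XMPP MUC room.
import hashlib
from urllib.parse import urlsplit

def room_jid(page_url: str, muc_service: str = "muc.example.org") -> str:
    parts = urlsplit(page_url)
    normalized = f"{parts.netloc.lower()}{parts.path.rstrip('/')}"
    room = hashlib.sha1(normalized.encode()).hexdigest()[:16]
    return f"{room}@{muc_service}"

print(room_jid("https://en.wikipedia.org/wiki/Metaverse"))
```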

Raph says: "[Few] worlds […] have ever been willing to sign up [to] Rights of Avatars"

weblin.io comments: The weblin.io project signs up. We neither control the social layer nor users and avatars. We provide standards, an open-source reference implementation, and infrastructure to kickstart the web metaverse until content providers run their own messaging servers. Content providers may control their space by exercising their property rights, and users can connect through an XMPP entry point of their choosing. In particular (but without devaluing other avatar rights) we support the right of avatars to speak freely everywhere, and the right to "be secure in their persons, communications …". In the web metaverse users are anonymous, if they so choose, which is the default.

Raph says: "[Making the one metaverse of compatible virtual worlds] is going to be hard."

weblin.io comments: Acknowledged. The 3D case is hard. The weblin.io project approaches the problem from a different angle. We start with the web as the metaverse. The web is already there. It is easily accessible. It does not have to be built, because it is already content-rich. It is already decentralized. Web links even point to places inside virtual worlds. In that respect the web is a superset, the distribution platform, not just for web content but also for 3D virtual worlds. Virtual worlds are part of the web. The sum of all virtual worlds and all web content is The Metaverse.

We want our avatars not only inside 3D worlds. We want our avatars to break free of virtual world boundaries. Not just between virtual worlds, but also between virtual world silos and the web. We want to use our virtual world avatars on the web. This is easier than it sounds, because the standards and formats of the web metaverse are simple. A virtual world developer needs just one week to write an exporter that lets all their users join the web metaverse, The Metaverse.

Break free. Reclaim the web!

_happy_breaking()