August 9, 2023
October 24, 2022
The point is to fuse atomic nuclei by bringing them close together often enough at the right speeds, in other words: hot.
TL;DR: What does "better" mean here? In a hot plasma the particles have a range of speeds, but only a few percent at the upper end have the right speed for fusion. If you make sure that most particles have the right speed and energy, you get less waste, a smaller device, and a much cheaper machine.
Fusion is not limited to the tokamak or to lasers. But the press only ever covers these three:
- The tokamak, i.e. ITER
- Laser fusion, i.e. the US National Ignition Facility
- In German media also Wendelstein 7-X in Greifswald, because we are proud of it. We officially claim that it will never produce energy and serves research only, but secretly we consider it possible that ITER fails and Germany ends up ahead with the stellarator.
Fundamentally, the only processes worth wanting are those that enable p-B11 fusion, because p-B11 runs neutron-free, unlike D-T or He3-based reactions. p-B11 produces three alpha particles whose energy can be converted directly into electricity, with no neutrons at all. D-T has to slow down neutrons to raise steam, and in the process everything becomes radioactive. He3 is better than D-T (at least one alpha), but it still produces neutrons, up to 14 MeV from side reactions. That is ugly, requires shielding, and is not great for spaceflight either. p-B11, by contrast, needs hardly any shielding, just a good vacuum. Perfect for spaceflight.
However, p-B11 needs higher particle energies. That is no problem if you accelerate protons and boron ions through a voltage gradient, but it is hard for thermalized plasmas (roughly 1 billion kelvin instead of 100 million), because thermalization produces an unfavorable velocity distribution in which only a few percent of the plasma is fast enough to fuse. All other particles are ballast that causes radiation losses and makes heating harder. This effectively disqualifies approaches based on heat in magnetically compressed plasma. That is why I think the tokamak is the wrong path, even with He3. It also kills the whole He3-strip-mining-on-the-Moon idea.
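The "only a few percent" claim can be checked against the Maxwell-Boltzmann energy distribution. A minimal sketch, with illustrative round numbers: kT ≈ 86 keV corresponds to roughly 1 billion kelvin, and 600 keV is a typical p-B11 reaction energy.

```python
import math

def fraction_above(e_threshold_kev, kt_kev):
    """Fraction of particles in a Maxwell-Boltzmann energy distribution
    whose kinetic energy exceeds e_threshold_kev, at temperature kt_kev.
    Closed form of the tail: erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x), x = E/kT."""
    x = e_threshold_kev / kt_kev
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

# kT ~ 86 keV is roughly 1 billion kelvin; only a small high-energy
# tail of such a plasma reaches 600 keV.
print(f"{fraction_above(600, 86):.2%}")
```

Even at a billion kelvin, only a fraction of a percent of the particles sits in the fusion-relevant tail; the rest is the ballast mentioned above.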
Other, physically better approaches:
- Dense Plasma Focus, about 8 million USD in funding. A promising approach that exploits the instability of magnetic fields in plasma instead of fighting it. Goal: a 5 MW reactor the size of a shipping container, which would be great for decentralized power. Somewhat fringe, because it is drastically underfunded and because the founder looks like a mad scientist. But he knows what he is doing. http://en.wikipedia.org/wiki/Dense_plasma_focus
- Inertial Electrostatic Confinement: accelerate protons to fusion speed through a voltage difference. Currently unfunded, but there is a nice prototype that would need about 200 million USD to scale up. Not cheap, but 100x cheaper than ITER. Goal: a 100 MW reactor the size of a house, at least one per city district instead of today's 1000 MW utility-scale plants. https://en.wikipedia.org/wiki/Polywell
- Field-Reversed Configuration: tens of millions USD in private funding. Protons are accelerated by magnetic fields, in principle plasma guns aimed at each other. Also roughly container-sized. On FRC in general: http://en.wikipedia.org/wiki/Field-reversed_configuration and in practice e.g. https://en.wikipedia.org/wiki/TAE_Technologies
- General Fusion: tens of millions in private funding. A spectacular concept: steam-driven pistons create shock waves in a rotating sphere of molten lead, with plasma and fusion somewhere in the middle. https://en.wikipedia.org/wiki/General_Fusion
Beyond these there are always companies claiming they can do the tokamak better than ITER. The Chinese have one because ITER takes too long for them. Lockheed Martin shows up in the press again and again. It is not clear why they get attention while other fusion projects do not. Many believe Lockheed is just using the reputation of its Skunk Works to push the stock price.
On the other hand, it is safe to assume that some of ITER's technologies will be obsolete by the time the thing runs, and that one could start a tokamak now that goes online at the same time as ITER at 1/20 of the cost. But the tokamak is and remains expensive utility-scale technology, with a steam turbine and neutron activation of the inner structure.
Heat, and with it thermalization, must be avoided at all costs. So instead of thinking in millions of kelvin, it is better to produce ions with exactly the right energy and speed, e.g. 600 keV boron ions for p-B11 fusion. 600 keV is demanding but not new. Boron has 5 protons, so a fully ionized boron ion needs a 120 kV potential difference. That is less than high-voltage transmission lines and a bit more than a typical X-ray tube. Known technology, in other words.
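The back-of-the-envelope here is just E = q·V: an ion with charge state q falling through a potential difference V gains the kinetic energy q·V. A one-function sketch:

```python
def required_voltage_kv(target_kev, charge_state):
    """Accelerating voltage needed to give an ion of the given charge
    state a kinetic energy of target_kev (E = q * V)."""
    return target_kev / charge_state

# Fully stripped boron (Z = 5) reaches 600 keV from a 120 kV gradient.
print(required_voltage_kv(600, 5))  # 120.0
```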
We just need to finally move away from the tokamak, and then fusion will no longer be forever 30 years in the future.
April 11, 2022
1920: The first Atlantic crossing by plane, a biplane with propellers. Commercial radio broadcasting has just started. Nuclear power is not even thought of.
1970: Nuclear power stations are common. Broadcast color TV is the norm. The Jumbo jet takes commercial air travel to a new level and humans have just landed on the moon.
2020: Still nuclear fission. Color TVs are now flat. The Jumbo Jet is still the largest commercial airplane and humanity is not able to land on the moon but is determined to regain the capability in a few years. There are smart phones and ubiquitous information, though.
That does not look exponential. The jump from 1970 to 2020 should have been even more impressive than the 50 years before. While information technology developed exponentially, almost all else just improved. Admittedly the reference dates are carefully chosen. Technological developments that started in two major wars had fully played out by 1970. But still, a person from 1820 would have found 1870 interesting. A person from 1870 would have been amazed by the state of the art in 1920 (radioactivity and airplanes). A time traveler from 1920 peeking into 1970 would not have believed her eyes (nuclear power, moon landings, ubiquitous electricity and lights). That is exponential. Not just change, but an increasing level of change. Accelerating progress.
Compare that to someone watching 2020 with 1970 eyes. The media and information landscape changed beyond imagination, but other than that the world has not changed a lot. It is bigger. There are other topics in politics, billions of people were lifted from absolute poverty. Things have improved: rockets are now reusable, electrical light is basically free thanks to LEDs, cars need only half as much gas, and the tallest building is twice as high. Still, everyday life looks like an improved 1970 with smartphones.
Humanity should be on Mars and beyond. After 50 years between transatlantic flight and the moon, the next 50 years should have given us more than just a flight to Mars. That would be linear. Exponential growth would mean something like a million people on Mars and the first woman setting foot on Saturn's moon Titan. And while Moore's law still holds, continuing the exponential growth of transistor counts, there are physical limits on the horizon and improvements come at increased costs in terms of price and energy consumption. AI made progress but turned out to be more difficult than thought in 1970. And fusion power is still 50 years away.
There are lots of improvements going on. The tech level is growing. But the rate of growth does not feel exponential. On the other hand, the capabilities of information technology still grow exponentially. The amount of information available to researchers grows exponentially. Counting patents, the number of inventions per year increases which means at least faster than linearly. And while the population growth seems to deviate from the exponential curve, the number of scientists and engineers entering the work force is still somewhat exponential.
The resources put into technology still seem to grow exponentially, but the outcome appears linear. There is a worrying discrepancy between engineering resources, scientists, information, and processing capabilities on one hand and the resulting technological progress on the other hand. It looks like improvements are more difficult to achieve now than before. Every year we are putting in more effort in terms of money, thoughts, and knowledge. We might even get more improvements each year. But the aggregate of all technological improvements, something that we might call a technological level seems to crawl upwards slowly. It does not appear to accelerate. It seems rather steady, more like linear progress. Still improving broadly but not exponentially.
The question: is progress really getting more difficult? Does the difficulty increase exponentially, eating up exponential investment to yield linear progress? Are we at a turning point where progress might even slow down despite increased efforts?
Maybe technological progress has never been exponential. Maybe it is sigmoidal. A sigmoid starts slowly, then accelerates, appearing exponential. But it has an inflection point, a point where the gradient maxes out and starts to fall. In other words, there is a fast period after which progress slows down. Later it might even saturate. That does not mean that technology falls back. On the contrary: the tech level increases. Products still get better, but more slowly, because at a high level it is more difficult to make improvements. There are still improvements. Only they cost more. They need more investment, more research, more computer simulations, more data, more money, more time.
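The curve described above is the logistic function. A small numerical sketch (parameters are arbitrary) showing that its growth rate peaks at the inflection point and falls off symmetrically afterwards:

```python
import math

def logistic(t, cap=1.0, k=1.0, t0=0.0):
    """Sigmoid 'tech level': slow start, near-exponential middle, saturation."""
    return cap / (1.0 + math.exp(-k * (t - t0)))

def growth_rate(t, dt=1e-6):
    """Numerical derivative of the tech level (central difference)."""
    return (logistic(t + dt) - logistic(t - dt)) / (2.0 * dt)

# The gradient maxes out at the inflection point t0 = 0 (value k*cap/4)
# and is lower on both sides of it.
print(growth_rate(-2.0), growth_rate(0.0), growth_rate(2.0))
```

Seen from inside the fast middle stretch, the curve is indistinguishable from an exponential; only past the inflection point does the slowdown become visible.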
That's where we are now. The exponential technological progress we were used to has slowed to linear progress. It looks like the inflection point: the point where things still get better, but technical revolutions become increasingly rare. Maybe the year 2070 will look like a slightly improved 2020. Self-driving cars will be common and fusion power will be only 20 years away. There will be permanent stations on the moon. Several billion more people will have joined the well-off global middle class. And movie recommendations will be as spot-on as music recommendations are today. That would not be so bad. There will be no singularity, though. No runaway AI, no nanites dismantling the Earth. That would be good after all.
January 31, 2022
There is a lot of emphasis lately on the metaverse and virtual worlds. We believe that web3 helps to share stuff between virtual worlds, to tear down the walls between online worlds. Ready Player One shows a unified metaverse, where avatars from different virtual worlds meet. That's nice. Maybe even useful someday. There is also business and money to be made. Actually, a lot of business will be enabled or improved: entertainment, marketing, customer support, and more.
However, the real impact on the economy of the future comes from the automation of business processes. In a web3 world, software, scripts, and AI can make deals. Specifically, business-executing AI will have a large impact. Ultimately, AI will be able to act with tangible effect through the web3 we are currently building. It's the real economy that counts. That's why we should look to Charles Stross' Accelerando rather than Neal Stephenson's Snow Crash.
We have been learning from many examples in different fields that AI is good at finding new ways to do things. When AI optimizes a task, it often finds more efficient ways than experienced humans in the same field. For example, self-learning AI invents unconventional strategies in games. It explores strategies that the best human players would have dismissed, until they were defeated by those strategies. AlphaStar, DeepMind's StarCraft AI, once produced overwhelmingly many Oracles, a Protoss unit. A strategy no professional player tried because it has disadvantages later in the game. Still, the AI beat top human players until they learned to detect and counter the strategy.
In another experiment, self-learning AIs that needed to communicate to solve a task quickly developed a more effective way of communicating. They invented their own language, a protocol more efficient than the ones they were given as a starting point. The language was not easily understood by humans. It was analyzed, but that took time while the AIs moved on. Understanding the AI's ways is a moving target. Ultimately humans will use these optimizations without fully understanding them.
We are now at a point where software-driven business processes emerge. Web3 enables software to post offers, to negotiate, to close deals, and to check fulfilment. Software is already doing significant business at stock exchanges. Software can react more quickly than humans, which matters in times of high-speed trading. Some of these agents are driven by deep learning and genetic algorithms. While there are many details and nuances, trading stocks is basically rather simple. There are sell offers, buy offers, and real-time information. The task is to optimize profit over time. A difficult task, considering a volatile information situation, erratic market behavior, and feedback loops. But the trading model is simple: buying and selling securities.
Now, web3 promises to pull all other business into software's reach. While theoretically everything could be wrapped into a security, not everything in the real world is suited for securitization. Partially because it is irrelevant, like selling my own house. Selling my house is not accessible to software because nobody has made it a security, and nobody will.
Patents and other intellectual property rights are usually not freely tradable because there are too many barriers. IP has fundamentals that are difficult to assess automatically. Trading IP goes beyond comparing market prices. Assessing the value of IP is the domain of human experts. IP deals also need notaries, attorneys, and registers, in other words: legacy real-world mechanisms.
Car manufacturers deal with thousands of suppliers, each with detailed part specifications, negotiated quality expectations, technical standards, and individual considerations. They are far from being securitized, out of reach of trading software. Until now.
Smart contracts can replace government registers like commercial and land registers. If a land register is secured by a blockchain instead of a government or an attorney, then this not only makes trading cheaper by removing the middleman. It also makes trading the goods accessible to software.
Physical properties of car parts can be measured and compared with specifications. A smart contract checks whether negotiated standards are met. It decides to what extent deliveries deviate from expectations. Pricing is fixed and made transparent to all parties as a smart contract. Money flows reproducibly and reliably based on measured and negotiated contract parameters. In the beginning, humans will create these contracts, negotiate their parameters, and set up real-world measuring equipment. Humans will also approve payments. But that is still a lot of work. After some time of waving payments through, smart contracts will be allowed to pay without human approval on small lots. Then, once there have been no major glitches for a while, checking part deliveries and payments will be fully automated.
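A hypothetical sketch of such a parametric settlement rule in Python (the spec names, tolerances, and prices are invented for illustration; an actual smart contract would run on-chain, not as a script):

```python
# Hypothetical settlement rule: measured part parameters are compared
# against negotiated tolerances and the payout is derived reproducibly.
# All spec names, tolerances, and prices below are illustrative.

SPEC = {"diameter_mm": (9.95, 10.05), "hardness_hrc": (58, 62)}
PRICE_PER_PART = 4.20  # negotiated price for an in-spec part

def settle(measurements):
    """Payout for a delivery batch: only parts within all tolerances are paid."""
    payout = 0.0
    for part in measurements:
        if all(lo <= part[key] <= hi for key, (lo, hi) in SPEC.items()):
            payout += PRICE_PER_PART
    return round(payout, 2)

batch = [
    {"diameter_mm": 10.01, "hardness_hrc": 60},  # in spec -> paid
    {"diameter_mm": 10.20, "hardness_hrc": 60},  # diameter out of tolerance
]
print(settle(batch))  # 4.2
```

The point of the sketch is the determinism: given the same measurements and the same negotiated parameters, every party computes the same payout, which is what removes the need for per-delivery human approval.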
Still, finding and negotiating thousands of parts is a lot of work waiting to be automated. And it will be automated. Suppliers will offer their parts through smart contracts that manage specifications and tolerances. Smart contracts will also offer variations, with logic, scripting, or AI to estimate the production cost of variants. That makes sifting through all these variants, specs, and tolerances for countless parts easier; humans just approve selections, confirm deals, or intervene when the AI does stupid things. And again, after some time without major glitches, the industry will let software make the deals unsupervised, demanding only after-the-fact reporting.
There is one more step required to completely automate the industry: planning and building factories. This will take more time. But individual manufacturing through 3D printing accelerates the process. Tesla already knows how to build gigafactories for certain products on demand. There is now so much institutional know-how that these facilities can be built in months instead of years. Factory projects are increasingly data driven and all this data will finally be used to train AI.
Software simulation of production processes also helps to self-train AI. A game of building factories, negotiating parts and resources to win market share against a competitor is not fundamentally different from managing resources and combat in StarCraft. AI will optimize itself with simulated competitions. Then AI will plan and build factories. As always, after some time without major glitches, some players will let AI react to market demand automatically. Even some goof-ups can be tolerated. Human decision making when estimating future demand, planning products, and executing business plans is far from perfect. If the financial impact of AI mistakes is on the same level as that of human mistakes, the AI wins. Eventually, the AI will win. And the first humans to adopt this way of doing business will become rich.
Then AI optimizes the business. The AIs will optimize communication by inventing new protocols. Negotiation protocols that are more efficient than the ones inherited from humans. There are many ways to optimize in a software-driven world. Maybe they dispense with checking individual deliveries. Maybe they don't put up tenders anymore. Suppliers might deliver parts without prior negotiations based on information from crypto oracles. After all, the financial output of the entire operation is key. They might omit payments for supplies and just share the revenue. A smart contract takes key performance indicators and generates a pay-out scheme for all involved entities in a transparent fashion. There are hard short-term facts like revenue, time-delayed measurements like product reliability, and long-term soft information sources like polls about buyers' remorse. All this data can be used to optimize the business. At some point, there is data available from millions of products, markets, and processes over many product cycles. This data is then employed by the executing AI to find new ways.
Would humans base a car business on revenue sharing and common long-term benefits? Probably not. Human experts would reject this way of doing business for many reasons. Humans are good at coming up with reasons not to change things. Until they are outperformed.
Humans are also good at inventing possible ways for improvements based on their experience. We can imagine countless optimizations and process changes. Science fiction authors are especially good at that. But we largely fail to predict developments beyond our experience. That's where AI excels. It finds categorizations that escape us. It finds optimizations we won't think of.
AI will change the way business is done so much that humans will not understand what's going on. At first, we will. We will be surprised by AI's inventions. We will marvel at the ingenuity and frivolity of its ways. For as long as we can analyze and understand what is happening. Later we will fail to understand and just embrace the benefits.
This is what Charles Stross calls Economics 2.0. A business model more efficient than ours. Let's call it Economy3 to be in line with web3. Economy3 is made of economic processes that outperform the ones we know. It consists of interactions and rules we do not understand, that can only be executed by AI. Not because of the required speed of decision making, but because the rules will not be known. They will not be codified. They are decentralized in neural network weights or whatever AI is made of in the future. The new rules will not be programmed into AI. Rather, AI will develop the rules because they work better than the inherited ones.
This sounds as if we humans have no say in the process. But we do. The key phrase is "work better". We define what "better" means. If "better" means more profit, then average people might be screwed in a way described in Accelerando. In this future the so-called Vile Offspring, basically untamed rogue AI, dominate the inner solar system and even dismantle the Earth to put its resources to "better" use. Earth's resources meaning not oil and ore, but the iron of the core, hence the dismantling.
A development that ends in the dissolution of our planet does not sound "better". And that's the key point. We will have to define the term "better" so that it serves people. We need more performance indicators than profit. We need performance indicators that represent the wellbeing of people and the environment for that matter. AI optimizes along fitness functions and training data. AI designers define these fitness functions and select the training data. We decide how AI optimizes. We have a say. A society that really tries will have a deciding influence. Realistically the result will be somewhere between utopia and the planet's pulverization into smart matter. We must make sure that besides profit and wealth for some people, there is also well-being for as many people as possible. Maybe the fitness function just needs as much Gross National Happiness as Gross Domestic Product.
Coming back to web3: this development path is almost inevitable because it is possible. The path is obvious. There are no unknowns, no new technologies to be developed, no new principles to be discovered. The paradigms are already in place. The rest is engineering.
There is one more thing: the smart-contractification of the real world. Paper contracts will be replaced by smart contracts. Business entities will learn that blockchains tell the truth. Companies will sue each other to honor agreements that are codified by smart contracts. Finally, courts will begin to refer to the blockchain truth in their decisions. Then, the real-world is smart-contractified. It will take some time to get there. But the path is clear.
Once the real economy (the one that builds smartphones, not just non-fungible images) takes web3 seriously, we are bound to end up with Economy3. An automated future in which it is not necessary to work hard to pay the rent. That's where we want to go.
We are currently building the tools: web3 and AI. Then we'll get the real world to use the tools while making sure that the beast we're unleashing does not deviate too far from a good path. It is our responsibility to educate our societies about the risks and empower them to set the rules.
We must shape the future economy, not just virtual worlds. It's the real world that matters and the real economy. In this sense the mental model to guide our path is better characterized by Charles Stross' Accelerando than Neal Stephenson's Snow Crash. Read Accelerando, enjoy it, fear it, and learn from it.
A virtual discussion.
September 29, 2021
September 21, 2021
Do you remember the geek code in the 90s? It encoded how geeky you were and your thoughts about geeky topics.
I am a physicist and I want a ScienceCode that tells in short what I think about various theories in science, physics, and cosmology.
My ScienceCode is:
bb+/un ci0 eu+ dm+/sn de- mv/rp sm+ ss- st- lqg+ gr+ sr++ gut- fp++/gf sh0
What is your ScienceCode?
We are starting with a format that is a bit more regular than the original Geek Code. Roughly:
[ theory rating [ "/" modifier ] ]+
rating: "--" | "-" | "0" | "+" | "++"
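The grammar above is regular enough to parse with a short script. A minimal sketch (assuming whitespace-separated tokens and word-character modifiers; the example string is an excerpt, not a full code):

```python
import re

# One token: theory name, rating (--, -, 0, +, ++), optional /modifier.
TOKEN = re.compile(r"^([a-z]+)(--|\+\+|[-0+])(?:/(\w+))?$")

def parse_science_code(code):
    """Split a ScienceCode string into (theory, rating, modifier) tuples."""
    entries = []
    for token in code.split():
        match = TOKEN.match(token)
        if not match:
            raise ValueError(f"malformed token: {token!r}")
        entries.append((match.group(1), match.group(2), match.group(3)))
    return entries

print(parse_science_code("bb+/un ci0 sr++"))
# [('bb', '+', 'un'), ('ci', '0', None), ('sr', '++', None)]
```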
bb--: Never Happened. There was nothing resembling a Big Bang like starting point.
bb-: No Big Bang, maybe close, but no singularity.
bb0: I am not sure if there was a Big Bang. Things could be completely different. Maybe it only appears like there was a start.
bb+: I think there was one. Maybe more, but our universe began with a singularity.
bb++: There was a Big Bang almost 14 billion years ago starting with a singularity. After that the laws of nature unfolded.
bb++/1: There was a Big Bang and only one. No cycles, no big crunch, no bounce, no conformal transformation.
bb--/ss: No Big Bang, because the universe is in a steady state.
bb-/cc: Conformal cyclic cosmology has a state that almost looks like a singularity for the new phase.
bb+/un: Universe from nothing. Happens "all the time" if you wait "long enough" in a non-universe without time.
ci--: There was no initial inflation. Maybe even no Big Bang. All made up. Just a theory constructed after the fact to fit the data, without independent proof.
ci-: Probably no initial inflation. It is just not necessary in order to generate the measured homogeneity in the microwave background. Also, the large-scale homogeneity of the current structure of the universe is in question.
ci0: I don't know if there was an initial inflationary phase. There could be other explanations for the structure of the microwave background. I think we're still figuring that out.
ci+: There probably was an initial inflationary phase resulting in the homogeneity we see in the microwave background and the current large-scale structure.
ci++: For sure there was inflation after the Big Bang. That's what explains the observational data.
eu--: The universe is not expanding. It is infinite, static. What would it expand into? Also: science does not agree on the expansion rate.
eu-: The universe is not expanding. Maybe tired light, maybe another effect that only shows on very large scales.
eu0: The jury is still out on continued expansion. It might just look like expansion. Sure, far galaxies are increasingly red. But that's so far away that even a small yet unknown effect might accumulate to let it look like redshift. We don't know enough.
eu+: Looks like the universe is expanding. Also, the expansion is faster for galaxies farther out.
eu++: The universe expands at an accelerating rate. That's what the data show. There is a Nobel Prize for that (a universe can come from nothing, but not a Nobel Prize).
dm--: Not necessary to explain the observations. Conclusions are misled by observation bias and by the desire to find "classical" answers. Sure, there are measurements, but they will be disproven or explained one at a time without dark matter.
dm-: Something, just not matter. No particle, no heaps of frozen energy. There is an effect but it does not result from something in our universe, maybe not even from inside our universe. Should not be called dark gravity or not even "dark", just unexplained large-scale gravitic effects.
dm0: Could be anything: modifications to gravitational theories, yet unmeasurable discrepancies in electromagnetic forces, shadow gravity from other universes. Or it might be a yet undiscovered, not even theorized particle. Impossible to tell as long as the standard model is not the last word.
dm+: Dark matter explains all the data better than any other theory, so I'd say 50:50 that there is stuff in our universe.
dm++: There is dark matter, because it explains so many different observations. I also have a preferred theory on what it is made of. We just need more time to prove it.
dm++/ax: with axions
dm++/w: with WIMPs
dm++/bh: with black holes, tiny ones, and/or primordial or big ones.
dm++/sn: with sterile neutrinos
de--: Nope, no dark energy, because no explanation is required. Even if one were, dark energy is a big gun. Any measurement can be explained with the properties "dominating the universe", "ubiquitous", "unknown".
de-: Some measurements look like there is a need for a driving force that might even increase. Unfortunately, it has been called dark energy which is an even worse term than dark matter. There are no doubts about the measurements, but about their interpretation and about some assumptions that went into the calculations.
de0: I am not sure if there is a need for dark energy. It's not even a theory yet. "Fills the universe uniformly" sounds like a modern aether, the medium where EM waves used to propagate. It might evaporate like its predecessor.
de+: Something causes the universe to expand, and it is not the momentum from the Big Bang.
de++: A field that uniformly fills the universe. Universe grows, so its power grows, hence accelerated expansion. As simple as that. You are the 4%.
Standard Model of Particle Physics
sm--: The Standard Model is wrong. The interpretation of probability amplitudes is unclear. The theory is too complex, has too many parameters, is not unified with gravity, not even close. Also, there are experiments with statistically significant measurements that clearly show something is missing. It barely covers 4% of the universe. We need a new theory.
sm-: The Standard Model works for everyday purposes just like Newton without Einstein: it usually delivers the right numbers, but that does not make it the truth. The truth is very different and not yet known. Maybe strings or some supersymmetry, maybe a holographic theory or something we do not yet grasp.
sm0: The current Standard Model is just the current model. It is workable, but there have been many "standard" models. It is a set of equations that mostly return the numbers we find in experiments. The equations may be almost right or totally off. I don't care as long as they work. Future generations may develop totally different models.
sm+: The Standard Model works very well. There might be small corrections necessary. But for all practical purposes now and in the foreseeable future the Standard Model (and General Relativity) is all we need. Even if there are measurements significantly above reasonable doubt which are not covered by the Standard Model, they will never lead to new physics that changes anything in our lives.
sm++: Nature is not a collection of formulas. It just is. If we really want to cover nature by mathematics, then the formulas we call Standard Model are the best we can get. We won't come closer to the truth. The symbols in the Standard Model are good models for the particles and forces at play.
ss--: Definitely not Supersymmetry. Too much new stuff for too little gain. None of the extra particles materialized. Complete failure. We need something else.
ss-: Hardly Supersymmetry. Claiming that 50 % of all required particles have already been found sounds like a win, but it does not solve the problem of the missing supersymmetric partners. Unlikely, but not impossible, though.
ss0: May be Supersymmetry. Difficult to tell without any experimental confirmation.
ss+: It's probably Supersymmetry. But it is difficult to choose which one. Looks kind of arbitrary.
ss++: Supersymmetry it is. The future will tell which version exactly. Everything will fall into place.
st--: Vibrating non-entities is nonsense. You are not even allowed to ask what vibrates. Also, too many extra dimensions necessary. Nice try, but that is not how nature works.
st-: Probably not. String Theory is not even complete. Too many open questions and some answers are problematic.
st0: Could be true. Elegant approach, but too incomplete (yet?). Reproducing the observed spectrum of elementary particles needs arbitrary parameters. Unclear if they are emergent or if it is just another theory with a parameter set that is larger than desired.
st+: String Theory unifies the forces, produces dark matter and even has inflation. Needs more research, but the direction is clear.
st++: All particles are just oscillation modes of strings. Period. Problem solved.
st++/26: Bosonic string theory, 26-dimensions
st++/10: Superstring theory
Loop Quantum Gravity
lqg--: Spacetime is continuous on small scales. It is neither quantized, nor foamy.
lqg-: Kind of pointless. A theory that has no measurable effect is useless. It is not even a scientific theory, because that needs a model and fitting data.
lqg0: Undecided. Could be true or not. I don't care because it will never affect the real world.
lqg+: Space is quantized and spacetime fluctuates on the Planck length. There is just no way to prove it or use it.
lqg++: Loop Quantum Gravity merges quantum mechanics and general relativity. Experimentalists will find ways to show it.
Multiverse
mv--: There is only one universe: ours.
mv-: Practically there is no multiverse. Our universe seems to be much larger than our observable universe. We cannot even decide if our universe is infinite or just very large. Anything beyond our light cone should be beyond consideration.
mv0: Impossible to tell. Our universe may be part of a multiverse. But we will never know.
mv+: There are many universes. Very many, but the number would be countable if anyone could count. They might induce each other. They might follow after each other sequentially. Black holes might spawn new universes. Our universe might be the inside of a black hole. Or maybe universes are branes and a Big Bang happens when 2 universes collide in the higher dimensional multiverse.
mv++: There is a (practically or literally) infinite number of universes popping in and out of existence in the multiverse. Ours is just one of them that allows for life to emerge.
mv++/rp: Many universes with any combination of parameters. Some bear life, many do not, but who can tell what "some" and "many" mean with respect to an infinite number.
General Relativity
gr--: General Relativity only seems to work. It is made to fit the data. Actually, the universe is driven by other forces. Gravity is so weak compared to the other forces that even unmeasurable discrepancies or imbalances in the other three would completely dominate the universe. This is much more probable.
gr-: General Relativity is an approximation. Galileo, Newton, Einstein, we are getting closer to the truth. But it is not the end of the road.
gr0: Just one theory. A good one, but it could be the wrong model. Imagine there are 2 models. Both return the same numbers in the value ranges we are used to. But their interpretations are completely different. Both cannot be true. Yet we know only one such theory. We cannot compare. And because General Relativity works so well in practice, we are convinced that there is a 4-dimensional spacetime that is warped by matter and in turn creates geodesics for said matter. Even though General Relativity works, this may be the wrong picture. Just saying.
gr+: General Relativity works really well. But it is not compatible with Quantum Mechanics as we know it. There will be a theory that harmonizes probabilities of Quantum Mechanics with determinism of General Relativity.
gr++: General Relativity is all we need to understand the interaction of matter, energy, and spacetime.
Special Relativity
sr--: FTL is possible. Tachyons are real. Special Relativity is an oppressive pseudo-theory.
sr-: Quantum entanglement might be usable for communication in the future. An Alcubierre-like drive might move spacetime with fewer requirements than currently theorized and without being suppressed (I am talking to you, firewall).
sr0: There may be more elaborate theories in the future. Maybe a grand unified theory has loopholes for paths beyond Special Relativity.
sr+: Special Relativity doubtless works. Though we might find limited ways to work around it. Maybe causality is not "that badly violated" for purely inactive observers.
sr++: The fact is: c is the limit. Not just for speed, but also for effects. No FTL travel, no time machines. Any theoretically possible path that seems to circumvent Special Relativity is practically forbidden by infinitesimal probabilities, by firewalls, by event horizons. No chance.
Grand Unified Theory, Existence of
gut--: The three fundamental forces of the Standard Model plus General Relativity are all there is. Gravity is an effect of warped spacetime. It is an effective force, but not a fundamental one. There is no such thing as a spin 2 graviton (for that matter :-). No particle-mediated force, no gauge theory, nothing to unify. Period.
gut-: A unified theory does not have to be a gauge theory, not even a field theory. Maybe the field theory thing is wrong. Maybe the standard model is wrong. Maybe String Theory is right. Maybe Richard Feynman made computations easy and erected a smoke screen at the same time. It needs a new way of thinking. For sure there is a theory that harmonizes quantum and relativity, because they do co-exist in this universe. So, there must be a way to unify them. But it is not the GUT you expect.
gut0: Not sure if we will ever get there. The ways of the universe might be above us mere humans, even if we let our best specimens work on it.
gut+: It takes longer than physicists thought, but sometime in the future the fundamental forces will be unified by a new theory, a new way to see the universe.
gut++: There is a unifying theory, and its name is X.
Fermi Paradox
fp--: Not a paradox. There are no aliens.
fp-: Irrelevant until the aliens land.
fp0: Many intelligent people talk about it, so it seems to be a thing. I don't know if we'll ever know and if we should care. Fun exercise, though.
fp+: Interesting question. I wonder what the solution is. We'll find out.
fp++: It looks like a paradox, but that's only because of our limited knowledge. It has a solution, and the solution has serious implications for humanity.
fp++/gf: The solution to the paradox is a Great Filter in general.
fp++/re: No aliens because of Rare Earth.
fp++/ri: No aliens because detectable intelligence is rare.
fp++/zoo: We live in a galactic zoo.
Simulation Hypothesis
sh--: This is the real world.
sh-: There is no indication. I don't believe that we are living in a simulation. That would be weird.
sh0: Undecided and not interested. If this is a simulation, then it's a good one. It definitely looks like the real world, so I treat it like that. Knowing this would not change anything.
sh+: That could well be. The probability argument is very much in favor of this being a simulation. Also, the delayed-choice quantum eraser is a hint that the simulation fixes things after the fact if any simulated consciousness looks at the result. This is, in turn, a hint that consciousness is not emergent, but really special, because it gets extra treatment by the simulator. Which, again in turn, is an argument for a simulation. Furthermore, quantum entanglement works exactly as it would if the entangled particles were not really separated in space because they are all simulated. There is no spatial separation for the simulator, hence it is easy to update both states at once.
sh++: This is a simulation.
sh++/n: We are at the n-th level.