Placeholder, SPQS Universe

The Science of Placeholder, Pt.7

We’ve finally arrived at my favourite subject: computers.  Specifically, the quantum and optical computers used in the SPQS Universe.  Amusingly enough, I designed my first optical/quantum computer hybrid when I was 14 (it was the summer between grades 9 and 10, the same summer I postulated my first TOE, a hyperspace theory of quantum gravity—yeah yeah, it was hopelessly derivative and full of significant holes, but I was only a teenager and had only just begun studying quantum mechanics and string theory.  Hyperspace seemed like a legitimate option until I realized it was the theoretical physics equivalent of the Aether).  At the time I didn’t really believe a purely quantum computer was possible, and I didn’t have much hope for any significant progress with nanocomputers either—even though around the same time I was a member of the Nanocomputer Dream Team.

Anyway, to the point.  In designing an optical/quantum computer hybrid, I sought to resolve certain issues with early conceptions of quantum computers.  Definitions of quantum data.  Quantum error correction.  Quantum logic gates.  All the things that made the dream of a quantum computer seem the stuff of technobabble.  Now all of those issues have been resolved and my hybrid design is pretty much useless—quantum computer science has come a long way since 1996/97, and no longer needs to rely on interfaces for every different system within the quantum computer.  And most importantly, quantum processor cores are now a realistic possibility (I’ve heard there are some working prototypes of single-stage quantum logic gates, too), so the entire computing model can fully harness the principles of quantum mechanics, from processing to memory to storage.

Optical computers are really something else, though.  They may not be quantum, but you can design them with quantum optics and metamaterials in mind (which is implied in the SPQS Universe).  And a lot of progress has been made in the field of optronics since ’97—we can certainly expect to have end-user optical computers before end-user quantum interfaces, decades in advance really.  Because the funny thing about quantum computers is that everything we think we know about computer science has to be thrown out.  Quantum computers have their own logic, their own language, their own unique mechanics and conception of information.  Digital information isn’t equivalent to quantum information, and won’t be transferable in all cases.  It’s a whole new world in computing, which is scary, and one of the main reasons the field is being held back.  Optical computer science, though… the hardware may be different, better, faster, more reliable, but ultimately, we can enforce the same electronic computing model onto the platform at no real (immediate) cost to us.

Of course, optronics are capable of much more sophisticated computing models, but our understanding of binary digital information is essential to maintaining functional computing as we migrate to better hardware concepts.  Later, once optical computing hardware is ‘perfected’, we can start playing with more sophisticated data models and error correction designed just for them.  I rather like a base-six data model (i.e., instead of using binary code for the true computer language, you use senary), because quantum computers run optimally with a senary base code (although right now all quantum computer science is being done with binary in mind, because error correction for binary is the easiest to accomplish, and quantum states with only two options are the easiest to measure).  And I think it would be useful to start migrating data to a datatype that is equally understood by quantum and optical computers: as I already mentioned, binary information is not equivalent for quantum and electronic computers, but senary information would be equivalent across all platforms that used it as the basis of information, for several really good reasons.
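If the idea of a senary data model sounds abstract, it’s easy to play with in a few lines of code.  Here’s a toy sketch (all function names are my own, purely for illustration) showing the same integer rendered in binary and in base six—each senary digit carries log₂6 ≈ 2.585 bits, which is the whole appeal of the wider base:

```python
# Toy illustration of a senary (base-6) data model: each base-6 digit
# carries log2(6) ~= 2.585 bits, versus exactly 1 bit per binary digit.
import math

SENARY_DIGITS = "012345"

def to_senary(n: int) -> str:
    """Render a non-negative integer in base 6."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 6)
        digits.append(SENARY_DIGITS[r])
    return "".join(reversed(digits))

def from_senary(s: str) -> int:
    """Parse a base-6 string back to an integer (int() accepts any base)."""
    return int(s, 6)

value = 1996
print(to_senary(value))   # the same value in base 6: "13124"
print(bin(value))         # and in binary, for comparison
print(math.log2(6))       # information per senary digit, in bits
assert from_senary(to_senary(value)) == value
```

Note that the round-trip is lossless, which is the minimal requirement for the “equivalent across all platforms” claim above to even get off the ground.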

So you see, optical and quantum computers can work well side-by-side, better than electronic and quantum computers can.  But it’s important to understand why you would need them to work side-by-side.  Why use optronics at all when you have ‘quantronics’?  Well, for one thing, even if you have a purely quantum computer where all data is processed and stored internally through manipulations of quantum states, you still need to build an interface that can interact with it so that a human can make any use of the data.  A quantum interface has to be sufficiently advanced to handle I/O of quantum information without itself being quantum.  An electronic quantum interface would suffice for most personal devices, but then, a quantum interface is serious overkill for your average end-user anyway.  Scientists need quantum computers, military personnel and field operatives need quantum computers, but say for example there was a quantum-based iPod.  That would be so over-the-top wasteful that you really have to laugh at it.  So for the people who legitimately need quantum computers, a quantum interface up to the tasks they have in mind is necessary.  The best candidate is an optical-based quantum interface, so that it can handle the amount of data a creative scientist can fathom.

Imagine being able to take a sample of human tissue and decode the DNA in under ten seconds, while simultaneously networking to other quantum computers and searching for a positive match through all international databases.  Imagine being able to reconstruct a complete interactive 3D model of any organism from its DNA sequence alone.  Imagine being able to take the known properties of any given planetary system and automatically generate a working gravitational model, including estimates regarding non-observable bodies.  Imagine being able to solve an n-body problem for a quantum system without generalization in as little time as it takes to read this sentence, where n is practically limitless.  Imagine being able to accurately estimate the probabilities of a given set of actions based on an individual’s known psychological profile, and then predict all their behaviour with 99% accuracy up to six months in advance.  These are just a few examples.  Quantum computers can do all these things, if they have an optical-based interface.  But if you restrict a quantum computer with an electronic interface no better than our current mobile technology, then the quantum computer will be limited to handling data on the same scale.

Of course, there is a work-around.  You can set up a program similar in concept to WolframAlpha.  A quantum core server does all the actual calculations, all the web searches, all the correlation of data.  All you put in is one simple equation, and all you get back is the result.  You could then do everything I listed in the previous paragraph, with an electronic-based quantum interface, with just one downside.  You’d only ever have access to the front-end interface; you’d never get to examine the raw data, and you’d have to rely on the accuracy of the back-end without ever being able to directly examine its work or process.  For most people, that’s just fine.  I use WolframAlpha for just about everything now, instead of Mathematica, because it replaced 90% of what I used Mathematica for.  But sometimes, you just need more.  And I’m just a hobby physicist at best, a writer of sci-fi; real scientists, universities, space agencies, law enforcement, militaries and intelligence agencies need something a little more robust than WolframAlpha can offer (although a streamlined tool that just scans someone’s DNA and returns a positive ID would be more than a little useful for law enforcement; add a back-end that reconstructs a 3D model of unknowns and is integrated with facial recognition software and you have the perfect system for catching criminals from just basic forensic evidence).

Well, that’s a little more background than I at first intended.  Suffice it to say that quantum computers will make excellent tools, and we have yet to even fully grasp what they are capable of.  But since they require interfaces to use, we should make it our goal to perfect a computing platform that can handle quantum data.  Currently, the most feasible computing platform that can do that is optical-based.  And yeah, most of us will never need to use a quantum computer once we have optronic devices, but that may change.  Like I said, we have yet to fully grasp what a quantum computer can do, so who knows what use we might find for them?  In Placeholder, I really only scratch the surface—but I think it gives a pretty good idea of where humanity can go with them.


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

The Basics of Optronics:

Optronics can and will entirely replace electronics; and just as not everything electronic is strictly a computer, not every optronic system will be an optical computer either.  In the SPQS Universe, electronic devices are considered ‘pre-war antiques’; as an average citizen of the SPQS, for example, if you were to pop open your iPod or radio equivalent, you wouldn’t see a circuit board and wires, you’d see an optical wafer and fibre-optic cables.  A particularly sophisticated (by our current standards) optronic system would be designed with metamaterials and quantum optics in mind, so that the fibre-optic cables and optical wafer wouldn’t have to be coated (because metamaterials can allow certain lightwaves to escape while trapping others, thus the data light remains unseen in the optical paths and cables, while additional indicator light escapes specifically to be seen), and an experienced optronics technician would know by the patterns of light whether something was wrong with the device.  You wouldn’t need to add sensors or indicator lights to the wafer, because the optical paths on the wafer would show you directly whether light pulses were passing through the correct channels and being processed correctly.  That may require slightly more background training, but ultimately, it makes the job of troubleshooting an optronic system much easier than troubleshooting an electronic one.

But all that really isn’t enough to justify moving everything to optronic systems when it works perfectly well as an electronic device.  Sure, some manufacturers might say, “well, it’s too expensive to keep producing electronic equipment when most of our assembly line has moved to optronic systems.”  But certain manufacturers might never see the benefit of optronics over electronics unless they’re given something more compelling.  After all, it’s the big corporations, the ones that see no need to change a product that keeps selling, who hold back technological progress—just like the electric car, which could have gone commercial in the 70s but is still barely commercially viable even now, thanks to the combined effort of petroleum giants and car companies that put more money into paying off lawsuits than into manufacturing.

One of the most compelling reasons to switch to optronic systems is their energy efficiency.  When self-contained, an optronic wafer that does the same job as an electronic circuit board uses less energy than the equivalent electronic system.  For commercial end-user mobile products, this is a big boon.  The same batteries we’re using now could last roughly twice as long between charges, so the ten-hour battery in your latest iPhone or MacBook becomes a twenty-hour battery or better.  This, along with the potential for a boost to processing power for gaming and a variety of other mobile tasks which could always be improved, serves to increase the perceived product value over the competition, and thus encourages consumers to choose optronic systems over electronic.  The companies that switch to optronic systems first will definitely gain control over the markets.

Also, optronic home computers will use much less energy than electronic computers do, so consumers will notice a reduction in their domestic energy bills over the course of the year.  And for offices with 50 or more computers running basically around the clock, the drop in energy use will be so drastic that they’ll be able to move large amounts of capital to other purposes after only the first year.  For one thing, in lean years they won’t have to lay off as many employees, because their basic overhead is far less restrictive.

Optronic systems also offer some interesting possibilities for supercomputers and server rooms.  You already have improved processing power at reduced energy costs, and with the careful choice of the right metamaterials, you can reduce excess heat to negligible levels, and save even more of the energy normally spent on keeping supercomputers and server rooms cool.  With less excess heat you can also pack more processing cores in the same space, so the same supercomputer tower can have two or three times as many cores.

The possibilities are endless.  The optronics revolution will be the next computing revolution, and it is entirely attainable within the next twenty years (which is something you can’t safely say for end-user quantum and nano-computers).  The only catch is whether or not the computer giants can be convinced to make the move, because optronic technology requires nano-scale engineering and robotic assembly lines.  You won’t see many garage-based optronic computer companies coming out of the woodwork until domestic robots are the norm, and sophisticated enough to replace a robotic nanoscale assembly line like Intel’s.  But in the end, that’s another compelling reason for the computing giants to jump on optronics now: they won’t have any competition for at least ten years, and thus they can control the optronics market.

And yeah, okay, metamaterials are a bit on the expensive side right now.  But the most research has gone into electromagnetic metamaterials for direct photonic manipulation, so their use in optical computers is ultimately their most natural purpose.  If metamaterials are going to be used for anything in the near future, the first choice would obviously be something that has mass-market appeal.  And other technology that relied on the same basic principles could piggyback on the success inherent in optical computing, creating a general environment suitable for widespread improvement of technology along optronic lines.  The easiest way to get the cost of metamaterials down is therefore their immediate use in optronics.  The same assembly lines that produce optronic components could easily be converted to producing other metamaterials, since they are all made by the same process of nanoscale laser-etching and layering.  Because of the mass appeal inherent in optronics, thanks to its increased processing power and reduced energy requirements, we could create a metamaterial economy almost overnight; and that metamaterial economy could go on to give us the raw elements we need to start producing some seriously impressive gadgets at no more upfront expense than our current tech industry.  And most importantly, we can prepare our tech industry for adaptable (and most probably bipedal humanoid) multi-purpose robotics and quantum computers.

It may be slightly expensive at first, but those who do take the risk will profit more than they can even imagine.  Because optical computers and the metamaterials required to produce efficient models of them are the keys to all our future technology currently in the works.


The Basics of Quantum Computers:

Granted, quantum computers are more speculative than optical, but over the past few years great strides have been made, and the models have been demonstrated to be functional.  What we don’t have yet is a full quantum core, but thanks to the non-stop efforts of a few quantum computer scientists and engineers, we at least know that the effort isn’t wasted (which is a lot more than can be said for nanocomputers).  Actually, in a way you could almost say that quantum and nano-computers are related fields, only quantum computers have been demonstrated to work and nano-computers have not.  But that’s because a nano-scale molecular computer model ignores quantum phenomena, whereas a quantum-scale subatomic computer does not.  If nanocomputer aficionados decided to incorporate quantum chemistry into their molecular physics models, they might make a little more progress.  If nothing else, it’s an angle worth exploring.

In the meantime, while nanocomputer scientists continue to fail, quantum computer science has made some amazing strides.  One of the main problems with the field in the 90s was quantum error correction.  As many of you are surely aware, quantum states change when observed, and if you know one quantum state of a particle, you can’t know its others; or more precisely, precise measurement of one quantum state reduces the precision available for measuring its other quantum states at a given moment of (Planck) time, and the measurement itself introduces a change to the whole quantum system, so you can’t just go back, measure the same particle again, and get the same result for the same quantum state, or recover the precise values of the other quantum states at the time of the original measurement.  A looped logic gate would demonstrate that quite aptly; it would be as if a whole new particle were passing through it each time.  Quantum error correction introduced a model that allows us to overcome that problem, though.  The logic gate knows that in measuring it is introducing a change, so it is designed with a second, error-correcting gate that resets the measured particle to the state it was measured at.  This is difficult and complicated even for a binary data system, where the logic gate only tests to see if the particle is charged or neutral.  Imagine testing for the flavour or colour of a quark in a senary data system.  The math gets pretty wild, let me tell you.  But at least we know that with binary qubits, quantum error correction works, and the error correction processing is now layered in such a way that measurements are accurate effectively all the time.  This is a huge step forward in quantum information theory, by the way.  It’s almost unprecedented.
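To give a feel for how an error-correcting layer works, here’s a classical caricature of the simplest quantum code, the three-qubit bit-flip code: a logical bit is spread across three physical bits, and parity checks (the “syndrome”) locate a single flipped bit without ever reading the logical value directly.  This is only a sketch of the binary case discussed above—real quantum error correction measures syndromes on entangled qubits, which this classical toy can’t capture:

```python
# Classical caricature of the three-qubit bit-flip code.  Parity
# (syndrome) checks locate a single error without revealing the
# logical bit itself -- the key trick real quantum codes rely on.

def encode(logical_bit: int) -> list[int]:
    return [logical_bit] * 3

def syndrome(codeword: list[int]) -> tuple[int, int]:
    # Two parity checks between neighbouring bits; each single-bit
    # error yields a unique syndrome, and (0, 0) means "no error".
    a, b, c = codeword
    return (a ^ b, b ^ c)

def correct(codeword: list[int]) -> list[int]:
    fixed = list(codeword)
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    if flip_at is not None:
        fixed[flip_at] ^= 1
    return fixed

def decode(codeword: list[int]) -> int:
    return 1 if sum(codeword) >= 2 else 0

# Every single bit-flip on every logical value is recovered:
for logical in (0, 1):
    for position in range(3):
        noisy = encode(logical)
        noisy[position] ^= 1   # inject one error
        assert decode(correct(noisy)) == logical
print("all single-bit errors corrected")
```

The point of the syndrome table is exactly the point made above: the correcting gate learns *where* the disturbance happened, not *what* the data was.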

One of the other main problems was the very definition of quantum information.  But the model of the qubit has resolved that quite successfully.  A qubit is one unit of quantum information, just as a bit is the smallest unit of information on a traditional, classical computer.  The qubit model is “equivalent to a two-dimensional vector space over the complex numbers” (Wikipedia, “Qubit”), which is probably a meaningless statement to most people, but to mathematicians is quite evocative.  Actually, since the preface to the Wikipedia article is so good, I feel compelled to quote it here:


In quantum computing, a qubit (pronounced /ˈkjuːbɪt/) or quantum bit is a unit of quantum information—the quantum analogue of the classical bit—with additional dimensions associated to the quantum properties of a physical atom. The physical construction of a quantum computer is itself an arrangement of entangled atoms, and the qubit represents both the state memory and the state of entanglement in a system. A quantum computation is performed by initializing a system of qubits with a quantum algorithm—“initialization” here referring to some advanced physical process that puts the system into an entangled state.

The qubit is described by a quantum state in a two-state quantum-mechanical system, which is formally equivalent to a two-dimensional vector space over the complex numbers. One example of a two-state quantum system is the polarization of a single photon: here the two states are vertical polarisation and horizontal polarisation. In a classical system, a bit would have to be in one state or the other, but quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.

— from Wikipedia, “Qubit”

Fascinating stuff.  I urge you to at least read the complete article on Qubits.
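That “two-dimensional vector space over the complex numbers” line is concrete enough to compute with.  Here’s a minimal NumPy sketch (my own illustrative names, not from the article) of a qubit state and the Born rule that turns amplitudes into measurement probabilities:

```python
# A qubit as a unit vector in C^2, exactly as the quoted passage says.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>, e.g. horizontal polarisation
ket1 = np.array([0, 1], dtype=complex)   # |1>, e.g. vertical polarisation

# An equal superposition of both basis states -- the thing a classical
# bit cannot do:
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: measurement probabilities are squared amplitude magnitudes.
p0 = abs(np.vdot(ket0, psi)) ** 2
p1 = abs(np.vdot(ket1, psi)) ** 2

assert np.isclose(p0 + p1, 1.0)          # the state is normalised
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")
```

Two complex amplitudes instead of one definite value: that is the entire mathematical content of the qubit definition, and everything else in the article builds on it.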

And then of course there was the breakthrough with the quantum logic gates.  They are a system so beautifully entangled, it is as if the quantum world has been turned into art.  Seriously, I lack the words to describe how I feel when I see a quantum system like that… awe is in there, but it’s more than that.  So much more…

Anyway, back to the point.  Quantum computers have come a long way.  But there are theoretical aspects to quantum computers that have not yet been demonstrated.  One thing a lot of sci-fi authors talk about is quantum information sharing.  Since the entire universe originated with the Big Bang, and all the matter and energy that makes up the universe was contained within one single, indefinably small point, you could reasonably argue that all particles in the universe are somehow entangled.  The truth is, some particles are more entangled than others, and the ease with which you can exploit that entanglement is limited.  Also, certain recent experiments with quantum information sharing suggest that quantum information is limited to propagating at the speed of light in a vacuum, which creates a problem for sci-fi authors who have been relying on instantaneous quantum information sharing for their stories.  Well, I too am guilty of that potential fallacy.  The quantum cores in the SPQS Universe contain modules which specifically harness instantaneous quantum information sharing, or ‘spooky action at a distance’ as Einstein called it.  In short, the SPQS has an original quantum communication module, devised and built shortly before the mythical last war.  All new quantum communication modules are made from entangled particles passed through the original, and thus remain in constant communication with each other and the original.  This may or may not be feasible and/or realistic.  But until it’s been conclusively proven that entangled particles don’t share information instantaneously, I’m going to keep using it.  Here’s hoping I don’t get made out to be a complete fool (fingers crossed).
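The tension above can be made concrete with a few lines of code.  In this sketch (assumptions and names are mine, purely illustrative) we sample joint measurement outcomes from the Bell state |Φ⁺⟩ = (|00⟩ + |11⟩)/√2: the two sides are perfectly correlated, yet each side alone sees a fair coin, which is exactly why the correlation by itself carries no message:

```python
# Sampling joint measurement outcomes from the Bell state
# |Phi+> = (|00> + |11>) / sqrt(2).  The two parties always agree,
# but each party's local outcomes are 50/50 random -- correlation
# without a controllable signal.
import numpy as np

rng = np.random.default_rng(42)

# Amplitudes over the joint basis |00>, |01>, |10>, |11>:
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(phi_plus) ** 2                 # Born rule: [0.5, 0, 0, 0.5]

outcomes = rng.choice(4, size=1000, p=probs)  # 1000 joint measurements
pairs = {(o >> 1, o & 1) for o in outcomes}   # split into (Alice, Bob) bits

assert pairs.issubset({(0, 0), (1, 1)})       # never (0,1) or (1,0)
print("observed joint outcomes:", sorted(pairs))
```

Neither party can choose which of the two correlated outcomes occurs, so whether this correlation could ever be bent into instantaneous communication, as the SPQS quantum communication modules assume, is precisely the speculative leap acknowledged above.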


Quantum and Optical Computers in the SPQS Universe:

As I’ve already said, both Quantum and Optical computers are used side-by-side in the SPQS Universe.  To be more precise, Optical computers are the norm, but integrate well with quantum computer systems when they are needed.  Quantum computers are considered more dangerous than nuclear weapons by SOLCOM, of course, so the technology is strictly regulated, and only a handful of Quantum Computer Programmers exist.  Konrad Schreiber is one of them, and while the SFAF was willing to put up with a lot from him, they were ultimately sending him and the rest of the crew of the SFS Fulgora to their deaths.

The apparent reason for such strict regulation of quantum computers is mentioned pretty early on in Placeholder.  As a central theme of the story, it had to be addressed in the earliest convenient passage.  In summary, quantum computers were blamed for the devastating “Last War of Earth.”  Supposedly, according to the running myth that keeps the SPQS together, when Sol Invictus (the Imperial Roman version of the Sun god) returned to Earth enshrouded in the flesh of a man, humanity had just finished nuking itself half-way to oblivion.  But it all started when a certain notorious group of terrorists that we’re all too familiar with got their hands on a quantum interface and used it to hack into certain superpowers’ defence mainframes, using the power of quantum computation and decryption to clone the nuclear codes and keys and launch nuclear strikes to trigger World War III and ensure the mutual annihilation of all the superpowers.  Pretty scary idea, but I’m sure certain steps are already being taken to prevent that eventuality.  Obviously, the military will have functional quantum computers long before we do, and any sensible government with nuclear weaponry will be only too eager to adopt quantum key distribution and post-quantum encryption, such as lattice-based keys, to make its secrets unbreakable (and when I say unbreakable, I mean unbreakable even to better quantum decryption programs).  So when you really look into the matter, you realize that such a story cannot possibly be true.  What then actually happened?

I don’t reveal it until the very end of the book, and it would be a shame to spoil it here, even surrounded by so many other spoilers.  The myth is interesting, and sufficiently scary to keep quantum computers under lock and key, but the truth is much worse for the characters in Placeholder.  Plus, the real situation that led to the postwar power-grab by the SPQS is the premise of at least one of the planned sequels to Placeholder, but is important enough to my future history that it rightly should be considered as a series of its own, with one of the sequels in the Placeholder series merely leading into it.

So, Optical computers are the norm in the SPQS.  The SFAF uses them for pretty much everything, and appliance-like optical computers are provided to the average citizen, although they are designed in such a way as to serve an obvious purpose, and it would be near impossible for any normal end-user to access or decompile the source code, or reverse-engineer the actual device.  Imagine Apple TVs, iPods, iPads and iPhones being the only computing devices, with no access to Xcode or the other Apple Developer tools.  Or imagine OS X with no terminal access and no utilities for revealing hidden files or the like.  You’d no longer be able to benefit from the UNIX experience of real computing.  Everything would be clean-cut, user-friendly, for specific approved tasks—watching or listening to media, playing games, using various simple utilities.  Sure, it would probably make for a great user experience, but at the cost of any control over your computer.  That’s something we can’t allow to happen (and the reason why I created such a situation in my future history).  But in the SPQS Universe, not just anyone can learn to be a programmer.  You have to pass the SFAF’s criteria for recruitment, and make the cut to become a programmer, before you’re even given access to source code and information on programming languages and compilers.  Of course, all Officers in the SFAF are expected to use the Unix-based terminals, but you can’t have any real fun unless you get chosen to be a programmer.

I already talked a lot about the Officer computer terminals in previous posts, so the premise should be clear enough.  The terminals are handheld portable screens that you interact with through a simple input-only neural interface.  The neural interface is itself an optronic device inserted into your frontal lobe through the temple, with the transmitter external to your skull (which is the best place for it).  It is attached to your dominant hemisphere, which for right-handed individuals is the left hemisphere of the cerebral cortex.  Konrad is left-handed though, so his is attached through his right temple.  The neural interfaces are powered by excess bioelectric energy produced naturally by the brain (although historically I expect they would have been powered by atomic batteries inserted into the external casing of the neural interface to be easily replaceable), so you’d need something as energy efficient as an optronic chip, because the brain doesn’t have much energy to spare.  The terminals themselves are linked to pretty standard wireless access points, which are in turn linked to optical cores.  The optical cores are all networked, of course, through various stages of security.  In the case of the SFS Fulgora, the central core is the Identity core, and all terminals have to go through a security check first through the ID core before they can release access to their systems, data, and local network.  Except of course for the MRD terminal, which is on a network of its own with direct access to the MRD and Quantum core.  Quantum computers can have more rigorous security than optical computers anyway, so there’s no need to feed security through the ID core first (and actually, that set-up could compromise the security of the quantum core).  
The one catch is that the MRD terminal has to be released by the Captain’s terminal before it can access the Quantum core for its security check, and within the story, that extra level of protection is a serious source of frustration for Konrad (until he manages to hack and reprogram the ID core).

So yeah, the set-up can get quite complex, but for military purposes security is more important than simplicity.  And while optronic systems are more than good enough for most military personnel, a quantum core is needed to control the MRD.  And once you factor in that quantum computers interface best with optical computers, even if the SFAF didn’t normally use optical computers, they would have had to for Operation Storm Cloud.


That about covers it.  I was going to get into Konrad Schreiber’s particular misuse of quantum and optical computing, but that’s best left for the next post, where I will cover Quantum Computer Programming.  It’s even more exciting than metamaterials, and there are so many different approaches being undertaken at present that it will be worthwhile to specify just which approach I standardized as QCL for the purposes of Placeholder.


— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.6

Field-induced Molecular Reconstruction and Rearrangement.  A technology seemingly so fanciful that some might question its place in Hard Sci-Fi.  But as strange as it might be to say or conceptualize, it is a technology we can legitimately expect this century.  First, to dispel some obvious criticisms: no, I’m not talking about Star-Trek-like replicators.  The technology, if energetically feasible, will have certain inescapable limitations—and in regards to its applications for food preparation, those limitations will be in the area of preparing a very limited range of dishes, from mush to brick, with certain specific key ingredients.

More interesting is how the same basic set of principles behind field-induced molecular reconstruction and rearrangement for food production can be so readily transferred to any number of domestic tasks, all of which on Earth would normally require water.  But I’ll get into that later.

Granted, this field of research doesn’t actually exist—at least not to my knowledge—but the core ideas are already out there in physics and chemistry (and to a limited extent, already applied within chemistry without the controlling medium of a field).  They just need to be combined with applied intent.

Before I get too deep into the discussion of the science, it may be useful in this context to specify the rather small part of Placeholder that this technology is limited to.  Because it’s not something common in the SPQS Universe.  It’s a technology invented out of necessity on Mars, by Martian colonists, and for the most part stays exactly there.


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

Konrad Schreiber’s visit to Mars:

In Placeholder, I was purposely vague about many aspects of Martian life and technology; as an outsider, a mere visiting researcher from an Earth university, Konrad was purposely left out of all the important facets of a Martian Colonist’s life.  The little he did learn about a colonist’s life on Mars was only enough for him to get by while causing as little inconvenience to the locals as possible.  And Mars being what it is, that which would strike Konrad the deepest would have to be the global water shortage.  A local may very well have a more mature perspective on their homeworld, as a human being who no longer knows or recognizes Earth as their birthplace—but that is also a perspective that is impossible for a visiting researcher to acquire during a short six-month visit, in which they spend most of their time in the lab.

Konrad barely scratched the surface of Martian life.  So in his journal, as he recollects his brief visit to the planet, he only really has the ability to discuss a handful of obvious topics.  The water shortage.  Food preparation.  Hygiene.  Sanitation.  And the apparent obsession with recycling.  He barely mentions his research, because within a lab setting, you could be anywhere if it weren’t for the extremely expensive lab equipment that can’t travel as freely.  All he really hints at is that the Oxford/Humboldt lab at Villa Ius has equipment actually capable of verifying predictions of String Theory (or M-Theory in his case), and an impressive quantum computer.  And you’re also given to understand that the same great minds that brought the human race field-induced molecular reconstruction and rearrangement (and then kept it for themselves) also brought it lab equipment that could return a fundamental particle to its unbound string or p-brane state and analyze it in sufficient detail to bring the various string theories into the realm of applied physics.  Specifically, Konrad does state that he was able to experimentally demonstrate that his own interpretation of quantized M-Theory was superior to the dominant version of relativistic M-Theory.  I didn’t bother with any further detail primarily because he would take such technological abilities for granted; now, if I were to write a story about the scientists and engineers working endless days and nights trying to build a device that could experimentally validate string theory, then obviously I wouldn’t have glossed over the detailed construction of a plausible device.  But that’s not this story.
And whether you like it or not, in retrospect, Konrad cared more about the strange domestic appliances on Mars than about the lab equipment that got him one of his PhDs and a Nobel Prize (which, by the way, aren’t quite as big a deal in the SPQS Universe—kind of like how these days, a Bachelor’s degree is the new high-school diploma).


The Basics of Field-induced Molecular Reconstruction and Rearrangement:

The premise of FIMRR is simple enough on the face of it.  Instead of relying solely upon chemical reactions to effect changes between molecular constructions, you introduce a mediating field which can encourage certain chemical reactions according to the pattern of the field.  Obviously, to make sense of the ideas behind this, first-year university physics and chemistry isn’t enough—the ideas I’m working with actually come from Quantum Chemistry, Quantum Field Theory, and Molecular Physics.

You also have to consider the scale you’re working with and the complexity of the calculations involved in analyzing the source materials.  It would be easier, for example, to simply strip all the individual atoms of their electrons and reintroduce the electrons to the cation soup manually than, say, to maintain the molecular structure of proteins and other important nutrients, filter out harmful elements such as bacteria or toxins, and rearrange them into a food-paste.  Despite the complexity, the latter is what I am suggesting, at least for food preparation.  Other applications of the same technology may be much simpler to process, with their usefulness being just as restricted.  But even with the simplest of tasks imaginable with FIMRR, you would need a quantum computer; mainly because, in quantum chemistry, only two-body problems can be calculated exactly.  With even just three bodies, you start dealing with uncertainty, and the equation becomes a matter of quantum probability.  A regular computer will get bogged down beyond belief with the probabilities involved in a ten-body problem (or better said, the complexity of the equations is increased exponentially by each additional body in the problem), so anything more complex than a three-body problem needs to be addressed within a system that actually understands the nature of the problem.  Quantum computers are specifically designed to harness the properties of matter being explored in these sorts of problems, so calculating quantum probabilities is an inherent task best left to a quantum computer.
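To put a rough number on that exponential blow-up, here’s a minimal sketch (my own illustration, nothing from the book): the number of complex amplitudes a classical computer has to track grows as dⁿ for n coupled bodies with d basis states each, which is exactly why the many-body problem swamps conventional hardware.

```python
# Illustrative only: state-space growth for a coupled quantum system.
# Each additional body multiplies the number of amplitudes by the number
# of basis states per body, so memory demand grows exponentially.

def state_space_size(n_bodies: int, levels_per_body: int = 2) -> int:
    """Number of complex amplitudes needed to describe n coupled bodies,
    each with the given number of basis states (2 = simplest spin-1/2 case)."""
    return levels_per_body ** n_bodies

for n in (2, 3, 10, 50):
    amps = state_space_size(n)
    # 16 bytes per double-precision complex amplitude
    print(f"{n:>2} bodies -> {amps:.3e} amplitudes (~{amps * 16:.3e} bytes)")
```

At fifty two-level bodies you’re already past a petabyte of amplitudes, while a quantum computer represents the same state natively.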

Complexity then becomes a non-issue, supposing anyone can figure out an algorithm that can analyze matter at the molecular level in several large masses.  Compared to the programming required for such a task, engineering a type of molecular scanner that can accurately identify every chemical element 100% of the time, and understand how the atoms are linked into molecular compounds, is surprisingly easy.  In the end, it only has to be slightly better than an electron microscope, with a wide enough spread to take in the entirety of a mass up to a 30 cm cube.

The next question is obvious.  What sort of field can capture pretty much all molecules within a given mass, and be manipulated to filter out harmful organic and chemical compounds, while rearranging the good stuff into a homogenized paste?  Electromagnetic fields are the easiest to manipulate, but that would require ionization of all the constituent particles.  That would destroy protein chains and most other nutrients in any given foodstuff, so that’s a no go.  If gravitational fields could be manipulated to the same extent, via graviton emitters (similar to the artificially-generated true gravity mentioned in The Science of Placeholder, Pt.3), that could be a viable candidate; molecular compounds could be identified by their minute gravitational impact, and the graviton-field could be manipulated to draw away certain molecules through so-called gravity bubbles (highly localized pockets of gravity, warped to move a mass through space without interacting with the rest of space).  But that may end up being far too inefficient.  In the end, you have to face the fact that any ‘natural’ field has inescapable limitations and cannot be adapted to this purpose.

But turn your attention to Quantum Field Theory.  On the surface, it doesn’t seem to have anything to do with what I’m talking about—but the premise is an exciting one: “particles are regarded as excited states of a field (field quanta)” (Wikipedia, “Quantum field theory”).  In principle, by treating particles as fields, including complex molecules, any group of those excited states is also a field already.  So to a point, you don’t even have to think about the ‘how’.  You just need to accept that there is already a field regulating the structure of the given masses, and your task then is to impose a similar field that will cause the desired modifications.  I think the best device for accomplishing this is a hybrid of a field detector and field manipulator.  You can design the detector according to the observer effect, and thus detect the mass in such a way that the desired field generates itself (although that won’t get you your desired outcome 100% of the time), or you can design a detector that recognizes the quantum fields as a product of its own detection, and thus imposes a new quantum state on the detected fields (yeah, a little weird to think about, but it would produce the desired effect more often than a simple observer-effect change).

So those are the basic ideas I’ve been working with.  I’m sure there are others, and when I have a need to put some further thought into the matter, I may come up with something better.


The “Murray Ovens” and other FIMRR technology in the SPQS Universe:

The main FIMRR tech I refer to in Placeholder is the Murray Oven.  In final presentation, it looks little different from a microwave oven, only it doesn’t really cook anything.  As described above, it uses quantum field theory and quantum chemistry to rearrange compounds from pre-cooked food into a homogenized, highly nutritional food paste stripped of all harmful and/or toxic compounds.  Containers for the ingredients and final product are made of the same neutral coated polymer compound that Martian clothing is made from; likewise, the inside of the Murray Oven is coated with it too.  The programs that operate FIMRR-based equipment are designed to ignore this polymer compound, so it never gets mixed in with the food, and the components of the equipment aren’t affected by the field manipulations either.

The ‘ovens’ are nicknamed “Murray Ovens” not because of any particular inventor or company that produces them; it is simply the accepted pronunciation of the acronym for “molecular rearranger” (MoRA).  Murray sounds better before ‘oven’ than Mora.  You could imagine, it might have originally been “Mora oven,” but the reduplication of the vowel in the compound was reduced to an “ee” and the long o reduced to a short u.

I mentioned that Martian clothing in the SPQS Universe is made from the same specific polymer as the food containers used in the Murray ovens.  Hence how laundry is done: instead of recombining the materials on the clothing, it simply identifies everything that is not the clothing and strips it away.  The ‘showers’ are similar too; they are programmed to recognize living human tissue and the protein chains that make up hair and leave them alone.  Dead skin, dirt, sweat, and other filth that we tend to accumulate is captured in the artificial field and whisked away the same way as the laundry appliance does.  Konrad mentions that these FIMRR-based Martian ‘showers’ didn’t leave him feeling clean, but did stop him from stinking.  This was to draw attention to how different the psychology of a people living with next to no spare water would be from us right here, right now on Earth.  We associate the damp mushiness of a long hot shower and the residual soap scum on our skin with the feeling of being clean.  But technically, a FIMRR pseudo-shower would actually make us a lot cleaner.  It would strip off all our dead skin, remove any dirt or bacteria from our bodies, and leave nothing but our natural organism intact.  You could probably even find a way to manipulate such a technology for the purposes of controlled depilation (removing specific patches of hair); in other words, get the effect of a perfect shave without any razor burn, or a wax without any nasty ripping.  But back to the point.  We associate very specific sensations with cleanliness.  A Martian colonist who’s never had the luxury of wasting water on a shower or bath wouldn’t know the feeling of water-saturated skin or residual soap-scum.  Their sensation of cleanliness would be more subtle; a sudden destruction and removal of all actual filth and dead skin.

In Placeholder, Konrad also mentions the sanitation technicians that protect the secrets of their trade with all the force allowed them.  Since there is no water to waste, and no liquid-based household chemicals that are suitable to the task on their own without dilution in water, cleaning your apartment is a task that requires specialist equipment.  The economy on Mars thus evolved around the task of sanitation, and the only task expected of an average Martian resident is spraying down their toilets to keep the bacteria at bay in an entirely closed environment (also mentioned in Placeholder, there are two types of toilets on Mars, one for solids only, and one for liquids.  The liquid waste is filtered and recycled as drinking water, the solid waste is incinerated and recycled for use in construction).  Thus, whether you want them to or not, the Martian Sanitation Technicians visit your place once a week, and make you leave your apartment while they do their work.  The cost is automatically deducted from your pay when stationed on Mars, much as taxes or room and board.  You can assume that the technology they use is FIMRR based, but the tools are a little more specialized and designed to require special training to use.  So there you have it—Mars in the SPQS Universe has a culture and economy where Janitors and Cleaners are as specialized as nuclear engineers.


That’s about all there is to say for FIMRR technology in Placeholder.  Like I said, it’s a very small part of the story; Konrad only ever gets to see it in action or use it during his brief stay on Mars, which he recollects through his journal entries, and they don’t have anything like it on the SFS Fulgora.  Their food is cooked normally, and like many other spaceships, all their water is recycled as much as it can be, and fresh water is chemically produced from waste gases filtered out through the life support system and ionized hydrogen from the interstellar medium (in other words, they only ration water in a very limited sense of the word).  They live much as sailors on Earth might, without the need for a desalination plant onboard to have a ready supply of fresh drinking water.

In my next post, I’ll be back to a more relevant topic of particular interest: computing technology in the SPQS Universe.

— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.5

Metamaterials are perhaps one of the most exciting areas of 21st century physics and engineering, and will allow us to accomplish effects not possible with naturally-structured materials.  But I should make something clear up-front.  Metamaterials aren’t necessarily new synthetic chemical substances exhibiting strange properties, they are new periodic structures of existing materials (though certainly, custom-designed polymers could be useful for certain applications), that allow for desirable macroscopic effects.

Research has progressed the most with electromagnetic metamaterials, but a great deal of headway has also been made with acoustic and seismic metamaterials.  Seismic metamaterials will be of especial use here on Earth for the construction of greatly improved earthquake-proof structures, but could also be extended into general kinetic absorption for body armour and spaceship hull design.  The vast array of sensor designs coming from research into electromagnetic and acoustic metamaterials is also quite fascinating.  Basic ‘cloaking’ has already been achieved in the microwave spectrum (i.e., objects have been rendered nearly invisible to microwave radiation through metamaterial cloaking).  Superlenses can achieve resolution beyond the diffraction limit (!).  Ultrasonic sensors can be designed.  Sound and light can be custom-modified, ‘shaped’ if you will; ultrasonic waves can be shaped down to audible wavelengths, and ultraviolet light, x-rays, and gamma radiation can be shaped into visible light.  And you can even hope to see materials which are entirely transparent to the visible light spectrum, but entirely reflective to ultraviolet light, x-rays, and hard radiation (although reflecting infrared, microwave, and radio waves is much easier to accomplish).  The trick to that is the negative refractive index that can be achieved with metamaterials.  And you can even design metamaterial absorbers to trap high-energy particles from alpha decay (among other things).
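As a toy illustration of that negative-index trick (my own sketch; the numbers are arbitrary), Snell’s law shows what makes these materials so strange: with a negative refractive index, the refracted ray bends to the *same* side of the normal as the incident ray, which is what makes superlenses and cloaking geometries possible.

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# A negative n2 (achievable only with metamaterials) flips the sign
# of the refraction angle.

def refraction_angle_deg(theta_incident_deg: float, n1: float, n2: float) -> float:
    """Refraction angle in degrees; raises on total internal reflection."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

print(refraction_angle_deg(30.0, 1.0, 1.5))   # ordinary glass: ~19.47 deg
print(refraction_angle_deg(30.0, 1.0, -1.5))  # metamaterial: ~-19.47 deg
```

Same magnitude of bending, opposite side of the normal—geometry no naturally-structured material can give you.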

That’s right, we can actually start thinking about designing sheets of ‘metaglass’ that are fully transparent to visible light, but shield against all forms of hard radiation, even alpha and beta decay.  We could stand three feet away from a reactor core, behind a sheet of metaglass, and not even worry.  Fine, that’s more than a few years away, and will require some major improvements to nanoscale engineering to accomplish.  But at least we now know such things are possible.

(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

Metamaterials in the SPQS Universe:

In Placeholder, I make it clear enough that just about every material used for space engineering is some form of metamaterial.  And this isn’t just me hyping a field of limited potential, it’s me recognizing the potential of metamaterials for creating lightweight but stronger structures, with enhanced durability and impact resistance, all coated on the exterior hull in a conveniently thin layer entirely reflective to hard radiation.  As I mentioned in the previous post, Cosmic Radiation is a serious danger for astronauts on long-term missions.  Metamaterials are one solution, and in my opinion also the best, due to their highly-customizable nature.

I’ve even incorporated metamaterials into the design of computers.  I didn’t feel the need to specifically state it, because it seemed kind of obvious, but metamaterials play a vital role in the design of optical computers and general optronics.  Specifically, I did mention how the terminal screens are constructed.  At first glance they just seem like a portable sheet of metaglass with comfortable hand-holds along the side and a projection display unit (with built-in wireless transmitter/receiver) attached to the bottom.  The projection unit communicates with the terminal base, which is linked to the local hub, but receives instructions through each individual’s neural interface (a basic brain-to-computer interface that provides input to the terminal, but does not receive output from it to be fed directly into the brain).  All output is projected onto the back of the sheet of layered metaglass, and each layer of the metaglass sheet is customized for a very specific band of wavelengths (dark red to bright red light)—it could easily be made full-colour, but red light preserves night vision, which is integral for astronauts.  The effect is a fully 3-dimensional display with a misty cloudiness appearing vaguely somewhere behind it, and for security, you can only make out any details on-screen if you’re directly in front of it.  Otherwise, it just looks like a sheet of glass.  Pretty neat, eh?  Such a screen isn’t actually all that far away (although certain durability aspects involving high-pressure tempering are a little unrealistic, simply due to cost).

The wide array of sensors accessible from the SFS Fulgora’s Flight deck and Research bay also, naturally, make use of metamaterials.  When analyzing stellar spectrographs, it is useful to have the widest range of finely-tuned sensors possible, and metamaterials can be of great use in improving our current catalogue of sensors and telescopes for use in space.  Imagine, for example, being able to take detailed images of Pluto’s surface from low-Earth orbit.  Imagine being able to enter a new star system, and have a complete system model generated from sensor data within minutes.  Metamaterial-based sensors will allow us to do that.  Who knows where this new technology will lead us?

There’s a lot more I could get into, but metamaterials are an emerging field.  If you want to keep up to date with it, it never hurts to keep the Wikipedia article bookmarked (most of the specific sections are only summaries that link to complete articles on each of the main topics, so there’s plenty to read from that one page alone).

There are also some interesting texts on metamaterials:

Specifically, Electromagnetic Metamaterials: Physics and Engineering Explorations (978-0471761020), and Metamaterials Handbook – Two Volume Slipcase Set (978-1420053623) look pretty good.  Though there are a lot of others to choose from on Amazon.

In my next post, I’ll finally be getting around to molecular reconstruction, which may not be as far off in the future as we think.

— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.4

I have always felt that nuclear power was our future, because the power of the atom is undeniable.  But power that great needs to be respected, controlled, and very carefully monitored.  The current tragedy in Japan is a fair enough example of that, but the Three Mile Island and Chernobyl disasters should have been more than enough.

Fine, everyone knows that fission is dangerous and finicky, but if you treat it with the respect and care it deserves, meltdowns, coolant leaks, and gas explosions can usually be avoided.  And again, as the current tragedy in Japan has reminded us, if you build a reactor in an earthquake zone, you should make a point of earthquake-proofing it.  And if that cannot be reliably done to protect against the worst-case-scenario, then you should consider an alternative source of power.

Fusion reactors are a much better option, but unfortunately, they are still only experimental.  The proposed designs are similar in principle to fission-based power plants, but the primary hold-up is sustaining a thermonuclear reaction long enough to produce a reliable source of energy.  The two best approaches to confining a thermonuclear reaction with present technology are magnetic and inertial confinement.

The largest magnetic-confinement fusion reactor currently under construction is the ITER tokamak in Cadarache, France.  It is worth looking into.

The largest operational inertial-confinement fusion facility is the NIF, which uses lasers to ignite a tritium-deuterium fuel mix.

As for the safety of fusion reactors over fission reactors, Wikipedia has a remarkably good summary with cogent points, which I will quote in part for your benefit in case the article is ever tampered with or moved (all the links within the quote point back to relevant articles on Wikipedia).

Accident potential

There is no possibility of a catastrophic accident in a fusion reactor resulting in major release of radioactivity to the environment or injury to non-staff, unlike modern fission reactors. The primary reason is that nuclear fusion requires precisely controlled temperature, pressure, and magnetic field parameters to generate net energy. If the reactor were damaged, these parameters would be disrupted and the heat generation in the reactor would rapidly cease. In contrast, the fission products in a fission reactor continue to generate heat through beta-decay for several hours or even days after reactor shut-down, meaning that melting of fuel rods is possible even after the reactor has been stopped due to continued accumulation of heat (Fukushima I incidents demonstrated the problems that can rise in a fission reactor due to beta decay heating even days after SCRAM, an emergency shutdown of the fission reactor).

There is also no risk of a runaway reaction in a fusion reactor, since the plasma is normally burnt at optimal conditions, and any significant change will render it unable to produce excess heat. In fusion reactors the reaction process is so delicate that this level of safety is inherent; no elaborate failsafe mechanism is required. Although the plasma in a fusion power plant will have a volume of 1000 cubic meters or more, the density of the plasma is extremely low, and the total amount of fusion fuel in the vessel is very small, typically a few grams. If the fuel supply is closed, the reaction stops within seconds. In comparison, a fission reactor is typically loaded with enough fuel for one or several years, and no additional fuel is necessary to keep the reaction going.

In the magnetic approach, strong fields are developed in coils that are held in place mechanically by the reactor structure. Failure of this structure could release this tension and allow the magnet to “explode” outward. The severity of this event would be similar to any other industrial accident or an MRI machine quench/explosion, and could be effectively stopped with a containment building similar to those used in existing (fission) nuclear generators. The laser-driven inertial approach is generally lower-stress. Although failure of the reaction chamber is possible, simply stopping fuel delivery would prevent any sort of catastrophic failure.

Most reactor designs rely on the use of liquid lithium as both a coolant and a method for converting stray neutrons from the reaction into tritium, which is fed back into the reactor as fuel. Lithium is highly flammable, and in the case of a fire it is possible that the lithium stored on-site could be burned up and escape. In this case the tritium contents of the lithium would be released into the atmosphere, posing a radiation risk. However, calculations suggest that the total amount of tritium and other radioactive gases in a typical power plant would be so small, about 1 kg, that they would have diluted to legally acceptable limits by the time they blew as far as the plant’s perimeter fence.[12]

The likelihood of small industrial accidents including the local release of radioactivity and injury to staff cannot be estimated yet. These would include accidental releases of lithium, tritium, or mis-handling of decommissioned radioactive components of the reactor itself.

— Wikipedia, “Fusion Power,” §3.1, “Accident potential”

As you can see, fusion reactors are much, much safer than fission reactors, and meltdowns are effectively impossible.  And once tritium containment is perfected so none whatsoever leaks into the atmosphere, it will be the cleanest, safest, and most abundant source of energy ever known to humanity.

We must not forget atomic batteries either.  They’re called batteries because they are portable, self-contained sources of energy, but in actuality they should be called nuclear-decay generators.  They represent a highly customizable technology, and currently can be made as small as a penny (for liquid semiconductor atomic batteries)—select your energy requirements, match it to an appropriate isotope, and stick it in the most efficient package for your needs.  Betavoltaic cells are one great option.  They rely on beta-decay, so don’t need much shielding at all.  The isotopes they use generally have really low alpha and gamma radiation, so even if you break them open (which is still a dumb idea, but I’m just saying), the harm is minimal.  And the best part?  We can customize atomic batteries to suit our current needs, swap out the chemical batteries and pop in an atomic, and we suddenly have mobile devices that never need to be charged.  Just swap the battery once the isotope has decayed below useful output.  For some isotopes, that can take as long as 140 years (even longer, if your energy requirements are lower than normal and the decay product of the isotope continues to produce enough of an electric charge).  The isotopes keep decaying at high energies for centuries.  The problem is actually not with them, but with the semiconductor material.  It breaks down over time as the decay particles pass through it to produce a charge.  Obviously, liquid semiconductors are better than solid, and will last substantially longer.  This sort of technology will allow for stable, long-use portable power in basically any environment, and at least 90% of the components (including all of the reaction mass) can be recycled.
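For a rough feel of those lifetimes, here’s a back-of-envelope sketch (my own illustration; nickel-63 is a real betavoltaic isotope with a half-life of roughly a century, but the figures are only indicative): radioactive decay is exponential, so an atomic battery’s output falls off as N(t) = N₀ · 2^(−t / half-life).

```python
# Illustrative decay curve for an atomic battery's power output.
# Activity (and thus roughly the electrical output of a betavoltaic
# cell) halves once per half-life of the isotope.

def remaining_fraction(years_elapsed: float, half_life_years: float) -> float:
    """Fraction of the original activity left after a given time."""
    return 2.0 ** (-years_elapsed / half_life_years)

# e.g. nickel-63, half-life ~100 years
for t in (10, 50, 100, 140):
    print(f"after {t:>3} years: {remaining_fraction(t, 100.0):.1%} of initial activity")
```

Even after 140 years such a cell still delivers over a third of its initial output—which is why "swap it once a century" is a fair description.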

Now that I’ve talked a little about the current options for nuclear power, I can get on to my actual point.  The use of nuclear power and propulsion in Placeholder.


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

Nuclear Energy in the SPQS Universe:

The political system of the SPQS is (obviously) imperialistic with a transnationalist economy.  The military controls the civilian government, and education is restricted to military personnel.  Furthermore, higher education is restricted to Officers.  The best that a civilian can expect in this world is grade school (elementary), plus some trade school in their teenage years.  Every citizen of the SPQS is given rigorous personality, aptitude, and IQ tests throughout their childhood, plus a final placement test shortly before their twelfth birthday.  This is a very nasty system and means no freedom for anybody, not even the privileged Officer-class.  The Military government controls all resources, all supply lines, all employment, all education, and even all religions.  But it is functional, insofar as the safety and welfare of the human race as a whole is taken care of, and communist-style rationing isn’t necessary because a transnational military government has full control over nuclear energy (and weapons), and doesn’t have to answer to anybody.

Any functional government has to also promote loyalty within its citizenry; while a military government can often be cruel and cover it up with propaganda, it’s actually easier to play nice with the people who can’t get up to much trouble in the first place.  Your average citizen actually wants safety and comfort above freedom and education, as sad as that may seem to some.  So long as they are ultimately left to live their life in peace, have their needs and comforts attended to, and can find some amusement to distract them from their work (even if they happen to love their work), then society as a whole will remain stable.

Now, this point is important.  Nuclear power is perceived as dangerous, and has to be treated with respect and care.  Who better to be the stewards of the atom than our soldiers?  They are disciplined, sharp, used to an undue amount of stress that would make a normal person crack, and ultimately, extremely responsible individuals with a great respect for authority.  They are also willing to lay down their lives for a greater cause—a fact most civilians can’t even appreciate.  With specific training in nuclear engineering, they are the perfect candidates to work in a reactor.  So having nuclear power exclusively in the hands of the military might not be such a bad thing.  It would certainly boost public confidence in nuclear power.

Of course, even if the people didn’t like it, a military government could impose its own ideals on society.  With enough propaganda, anything is possible.  And a space-age society needs nuclear power.  Nothing else is even close to good enough.

I also mentioned a lack of restrictions.  Allow me to elaborate.  There are certain aspects of nuclear energy that are currently unavailable to the general public, and even unavailable to our militaries and research institutions.  Thanks to the Partial Test Ban Treaty of 1963, the Nuclear Non-Proliferation Treaty of 1968, and the Comprehensive Nuclear-Test-Ban Treaty of 1996 (which has not yet entered into force), our options for maximizing the beneficial uses of nuclear energy are severely limited.  Inertial confinement fusion is one workaround (and a stipulation the US is trying to enforce for the CTBT before ratifying it), because it allows simulation without the need for actual thermonuclear detonation.  But still, any hope of launching a nuclear pulse propulsion rocket, whether from orbit or from Earth, was completely crushed by the partial test ban, and only further entrenched by the two later treaties.  For feasible interplanetary and interstellar exploration and colonization, we need nuclear pulse propulsion at the very least.  There’s no way around that.  Chemical rockets are too inefficient, wasteful, and expensive to support a successful, active space program.  But that’s what we’re stuck with.  The SPQS has no such restrictions.  It controls all nuclear energy, fuels, and weapons, so it can use them to their best ends.  And seriously, what use does a universal military government have for nuclear weapons?  They might maintain a stockpile, just in case, but in that situation at least 99% of nuclear R&D would go into power and propulsion tech.

Right now, the only real limitation on atomic batteries is the expense of making them.  NASA uses them when it has to.  They’re a good, reliable, long-term source of energy for space probes.  And they’ll certainly come in handy on manned missions to Mars (if NASA ever gets around to doing it).  But as with any technology, now that we have it, it will be improved over time and the costs will go down as they become mass-produced.  Right now, atomic batteries are very much ‘special use’, and complete overkill for most purposes.  But as our mobile technology is steadily improved, and power requirements keep rising, the move from chemical to atomic batteries will be a natural one.  Obviously, in the SPQS Universe, atomic batteries are used exclusively, because even the ‘simplest’ devices have advanced AI systems that require multicore processing in a modular computing environment.  Also, they have moved entirely from electronics to optronics (except for a few select people who also have access to quantum computers), and optical computers are only more energy efficient when kept entirely self-contained.  Powering a ship-wide optical computer network with several supercomputer cores is actually very energy taxing, especially when many of the systems are constantly running and all of them need to be kept cooled (in addition to all the other electrical draws from life support, mechanics, internal and external sensors, etc.).  It’s worth the cost though; optical cores are a big step up from our current processing capabilities, and don’t require much refinement to the logic behind them (whereas with quantum computers, it’s a whole new science; everything has to be redesigned from the ground up to harness the unique properties of quantum systems).

Lastly, the SFS Fulgora is outfitted with two fission reactors; instead of heavy water, they use an inert pressurized gas to drive the turbines.  The primary reactor is always running, whereas the secondary reactor is only used during peak times, when strain needs to be taken off the primary.  The reactors have no connection with the nuclear pulse propulsion rocket, except for powering the systems which make it work.  Most of the electrical power is needed for the MRD, computer systems, and lab equipment.


Nuclear Propulsion in the SPQS Universe:

I’ve already talked a bit about Nuclear Pulse Propulsion, but there are other options.  I’ve only focused on nuclear pulse propulsion because that is what the SFS Fulgora is outfitted with.  Within Placeholder, it’s considered barebones backwards tech.  The SFS Fulgora is stuck with it simply because it was the cheapest practical option to meet mission requirements.

I should also mention why the SFS Fulgora needs a rocket in addition to the MRD—after all, it has a jump drive, so why does it need traditional propulsion at all, beyond simple ion drives for shuffling around local space?  Simply put, because the MRD was designed with Relativistic M-Theory in mind, it can only be used to merge two ‘level’ points within spacetime: interstellar space.  The calculated ‘gravity wells’ caused by a star’s gravitational field are strongest within the heliosphere, and thus, according to Relativistic M-Theory, are heavily warped regions of spacetime.  When a REZSEQ is performed between two disparate gravity wells, the object itself is reintegrated with massive distortion because of the errors introduced in the calculation of spacetime by the 4n model.  This problem is overcome when Konrad reprograms the MRD with his 11n model, allowing him to jump between stable orbits of planets.  But that is a unique capability of his ship (until the SOLCOM Celestine Corps catches up with him and reverse engineers his work).  Until he reprograms the MRD, though, the SFS Fulgora has to be piloted out of the solar system, beyond the heliopause to level space.  The ship can then only be jumped to a location beyond the heliosphere of another system, and piloted into it.  This requires a robust means of propulsion, to cover distances of up to 100 AUs within 3 months.  No mean feat, let me tell you.  But a nuclear pulse propulsion rocket is capable of achieving velocities upwards of 0.1c (10% of the speed of light).  After that, acceleration peters out.  The shaped nuclear charges no longer detonate quickly enough, or produce enough of an effect, to add any additional acceleration; i.e., you’re just wasting fuel at that point.
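That 100 AU / 3 month requirement is easy to sanity-check against the 0.1c ceiling.  A quick calculation (the physical constants are standard values; the mission figures are the ones quoted above):

```python
AU = 1.495978707e11       # metres per astronomical unit
C  = 2.99792458e8         # speed of light in a vacuum, m/s

distance = 100 * AU               # the in-system leg, heliopause to planet
time     = 90 * 24 * 3600.0       # ~3 months, in seconds

v_required = distance / time      # average velocity needed for the transit
print(f"{v_required / C:.4f} c")  # ~0.0064 c, comfortably under the 0.1c cap
```

So the pulse drive never comes close to its ceiling on an in-system run; the 0.1c limit only matters if you try to use it as a slower-than-light starship.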

There are better nuclear rockets, of course.  For example, there are some good designs for fusion rockets and antimatter-catalyzed nuclear propulsion.  But since the SFS Fulgora is supposed to be a low-budget research vessel (without even a proper artificial gravity system), I thought it unlikely that they would go ahead with a top-notch antimatter or plasma-based propulsion system.


Radiation Shielding:

Anytime you start dealing with nuclear energy and/or propulsion in spaceship design, you have to think about shielding the crew from it.  But the truth is, most of the harmful radiation astronauts face is from cosmic radiation, not the reactors or engines they might someday get.  Granted, any extra radiation from these sources would only make matters worse, but my point is only that radiation shielding is already an essential aspect of spacecraft design for extended missions.
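To put rough numbers on that: published estimates for the unshielded dose rate in interplanetary space run on the order of 1–2 mSv per day from galactic cosmic rays alone.  A sketch of how that accumulates (the 1.8 mSv/day rate and the 180-day transit are assumptions for illustration, not figures from the novel):

```python
# Cumulative galactic-cosmic-ray dose over an interplanetary cruise.
GCR_RATE_MSV_PER_DAY = 1.8  # assumed unshielded deep-space dose rate

def cruise_dose_msv(days, rate=GCR_RATE_MSV_PER_DAY):
    """Total dose equivalent (mSv) accumulated over `days` days."""
    return days * rate

# A 180-day one-way transit accumulates a few hundred mSv,
# before any reactor or engine contributes a thing.
dose_180 = cruise_dose_msv(180)
print(f"{dose_180:.0f} mSv")  # ~324 mSv
```

For comparison, typical career dose limits for astronauts are measured in hundreds of mSv, so a single long cruise without shielding eats a large fraction of the budget all by itself.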

My solution for the SPQS Universe is the new and exciting field of metamaterials.  It has so much potential that I couldn’t leave it out—but it also requires its own post.  Suffice it to say for now, that I used a choice selection of metamaterials for radiation shielding.  Combine that with more traditional means of radiation shielding for the reactor and nuclear rocket, and you have a safe and happy crew, unencumbered by radiation sickness and sterility, even on the longest of space voyages.


I suppose that about covers it.  My main points have been to identify the risks involved with nuclear energy, but also to show how, in the ‘right’ hands (emphasis on quote-unquote), it’s actually the cleanest form of energy we have.  It just needs to be respected.  Naturally, we need a sustained commitment to perfecting fusion technology, so we can start building safer and cleaner reactors; but even fission, when managed properly, is cleaner than coal or natural gas, and normally much less detrimental to the environment than hydro dams or windmills.  People who build and run fission reactors need to understand that you can’t cut corners with nuclear engineering, because a meltdown is as bad as a dirty bomb.  But again, if they’re built to standard, and managed with vigilance, they are exactly what we need to step into the future we’ve always dreamed of.

And as far as Placeholder and the SPQS Universe is concerned, I made my future history dominantly atomic because it’s still the best source of energy we have.  Maybe someday we’ll discover something better, but every other avenue that’s been explored requires more energy input than we can get back (such as with the models for antimatter-based power plants).  But you never know.

In my next post, I’ll deal with the new field of metamaterials.  It’s an especially exciting topic, since it’s such a new field that it hasn’t gotten much treatment in science fiction yet.  And in the SPQS Universe, it’s fundamental to just about everything.

— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.3

As promised, today’s post will focus on legitimate and speculative (plausible, albeit beyond our current capabilities) methods of creating ‘artificial gravity.’  There are actually very few options, so for the SPQS Universe, I settled on two primary approaches: the sloppy and cheap way, which civilians and research vessels are stuck with, and the right way, which only military and luxury vessels/stations can afford.  Although, it should be stated up-front that “the right way” isn’t really ‘artificial gravity’ at all—because, technically speaking of course, ‘artificial gravity’ implies using a different force to achieve the impression of gravity where there is little to none (microgravity, or micrograv for short).


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

Artificial Gravity:

The most obvious and most successful methods for creating artificial gravity today are Acceleration and Rotation; for a spaceship, of course, both come with problems.

Artificial gravity from acceleration requires the ship to accelerate constantly at 1g or less for the duration of the trip in order to provide constant apparent gravity; that may be fine for a ship with, say, a nuclear pulse propulsion rocket like the SFS Fulgora in Placeholder, but a chemical rocket can’t carry enough fuel to maintain that acceleration for long (and a fusion or antimatter-catalyzed rocket would be total overkill for a civilian crew).  The fact that in any human-crewed journey the first half is spent accelerating towards the destination and the second half ‘decelerating’ (i.e., accelerating away from the destination to shed enough velocity to enter orbit around the desired planet) is actually no obstacle: there is just a brief return of microgravity while the engines cut out and the ship spins around its central axis to point the rocket at the destination, and then artificial gravity resumes as before.  Of course, a constant acceleration of 1g (or more) implies an interstellar journey and a generational ship.  Thus, relying on it is limited to the specific cases where it would be most appropriate.
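The flip-at-midpoint profile has a simple closed form if you ignore relativity and planetary motion: accelerate at a for half the distance, decelerate for the rest, for a total time of t = 2·sqrt(d/a).  A sketch (the 0.5 AU Earth–Mars separation is a convenient round number, not a figure from the novel):

```python
import math

G_ACCEL = 9.81            # 1 g, in m/s^2
AU = 1.495978707e11       # metres per astronomical unit

def brachistochrone_time(distance_m, accel=G_ACCEL):
    """Accelerate half the way, flip, decelerate: t = 2*sqrt(d/a).
    Non-relativistic, straight-line approximation."""
    return 2.0 * math.sqrt(distance_m / accel)

# Earth to Mars at a round-number 0.5 AU separation:
t_seconds = brachistochrone_time(0.5 * AU)
print(f"{t_seconds / 86400:.1f} days")  # ~2 days at a constant 1 g
```

Which is exactly why constant-acceleration gravity is so attractive on paper: the trip times are absurdly short, if only you can feed the rocket.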

Ships and Space Stations that have rotating sections to achieve artificial gravity are a fairly common staple of near-future sci-fi, and are generally included in current proposals for civilian space habitation and tourism.  It’s entirely feasible technology, assuming of course anyone is actually willing to pay to send all the necessary components up to space.  Since right now our only means of achieving escape velocity is chemical rockets with strict payload restrictions, it is ultimately only the costs that are prohibitive.  But once we build a rocket with the same high specific impulse as nuclear pulse propulsion without the radioactive fallout (i.e., fusion or antimatter rockets), or something even better like a space elevator or magnetic propulsion, such cost prohibitions will disappear.  The only other catch with rotational artificial gravity is the need for either a counterweight, or an even number of rings with half rotating one way and half the other.  In Placeholder, I combined both approaches: one large habitation ring, one small medical ring, a rotating quarantine bay, and a short counterweight.  I also designed the rotating sections so that they could turn on a secondary axis.  Thus, when the SFS Fulgora was under acceleration, the ring rotation could be shut down and the floors turned towards the aft of the ship, so that the artificial gravity always pointed in (roughly) the same direction.  Naturally I made the mechanisms in the ship a little slow, so that artificial gravity would curve up the aft wall a little before settling back level to the floor of the habitation ring.  It gave it more of a nautical feel.
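The spin rate a ring needs follows directly from centripetal acceleration, a = ω²r, so ω = sqrt(g/r): the smaller the ring, the faster it has to turn.  A sketch (I never give the Fulgora’s ring dimensions, so the 50 m radius here is purely illustrative):

```python
import math

def spin_rate_rpm(radius_m, accel=9.81):
    """Rotation rate in rpm for a ring of `radius_m` metres to
    simulate `accel` m/s^2 via centripetal acceleration (a = omega^2 * r)."""
    omega = math.sqrt(accel / radius_m)     # angular velocity, rad/s
    return omega * 60.0 / (2.0 * math.pi)   # convert to revolutions per minute

# A 50 m habitation ring needs roughly 4.2 rpm for a full 1 g;
# fast enough that Coriolis effects would be noticeable to the crew.
rpm_50m = spin_rate_rpm(50.0)
print(f"{rpm_50m:.1f} rpm")
```

This is why proposals for comfortable spin gravity tend towards very large radii or reduced gravity: halving the rpm requires quadrupling the radius.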


Artificially-produced True Gravity:

The other dominant form of “artificial gravity” mentioned in Placeholder is what the Military vessels and stations use.  Technically, it’s not really artificial gravity at all; it’s real gravity produced by artificial means instead of by a large mass.  They use shaped graviton-emitters to establish a virtual centre of gravity that recreates 1g perfectly.  It could be set to reproduce any gravity of course, but other than high-gravity training, that would be of limited use.  Thanks to our evolution on this particular planet, our bodies will always inherently prefer our natural gravity.  Obviously, graviton emitters reproducing Earth-normal gravity create a host of other interesting troubles—propulsion the least of them.  But that’s a problem for another book, because Konrad Schreiber had no direct experience with them.

Now, in order to create a graviton-emitter, one would first have to prove that the quantum of gravity exists; searches are underway at several of the world’s high-energy supercolliders, but so far, neither the graviton nor the Higgs boson has been discovered.  In my future history, I decided to take a pretty dangerous leap.  Some might consider it irresponsible even.  But nonetheless, I felt confident in saying that both the Higgs boson and the graviton had been discovered, and furthermore, they were connected.  (The following statement isn’t scientific, it is pure speculation on my part and could very well be coming from a mistaken understanding of the standard model)  It seemed to me that if the Higgs boson predicted by the Standard Model is real, and it is in fact responsible for matter having mass, then either the graviton must be a decay product of the Higgs boson, or the Higgs boson is an intermediary in the exchange of gravitons.  After all, where there is mass, there is gravity; and while other forces may be able to reproduce certain effects of gravity, they are still not gravity.  But masses always attract each other, unless another stronger force, such as electromagnetism, overpowers that attraction.  The Large Hadron Collider is one of the particle accelerators that is supposed to produce a Higgs boson (if it even exists).  As I understand it, if the LHC fails to produce a Higgs boson, then the Standard Model is also going to be considered a failure.  I suppose we’ll have to wait and see.  They can only test one energy range at a time, and there are still many, many more to go through.  Either way, we can expect to have a definitive answer within our lifetime.

If existence of the Higgs boson and graviton can be proven, it will only be by either detecting them in nature or producing one artificially.  And amusingly enough, it’s a lot easier to make one artificially than to just go out and find one.  You wouldn’t think so—gravity is everywhere around us, inside us, permeating every dimension of spacetime (however many it has).  But apparently, for some very good technical reasons, gravitons cannot be directly detected on Earth.  We’ve got to build a graviton detector in space, and nobody is going to do that until we know exactly what we’re looking for and have a way to find it.  In Placeholder, all that has already happened and is old news.  Both bosons were produced artificially, detected within a particle accelerator, and then detected directly in nature.  Since to even detect it in the first place, it had to be produced artificially, the large-scale production of gravitons for true gravity on a spaceship or station is then only a matter of refinement.  First, make the process as efficient as possible, and then find a reliable way to power it.


And I think that about covers it.  Because of the particular modifications I made to current rotational artificial gravity system proposals specifically for the SFS Fulgora, I had Konrad cover the subject in gross detail.  And like I said, the detailed application of graviton-emitters will be covered in another story, where it is actually a fundamental part of the story.

For my next entry, I will skip past the molecular rearrangement technology (since it plays such a small part in the story), to nuclear power and propulsion in space.  And due to the current tragedy in Japan, which makes the timing of my novel seem somewhat poor in taste, it is a subject I should deal with immediately.

— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.2

One of the more important pieces of technology in Placeholder is the Lévi–Yang Field Generator.  In practice, it is similar in purpose and function to the Stasis or Hypersleep pods that commonly appear in science fiction; only, I took the time to think of how a piece of technology like that might actually be possible.  It’s not enough to say ‘it freezes time’, or as in the pilot episode of Red Dwarf, Todhunter puts it:

The stasis room creates a static field of time. See, just as X-rays can’t pass through lead, time cannot penetrate a stasis field. So, although you exist, you no longer exist in time, and for you time itself does not exist. You see, although you’re still a mass, you are no longer an event in space-time, you are a non-event mass with a quantum probability of zero.

— Red Dwarf, Ep. 1.01 “The End”

Funny?  Yes.  But effectively (and purposely) meaningless.  Red Dwarf can get away with it because it’s a parodic sci-fi comedy, but since I was going for Hard Sci-Fi, I felt I had a responsibility not to shrug it off.

An interesting area of research in theoretical physics has been the quantization of time.  There are two proposed fundamental quanta of time: the Planck time and the Chronon.  Whichever you decide to focus on, you have to quantize time to be able to freeze it.  As interesting as the Chronon is, its properties are system-dependent, whereas the unit of Planck time is an indivisible absolute, much like the speed of light in a vacuum.

I decided to work exclusively with Planck time in Placeholder.  Chronons, if they can be said to exist, are never absolutes and are a tricky thing to work with.  And because they are system dependent, I felt that they could only be plausibly manipulated to slow down time, create highly localized time dilation effects, things like that.  To stop time dead in its tracks by means of a so-called ‘stasis field,’ you need an absolute, an indivisible quantum of time, a true universal temporal constant that is stable in and across all reference frames, just like the speed of light in a vacuum.
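For reference, the Planck time drops straight out of the fundamental constants: t_P = sqrt(ħG/c⁵).  Computing it from the standard values:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
C    = 2.99792458e8      # speed of light in a vacuum, m/s

# Planck time: t_P = sqrt(hbar * G / c^5)
planck_time = math.sqrt(HBAR * G / C**5)
print(f"{planck_time:.3e} s")  # ~5.391e-44 seconds
```

About 5.4 × 10⁻⁴⁴ seconds; it’s hard to overstate how far below any measurable interval that is.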

As it turned out, I made the right choice, because I discovered later that Planck-time was essential to several of the systems I had designed, especially when it came to their interaction.


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

The Lévi–Yang Field:

I named my version of the ‘stasis field’ after two of the important researchers behind the development of quantized time, Robert Lévi and C. N. Yang.  They have both made numerous and massive contributions to theoretical and quantum physics, and have already had some physical effects named after them (albeit separately with other researchers), so it seemed appropriate.  And for those of you who don’t know, newly discovered effects and phenomena in physics are often named after researchers who laid the groundwork for their understanding—just like Hawking Radiation, Minkowski Space, Calabi–Yau manifolds, and Yang–Mills theory—and that is the proper convention for their naming.  You can refer to the Lévi–Yang Field as a type of ‘stasis,’ but a physicist would refer to it by the names of the researchers behind its development.  And since Konrad Schreiber is a physicist, I had him stick to the convention rigidly, even where a layman would never find a technical term to be appropriate.  Some readers may find this obnoxious, but I have a responsibility to maintain the integrity and consistency of my characters.

The principle behind the Lévi–Yang Field is simple enough; the quantization of time reveals time to be a property of matter, no different in principle than the spin or charge of a particle.  This is true for both the Planck time and Chronon models, but with the Chronon that additionally implies quantum-scale relativity.  As a property of matter, time has to exist, and also has to move in a certain direction, which is an interesting quantum justification for causality and also the primary reason I had Konrad firmly denounce the possibility of time travel.  So time has to move forward, no matter what, but if time is quantized it can also be manipulated, to a point—the Chronon shows us that time can even be relative on the quantum scale, and the Chronon can be used to effect large-scale, highly localized time dilation.  But you don’t necessarily need to consider the Chronon, since time is a property of matter anyway.  So remove the Chronon from the equation, consider its area of effect limited to the individual particle under consideration, and focus on the absolute of Planck time.  If you can detect the temporal property of matter, then you can effect a change upon that value, even if only by means of the observer effect—you then need to establish a system of effecting specific change on the temporal value, which is a little trickier.  With nothing other than the observer effect, you need to work with probability theory, and detect the specified property in a specified way to maximize the probability of the desired effect.  But just as a particle’s charge can be changed, I expect that the temporal property of matter could also be changed by more direct means.  And that is the Lévi–Yang field: an artificially generated abstraction of virtual temporal particles used to nullify the temporal state of all enclosed matter.

As I said, the quantization of time demands that time moves forward anyway.  Set the temporal state of matter to zero, and a single unit of Planck-time will pass for that matter anyway.  Thanks to the discoveries made by research into Chronons, we know that relativity holds on the quantum scale, but normally the effects are so minute that they go unnoticed.  But with a large scale device that can generate something like the Lévi–Yang field, so long as the field is active, the passing of time for the reference frame within the field is the absolute lower limit—just the one unit of Planck time.  And this effect is of extreme importance to how the Lévi–Yang field interacts with the MRD and the Quantum Core required to control it.  They all work together seamlessly, but only because of Planck-time.


The Lévi–Yang field and the MRD:

I state specifically in Placeholder that the MRD is only capable of functioning when paired with a Lévi–Yang field generator.  And this is fairly important to the story, because as I stated above, it relies on the absolute of Planck time, which in turn forces time to always move forwards, despite the seeming flexibility that one might expect from time being a property of matter that can be rewritten.  Not knowing this is what led Admiral Hupé and his team of physicists at SOLCOM to erroneously believe that the MRD could double as a time machine.  Whether or not Konrad knew that at the time of the conversation with the Admiral is left to speculation; the issue is only addressed in retrospect, while Konrad is in the midst of retconning his own life as a part of his psychological breakdown.

I mentioned the basic functioning of the MRD in my previous post on The Science of Placeholder; but I didn’t get into how the MRD and Lévi–Yang field work together.  In Placeholder, Konrad Schreiber gives a very thorough description in his journal app on his antique HalderTech datapad of how the system is supposed to function, so I will quote it here and elaborate as necessary:

As little an impression as the first REZSEQ made on my conscious mind, the second was all the stronger; if for nothing else, certainly because of the shock.  The initial excitement had passed, so I was free to analyze my sensations of the experience more closely.  I have to confess though, I still do not understand why I perceived the second jump so differently from the first, why I experienced it at all; I understand the physics of the membrane resonation sequence, I understand what the MRD is doing inside itself and to spacetime, and I understand how it is even possible when you look at all the steps to the sequence in order—because without a quantum computer core, the process is in fact impossible.  Not because of the processing requirements of the resonation equations—any optical supercomputer tower could calculate those within a reasonable amount of time.  Not because the format of data is any different when output from a quantum or optical core—both can output in binary, ternary, senary, octal, decimal, hexadecimal, sexagesimal, or whatever other radix may be preferred.  And certainly not because of the more commonly known additional features of a quantum core—as interesting a feature as quantum information sharing may be, its useful functioning is relegated to a single subsystem within the core, and has no bearing here.  The key factor that makes quantum computer cores essential to the resonation sequence is that the processing is time-independent; no matter how complex a program you feed into a quantum processor, the output is always returned within one measure of Planck-time.  Anything less than one unit of Planck time is effectively instantaneous, because that one unit is the absolute lowest measure of time, and is indivisible.
As such the MRD is able to receive its instructions prior to enclosing the entire ship in a Lévi-Yang field that temporarily nullifies the temporal state of all enclosed matter (MRD and Quantum core inclusive), because the effect of the Lévi-Yang field from an external perspective is measured in whole units of Planck-time only.  Interestingly enough, when the generator of a Lévi-Yang field is itself enclosed within the generated field, the maximum and minimum duration of its persistence are reduced to 1 measure, so the complete REZSEQ is actually performed within one single measure of Planck-time.

The steps of the REZSEQ are as follows—this breakdown is essential for comparison to my personal experience of the sequence, which, I might say again, should have been impossible.  First, the completed program is executed from the drive terminal on the Flight deck; it is the only terminal on the entire ship outfitted with a quantum interface, and as such the program is executed simultaneously within the quantum core itself, where the instructions actually mean something.  Second, since the MRD also contains a quantum interface for direct input from the core, the complete program output is fed directly to the drive during the same instant that it is executed on the Flight deck drive terminal.  Third, having received the complete set of instructions from the Quantum core, the MRD initiates the Lévi-Yang field generator to temporally isolate the desired mass, in our case the entire ship and all of its contents, using the gravitational impact on spacetime as the defining markers.  At the same time it resonates that entire temporally-suspended mass as if it were a single fundamental particle, just an extremely complex p-brane collapsed in on itself a trillion times over; this resonation creates a p-dimensional spatial potential vector, with a modulated frequency that defines all the matter within it.  Fourth, this spatial potential vector is forced (by the universe, actually) to collapse in on itself into a virtual singularity, while it maintains the same internal complexity; this energizes the resonated mass sufficiently to merge two disparate points in spacetime, the specific terminus point a function of the collapse—for the duration of the single measure of Planck time, the same mass could be said to exist simultaneously at both points in spacetime, but because the collapse has forced them to coexist as one point, there is no actual duplication of matter.
Fifth, when the single measure of Planck-time is completed, the merged points become unstable and separate; since the resonated mass is technically in both places, but in actuality still at the origin point, the universe in an attempt at self-correction forces the resonated mass to take the empty terminus coordinates, because it thinks that the origin point is occupied by something else.  If the MRD cannot resolve the terminus coordinates, then the resonated mass is rejected and returns to the origin coordinates anyway.  Either way, the mass is simultaneously released from the Lévi-Yang field and returned to its proper spatial definition, or ‘size’.  So being that, technically speaking, the first and last events of the entire sequence are the activation and deactivation of a temporal nullification field, there should be absolutely no way for a human being to perceive an interval between the origin and terminus of a REZSEQ—time dilation or otherwise.  You should not even have the time for the neurons to fire for a blink before ending up where you want to go.

— Placeholder, §§ X-A.2-03 (pg. 79–80)

First of all, ‘REZSEQ’ is just the acronym used by the crew instead of saying ‘membrane resonation sequence’ or the less accurate but better-known term ‘jump.’  Within the text of Placeholder, you’ll notice that Konrad uses it both as a noun and a verb.

You’ll notice that he specifically identifies the Lévi–Yang field as the most important aspect of the REZSEQ; it is activated first, simultaneously with the execute command from his computer terminal on the Flight deck, and the entire sequence takes place within the field and only takes one measure of Planck-time to complete.  It is as instantaneous as is possible for a change to take place in the universe, besides the simultaneity of quantum information sharing (but any physical change of state or location requires a time interval, without exception). You’ll also notice that simultaneity is an important function of the Quantum core necessary for controlling the MRD, but I’ll deal with that in its own post.

Other than that, I think the REZSEQ speaks for itself; but it is worth noting in conclusion the rationale for enclosing the system within a Lévi–Yang field in the first place.  You might wonder, why not skip that step and just resonate the mass?  The gravitational effect of a given mass on spacetime is enough to calculate the resonation equation itself, so why freeze the mass in time before virtualization of the macroscopic mass into a point-particle?  The answer is actually surprising.  If the mass isn’t frozen in time, the amount of time it takes to scan the ship and build a model of it for the actual resonation sequence, albeit negligible to everyday human perception, is not quite negligible enough to prevent temporal disparity upon reintegration.  To a truly stable system, such as a homogeneous solid inanimate spherical object at rest in a vacuum, that might not even matter, because the mass is isolated, stable, and undergoing no change in state or position; but for a ship traveling at high velocities through space, and for the people within it who have a billion different chemical and biological processes going on within their bodies at any given moment, it matters very much.  Without a Lévi–Yang field to focus the resonation into a single unit of Planck time, whatever final state the resonated mass reintegrated as, it would not match the original mass; it can be safely assumed that the results would be catastrophic.


That’s it for now.  In my next post, I will address the discovery of the graviton in the SPQS Universe, its relation to the Higgs boson, and other means of achieving ‘artificial gravity’ in a space ship.

— the Phoeron

Placeholder, SPQS Universe

The Science of Placeholder, Pt.1

If you’ve been working through Placeholder, you may have noticed that the first half of the book is a little science-heavy.  I determined right from the first moment I decided to write science fiction that I wanted to write Hard Sci-Fi; no compromises, no fluff, and no technobabble.  Being only a hobby physicist at best, I naturally doubted my ability to accomplish this feat, but with the kind guidance of the web and some choice textbooks on all the subjects that would have to be presented with an explanation to be believable, I went to work.  It’s up to my readers now to tell me how well I fared.

Most important to the story, I had to find a practical way for Konrad Schreiber’s ship, the SFS Fulgora, to travel the stars.  It had to be new, experimental, and come with a list of potential risks that could not be determined until it was tested out by a human crew.  But any believable piece of technology also has to come with certain limitations.  I’ve never liked ‘cure-all’ devices, and prefer to leave them out of my stories.  It’s one thing to have a MacGyver-like character, who has the rare ability to snatch victory out of the jaws of defeat with the most random and commonplace of objects, but that’s a very different circumstance, and a unique character type that doesn’t lend itself to regular reuse.  So I decided to make a piece of technology that works, but is also flawed because the science behind it is poorly understood.  I think I came up with something pretty special.


(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

Realistic Social and Scientific Advancement:

Placeholder is set just over 120 years in the future; not long in the grand scheme of things, but long enough for some drastic political changes without necessitating a drastic change in science, technology, and culture.  I made the assumption that science would continue to progress in most directions, but certain political changes are known from past experience to cause large-scale regression in specific areas, mostly in education, technology, and other standards of living.

To the point: the science of Placeholder is a century of refinement to the science of today; I established certain breakthroughs that would further the story, but I made a point of being stingy about it.  Let’s face it, major scientific breakthroughs that really change the world don’t come around all that often.  We haven’t even lived up to the technological expectations of the 1950s for our generation!  So I only allowed for about as much change: not nearly as much as we might hope for, but better enough that it’s worth imagining.

I made the further assumption that the physics community might actually come to some sort of agreement on an acceptable compromise T.O.E.  I figured it unlikely that you could ever hope for an ideal candidate in such a situation; it would have to fall somewhere between String Theory and Loop Quantum Gravity, a theory that no one would be truly content with, but which was ultimately functional.  And that’s the point of science; it doesn’t have to be true, it just has to work.  Case in point: General Relativity.  It works.  It accurately predicts everything it was designed to, and we wouldn’t have a working theory of gravitation at astronomical scales without it.  But General Relativity works under the assumption that spacetime is warped by gravity; from what I’ve read of Hawking and Einstein, I’m inclined to acknowledge only that the relativistic model of spacetime is warped by gravitational fields for the purpose of analysis, not that spacetime is actually warped by gravity.  Thus, hypothetical devices like the Alcubierre drive are impossible; if spacetime is not warped by gravitational fields, then you cannot artificially warp spacetime by manipulating gravitational fields to achieve effective FTL travel.  Of course, I’m not willing to bet that no artificially created field is capable of warping spacetime; I’m just saying that, in my opinion, it isn’t something gravity does.

Another place that General Relativity fails is in the quantum world.  This argument is the best known and most talked about.  But what’s not talked about enough is that General Relativity also falls apart on the cosmic scale (i.e., intergalactic distances and beyond); and strictly speaking, it does a rather poor job of explaining the Big Bang on its own, even if it was the scientific revolution that allowed the theory to be proposed in the first place.  All sorts of unscientific fluff has had to be invented to keep General Relativity working.  Dark matter, dark energy, MACHOs and WIMPs, you name it.  But all the same, it is not without its merit.  Einstein gave us far more than he intended when he distilled mass–energy equivalence down to E = mc^2.  That equation isn’t just a cornerstone of Relativity, it’s a cornerstone of all modern science.
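For the record, mass–energy equivalence actually comes out of Special Relativity (1905); what summarizes General Relativity proper is the set of Einstein field equations, relating the curvature of spacetime to its energy–momentum content.  In standard notation:

```latex
% Mass–energy equivalence (Special Relativity, 1905):
E = mc^2

% Einstein field equations (General Relativity, 1915):
% curvature on the left, energy–momentum content on the right.
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
```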


M-Theory and the MRD:

I decided to go with M-Theory as the dominant physical theory in my future history, but not without certain changes that will offend many string theorists.  First of all, I’m perfectly aware that M-Theory is incomplete and poorly understood.  But within Placeholder, I treat it as a complete Theory of Everything.  In order to accomplish this with minimal effort, I had the physicists of the SPQS force M-Theory to conform to General Relativity first, and largely ignore the disharmony between relativity and quantized theories of gravity.  This is, and everybody knows it, the easiest way to create a Theory of Everything; but obviously, the theory will have holes.  Firstly, all String Theories were designed to be fully quantized; General Relativity is a classical system that deals with fields as abstractions of force, just like Newtonian Mechanics, instead of the quantum force-carrying virtual particles used in the Standard Model of particle physics and as the basis of all string theories.  If you try to make a quantum system fit into a classical system, you’re going to run into problems.  Secondly, no fully accepted quantum theory of gravity exists, although several decent candidates have been proposed.  So far, I think String Theory’s graviton model is the best, but it doesn’t exactly fit the Standard Model.  And until someone actually discovers the graviton and/or the Higgs boson, it’s anyone’s guess as to which is more accurate.  Thirdly, M-Theory predicts 11-dimensional spacetime, while General Relativity only allows for 4 (although it’s interesting to note that they both approach spacetime in the same way).  11-dimensional M-Theory predicts 10 dimensions of space and one dimension of time; three of these spatial dimensions are the familiar Euclidean dimensions of space, and the other seven are generally considered ‘compact’ dimensions: they have collapsed in on themselves and assumed a number of strange geometries that prevent us from directly detecting them.  The same is the case for most versions of string and superstring theory, although the total number of dimensions ranges from 10 to 26.  M-Theory is by far the cleanest when it comes to handling extra dimensions, but I still think it’s lacking something.
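The dimensional bookkeeping of M-Theory, as described above, can be written out explicitly (this is just a counting identity, nothing deeper):

```latex
% 11-dimensional M-Theory spacetime:
D = \underbrace{3}_{\text{extended space}}
  + \underbrace{7}_{\text{compact space}}
  + \underbrace{1}_{\text{time}}
  = 11
```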

There are also a host of more technical problems inherent in forcing Relativity onto any interpretation of String Theory, but currently they are little more than mathematical musings.  If you’ve ever tried your hand at higher-dimensional vector fields, you’ll understand me when I say it’s an exercise in futility.

Despite the problems, I settled on a supersymmetric generalization of General Relativity fully integrated with M-Theory.  This is known as 11-dimensional maximal SUGRA, and has strong support.  And since it allows for the graviton, among other interesting possibilities, I felt it was best.

Now, Konrad Schreiber, the quantum computer programmer and physicist anti-protagonist of Placeholder, disagrees with relativistic M-Theory, despite the obvious strides that 11-dimensional maximal SUGRA has made.  In the SPQS Universe, it has led to the discovery and taming of the graviton, a unified understanding of the physical universe, and the production of a highly useful device, the “Membrane Resonation Drive” (or MRD for short).  The MRD is able to measure a mass by the impact of its gravitational field on spacetime, define that mass as a single point-particle, and move that mass effectively anywhere in the known universe by merging two disparate points of spacetime into a single point via a singularity created as a product of the artificial point-particle’s spin.  Nevertheless, it’s not good enough for Konrad.  He’s the type of person who thinks he knows better even when he’s blatantly wrong.  Within the book, he goes to the trouble of rewriting M-Theory to remove all artefacts of General Relativity and make it a fully quantized theory of everything.

In the book, I don’t specifically deal with who is more correct, Konrad or the rest of the physics community.  Certainly, Konrad’s interpretation works for him, since he understands quantum mechanics better than general relativity (which is a running theme throughout the book).  As far as the physics community as a whole is concerned, there’s no reason why the MRD, a jump-drive in principle, shouldn’t double equally well as a time machine.  But Konrad’s conception is already limited because he doesn’t allow for that.  Also, his new model of the Universe prevents the MRD from jumping between galactic clusters.

The one thing both systems do adequately well is jump beyond the confines of the physical universe; both the relativistic model of M-Theory that he starts off using at the beginning of the mission, and his own quantized version that he develops once all his crewmates are dead, allow for the creation of an unstable 12th dimension when the MRD is forced to jump the ship to an invalid set of coordinates.  Konrad naturally blames a fault in the actual design of the MRD, but readers can take that as mere speculation.  There is, after all, a rather beautiful emerging field known as “Two-time Physics”: it has 10 spatial dimensions and 2 temporal dimensions.  It is, without question, worth looking into.

Konrad’s approach to understanding this unstable 12th dimension, which he names “the Void,” is even more unorthodox than his usual work in physics: he considers temporal and spatial dimensions to be supersymmetric and intertwined, making 11 spatiotemporal dimensional couplets.  If you could separate them, that would make for 22 dimensions; but his model does not allow for that.  What it creates, though, is an interesting abstraction of 4-dimensional spacetime below the galactic distance scale. (For all you real physicists and mathematicians reading this post, this would be an appropriate juncture to check my work).
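Konrad’s bookkeeping reduces to a simple count; this is just the arithmetic implied above, written out, and says nothing about whether the fictional model is consistent:

```latex
% 11 supersymmetric couplets, each pairing one spatial
% with one temporal degree of freedom:
\underbrace{11}_{\text{couplets}} \times \underbrace{2}_{(x_i,\, t_i)}
  = 22 \text{ dimensions, if the couplets could be separated}
```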


That’s it for now.  In my next posts on The Science of Placeholder, I will be addressing the Lévi–Yang Field, Artificial Gravity, Molecular Reconstruction, Implementation of Metamaterials in Space Engineering, Nuclear Propulsion and Power in Space, Quantum and Optical Computers, and of course, Quantum Computer Programming Languages.


— the Phoeron