Placeholder, SPQS Universe

The Science of Placeholder, Pt.7

We’ve finally arrived at my favourite subject: computers.  Specifically, the quantum and optical computers used in the SPQS Universe.  Amusingly enough, I designed my first optical/quantum computer hybrid when I was 14 (it was the summer between grades 9 and 10, the same summer I postulated my first TOE, a hyperspace theory of quantum gravity—yeah yeah, it was hopelessly derivative and full of significant holes, but I was only a teenager and I had only just begun studying quantum mechanics and string theory.  Hyperspace seemed like a legitimate option until I realized it was the theoretical physics equivalent of the Aether).  At the time I didn’t really believe a solely quantum computer was possible, and I didn’t have much hope for any significant progress with nanocomputers either—even though around the same time I was a member of the Nanocomputer Dream Team.

Anyway, to the point.  In designing an optical/quantum computer hybrid, I sought to resolve certain issues with early conceptions of quantum computers.  Definitions of quantum data.  Quantum error correction.  Quantum logic gates.  All the things that made the dream of a quantum computer seem the stuff of technobabble.  Now, all of those issues have been resolved and my hybrid design is pretty much useless—quantum computer science has come a long way since 1996/97, and no longer needs to rely on interfaces for every different system within the quantum computer.  And most importantly, quantum processor cores are now a realistic possibility (and I’ve heard there are some working prototypes of single-stage quantum logic gates, too), so the entire computing model can fully harness the principles of quantum mechanics, from processing to memory to storage.

Optical computers are really something else, though.  They may not be quantum, but you can design them with quantum optics and metamaterials in mind (which is implied in the SPQS Universe).  And a lot of progress has been made in the field of optronics since ’97—we can certainly expect to have end-user optical computers before end-user quantum interfaces, decades in advance really.  The funny thing about quantum computers is that everything we think we know about computer science has to be thrown out.  Quantum computers have their own logic, their own language, their own unique mechanics and conception of information.  Digital information isn’t equivalent to quantum information, and won’t be transferable in all cases.  It’s a whole new world in computing, which is scary, and one of the main reasons the field is being held back.  Optical computer science, though… the hardware may be different, better, faster, more reliable, but ultimately, we can impose the same electronic computing model onto the platform at no real (immediate) cost to us.

Of course, optronics are capable of much more sophisticated computing models, but our understanding of binary digital information is essential to maintaining functional computing as we migrate to better hardware concepts.  Later, once optical computing hardware is ‘perfected’, we can start playing with more sophisticated data models and error correction designed just for them.  I rather like a base-six data model (i.e., instead of using binary code for the true computer language, you use senary), because quantum computers run optimally on a senary base code (although right now all quantum computer science is being done with binary in mind, since error correction for binary is the easiest to accomplish, and quantum states with only two options are the easiest to measure).  I also think it would be useful to start migrating data to a datatype that is equally understood by quantum and optical computers; as I already mentioned, binary information is not equivalent between quantum and electronic computers, but senary information would be equivalent across all platforms that used it as the basis of information, for several really good reasons.
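
To make the radix point concrete, here is a minimal sketch (in Python, purely illustrative, and not anything from the book) of re-expressing ordinary binary data as senary digits.  The underlying number stays the same; only the encoding changes:

```python
def to_senary(n: int) -> str:
    """Express a non-negative integer as a base-6 (senary) digit string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 6)
        digits.append(str(r))
    return "".join(reversed(digits))


def from_senary(s: str) -> int:
    """Parse a senary digit string back into an integer."""
    return int(s, 6)


# The byte string b"SPQS" is just a number; binary and senary are simply two
# different spellings of that number.
value = int.from_bytes(b"SPQS", "big")
senary = to_senary(value)
assert from_senary(senary) == value
print(bin(value), senary)
```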

So you see, optical and quantum computers can work well side-by-side, better than electronic and quantum computers can.  But it’s important to understand why you would need them to work side-by-side.  Why use optronics at all when you have ‘quantronics’?  Well, for one thing, even if you have a solely quantum computer where all data is processed and stored internally through manipulations of quantum states, you still need to build an interface that can interact with it so that a human can make any use of the data.  A quantum interface has to be sufficiently advanced to handle I/O of quantum information without itself being quantum.  An electronic quantum interface would suffice for most personal devices, but then, a quantum interface is serious overkill for your average end-user anyway.  Scientists need quantum computers, military personnel and field operatives need quantum computers, but say, for example, there were a quantum-based iPod: that would be so over-the-top wasteful that you really have to laugh at it.  So for the people who legitimately need quantum computers, a quantum interface up to the tasks they have in mind is necessary.  The best candidate is an optical-based quantum interface, so that it can handle the amount of data a creative scientist can fathom.

Imagine being able to take a sample of human tissue and sequence the DNA in under ten seconds, while simultaneously networking with other quantum computers and searching for a positive match across all international databases.  Imagine being able to reconstruct a complete interactive 3D model of any organism from its DNA sequence alone.  Imagine being able to take the known properties of any given planetary system and automatically generate a working gravitational model, including estimates regarding non-observable bodies.  Imagine being able to solve an n-body problem for a quantum system without generalization in as little time as it takes to read this sentence, where n is practically limitless.  Imagine being able to accurately estimate the probabilities of a given set of actions based on an individual’s known psychological profile, and then predict all their behaviour with 99% accuracy up to six months in advance.  These are just a few examples.  Quantum computers can do all these things, if they have an optical-based interface.  But if you restrict a quantum computer with an electronic interface no better than our current mobile technology, then the quantum computer will be limited to handling data on the same scale.

Of course, there is a work-around.  You can set up a program similar in concept to WolframAlpha: a quantum core server does all the actual calculations, all the web searches, all the correlation of data.  All you put in is one simple equation, and all you get back is the result.  You could then do everything I listed in the previous paragraph with an electronic-based quantum interface, with just one downside: you’d only ever have access to the front-end interface, you’d never get to examine the raw data, and you’d have to rely on the accuracy of the back-end without ever being able to directly examine its work or process.  For most people, that’s just fine.  I use WolframAlpha for just about everything now, instead of Mathematica, because it replaced 90% of what I used Mathematica for.  But sometimes, you just need more.  And I’m just a hobby physicist at best, a writer of sci-fi; real scientists, universities, space agencies, law enforcement, militaries and intelligence agencies need something a little more robust than WolframAlpha can offer (although a streamlined tool that just scans someone’s DNA and returns a positive ID would be more than a little useful for law enforcement; add a back-end that reconstructs a 3D model of unknowns and is integrated with facial recognition software and you have the perfect system for catching criminals from just basic forensic evidence).
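
Just to illustrate the shape of that thin-client model (the endpoint and every name here are hypothetical, invented for this sketch, not any real service): the front end sends a query and receives only the finished result, never the intermediate data.

```python
import json
from urllib import request

# Hypothetical back-end endpoint; in this model all the heavy computation
# happens remotely, and the client only ever sees the finished answer.
BACKEND_URL = "https://example.invalid/quantum-core/solve"


def solve(equation: str) -> str:
    payload = json.dumps({"query": equation}).encode("utf-8")
    req = request.Request(BACKEND_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]  # the result, nothing else


# solve("integrate sin(x)^2 from 0 to pi")  # would return just the answer
```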

Well, that’s a little more background than I at first intended.  Suffice it to say that quantum computers will make excellent tools, and we have yet to even fully grasp what they are capable of.  But since they require interfaces to use, we should make it our goal to perfect a computing platform that can handle quantum data.  Currently, the most feasible computing platform that can do that is optical-based.  And yeah, most of us will never need to use a quantum computer once we have optronic devices, but that may change.  Like I said, we have yet to fully grasp what a quantum computer can do, so who knows what use we might find for them?  In Placeholder, I really only scratch the surface—but I think it gives a pretty good idea of where humanity can go with them.

 

(Spoiler Alert!  The rest of this post discusses technical details of Placeholder’s plot and primary characters.)

The Basics of Optronics:

Optronics can and will entirely replace electronics; and just as not everything electronic is strictly a computer, not every optronic system will be an optical computer either.  In the SPQS Universe, electronic devices are considered ‘pre-war antiques’; as an average citizen of the SPQS, for example, if you were to pop open your iPod or radio equivalent, you wouldn’t see a circuit board and wires, you’d see an optical wafer and fibre-optic cables.  A particularly sophisticated (by our current standards) optronic system would be designed with metamaterials and quantum optics in mind, so that the fibre-optic cables and optical wafer wouldn’t have to be coated (because metamaterials can allow certain lightwaves to escape while trapping others, the data light remains unseen in the optical paths and cables, while additional indicator light escapes specifically to be seen), and an experienced optronics technician would know by the patterns of light whether something was wrong with the device.  You wouldn’t need to add sensors or indicator lights to the wafer, because the optical paths on the wafer would show you directly whether light pulses were passing through the correct channels and being processed correctly.  That may require slightly more background training, but ultimately, it makes the job of troubleshooting an optronic system much easier than troubleshooting an electronic one.

But all that really isn’t enough to justify moving everything to optronic systems when it works perfectly well as an electronic device.  Sure, some manufacturers might say, “well, it’s too expensive to keep producing electronic equipment when most of our assembly line has moved to optronic systems.”  But certain manufacturers might never see the benefit of optronics over electronics unless they’re given something more compelling.  After all, it’s the big corporations who don’t see a need to change a product that keeps selling that hold back technological progress—just like the electric car, which could have gone commercial in the 70s but is still barely commercially viable even now, thanks to the combined efforts of petroleum giants and car companies that put more money into paying off lawsuits than into manufacturing.

One of the most compelling reasons to switch to optronic systems is their energy efficiency.  A self-contained optronic wafer that does the same job as an electronic circuit board uses less energy than the equivalent electronic system.  For commercial end-user mobile products, this is a big boon.  The same batteries we’re using now can be made to last twice as long between charges, so the ten-hour battery in your latest iPhone or MacBook suddenly becomes a twenty- to twenty-four-hour battery.  This, along with the potential for a boost to processing power for gaming and a variety of other mobile tasks which could always be improved, serves to increase the perceived product value over the competition, and thus encourages consumers to choose optronic systems over electronic ones.  The companies that switch to optronic systems first will definitely gain control over the markets.
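
A back-of-the-envelope check on that battery claim (the figures here are assumptions chosen purely for illustration, not specifications from the book or any real device): if the optronic board draws roughly half the power for the same work, runtime on the same battery roughly doubles.

```python
# Illustrative, assumed figures only.
battery_wh = 50.0          # watt-hours in a typical laptop battery
electronic_draw_w = 5.0    # assumed average draw of the electronic board
optronic_draw_w = 2.3      # assumed draw of an equivalent optronic wafer

print(battery_wh / electronic_draw_w)   # 10.0 hours
print(battery_wh / optronic_draw_w)     # ~21.7 hours
```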

Optronic home computers will also use much less energy than electronic computers do, so consumers will notice a reduction in their domestic energy bills over the course of the year.  And for offices with 50 or more computers basically always on, the drop in energy use will be so drastic that they’ll be able to redirect large amounts of capital to other purposes after only the first year.  For one thing, in lean years they won’t have to lay off as many employees, because their basic overhead is far less restrictive.

Optronic systems also offer some interesting possibilities for supercomputers and server rooms.  You already have improved processing power at reduced energy costs, and with the careful choice of the right metamaterials, you can reduce excess heat to negligible levels and save even more of the energy normally spent on keeping supercomputers and server rooms cool.  With less excess heat you can also pack more processing cores into the same space, so the same supercomputer tower can hold two or three times as many cores.

The possibilities are endless.  The optronics revolution will be the next computing revolution, and it is entirely attainable within the next twenty years (which is something you can’t safely say for end-user quantum and nano-computers).  The only catch is whether or not the computer giants can be convinced to make the move, because optronic technology requires nano-scale engineering and robotic assembly lines.  You won’t see many garage-based optronic computer companies coming out of the woodwork until domestic robots are the norm, and sophisticated enough to replace a robotic nanoscale assembly line like Intel’s.  But in the end, that’s another compelling reason for the computing giants to jump on optronics now: they won’t have any competition for at least ten years, and thus they can control the optronics market.

And yeah, okay, metamaterials are a bit on the expensive side right now.  But most research to date has gone into electromagnetic metamaterials for direct photonic manipulation, so their use in optical computers is ultimately their most natural purpose.  If metamaterials are going to be used for anything in the near future, the first choice should obviously be something with mass-market appeal, and other technology that relies on the same basic principles could piggyback on the success inherent in optical computing, creating a general environment suitable for widespread improvement of technology along optronic lines.  The easiest way to get the cost of metamaterials down is therefore their immediate use in optronics.  The same assembly lines that produce optronic components could easily be converted to producing other metamaterials, since they are all made by the same process of nanoscale laser-etching and layering.  Because of the mass appeal inherent in optronics, thanks to its increased processing power and reduced energy requirements, we could create a metamaterial economy almost overnight; and that metamaterial economy could go on to give us the raw elements we need to start producing some seriously impressive gadgets at no more upfront expense than our current tech industry.  And most importantly, we can prepare our tech industry for adaptable (and most probably bi-pedal humanoid) multi-purpose robotics and quantum computers.

It may be slightly expensive at first, but those who do take the risk will profit more than they can even imagine, because optical computers and the metamaterials required to produce efficient models of them are the keys to all our future technology currently in the works.

 

The Basics of Quantum Computers:

Granted, quantum computers are more speculative than optical ones, but over the past few years great strides have been made, and the models have been demonstrated to be functional.  What we don’t have yet is a full quantum core, but thanks to the non-stop efforts of a few quantum computer scientists and engineers, we at least know that the effort isn’t wasted (which is a lot more than can be said for nanocomputers).  Actually, in a way you could almost say that quantum and nano-computers are related fields, only quantum computers have been demonstrated to work and nano-computers have not.  But that’s because a nano-scale molecular computer model ignores quantum phenomena, whereas a quantum-scale subatomic computer does not.  If nanocomputer aficionados decided to incorporate quantum chemistry into their molecular physics models, they might make a little more progress.  If nothing else, it’s an angle worth exploring.

In the meantime, while nanocomputer scientists continue to fail, quantum computer science has made some amazing strides.  One of the main problems with the field in the 90s was quantum error correction.  As many of you are surely aware, quantum states change when observed, and if you know one quantum state of a particle, you can’t know its others; or more precisely, measuring one quantum property reduces the precision available for measuring its complementary properties at any given moment of (Planck) time, and the measurement introduces a change to the whole quantum system, so you can’t just go back, measure the same particle again, get the same result for the same quantum state, or recover the precise values of the other quantum states at the time of the original measurement.  A looped logic gate would demonstrate that quite aptly; it would be as if a whole new particle were passing through it each time.  Quantum error correction introduced a model that allows us to overcome that problem, though.  Because measurement is known to introduce a change, the logic gate is designed with a second, error-correcting gate that resets the measured particle to the state that was recorded.  This is difficult and complicated even for a binary data system, where the logic gate only tests a single two-valued property of the particle.  Imagine testing for the flavour or colour of a quark in a senary data system.  The math gets pretty wild, let me tell you.  But at least we know that with binary qubits, quantum error correction works, and the error-correction processing has now been layered in such a way that measurements are accurate effectively all the time.  This is a huge step forward in quantum information theory, by the way.  It’s almost unprecedented.
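
The standard textbook illustration of this idea is the three-qubit bit-flip code: one logical qubit is spread redundantly across three physical qubits, and only parity-type ‘syndrome’ operators are measured, which locate a bit-flip error without disturbing the encoded amplitudes.  Here is a minimal NumPy sketch of it (a toy model for illustration only; not anything from the book or from any particular hardware):

```python
import numpy as np

# Single-qubit basis states and Pauli operators.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def tensor(*factors):
    """Kronecker product of a sequence of vectors or matrices."""
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

# Encode an arbitrary logical qubit a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8j                       # any amplitudes with |a|^2 + |b|^2 = 1
encoded = a * tensor(ket0, ket0, ket0) + b * tensor(ket1, ket1, ket1)

# A stray bit-flip error hits the middle physical qubit.
corrupted = tensor(I, X, I) @ encoded

# Syndrome "measurements": the parity checks Z0Z1 and Z1Z2.  On a code state
# with at most one bit flip these have definite values, so evaluating them
# locates the error without touching the encoded amplitudes a and b.
s1 = round(float(np.real(np.vdot(corrupted, tensor(Z, Z, I) @ corrupted))))
s2 = round(float(np.real(np.vdot(corrupted, tensor(I, Z, Z) @ corrupted))))

# Map the syndrome pair to the qubit that flipped and undo the error.
fix = {(+1, +1): tensor(I, I, I),
       (-1, +1): tensor(X, I, I),
       (-1, -1): tensor(I, X, I),
       (+1, -1): tensor(I, I, X)}
recovered = fix[(s1, s2)] @ corrupted

print(np.allclose(recovered, encoded))  # True: the logical state survives
```

The point to notice is that the syndrome values identify which physical qubit flipped without ever revealing, or collapsing, the amplitudes a and b.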

One of the other main problems was the very definition of quantum information.  But the model of the qubit has resolved that quite successfully.  A qubit is one unit of quantum information, just as a bit is the smallest unit of information on a traditional, classical computer.  The qubit model is “equivalent to a two-dimensional vector space over the complex numbers” (Wikipedia.org, Qubit), which is probably a meaningless statement to most people, but to mathematicians is quite evocative.  Actually, since the preface to the Wikipedia article is so good, I feel compelled to quote it here:

Qubit

In quantum computing, a qubit (pronounced /ˈkjuːbɪt/) or quantum bit is a unit of quantum information—the quantum analogue of the classical bit—with additional dimensions associated to the quantum properties of a physical atom. The physical construction of a quantum computer is itself an arrangement of entangled[clarification needed] atoms, and the qubit represents[clarification needed] both the state memory and the state of entanglement in a system. A quantum computation is performed by initializing a system of qubits with a quantum algorithm —”initialization” here referring to some advanced physical process that puts the system into an entangled state.[citation needed]

The qubit is described by a quantum state in a two-state quantum-mechanical system, which is formally equivalent to a two-dimensional vector space over the complex numbers. One example of a two-state quantum system is the polarization of a single photon: here the two states are vertical polarisation and horizontal polarisation. In a classical system, a bit would have to be in one state or the other, but quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.

— from Wikipedia.org, Qubit

Fascinating stuff.  I urge you to at least read the complete article on Qubits.
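
To unpack the vector-space language a little: a qubit’s state is just a pair of complex amplitudes, and a measurement in the computational basis returns 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.  A quick illustrative sketch of my own (not from the article):

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1> is a unit vector in C^2.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # an equal superposition
psi = np.array([alpha, beta])
assert np.isclose(np.vdot(psi, psi).real, 1.0)  # normalisation check

# Born rule: outcome probabilities are the squared amplitude magnitudes.
probs = np.abs(psi) ** 2
print(probs)   # [0.5 0.5] -- unlike a classical bit, both outcomes are live

# Sampling a measurement collapses the superposition to |0> or |1>.
print(np.random.choice([0, 1], p=probs))
```

Gates, entanglement, and the error correction sketched above are all just linear operations on vectors like this one.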

And then of course there was the breakthrough with the quantum logic gates.  They are a system so beautifully entangled that it is as if the quantum world has been turned into art.  Seriously, I lack the words to describe how I feel when I see a quantum system like that… awe is in there, but it’s more than that.  So much more…

Anyway, back to the point.  Quantum computers have come a long way.  But there are theoretical aspects to quantum computers that have not yet been demonstrated.  One thing a lot of sci-fi authors talk about is quantum information sharing.  Since the entire universe originated with the Big Bang, and all the matter and energy that makes up the universe was contained within one single indefinably small point particle, you could reasonably argue that all particles in the universe are somehow entangled.  The truth is, some particles are more entangled than others, and the readiness with which you can exploit that entanglement is limited.  Also, certain recent experiments with quantum information sharing suggest that quantum information is likewise limited to propagating at the speed of light in a vacuum, which creates a problem for sci-fi authors who have been relying on instantaneous quantum information sharing for their stories.  Well, I too am guilty of that potential fallacy.  The quantum cores in the SPQS Universe contain modules which specifically harness instantaneous quantum information sharing, or ‘spooky action at a distance’ as Einstein called it.  In short, the SPQS has an original quantum communication module, devised and built shortly before the mythical last war.  All new quantum communication modules are made from entangled particles passed through the original, and thus remain in constant communication with each other and the original.  This may or may not be feasible or realistic.  But until it’s been conclusively proven that entangled particles don’t share information instantaneously, I’m going to keep using it.  Here’s hoping I don’t end up looking like a complete fool (fingers crossed).

 

Quantum and Optical Computers in the SPQS Universe:

As I’ve already said, quantum and optical computers are used side-by-side in the SPQS Universe.  To be more precise, optical computers are the norm, but they integrate well with quantum computer systems when those are needed.  Quantum computers are considered more dangerous than nuclear weapons by SOLCOM, of course, so the technology is strictly regulated, and only a handful of Quantum Computer Programmers exist.  Konrad Schreiber is one of them, and while the SFAF was willing to put up with a lot from him, they were ultimately sending him and the rest of the crew of the SFS Fulgora to their deaths.

The apparent reason for such strict regulation of quantum computers is mentioned pretty early on in Placeholder.  As a central theme of the story, it had to be addressed in the earliest convenient passage.  In summary, quantum computers were blamed for the devastating “Last War of Earth.”  Supposedly, according to the running myth that keeps the SPQS together, when Sol Invictus (the Imperial Roman version of the Sun god) returned to Earth enshrouded in the flesh of a man, humanity had just finished nuking itself half-way to oblivion.  But it all started when a certain notorious group of terrorists that we’re all too familiar with got their hands on a quantum interface and used it to hack into the superpowers’ defence mainframes, cloning the nuclear codes and keys through sheer quantum decryption power and launching the superpowers’ arsenals against one another to trigger World War III and ensure their mutual annihilation.  Pretty scary idea, but I’m sure that certain steps are already being taken to prevent that eventuality.  Obviously, the military will have functional quantum computers a long time before we do, and any sensible government with nuclear weaponry will be only too eager to adopt quantum-resistant cryptography, such as unbreakable lattice-based keys (and when I say unbreakable, I mean unbreakable even to better quantum decryption programs).  So when you really look into the matter, you realize that such a story cannot possibly be true.  What, then, actually happened?

I don’t reveal it until the very end of the book, and it would be a shame to spoil it here, even surrounded by so many other spoilers.  The myth is interesting, and sufficiently scary to keep quantum computers under lock and key, but the truth is much worse for the characters in Placeholder.  Plus, the real situation that led to the postwar power-grab by the SPQS is the premise of at least one of the planned sequels to Placeholder, but it is important enough to my future history that it rightly deserves a series of its own, with one of the sequels in the Placeholder series merely leading into it.

So, optical computers are the norm in the SPQS.  The SFAF uses them for pretty much everything, and appliance-like optical computers are provided to the average citizen, although they are designed in such a way as to serve an obvious purpose, and it would be near impossible for any normal end-user to access or decompile the source code, or reverse-engineer the actual device.  Imagine Apple TVs, iPods, iPads and iPhones being the only computing devices, with no access to Xcode or the other Apple developer tools.  Or imagine OS X with no terminal access and no utilities for revealing hidden files or the like.  You’d no longer be able to benefit from the UNIX experience of real computing.  Everything would be clean-cut, user-friendly, for specific approved tasks—watching or listening to media, playing games, using various simple utilities.  Sure, it would probably make for a great user experience, but at the cost of any control over your computer.  That’s something we can’t allow to happen (and the reason why I created such a situation in my future history).  But in the SPQS Universe, not just anyone can learn to be a programmer.  You have to pass the SFAF’s criteria for recruitment, and make the cut to become a programmer, before you’re even given access to source code and information on programming languages and compilers.  Of course, all Officers in the SFAF are expected to use the UNIX-based terminals, but you can’t have any real fun unless you get chosen to be a programmer.

I already talked a lot about the Officer computer terminals in previous posts, so the premise should be clear enough.  The terminals are handheld portable screens that you interact with through a simple input-only neural interface.  The neural interface is itself an optronic device inserted into your frontal lobe through the temple, with the transmitter external to your skull (which is the best place for it).  It is attached to your dominant hemisphere, which for right-handed individuals is the left hemisphere of the cerebral cortex.  Konrad is left-handed, though, so his is attached through his right temple.  The neural interfaces are powered by excess bioelectric energy produced naturally by the brain (although historically I expect they would have been powered by atomic batteries inserted into the external casing of the neural interface, to be easily replaceable), so you’d need something as energy-efficient as an optronic chip, because the brain doesn’t have much energy to spare.  The terminals themselves are linked to pretty standard wireless access points, which are in turn linked to optical cores.  The optical cores are all networked, of course, through various stages of security.  In the case of the SFS Fulgora, the central core is the Identity core, and every terminal has to clear a security check against the ID core before it is granted access to its systems, data, and local network.  Except, of course, for the MRD terminal, which is on a network of its own with direct access to the MRD and the Quantum core.  Quantum computers can have more rigorous security than optical computers anyway, so there’s no need to route security through the ID core first (and actually, that set-up could compromise the security of the quantum core).  The one catch is that the MRD terminal has to be released by the Captain’s terminal before it can access the Quantum core for its security check, and within the story that extra level of protection is a serious source of frustration for Konrad (until he manages to hack and reprogram the ID core).
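
For the curious, here is a toy sketch of that access chain in Python.  Every name, check, and access list below is my own invention for illustration (it is not ‘the’ Fulgora software), but it follows the logic just described: ordinary terminals clear the ID core, while the MRD terminal authenticates directly against the quantum core, and only after the Captain’s terminal has released it.

```python
class Core:
    """A stand-in for an optical (or quantum) core with its own access list."""
    def __init__(self, name, authorized):
        self.name = name
        self.authorized = set(authorized)

    def check(self, officer_id):
        return officer_id in self.authorized


# Illustrative cores and access lists, invented for this sketch.
id_core = Core("ID", {"schreiber", "captain"})
quantum_core = Core("Quantum", {"schreiber"})

mrd_released = False   # flipped only by the Captain's terminal


def captain_release_mrd():
    global mrd_released
    mrd_released = True


def terminal_login(terminal, officer_id):
    if terminal == "MRD":
        # The MRD terminal bypasses the ID core entirely, but stays locked
        # until the Captain's terminal has released it.
        return mrd_released and quantum_core.check(officer_id)
    # Every other terminal clears the ID core before any access is granted.
    return id_core.check(officer_id)


print(terminal_login("bridge", "schreiber"))  # True  -- ID core check passes
print(terminal_login("MRD", "schreiber"))     # False -- not yet released
captain_release_mrd()
print(terminal_login("MRD", "schreiber"))     # True
```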

So yeah, the set-up can get quite complex, but for military purposes security is more important than simplicity.  And while optronic systems are more than good enough for most military personnel, a quantum core is needed to control the MRD.  And once you factor in that quantum computers interface best with optical computers, even if the SFAF didn’t normally use optical computers, they would have had to adopt them for Operation Storm Cloud.

 

That about covers it.  I was going to get into Konrad Schreiber’s particular misuse of quantum and optical computing, but that’s best left for the next post, where I will cover Quantum Computer Programming.  It’s even more exciting than metamaterials, and there are so many different approaches being pursued at present that it will be worthwhile to specify just which approach I standardized as QCL for the purposes of Placeholder.

Enjoy,

— the Phoeron
