Notes on Vlatko Vedral’s book Decoding Reality: The Universe as Quantum Information, Oxford University Press, 2010
Dr. Vedral’s thesis is that everything in reality is made up of information, or that reality is information. One needs to be careful about a literal interpretation of parts of Vedral’s book. He appears at times to make statements that are not supportable (examples in red text below). He also frequently digresses from the topic under discussion, and has himself said that his book is redundant. He borrows heavily from the work of others; however, he makes some interesting points.
My take on the issue is that information is important, but without mind or consciousness to be aware of it, information is meaningless. It seems to me that life, as well as healing, is the intelligent movement of information; which is probably not a new idea.
Vlatko Vedral studied undergraduate theoretical physics at Imperial College London, where he also received a PhD for his work on ‘Quantum Information Theory of Entanglement’. In 2009 he moved to
Prologue to Part One
As an undergraduate, Vlatko Vedral read three words that would have a profound effect on his future: “Information is physical.”
Is reality just made up from a random collection of unrelated rules, or is there a common underlying thread from which these all derive?
The most fundamental question is, why is there a reality at all and where does it come from?
How are things connected, and why are there things in the first place?
The author will argue that the notion of information answers both questions. This makes information far more fundamental than matter or energy.
Part One
Chapter 1: Creation Ex Nihilo:
Something from Nothing
Scientists are as stumped as anyone else as to why there is a reality and where it comes from. P. 6
Every time the author reads a book on religion or philosophy, he cannot help but recognize that many ideas are similar to the ideas of science. For example, the attitude of “reductionism”, the attempt to reduce everything to a single simple cause, is common to religion and science. P. 8.
One of the notions scientists hold in high esteem is Occam’s razor: the simplest explanation is usually the correct one. Taking Occam’s razor to the extreme would mean reducing all explanations about the universe to a single principle. [That is what the search for grand unification is all about; string theory, etc.] The author asks: why not try to get rid of even this principle? Deduction without any principles is what physicist John Wheeler called a “law without a law”. Wheeler reasoned that if we can explain the laws of physics without invoking any a priori laws of physics, we would be in a good position to explain everything. One of Wheeler’s students was David Deutsch.
Both Deutsch and Wheeler point out that whatever candidate is proposed for the fundamental building block of the universe also needs to explain its own ultimate origins too. The author claims that information is the common thread, and that information is the only concept we have that can explain its own origin. He also claims that when viewing reality in terms of information, the question of an all-explanatory principle no longer makes sense. P. 10
We equate a better understanding of our reality with a compression of the amount of information it contains. However, there is a fundamental argument that suggests that the total amount of information in the universe can only increase, as with “entropy”.
We compress information into laws from which we construct reality, and this reality tells us how to further compress information.
Chapter 2: Information for all Seasons
A detailed look at a book by Italian fiction writer Italo Calvino
Chapter 3: Back to the Basics: Bits and Pieces
A common misconception is that the information age is just technological; in fact, the information age is about better understanding anything in nature. P. 25.
The ancient Greeks laid the foundation for the definition of information by suggesting that the information content of an event depends on how probable that event is. P. 28.
The modern definition: the information content of an event is proportional to the logarithm of its inverse probability of occurrence, I = log(1/p), measured in bits when the log is taken base 2. P. 29.
So all we need to have information content is an event and its probability of occurrence, and information theory can be applied to any event. This is why the author is able to argue that information underlies every process we see in nature.
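The formula above is easy to check numerically. A minimal sketch (my own, not from the book), with the log taken base 2 so the answer comes out in bits:

```python
import math

def information_content(p):
    """Information content (in bits) of an event with probability p: I = log2(1/p)."""
    return math.log2(1.0 / p)

# A fair coin flip (p = 1/2) carries exactly one bit.
print(information_content(0.5))    # 1.0
# A rarer event (p = 1/8) carries more information: 3 bits.
print(information_content(0.125))  # 3.0
```

Note that the rarer the event, the more information its occurrence conveys, which matches the Greeks’ intuition mentioned above.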
One of the earliest applications of information theory to real-world problems was Claude Shannon’s theory of communication, developed at Bell Labs. He found that the fundamental unit of information could be represented by the “bit” (binary digit), building on George Boole’s demonstration that all logical manipulations can be carried out using only two values, zero and one.
The message “I love you” contains one bit of information, say “1”, as does the message “I hate you”.
Shannon’s measure already existed in physics under the name of entropy, a concept developed by Rudolf Clausius 100 years before.
Chapter 4: Digital Romance: Life is a Four Letter Word
Mathematician John von Neumann wrote a paper on self-replicating automata to suggest how imperfect machines could last indefinitely. The holy grail of biology in the 1930s and 40s was the quest for the structure in the human cell that carries the replicating information so well articulated by von Neumann. When DNA was discovered, it showed the features von Neumann had suggested. Nature also uses redundancy to increase the chances of producing a successful copy, and nature seems to have settled on a discrete (digital) coding for information. But instead of using two symbols, as in Boolean logic, nature uses four discrete bases. Why? This is a key question in biology. P. 50.
There are two reasons why digital encoding might be preferable: reduced energy overhead and increased stability of information processing; these are the same reasons we use digital information processing today.
Erwin Schrödinger deduced almost the same mechanism of cell reproduction years before Watson, Crick, Wilkins, and Franklin. The one difference was that he thought the encoder in the replication must be a crystal-like structure (his famous “aperiodic crystal”: stable and structured, seemingly ideal for information carrying and processing). Watson and Crick later showed the encoder was an acid, DNA, and not a crystal. It turns out this issue has not been fully resolved; some part of the encoding process may be done by a crystal-like structure. P. 54.
DNA seems not to be the carrier of all information necessary to produce life. We know this because the relevant DNA content of bacteria, frogs, and humans is roughly the same. Are there other mechanisms for encoding?
There is also the question of where the DNA comes from. Did it evolve from a simpler structure?
Thus, information, and how it is processed, is at the root of life.
Chapter 5: Murphy’s Law: I Knew This Would Happen to Me
p. 57f
The second law of thermodynamics tells us that every physical system must inevitably tend toward its maximum disorder, including life. How certain are we of the second law? Life seems to be able to propagate indefinitely, and so seems to contradict the second law. So which is right? The entropy, or disorder, of a system can be defined mathematically as S = k log W, where W represents the number of distinct states the system can occupy, and the second law says that the entropy of a closed system always increases.
The entropy derived by physicists has the same form as Shannon’s information measure.
The first law of thermodynamics is conservation of energy; it says that energy cannot be created or destroyed, only converted. The second law of thermodynamics says that energy conversion from one form to another is never perfectly efficient; the energy lost to useful work shows up as an increase in entropy.
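The formal parallel between Boltzmann’s S = k log W and Shannon’s measure can be made concrete. A sketch (mine, not the book’s), with k set to 1 and logs taken base 2:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over W = 4 equally likely states gives log2(4) = 2 bits,
# mirroring Boltzmann's S = k log W (with k = 1 and the log taken base 2).
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
# A fair coin: one bit of uncertainty, exactly Shannon's unit of information.
print(shannon_entropy([0.5, 0.5]))                # 1.0
```

When all W states are equally likely, H reduces to log W, which is why the two formulas have the same form.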
Comments on global warming and energy efficiency.
While entropy in physics increases according to the second law, the author suggests that the entropy of the genetic code increases as well.
Schrödinger, in his book What is Life?, was the first to argue convincingly that life maintains itself in a low-entropy state by increasing the entropy of its environment.
You can survive for a long time on the energy content of candy bars, but you would be missing crucial information required to keep your body in a highly ordered (low-entropy) state. The less crucial the information contained in food, the higher its entropy content; the more information, the lower the entropy. This is the idea of a balanced diet. [Isn’t this contradicting what was said earlier, that information increases along with entropy?]
Author conjectures that the entropy value of food is correlated to its completeness, in terms of nutrients available as well as bioavailability of the nutrients.
In a computer, if memory is properly configured, it can keep track of all the information processing without increasing heat or disorder. However, when information is “deleted”, it is actually displaced to the environment; i.e., we create disorder in the environment. [Again, an increase in information is an increase in entropy, temperature, and disorder in the environment.] This is why computers have fans: to remove the heat generated as information is continually erased. [I am not sure this is the only reason computer components heat up, although even the heat generated by the resistive elements of the computer results in an increase in entropy, an increase in disorder, and also an increase in information, according to the author.] The message from this is that real-world information (for example, information on a computer) is not an abstract notion, but a real physical quantity. This means it is at least as important as matter and energy.
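The claim that erasing information must dump heat into the environment is Landauer’s principle (the notes describe it without naming it). The minimum cost is easy to compute; a sketch of mine, assuming room temperature:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) of heat.
k_B = 1.380649e-23     # Boltzmann constant, joules per kelvin
T = 300.0              # room temperature, kelvin
energy_per_bit = k_B * T * math.log(2)
print(energy_per_bit)  # ~2.87e-21 joules per erased bit
```

This is tiny per bit, but it sets a hard physical floor: no computer, however well engineered, can erase information for free.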
Chapter 6: Place Your Bets: In It to Win It
Maximizing profit in financial speculation is exactly the same problem as maximizing the channel capacity for communication. P. 80.
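The classic result behind this equivalence is Kelly’s optimal-betting criterion (my attribution; the note above does not name it): betting the Kelly fraction makes your bankroll grow at a rate equal to Shannon’s channel capacity. A sketch for even-odds bets:

```python
import math

def kelly_fraction(p):
    """Optimal fraction of bankroll to bet at even odds when the win probability is p."""
    return max(0.0, 2 * p - 1)

def growth_rate(p):
    """Long-run exponential growth rate (bits per bet) at the Kelly fraction:
    G = 1 - H(p), exactly the capacity of a binary symmetric channel."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

print(round(kelly_fraction(0.6), 2))  # 0.2: bet 20% of bankroll on a 60% edge
print(round(growth_rate(0.6), 4))     # 0.029 bits of growth per bet
```

The better your side information (the further p is from 1/2), the lower the entropy H(p) and the faster your wealth can grow, which is the sense in which gambling and communication are the same problem.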
Chapter 7: Social Informatics: Get Connected or Die Tryin’
It’s no surprise that more interconnected societies tend to cope better with challenging events. The first clue that information may play some role in sociology came in 1971.
The use of information theory in social studies is nothing new. It was the use of statistical methods in the social sciences that prompted Boltzmann to apply them within physics, where, among other things, he came up with his entropy formula. Social information which may play a role in the functioning of societies includes connections between individuals, actions, states of individuals, and the ability of societies to process information. P. 93.
The concept of “mutual information” is important and is a key to explaining the origin of structure in any society. It describes the situation in which two or more events share information about one another; i.e., the events are no longer independent. Two things have mutual information if by looking at one you can infer something about the properties of the other.
The molecules of DNA share information about the protein they encode. Different strands of DNA share information about one another; the DNA molecules of different people (say a father and a son) also share information.
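Mutual information has a precise formula: I(X;Y) = Σ p(x,y) log₂[p(x,y)/(p(x)p(y))]. A sketch of mine illustrating the “identical twins” intuition:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():      # marginal distributions of X and Y
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly correlated bits ("identical twins"): knowing one tells you the other -> 1 bit.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
# Independent bits: looking at one tells you nothing about the other -> 0 bits.
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))  # 0.0
```

Zero mutual information is exactly statistical independence; anything above zero means structure is shared between the two events.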
Phase transitions occur in a system when the information shared between the constituents becomes large. A high degree of information sharing often leads to fundamentally different behavior. Philip Anderson, who received the Nobel Prize in 1977 for his work, coined the phrase “more is different” [referring to “emergent properties” of non-linear dynamics]. The boiling and freezing of water are called “phase transitions”.
The formation of societies and significant changes in every society, such as revolution or civil war, can be understood using the language of phase transitions.
It is often claimed that each person on the planet is at most only six connections from any other person.
Why is this networking so important?
[Author asks why this networking is important. But connection in this sense does not seem to be networking in the sense of conscious communication; you may be at most six connections from any other person, but you do not directly communicate through these six connections. I would ask: why are these connections so important?] You might argue that decisions made by society are to a high degree controlled by individuals. It is clear, however, that individual thinking is based on common information shared by members of the society.
How do all the people in a society agree on something if they only interact locally (i.e., with a limited number of neighbors)? How can local correlations lead to the establishment of structure within society? An analogy is a piece of iron in a magnetic field. Initially all atoms are aligned randomly. Local groups of atoms form coherent clusters, and eventually all the clusters align. The point at which all atoms spontaneously align is the point of phase transition: the point at which the solid becomes a magnet. Ernst Ising studied chains of systems in which each system interacts only with its neighbors, and proved that there is no phase transition in such a one-dimensional model. However, 20 years later, Lars Onsager showed that in a two-dimensional array of atoms a phase transition is possible. You can think of the atoms in such a system as human beings.
A period of fundamental discoveries in phase transitions followed. The universe has to be at least three dimensional. P. 98 f.
A society in which there is limited communication will probably not have a phase transition. If we allow everyone to interact with everyone else, phase transitions are possible. The real world is somewhere between these two extremes. The number of connections people have follows a power-law distribution: the number of people who know many other people is smaller than the number of people who know few, and the ratio of the two follows a strict rule.
Some sociologists, such as Manuel Castells, believe the internet will bring about much more profound transformations in society than any previous cause, such as the industrial revolution. Increasingly, we are approaching the situation where everyone can and does interact with everyone else.
Does the segregation of communities imply racism? Schelling found that even the most liberal communities could end up segregated. Segregation in this sense occurs very much like a phase transition in physics. It was found that even if one moves only when all of one’s neighbors are of a different color (as opposed to a racist, who moves when even one neighbor is of a different color), the result will still be segregated communities.
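Schelling’s model is simple enough to sketch directly. My own one-dimensional toy version (agents on a ring, with empty cells; an agent relocates if fewer than a threshold fraction of its occupied neighbors share its color):

```python
import random

def run_schelling(n_cells, threshold, steps, rng):
    """1-D Schelling model: 'A'/'B' agents plus empty cells (None). Unhappy agents
    (fewer than `threshold` of occupied neighbours share their colour) relocate."""
    cells = [rng.choice(["A", "B", None]) for _ in range(n_cells)]
    def unhappy(i):
        if cells[i] is None:
            return False
        nbrs = [c for c in (cells[i - 1], cells[(i + 1) % n_cells]) if c is not None]
        return bool(nbrs) and sum(c == cells[i] for c in nbrs) / len(nbrs) < threshold
    for _ in range(steps):
        movers = [i for i in range(n_cells) if unhappy(i)]
        empties = [i for i in range(n_cells) if cells[i] is None]
        if not movers or not empties:
            break                      # everyone is content: the pattern has frozen
        i, j = rng.choice(movers), rng.choice(empties)
        cells[j], cells[i] = cells[i], None
    return cells

def same_colour_pairs(cells):
    """Fraction of adjacent occupied pairs sharing a colour: a crude segregation index."""
    pairs = [(a, b) for a, b in zip(cells, cells[1:]) if a and b]
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0

rng = random.Random(0)
segregation = same_colour_pairs(run_schelling(100, 0.5, 2000, rng))
print(segregation)   # typically well above 0.5: mild preferences, strongly clustered outcome
```

The point of the model is that a mild, tolerant preference (threshold 0.5: merely not wanting to be outnumbered) still drives the system toward clustered, segregated configurations.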
If you understand Schelling’s simple model, you see that it can be applied to any other grouping in society: political, financial, cultural, intellectual. The Gaussian distribution, or bell curve, is everywhere in nature. The distribution of wealth in societies, however, is not Gaussian but may be described by a power law; it does not follow the bell curve.
Part 2
At its core, information theory asks the most fundamental question: is event A distinguishable from event B? The concept of distinguishability between two different states is what underlies the definition of information.
Chapter 8 (p. 116)
Author discusses wave–particle duality. Two electrons can be created in such a way that there is no uncertainty about their overall state, yet the state of each considered separately is a mess. In other words, they need each other to completely describe their state. This goes back to the fact that the two electrons are super-correlated, like identical twins; i.e., there is some additional mutual information. This is known as quantum entanglement, or “spooky action at a distance”.
The key upgrade in quantum information theory (QIT) is to modify the notion of a bit to a quantum bit, or qubit. The qubit can exist in any combination of the two states, zero and one. All other aspects of Shannon’s information theory remain.
To quantify quantum information, instead of using the entropy of the bit, we take the entropy of the qubit. This was first done by a student of John Wheeler, Ben Schumacher, who coined the term “qubit”.
Entropy, we have seen, is a measure of the uncertainty of a system. While in classical systems the entropy of the whole must be at least as large as the entropy of any of its parts, in quantum systems the quantum entropy of two correlated quantum systems can be smaller than the entropy of each separate system.
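This whole-smaller-than-its-parts property can be verified numerically for the entangled electron pair described above. A sketch of mine using the von Neumann entropy S(ρ) = −Tr(ρ log₂ ρ):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the density matrix's eigenvalues."""
    eigvals = np.linalg.eigvalsh(rho)
    ent = -sum(v * np.log2(v) for v in eigvals if v > 1e-12)
    return float(ent) + 0.0          # + 0.0 normalises a possible -0.0

# Bell state (|00> + |11>)/sqrt(2): the pair as a whole is perfectly known...
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_pair = np.outer(psi, psi.conj())
# ...but each electron on its own is maximally uncertain (partial trace over the partner).
rho_one = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(round(von_neumann_entropy(rho_pair), 6))  # 0.0 bits: zero uncertainty about the whole
print(round(von_neumann_entropy(rho_one), 6))   # 1.0 bit: one part alone is "in a mess"
```

Classically the entropy of a whole is never below that of a part; the 0-bit whole with 1-bit parts shown here is exactly the quantum signature of entanglement.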
There is untapped potential in QIT in terms of what we can achieve in information processing. QIT is already being used to design a new order of super-fast computers, highly secure cryptographic systems, and teleportation of objects across vast distances.
Chapter 9: Surfing the Waves: Hyper-Fast Computers
Richard Feynman was the first to realize that information processing based on quantum mechanical computation had enormous possibilities. Was he driven by the question of the fundamental link between physics and computation?
Chapter 10: Children of Aimless Chance: Randomness versus Determinism
P. 152 f.
If the world is completely random, then by definition we have no control over what will happen [not to mention we would not even exist]; but if the world is completely deterministic, we also have no control, since everything is prescribed. But are randomness and determinism mutually exclusive, meaning that they cannot both exist in the same framework?
Quantum theory suggests the two may be combined. Every quantum event is basically random, yet we find that large objects behave deterministically. How is that? Sometimes when we combine many random things, a more predictable outcome can emerge [i.e., an emergent property]. For example, a single atom’s magnetic axis is impossible to predict without external influences. However, even without external influences, when randomly oriented magnetic axes are combined, a magnet results with a clearly defined north and south.
But what exactly is randomness? In the macroscopic world, we say the result of flipping a coin is random, but if we had all the details of the coin toss (coin weight, initial conditions, air characteristics), the toss would not be random but deterministic. There is no randomness when we have the relevant information. This is not the case in the quantum world. Even when we have all the information about a quantum system, it is still indeterminate.
There is a beautiful quantum protocol that illustrates how randomness and determinism can work hand in hand to produce a stunning outcome. Does anyone really believe that teleportation (as in Star Trek) is possible? No? Well, you’d better believe it! Teleportation dematerializes an object at position A only to make it reappear at distant position B at some later time. Quantum teleportation differs slightly, because we are not transmitting the whole object but just its quantum information from particle A to particle B; but the principle is the same (after all, the thesis of this book is that we are all just bits of information).
All subatomic particles of a certain type have identical characteristics except spin; i.e., the only distinguishing feature is the spin state, so if we manage to encode one electron’s spin into another, we can consider this a successful transfer of quantum information between the two.
[Since we are lacking a macroscopic component, determinism also seems to be lacking, unless the determinism is in the spin of the particle; but the result is still subatomic.]
One way of performing teleportation would be to assemble all of the information about an object and send this information to the location where the object is to be teleported. One problem with this approach is that you cannot determine an electron’s spin by measurement, so you cannot assemble all the information.
However, there is no need to learn the state of a system in order to teleport it. “All you need to do is use mutual quantum information, of the sort that exists on a quantum computer.” This provides super-correlation between locations A and B (also known as quantum entanglement).
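The standard teleportation protocol (my sketch, not the book’s) shows exactly how randomness and determinism cooperate: Alice’s measurement outcome is genuinely random, yet after her classically transmitted correction Bob deterministically recovers the state, for every one of the four possible outcomes:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def teleport(psi):
    """Teleport the 1-qubit state psi (qubit 0) to Bob (qubit 2) via a shared Bell pair.
    Returns Bob's corrected state for each of Alice's four possible outcomes."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # entangled pair on qubits 1 and 2
    state = np.kron(psi, bell)                    # full 3-qubit state, 8 amplitudes
    state = np.kron(CNOT, I2) @ state             # Alice: CNOT (control 0, target 1)
    state = np.kron(H, np.kron(I2, I2)) @ state   # Alice: Hadamard on qubit 0
    results = []
    for m0 in (0, 1):                             # the four possible (random!) outcomes
        for m1 in (0, 1):
            branch = state[[m0 * 4 + m1 * 2, m0 * 4 + m1 * 2 + 1]]
            bob = branch / np.linalg.norm(branch)  # Bob's qubit after Alice measures
            if m1: bob = X @ bob                   # deterministic corrections, chosen by
            if m0: bob = Z @ bob                   # the two classical bits Alice sends
            results.append(bob)
    return results

psi = np.array([0.6, 0.8])                         # an arbitrary spin state to teleport
for bob in teleport(psi):
    print(round(abs(np.vdot(psi, bob)), 6))        # 1.0 every time: state fully recovered
```

Note that nobody ever learns the teleported state itself; only two classical bits travel from Alice to Bob, and the entanglement supplies the rest.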
Currently we can teleport only individual atoms and photons, over only a few meters. The basic principle has been experimentally verified by Anton Zeilinger’s research group.
Any particular random sequence of, say, coin tosses is as likely as any other; so HHHHHHHHH is as likely as HHTHTHHTH. Still, we have the strong feeling that HHTHTHHTH looks random, while HHHHHHHHH looks highly ordered. Probability alone seems unable to capture this difference. This is why the Russian mathematician Andrey Kolmogorov introduced a solution to the problem of quantifying randomness in the 1950s. His solution: the degree of randomness depends on how difficult the sequence is to produce, say with a computer. A computer program to generate the sequence HHHHHHHHHH needs only one instruction (“print 10 heads”), while to generate the sequence HTTHTHTTHT the instruction needs to write out the whole sequence. The quantity that tells us by how much programs for orderly things can be compressed into shorter programs is called the Kolmogorov complexity.
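True Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a practical stand-in for the orderly-versus-random contrast. A sketch of mine using zlib:

```python
import random
import zlib

def compressed_size(s):
    """Length in bytes of the zlib-compressed string: a rough, computable
    stand-in for Kolmogorov complexity (the true quantity is uncomputable)."""
    return len(zlib.compress(s.encode(), 9))

orderly = "H" * 1000                     # "print 1000 heads": a tiny program suffices
rng = random.Random(0)
random_seq = "".join(rng.choice("HT") for _ in range(1000))

print(compressed_size(orderly))          # a handful of bytes
print(compressed_size(random_seq))       # hundreds of bytes: no short description exists
print(compressed_size(orderly) < compressed_size(random_seq))  # True
```

Both strings are equally probable as coin-toss outcomes, yet one compresses to almost nothing and the other barely at all; that compressibility gap is what Kolmogorov’s measure captures and raw probability does not.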
Can Kolmogorov’s logic be applied to understand the origins of reality? The “hard” laws of physics, chemistry, biology, economics, sociology, etc. could all be seen as short programs describing the makeup of reality.
[There seems to be no follow-up to this question.]
Part 3
Where does information come from?
Part 3 is more speculative.
Chapter 11: Sand Reckoning: Whose Information is it, Anyway?
p. 173 f.
Perhaps the universe consists of a great number of little quantum computers, making the universe itself the largest quantum computer.
Since our view is that everything in reality is composed of information, it would be useful to know how much information there is in total and whether this total amount is growing or shrinking. [It still has not been shown why this view is taken.]
The second law of thermodynamics states that entropy can only increase. Since physical entropy has the same form as Shannon’s information measure, the total amount of information can only increase as well.
Optical holography was invented by Dennis Gabor. In doing so, he showed that two dimensions are sufficient to store all the information about three dimensions. Three dimensions can be represented because light’s wave nature forms interference patterns. The author explains it this way: light carries an internal clock, and in the interference pattern the timing of the clock acts as the third dimension.
Physicist Leonard Susskind proposed calling the relationship between entropy (information) and surface area the holographic principle. The key property behind this is quantum mutual information, which lives on the area interfacing any particular partition of the universe, say an electron or an atom, with the rest of the universe.
Einstein’s general relativity describes the effect of energy-mass on the geometrical structure of four-dimensional space-time. In John Wheeler’s summary of general relativity, matter tells space-time how to curve, and space-time tells matter how to move.
Can this relationship, which describes gravity, be derived from quantum information theory? Ted Jacobson, in the mid-1990s, gave an ingenious argument to support this notion. The thermodynamic entropy is proportional to the geometry of the system. It is also well known that the thermodynamic entropy of a system multiplied by its temperature gives the energy of the system. Therefore, a larger mass, which represents a larger energy, implies a larger curvature of space-time.
[The assumption here is that “larger curvature” is associated with “geometry”.]
Assume an empty universe divided by a flat sheet of light. Introduce a large mass into one of the divided “halves” of the universe. This means the thermodynamic entropy changes, which by the holographic principle will affect the shape of the sheet of light: it will be bent.
Information, as represented by entropy, is now seen to underpin both quantum mechanics and gravity. Normally quantum mechanics and gravity (Einstein’s general theory) are considered incompatible. [So if this theory were valid, it would represent a grand unification.]
We know that information is proportional to area. The Bekenstein bound, named after Israeli physicist Jacob Bekenstein, states that the number of bits that can be packed into any system is at most 10 to the 44th power times the system’s mass in kilograms times its maximum length in meters. Astronomers have given us a rough estimate of the size and mass of the universe: 15 billion light years in diameter and 10 to the 42nd power kg. Plugging this information into the Bekenstein bound gives 10 to the 100th power bits of information.
The author says that “Karl Popper has told us that you can view science as a machine that compresses the bits in the universe into laws, and these laws are then used to generate reality.”
[Karl Popper argued that scientific theories are abstract in nature and must be validated by external means. See: http://en.wikipedia.org/wiki/Karl_Popper]
Chapter 12: Destruction ab Toto: Nothing from Something
This book promotes the view that underlying many different aspects of reality is some form of information processing. [Most people would agree with this; however, it in no way implies that reality “is” information. The author perhaps runs the risk of confusing the map with the territory.]
The main aim of this book is to show how to understand reality in terms of information. [I.e., information is a map or grid.]
The question “where does
information come from?” reduces to “where does quantum information come from?”
Galileo believed that the truths of the universe are encoded in mathematics. We want to expand Galileo’s sentiment by using information rather than mathematics (geometry), and we want to explain how the information in the universe arises.
“Once the information is decoded and compressed into laws, we can then understand our reality according to the information encoded in these laws.” [Again, no issue here.]
The laws themselves must be an integral part of this evolving picture; otherwise we are stuck in an infinite regression.
The universe can therefore be seen as a gigantic quantum computer.
“Refuting a model and changing part of the program is crucial to changing reality itself, because refutations carry much more information than simply confirming a model.” [But the model is only a model, and will always be “behind” reality.]
Physics is littered with “no-go” or negation principles. The second law of thermodynamics prohibits any transfer of heat from a cold to a hot body without any other effect. We cannot travel faster than the speed of light. Etc.
Applying negation to QM: it is not true that ‘the object is in two places at once’, and it is also not true that the object is ‘not in two places at once’. It seems logically impossible that a statement and its negation are both incorrect. While to some this may seem a contradiction, to Bohr it pointed to a deeper wisdom: “A shallow truth is a statement whose opposite is false; a deep truth is a statement whose opposite is also a deep truth”.
There is no logical contradiction in the case of QM: if we measure the position, the particle will always be in one place; but if we do not measure it, only interact with it, the particle behaves as if it were in two places at once.
A theological position close to the Popperian philosophy of science is known as the Via Negativa, a view apparently held by the Cappadocian Fathers of the fourth century, who based their whole worldview on questions that could not be answered. For example, while they believed in God, they did not believe that God exists.
In Hinduism, the idea of God is approached in terms of neti neti, “not this, not this”. The idea here is that God is beyond all categories. This list of what God is not is reminiscent of the laws of physics and the general principles of science. [I do not believe this statement would be generally accepted by scientists.]
Through this negative way of describing reality, separating that which is not true from everything else, we compress reality into a set of laws.
So where does the universal quantum computer come from?
Through history, God seems to be less and less involved in the creation. In ancient times, God had to do every little thing.
Is it possible to reach a point where creation is so effortless that perhaps a creator is not needed?
In mathematics, von Neumann proposed that all numbers could be bootstrapped out of the empty set by operations of the mind: the mind observes the empty set; it is not difficult to imagine a set containing this empty set, and so we have created the number one. If we then imagine a set containing both the empty set and the set that contains it, we have created the number two, all based on correlation. Outside logic, in reality, correlations also occur and manifest themselves through mutual information.
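Von Neumann’s bootstrap can be written out literally. A sketch of mine using immutable sets, where each number “is” the set of all smaller numbers, built from nothing:

```python
def successor(n):
    """Von Neumann's construction: the successor of n is n together with {n}."""
    return n | frozenset([n])

empty = frozenset()       # "nothing": the empty set
zero = empty              # 0 is defined as the empty set itself
one = successor(zero)     # 1 = { {} }
two = successor(one)      # 2 = { {}, { {} } }
three = successor(two)    # 3 = { {}, { {} }, { {}, { {} } } }

# Each number contains exactly the numbers below it, bootstrapped out of nothing.
print(len(zero), len(one), len(two), len(three))  # 0 1 2 3
```

Nothing but the empty set and the act of collecting is used, which is the precise sense in which the construction creates “something from nothing”.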
[The author originally noted that with informatics we can resolve the problem of creation out of nothing. But this concept came out of mathematical logic, so how does informatics transcend mathematics?]
We might say that things and events have no meaning in themselves; only the shared mutual information between them is “real”. This philosophy is called “relationalism”, and is found in Eastern religion and philosophy. Buddhist “emptiness” means that things do not exist in themselves, but only in relation to one another (yin and yang). Quantum physics is in agreement with Buddhist “emptiness”.
The British astronomer Arthur Eddington put it this way: “the term ‘particle’ survives in modern physics, but very little of its original classical meaning remains. A particle can now best be defined as the conceptual carrier of a set of variates…”