“In retrospect, it’s weird that as a kid I thought completely random outbursts made me seem interesting,
given that from an information theory point of view, lexical white noise is just about the opposite of interesting — by definition.” — XKCD
There is no “there” there. We impose structure on everything as order and narrative so that we can understand it.
Otherwise, there’d be nothing but chaos.
Randomness is important to the average person in ways not often considered (like gambling, statistics, data compression, detecting network anomalies, biology, data security, religion, and unbiased selection). Ideas about randomness and chance are pervasive throughout our culture and modern way of life. But whether the universe is actually random or not remains a mystery.
Randomness is distinct from chance in that randomness is defined by a lack of pattern, while chance refers simply to the likelihood of a single outcome. Flipping a coin a hundred times will show a random collection of results — while the chance of each individual coin flip coming up heads or tails is 50/50 (assuming a fair and balanced coin). Randomness requires the existence of a sequence in which no pattern can be found.
While random events are individually unpredictable, the frequency of different outcomes over a large number of events (or “trials”) can be somewhat predictable — for example, when throwing two dice and counting the total, a sum of 7 will randomly occur approximately twice as often as 4, but the outcome of any particular roll of the dice pair is unpredictable.
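To make that concrete, here is a small Python sketch (illustrative only) that simulates a large number of throws of two dice. Of the 36 equally likely combinations, six sum to 7 and only three sum to 4, so 7 should turn up roughly twice as often, even though no single throw can be predicted.

```python
import random
from collections import Counter

rolls = 1_000_000
totals = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(rolls))

# Six of the 36 combinations sum to 7; only three sum to 4.
print("P(7) is about", totals[7] / rolls)   # roughly 0.167
print("P(4) is about", totals[4] / rolls)   # roughly 0.083
```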
Curiously, at any step in a sequence of coin flips, you’ll have either an excess of heads overall, an excess of tails, or have exactly 50% of each. If you’ve observed more heads than tails, how likely is it that the number of tails will “catch up” so that you then have as many tails as heads (or more)? From the fact that the observed proportion of heads gets closer and closer to 0.5 as more flips are done, it might seem that an excess of heads (or tails) will not last long. In fact, the opposite is true. As the number of flips increases, an excess tends to persist. From a gambler’s point of view, the fact that he’s losing means he’ll almost certainly never catch up — even in a fair game with the odds of winning each hand at 50%. The longer the game goes on, the less chance the gambler has of ever breaking even. Maybe he should just accept this and cut his losses?
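A quick simulation shows how stubborn such an excess can be. The sketch below (a toy model, nothing more) follows the running difference between heads and tails over a long run of fair flips and counts how often the tally returns to even, which happens far less often than intuition suggests.

```python
import random

flips = 100_000
lead = 0               # running total of heads minus tails
times_even = 0

for _ in range(flips):
    lead += 1 if random.random() < 0.5 else -1
    if lead == 0:
        times_even += 1

print("Final excess of heads over tails:", lead)
print("Times the tally returned to even:", times_even)
# Even with perfectly fair odds, the tally typically spends long stretches on
# one side of zero and returns to even only on the order of sqrt(flips) times.
```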
In a cavern deep below the Earth, Ayn Rand, Paul Ryan, Rand Paul, Ann Druyan, Paul Rudd, Alan Alda, and Duran Duran meet together in the Secret Council of /(\b[plurandy]+\b ?){2}/i. The joke is an attack on Ayn Rand’s philosophy, which claims to be a completely fair mechanism for distributing resources, but (arguably) inherently favours those who start out with more resources, or are already in a position to acquire the resources. It also, again arguably, has a strong overarching theme that people that believe in objectivism are inherently better than other people, and thus deserve what extra resources they can get — as with the Ayn Random Number Generator, which claims to be completely fair and balanced, but actually favours some numbers.
Random things can’t be predicted; deterministic things theoretically can be, although it’s usually impractical to do so. A roulette wheel is a simple machine, but that doesn’t mean we can fully predict the next winning number (assuming it’s a fair machine), no matter how many past spins we’ve seen. Yet the roulette wheel is deterministic. The outcome we get is a surprise, but if we could rewind the tape of time and re-experience that play again and again, we’d always see the same outcome. True randomness — if it exists — would have a different outcome every time you repeat the performance.
A fully random system is always uncertain. On the other hand, a fully deterministic system would have NO uncertainty (once you’ve received and interpreted all the necessary data). If everything is predetermined, then the future holds no more information than a perfect knowledge of the present already does.
But can anything in reality be fully deterministic? If it were, could we tell?
Nothing is really in our power but our will — it is on the will that all the rules and duties of Man are based and established.
— Michel de Montaigne, 1572
To prove that the universe is deterministic, we would need to take a nominally random source, and be able to predict it, meaning it wasn’t really truly random. But we have many sources of randomness which we cannot, even in theory, predict. Subjectively, therefore, we have sources of true randomness, and thus we can conclude that we live in a non-deterministic universe. And while you might say “well, we can’t prove that it isn’t secretly deterministic”, it would only be deterministic to an observer outside the universe — that is, a god. In fact, one working definition of a god would be “a being capable of predicting the outcome of sources of true randomness”.
When a person has incomplete knowledge of a system, chance is involved. At any level, the difference between a deterministic system and a completely random one depends on the observer. This is how randomness is utilised in most encryption protocols: a string (of numbers) is chosen at random, which is then used to encrypt messages in a non-random way. The random numbers have been determined for the client and host but remain random and un-guessable for an outside observer — so messages remain private and can’t be decoded.
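As a toy illustration (a one-time-pad sketch, not how TLS actually negotiates keys), the Python below draws an unpredictable key from the operating system and mixes it into a message. Anyone holding the key can undo the mixing; to everyone else, the ciphertext is indistinguishable from noise.

```python
import secrets

message = b"meet me at noon"
key = secrets.token_bytes(len(message))   # unpredictable to an outside observer

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())   # looks like random noise to anyone without the key
print(recovered)          # b'meet me at noon'
```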
In 1932 in Mein Glaubensbekenntnis (loosely translated as My Beliefs), Albert Einstein wrote “I do not believe in freedom of will. Schopenhauer’s words, ‘Man can indeed do what he wants, but he cannot want what he wants’, accompany me in all life situations and console me in my dealings with people, even those that are really painful to me. This recognition of the unfreedom of the will protects me from taking myself and my fellow men too seriously as acting and judging individuals and [thereby] losing [my] good humour.” Einstein was not happy with the probabilistic nature of quantum mechanics. The Copenhagen interpretation says that when a photon is headed towards two slits, there’s no deterministic way to know which of the two it will pass through. Einstein and many others were uncomfortable with this, feeling there were hidden variables that would reveal in advance through which slit the photon would go. His feeling was that even though we might not be able to discern these variables, their presence made the question deterministic. But John Bell came up with a theorem that allows physicists to actually test for the presence of such hidden variables, and the results that have come in over the years pretty much put to rest the notion that local hidden variables could make the universe completely deterministic.
Like a random number generator spitting out numbers, our future could be anything. At the point where the number is generated — or the future becomes the past — reality now holds only that possibility and no other. The most randomly generated string of numbers in the world becomes set in stone as soon as the string is generated, but is completely unknowable until that happens (and likewise so is our future).
Oh, many a shaft at random sent
Finds mark the archer little meant!
And many a word at random spoken
May soothe, or wound, a heart that’s broken!
— Sir Walter Scott, Lord of the Isles
The generation of random numbers is too important to be left to chance.
— Robert R Coveyou
Random numbers should not be generated with a method chosen at random.
— Donald Knuth
Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
— John von Neumann
Random sampling often relies on random numbers, but it doesn’t have to. For example, if you’re pretty sure that the final digit of someone’s minute of birth is not correlated with their family income, you can draw a random sample of people’s incomes by choosing those whose birth minute ends in a 7. That process of choice isn’t at all random, but it’s effective. Or you may want to use random numbers to decide whether to include a given individual in a sample; to that end, large tables of pseudo-random digits have been produced, displaying no discernible order or pattern.
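A small sketch of the idea, using made-up records (the names, minutes, and incomes are invented purely for illustration): one sample is drawn by the decidedly non-random “birth minute ends in 7” rule, the other by an explicit random draw. Both end up looking like fair samples of the same population.

```python
import random

# Hypothetical records: (birth_minute, income) pairs, generated for illustration.
people = [(random.randint(0, 59), random.gauss(50_000, 15_000))
          for _ in range(10_000)]

by_birth_minute = [income for minute, income in people if minute % 10 == 7]
by_random_draw  = [income for minute, income in people if random.random() < 0.1]

# Both "samples" have roughly the same size and roughly the same average income.
print(len(by_birth_minute), sum(by_birth_minute) / len(by_birth_minute))
print(len(by_random_draw), sum(by_random_draw) / len(by_random_draw))
```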
In mathematics, one of the things that algorithmic information theory studies is what constitutes a random sequence. The central idea is that a string of bits is random if and only if it’s shorter than any computer programme that can produce that string — this is called Kolmogorov randomness. Simplified, it means that random strings are those that can’t be compressed: the shortest algorithm that produces a random sequence is approximately the same length as the sequence itself, and no greater compression can be attained. Randomness and information are coupled. Soviet mathematician Andrey Kolmogorov defined several measures of data along these lines, including complexity, randomness, and information.
Simply storing the 24-bit colour of each pixel in this image would require 1.62 million bits, but a small computer programme can reproduce these 1.62 million bits using the definition of this Mandelbrot set and the coordinates of the corners of the image. Thus, the Kolmogorov complexity of the raw file encoding this bitmap is much less than 1.62 million bits.
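The point is easy to demonstrate in code. The sketch below (the image size and coordinates are arbitrary, not those of the figure) regenerates tens of thousands of pixel values from nothing more than the Mandelbrot definition and the corners of a region — a description vastly shorter than the pixels it produces.

```python
# A minimal sketch: a few lines of description stand in for a large block of pixel data.
width, height = 300, 200
x_min, x_max, y_min, y_max = -2.0, 1.0, -1.0, 1.0

def iterations(c, limit=100):
    z = 0
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return n      # the point escapes: it lies outside the set
    return limit          # treated as inside the set

pixels = [[iterations(complex(x_min + (x_max - x_min) * i / width,
                              y_min + (y_max - y_min) * j / height))
           for i in range(width)] for j in range(height)]

print(len(pixels) * len(pixels[0]), "pixel values from about a dozen lines of code")
```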
Kolmogorov defines randomness as stuff you can’t compress. Information can also be defined in that way. If you know the first 100 pages of a book, you might ask “How well can I predict what the next page contains?” If your answer is 100% accurate, then is a 101st page even needed? Conversely, if you have 0% ability to predict any content of the 101st page, then it apparently will contain a lot of information. Or alternatively, you might say it is random. Compression is the process of removing the bits you can predict, and leaving the bits you can’t (including the bits you need in order to predict the ones you’ve removed).
For example, consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely “ab 16 times”, which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language. The Kolmogorov complexity of any string can’t be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string’s size, are not considered to be complex. Kolmogorov complexity is related to the entropy of the information source. Generally, “entropy” stands for disorder or uncertainty. The idea here is that the less likely an event is, the more information it provides when it does occur. This view of entropy was introduced by Claude Shannon in his 1948 paper “A Mathematical Theory of Communication”, so information entropy is also called Shannon entropy to distinguish it from its use in physics. Shannon entropy measures are used today in network anomaly detection.
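Neither Kolmogorov complexity nor true randomness can be computed exactly, but an off-the-shelf compressor makes a serviceable stand-in. The sketch below compresses both strings with zlib and also computes their per-symbol Shannon entropy from letter frequencies.

```python
import math
import zlib
from collections import Counter

patterned = b"abababababababababababababababab"
jumbled   = b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"

def shannon_entropy(data):
    """Average bits of information per symbol, from symbol frequencies."""
    counts = Counter(data)
    return -sum((n / len(data)) * math.log2(n / len(data)) for n in counts.values())

for s in (patterned, jumbled):
    print(s, "compressed to", len(zlib.compress(s)), "bytes,",
          round(shannon_entropy(s), 2), "bits per symbol")
# The patterned string shrinks well below its 32 bytes; the jumbled one doesn't
# compress at all (format overhead actually makes the output slightly longer).
```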
Whenever you move from an average measure to a precise measure, you reduce uncertainty. To see how uncertainty can relate to a binary code, think about a game of 20 questions. If the object of the game is to guess a number between 1 and 100, and Player One asks if the number is larger than 50, an answer from Player Two (no matter if it is yes or no) cuts Player One’s uncertainty in half. Before asking the question, Player One had 100 possible choices. After asking that single yes or no question, Player One either knows that the number is greater than 50 or that it is at most 50. One of the things Shannon demonstrated in his 1948 paper was that the entropy of a system is represented by the logarithm of the number of possible states of that system — which is the same as the number of yes-or-no questions that have to be asked to locate one individual case. Entropy, as redefined by Shannon, is the same as the number of binary decisions necessary to identify a specific sequence of symbols. Taken together, these binary decisions, like answers in the 20 questions game, constitute a definite amount of information about the system. Entropy is a measure of the relationship between complexity and certainty.
Information entropy (in bits) is the log-base-2 of the number of equally likely outcomes.
With 2 coins there are 4 outcomes HH-HT-TH-TT, so the entropy is 2 bits.
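The same arithmetic covers the 20-questions game above: log2 of 100 is a little under 7, so seven well-chosen halving questions always suffice. A quick sketch:

```python
import math

print(math.log2(4))      # two coins, four outcomes: 2.0 bits
print(math.log2(100))    # a number from 1 to 100: about 6.64 bits

# Seven halving questions therefore pin down any number from 1 to 100.
low, high, secret, questions = 1, 100, 73, 0
while low < high:
    mid = (low + high) // 2
    questions += 1                 # "Is it larger than mid?"
    if secret > mid:
        low = mid + 1
    else:
        high = mid
print(questions, "questions to find", low)   # 7 questions to find 73
```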
According to the Wikipedia page on algorithmic information theory, “a 3,000 page encyclopaedia actually contains less information than 3,000 pages of completely random letters”. While true based on a narrow definition of information, it’s quite false based on the definition used by classical information theory (not to mention any dictionary you care to name). Or common sense. Kolmogorov is intriguing, but he isn’t the whole story.
Suppose we encounter a novel phenomenon, and attempt to formulate a theory of it. All we have to begin with is the data concerning what we observe. If that data is highly regular and patterned, we may attempt to give a deterministic theory of the phenomenon. But if the data is irregular and disorderly — random — we may offer only a stochastic theory (a guess). We can’t rely on knowing whether the phenomenon is chancy in advance of developing a theory of it, so it’s extremely important to be able to characterise whether data is random or not. We might think that we could simply do this by examination — surely the lack of pattern will be apparent to an observer? (Patternlessness is randomness, by definition.) Yet psychological research has repeatedly shown that humans are poor at discerning patterns, seeing them in completely random data, and (for precisely the same reason) failing to see them in non-random data. So an objective account of the randomness of a sequence of outcomes is needed for reliable scientific inference. Fortunately, the theory of algorithmic randomness, completed in the early 1970s, shows that a satisfactory characterisation of the randomness of a sequence of outcomes is possible.
An algorithmically random sequence (or random sequence) is an infinite sequence of binary digits that appears random to any algorithm. As different types of algorithms are sometimes considered, there are different notions of randomness including stronger and weaker forms. The first suitable definition of a random sequence was given by Per Martin-Löf in 1966. He said that random sequences are incompressible, pass statistical tests for randomness, and are difficult to make money betting on.
Roughly speaking, the Kolmogorov complexity of a string (of bits, words, symbols, and so forth) is the shortest description that allows an accurate reconstruction — or, in some variants, the length of the smallest programme which will output the original string. Cueball’s method of giving directions is very reminiscent of Kolmogorov’s method of determining complexity. These directions may have minimal Kolmogorov complexity, but they are non-intuitive and are likely not the shortest or quickest way to get there considering that they consist mostly of left turns. The joke is that Cueball just sent his friend to a GPS store to buy a device to give him the correct directions. (His friend gets really grumpy when he realises this.)
When it comes to arranging molecules, living organisms seem to have a great deal of information about how to take elementary substances and turn them into complex compounds. Somehow, living cells manage to take the hodgepodge of molecules found in their environment and arrange them into the substances necessary for sustaining life. From a disorderly environment, life somehow creates internal order. How? The answer, as we now know, is to be found in the way the DNA molecule arranges its elements — doing so in such a way that the processes necessary for metabolism and reproduction are encoded. The “negative entropy” that Schrödinger says is the nourishment of all life is information, and Claude Shannon’s information theories show exactly how such coding can be done — in molecules, messages, or in switching networks.
In biology, randomness is important if an animal needs to behave in a way that is unpredictable by others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories. It’s also useful when any population needs to spread out over an area. Any deterministic spreading technique will result in clumps; if every mouse turns left at the rocks, you’ll end up with too many mice in some spots and not enough in others. A random response to the rocks will result in a more even distribution.
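A toy model makes the clumping visible. In the sketch below (invented purely for illustration), a thousand “mice” each take twenty steps from the same starting point: when every mouse follows the same deterministic rule they all pile up in one spot, while random turns spread them out.

```python
import random
from collections import Counter

def disperse(choose_step, mice=1000, steps=20):
    positions = Counter()
    for _ in range(mice):
        x = 0
        for _ in range(steps):
            x += choose_step()
        positions[x] += 1
    return positions

deterministic = disperse(lambda: -1)                      # every mouse "turns left"
randomised    = disperse(lambda: random.choice((-1, 1)))  # each turn is a coin flip

print(deterministic)                 # one giant clump at -20
print(sorted(randomised.items()))    # spread across many positions around 0
```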
According to several views of quantum mechanics, microscopic phenomena are objectively random — that is, even in an experiment that controls all causally relevant parameters, some aspects of the outcome will still vary randomly. To illustrate, if you place a single unstable atom in a controlled environment, you can’t predict how long it will take for the atom to decay, only the probability that it will do so in a given time frame. (Hidden variable theories reject the idea that nature — presumably including human nature — contains irreducible randomness, instead positing that properties are at work behind the scenes, determining the outcome in each case — but since these hidden variables can’t themselves be observed or measured, the theories aren’t useful.)
Good random numbers are fundamental to almost all secure computer systems. Without them, everything from Second World War ciphers like Lorenz to the Transport Layer Security (TLS) your browser uses to secure web traffic is in serious trouble. Which is why you may read about randomness in the news from time to time. Computers are inherently deterministic machines, patiently processing data in a sequential and orderly fashion. Randomness is hard to extract from such a logical, predictable system, and some of the greatest data-security blunders and gaffes of history have centred around a clever individual finding out that “random” numbers were, in reality, anything but.
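Python shows the distinction neatly. The language’s ordinary random module is a deterministic generator: feed it the same seed and it replays the same “random” numbers forever. The secrets module, by contrast, draws from the operating system’s entropy pool, which is what keys and tokens should use.

```python
import random
import secrets

# A seeded PRNG: anyone who learns the seed can reproduce every value it emits.
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(5)])   # identical output on every run

# For security, draw from the operating system's entropy pool instead.
print(secrets.token_hex(16))                   # different, and unpredictable, each run
```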
We can make simple decisions in our lives. Therefore, there is some degree of (apparent) randomness where humans are concerned because we often don’t understand ourselves, so we can’t accurately predict our impact on our surroundings and the part we’ll play in the immediate future. Isn’t that best? If we knew the future, there’d be no need to live it because there’d be no new information to be had there. We need a level of randomness. Hence Lotto. Tourism. Online dating.
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past, present, and future events. If the universe is regarded to have a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution. Hindu and Buddhist philosophies state that any event is the result of previous events (as reflected in the concept of karma) and as such, there’s no such thing as a random event or a first event.
Throughout history, randomness has been used for games of chance and to select individuals for an unwanted task in a fair way (for example, drawing straws). Random selection is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, if we have a bowl of 100 marbles with 10 red (and any red marble is indistinguishable from any other red marble) and 90 blue (and any blue marble is indistinguishable from any other blue marble), a random selection mechanism would choose a red marble with probability 1 in 10. (Note that a random selection mechanism that selected 10 marbles from the bowl wouldn’t necessarily result in 1 red and 9 blue.)
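A short simulation of the marble bowl illustrates that closing caveat: each marble drawn is red with probability 1 in 10, yet a sample of ten marbles often contains zero, two, or three reds rather than exactly one.

```python
import random
from collections import Counter

bowl = ["red"] * 10 + ["blue"] * 90
trials = 100_000
reds_per_sample = Counter(random.sample(bowl, 10).count("red") for _ in range(trials))

for reds, times in sorted(reds_per_sample.items()):
    print(reds, "red marbles:", round(times / trials, 3))
# Exactly one red turns up in only about 40% of ten-marble samples.
```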
Randomness serves two main roles in our modern-day society: it supplies the unpredictability that security and games of chance depend on, and it supplies the unbiased selection that fair draws and statistical sampling depend on.
Does “true randomness” exist? We can’t say for sure. The difference between a “true” random number generator (pulling a number from nothingness every few milliseconds) and a “fake” (or pseudo) random number generator (grabbing the next number from some great Excel spreadsheet the size of the Andromeda Galaxy) is… well, there’s no difference at all until we know there is one. If we can never predict what the next column of the spreadsheet is, then for our purposes, it’s random.
So this means that until and unless we can predict literally every single thing that will ever happen with complete and perfect accuracy, we can’t disprove the assertion that we live in a non-deterministic universe. Given such an excessively specific, difficult, and unlikely method for proving — or indeed, seeing any indication whatsoever — that the universe is pre-determined, we’d be better served simply assuming that the future is unknown, and that all randomness is truly random unless we have reason to believe otherwise.
The time we’d normally spend wondering about it, we can instead use to talk to our friends or order something nice for ourselves online, safe in the knowledge that our randomly generated encryption keys keep our communications private and secure. The only way our data will ever be seen by anyone but the intended recipient is if we’re randomly selected for a statistical model to help the online provider we’re using deliver better services.
Helpful stuff, randomness.
A group of philosophers were arguing over determinism and free will, and split into two camps. One person couldn’t decide at first.
At long last, he decided he favoured determinism and went to their camp; they asked why and he said, “I came of my own free will” so they banished him to the free willers.
When he arrived at the free will camp, they asked why he had decided to join them, and he said, “I didn’t decide — I was sent over here” so they kicked him out, too.
“Algorithmically random sequence” from Wikipedia, last accessed 7 February 2015, http://en.wikipedia.org/wiki/Algorithmically_random_sequence.
“The Best Investors Never Attempt to Balance Their Portfolios” by Chris Meyer, Penny Sleuth, 21 February 2008, http://pennysleuth.com/the-best-investors-never-attempt-to-balance-their-portfolios/.
“Chance versus Randomness” from the Stanford Encyclopaedia of Philosophy, last accessed 30 January 2015, http://plato.stanford.edu/entries/chance-randomness/.
“Entropy (information theory)” from Wikipedia, last accessed 6 February 2015, http://en.wikipedia.org/wiki/Entropy_(information_theory).
“Explain XKCD Wiki”, last accessed 6 February 2015, http://www.explainxkcd.com/wiki/index.php/Main_Page.
“Facticity and Transcendency” by James Betts, Diaries of an Existentialist, 4 January 2012, accessed 1 February 2015, https://diariesofanexistentialist.wordpress.com/2012/01/04/facticity-and-transcendency/.
“A False Dichotomy” by Edward Welbourne (Eddy), Free Will vs Pre-destination, last accessed 1 February 2015, http://www.chaos.org.uk/~eddy/human/FreeWill.html.
“Free Will” by Geir Isene, Geir Isene Uncut, last accessed 1 February 2015, http://isene.me/free-will/.
“History of Randomness Definitions” from Stephen Wolfram’s A New Kind of Science | Online, last accessed 30 January 2015, http://www.wolframscience.com/nksonline/page-1067b-text?firstview=1.
“Inside Information” by Howard Rheingold, Tools for Thought, April 2000, http://www.rheingold.com/texts/tft/6.html.
“Introduction to Randomness and Random Numbers” by Dr Mads Haahr, Random.org, https://www.random.org/randomness/.
“Is Free Will Real? Better Believe It (Even If It’s Not)” by David Rock, Psychology Today, 24 May 2010, last accessed 1 February 2015, https://www.psychologytoday.com/blog/your-brain-work/201005/is-free-will-real-better-believe-it-even-if-its-not.
“Kolmogorov complexity” from Wikipedia, last accessed 6 February 2015, http://en.wikipedia.org/wiki/Kolmogorov_complexity.
“The Lottocracy” by Alexander Guerrero, aeon, 23 January 2014, http://aeon.co/magazine/society/forget-elections-lets-pick-reps-by-lottery/.
“Noisy-channel coding theorem” from Wikipedia, last accessed 2 February 2015, http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem.
“Philosophy: Free Will vs Determinism” by Geoff Haselhurst, On Truth & Reality, last accessed 30 January 2015, http://www.spaceandmotion.com/Philosophy-Free-Will-Determinism.htm.
“Randomness” from Wikipedia, last accessed 28 January 2015, http://en.wikipedia.org/wiki/Randomness.
“Sortition” from Wikipedia, last accessed 7 February 2015, http://en.wikipedia.org/wiki/Sortition.
“The (f)Utility of Free Will” by David Zahl, Mockingbird, published 9 September 2010, accessed 1 February 2015, http://www.mbird.com/2010/09/futility-of-free-will/.
Vancouver is one of the most ethnically and linguistically diverse cities in Canada: 52% of its residents have a first language other than English. Its population density of about 5,250 people per square kilometre (13,590 per square mile) makes it the most densely populated Canadian municipality and the 4th most densely populated city with over 250,000 residents in North America (behind New York City, San Francisco, and Mexico City). The original settlement, named Gastown, grew up on clear-cuts on the west edge of the Hastings Mill property, where a makeshift tavern had been set up on a plank between two stumps in 1867. Other stores and some hotels quickly followed and Gastown, now formally laid out as a registered townsite, was renamed Granville, and then renamed again to Vancouver when it was incorporated shortly thereafter (in 1886). The transcontinental railway was extended to the city to take advantage of its large natural seaport, which soon became a vital link in a trade route between the Orient, Eastern Canada, and Europe. Today, its port is the busiest and largest in Canada and the most diversified port in North America. Forestry remains Vancouver’s largest industry, followed by tourism. Major film production studios have turned Metro Vancouver into one of the largest film production centres in North America, earning it the nickname Hollywood North. It is consistently named as one of the top 5 worldwide cities for livability and quality of life.
The first subway line in NYC opened 27 October 1904 — about 35 years after the first elevated train line. These early subway lines were owned and run by private companies that had also started elevated train lines in Brooklyn. In 1913, NYC built and improved some subway lines to lease to these companies. In 1932 the first line owned and operated by NYC was established — the Independent Subway System (IND). NYC later bought the two private systems and closed some elevated train lines, with the goal of speeding things up so that more people could be transported to more places. Passengers liked the system because it was faster and cheaper and, being underground, not affected by bad weather. Today, the NYC subway system serves millions of people daily.
Imagine leaning out the open door of a helicopter at 7,500 feet over NYC on a very dark and chilly night to see views like these. The photographer feels that from high altitudes the streets of NYC at night look like brain synapses. He says that helicopters vibrate pretty significantly and you have to be able to shoot at a relatively high shutter speed (even with tools like a gyroscope); that makes it incredibly difficult to shoot after sunset. The flight required extensive planning and special clearances, as they flew above airline traffic landing at Kennedy, LaGuardia, and Newark airports. Sadly, there’s no mention of how the intensely beautiful colours were achieved (I assume they were enhanced in some way — perhaps it’s a trade secret? — or maybe it really looks just like that). Photographer Vincent Laforet also took the last photo above, though perhaps not during the same helicopter ride.
These are photographic illustrations for fairytales of a Brothers Grimm reprint. They were captured in remote rural areas in Middle Europe. (Unfortunately, the photographer didn’t identify the location of any of his photos.)
Except for the upper photo on the right, these are from a series entitled Amazing Worlds within our Worlds by artist Pyanek: