CONTEXT | QUANTUM PHYSICS

Bell’s math showed that quantum weirdness rang true

50 years ago, theorem found a way to dash Einstein’s hopes for physics sanity

BY TOM SIEGFRIED | 8:00AM, DECEMBER 29, 2014

There’s just enough time left in 2014 to sneak in one more scientific anniversary, and it just might be the most noteworthy of them all. Fifty years ago last month, John Stewart Bell transformed forever the human race’s grasp on the mystery of quantum physics. He proved a theorem establishing the depth of quantum weirdness, deflating the hopes of Einstein and others that the sanity of traditional physics could be restored.

“Bell’s theorem has deeply influenced our perception and understanding of physics, and arguably ranks among the most profound scientific discoveries ever made,” Nicolas Brunner and colleagues write in a recent issue of Reviews of Modern Physics.

Before Bell, physicists’ grip on the quantum was severely limited. Weirdness was well established, but not very well explained. Heisenberg’s uncertainty principle had ruined Newton’s deterministic universe: the future could not be completely predicted from perfect knowledge of the present. Waves could be particles and particles could be waves. Cats could be alive and dead at the same time.

Einstein didn’t buy it, insisting that underlying the quantum fuzziness there must exist a solid reality, even if it was inaccessible to human eyes and equations.

But try as he might — and he tried several times — Einstein could devise no experiment showing quantum physics to be in error. The best he could do was demonstrate how unbelievable quantum physics really was. In 1935 he pointed out (as had Erwin Schrödinger at about the same time) that quantum rules apparently defied “locality,” the notion that what happens far away cannot immediately affect what happens here.

As Einstein described it, in a paper with collaborators Boris Podolsky and Nathan Rosen, quantum mechanics — the mathematical apparatus governing the subatomic realm — seemed incomplete. If two particles of light interact and then fly far apart, quantum math describes them as still a single system. Measuring a property of one of the particles therefore instantly tells you what the result would be when someone measured the same property for the other particle. In the language now used to describe this situation, the particles are “entangled.”

Typically, the property to be measured would be something like spin (the direction that a particle’s rotational axis points) or polarization (the orientation of the vibrations if you view the light as a wave). Depending on how you create the entangled particles, the spins or polarizations might turn out always to be opposite. That is, if one particle’s spin is measured to be pointing up, the other will surely point down.

At first glance, there seems to be a simple explanation for this mystery. It could be just like sending one of a pair of gloves far away. If the recipient sees a left-handed glove, the one you kept must be right-handed.

But quantum physics is not like that. It’s more like sending away one of a pair of mittens, and the mitten becomes a glove, assuming a handedness when the recipient puts it on. The stay-at-home mitten would then suddenly become a glove with the opposite handedness.

Or at least that is the standard view. Einstein sympathizers contended that maybe some unseen factors, “hidden variables,” controlled the outcome, forcing the mittens to have had a handedness all along. For nearly three decades, there seemed to be no way to resolve that dispute. Both views of quantum physics would, everyone believed, predict exactly the same outcomes for any possible experiments.

But Bell perceived the situation with more sophistication. In a paper published in November 1964, he worked out an ingenious mathematical theorem to show that a hidden-variables reality would produce different experimental results.

Bell’s insight incorporated the fact that quantum math predicts probabilities for outcomes, not definite outcomes. In real entanglement experiments (which at the time could just be imagined), many measurements would be made. If every day you send one of a pair of entangled particles to Alice in D.C. and the other to Bob in L.A., they both can choose to make any of several possible measurements. When they meet once a year in Dallas to compare results, they’ll find that the outcomes match more often than chance. In principle, that correlation could arise either from quantum weirdness or from hidden variables.

But Bell showed that the two explanations predicted different degrees of correlation. In one case, for instance, math using hidden variables predicted that the measurements would match at least 33 percent of the time. Quantum math, with no hidden variables, predicted a match only 25 percent of the time.
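The arithmetic behind those two percentages can be checked directly. The sketch below assumes the standard three-setting example (analyzer angles spaced 120 degrees apart, as in Mermin’s popular account of Bell’s theorem); the article does not say which arrangement produced its figures, so the setup, not the conclusion, is our assumption.

```python
import math
import random

# Monte Carlo sketch of Bell's argument for the assumed three-setting,
# 120-degree example. Alice and Bob each pick a setting; we tally only
# the runs where they chose *different* settings.

SETTINGS = [0.0, 120.0, 240.0]
TRIALS = 200_000
rng = random.Random(42)

def quantum_match_probability(a_deg, b_deg):
    # Quantum prediction for identically polarized photons:
    # outcomes match with probability cos^2 of the relative angle.
    return math.cos(math.radians(a_deg - b_deg)) ** 2

def hidden_variable_run(rng):
    # Local hidden variables: each pair carries a predetermined +1/-1
    # answer for every setting (identical for both particles, so
    # same-setting runs always match, as observed).
    plan = [rng.choice([+1, -1]) for _ in SETTINGS]
    a, b = rng.sample(range(3), 2)  # two different settings
    return plan[a] == plan[b]

hv_rate = sum(hidden_variable_run(rng) for _ in range(TRIALS)) / TRIALS

pairs = [(a, b) for a in SETTINGS for b in SETTINGS if a != b]
qm_rate = sum(quantum_match_probability(a, b) for a, b in pairs) / len(pairs)

print(f"hidden variables: match rate ~ {hv_rate:.3f} (provably never below 1/3)")
print(f"quantum mechanics: match rate = {qm_rate:.3f}")
```

With three predetermined ±1 answers, at least two must agree, so no local hidden-variable assignment can push the different-setting match rate below 1/3; this particular (uniform) model lands near 1/2. The quantum prediction of exactly 1/4 is what Aspect-style experiments later confirmed.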

(If you want to see the more general logic worked out explicitly, you can find it in Brunner et al.’s paper in Reviews of Modern Physics; a preprint is available on arXiv.)

These differences, the “Bell inequalities,” gave experiments something definite to test. By the 1970s such experiments had begun, and in the 1980s Alain Aspect and colleagues in France showed definitively that Bell’s inequalities were violated in real experiments. That meant that local hidden variables could not be causing the mysterious connections in quantum entanglement. Einstein’s hope for a deeper reality did not pan out.

“It is a fact that this way of thinking does not work,” Bell said at a physics meeting I attended in 1989. “Einstein’s view, we now know, is not tenable.”

It’s not that the speed of light limit set by Einstein’s special relativity is violated. Entanglement does not, as is sometimes implied, involve instantaneous faster-than-light signaling. Measurement of one particle does not actually immediately determine the property of the other. It simply tells you what that property will be when measured. (I hope I have always been careful to phrase this by saying one measurement seems to affect the other.) It’s just that if you know the result of one measurement, you also know the result of the other, no matter which one is measured first. (And in some cases, which one comes first can depend on how fast you’re moving with respect to them, as considerations of special relativity come into play, as I mentioned at the end of an essay in Science News in 2008.)

In any case, the deep impact of Bell’s theorem was not really about proving quantum weirdness. Its greater importance was to make the underlying foundations of quantum physics a topic worth pursuing.

“What Bell’s Theorem really shows us is that the foundations of quantum theory is a bona fide field of physics, in which questions are to be resolved by rigorous argument and experiment, rather than remaining the subject of open-ended debate,” Matthew Leifer of the Perimeter Institute for Theoretical Physics in Canada writes in a recent paper.

That debate has made enormous progress in identifying and clarifying quantum phenomena, opening the way to new fields of study such as quantum information theory and new technologies for quantum communication and computation.

Still, experts argue. Tests of Bell’s theorem admit some loopholes that may not all have been closed. Perhaps, for instance, hidden variables can still guide quantum particles if reality is not local. And an ongoing debate rages (at the quantum level) about whether the “quantum state” of a particle simply represents knowledge used to make predictions, or is in fact a real thing in itself.

Path integral formulation

The path integral formulation of quantum mechanics is a description of quantum theory which generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute a quantum amplitude.
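In Feynman’s notation, the sum over trajectories described above is usually written as a functional integral over paths weighted by the classical action (a standard textbook form, quoted here for orientation):

```latex
K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)]\, e^{\,i S[x(t)]/\hbar},
\qquad
S[x(t)] = \int_{t_a}^{t_b} L\bigl(x(t), \dot{x}(t), t\bigr)\, dt
```

Each path contributes a phase of unit magnitude; contributions from paths far from the classical trajectory largely cancel, which is how the classical action principle re-emerges in the limit where the action is large compared with ħ.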

The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion.[1] This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 paper.[2] The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier, in the course of his doctoral thesis work with John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.

This formulation has proven crucial to the subsequent development of theoretical physics, because it is manifestly symmetric between time and space. Unlike previous methods, the path-integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system.

The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks. For this reason path integrals were used in the study of Brownian motion and diffusion a while before they were introduced in quantum mechanics.[3]
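The “imaginary diffusion constant” remark can be made concrete for a free particle: dividing the free Schrödinger equation through by iħ puts it in the form of the heat/diffusion equation with D = iħ/2m (a standard observation, sketched here):

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \psi
\quad\Longrightarrow\quad
\frac{\partial \psi}{\partial t} = \underbrace{\frac{i\hbar}{2m}}_{D}\,\nabla^2 \psi
```

Substituting t → −iτ (a Wick rotation) turns this into the ordinary diffusion equation, which is the analytic continuation the paragraph refers to.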

Quantum mechanics

Quantum mechanics (QM; also known as quantum physics, or quantum theory) is a fundamental branch of physics which deals with physical phenomena at nanoscopic scales, where the action is on the order of the Planck constant. It departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. Quantum mechanics provides a substantially useful framework for many features of the modern periodic table of elements, including the behavior of atoms during chemical bonding, and has played a significant role in the development of many modern technologies.

An interpretation of quantum mechanics is a set of statements which attempt to explain how quantum mechanics informs our understanding of nature. Although quantum mechanics has held up to rigorous and thorough experimental testing, many of these experiments are open to different interpretations. There exist a number of contending schools of thought, differing over whether quantum mechanics can be understood to be deterministic, which elements of quantum mechanics can be considered “real”, and other matters.

This question is of special interest to philosophers of physics, as physicists continue to show a strong interest in the subject. They usually consider an interpretation of quantum mechanics as an interpretation of the mathematical formalism of quantum mechanics, specifying the physical meaning of the mathematical entities of the theory.

Environmental Experts or Expertise

Wastewater is any water that has been adversely affected in quality by anthropogenic influence. It comprises liquid waste discharged by domestic residences, commercial properties, industry, and/or agriculture and can encompass a wide range of potential contaminants and concentrations. Examples include industrial site drainage (silt, sand, alkali, oil, chemical residues); industrial cooling waters; industrial process waters; organic or biodegradable waste, including waste from abattoirs, creameries, and ice cream manufacture; organic non-biodegradable or difficult-to-treat waste (pharmaceutical or pesticide manufacturing); waste at extreme pH ranges (from acid/alkali manufacturing, metal plating); toxic waste (metal plating, cyanide production, pesticide manufacturing, etc.); solids and emulsions (paper manufacturing, foodstuffs, lubricating and hydraulic oil manufacturing, etc.); and agricultural drainage, both direct and diffuse.

Why I won’t teach pair trading to my students By Lex van Dam Published: Oct 1, 2012 8:47 a.m. ET

A few years ago, a German billionaire had a go at pair trading with Volkswagen’s two share classes. He ended up jumping in front of a train.

The pair-trading strategy — essentially buying one stock while selling short another within the same sector — sounds good in theory, but it can be a real portfolio killer.

Here’s how it works: When you pair trade stocks, you buy the underperformer, and you sell the outperformer. You are betting on mean reversion. In other words, you think the stock that has fared relatively badly will make up for that over the next period and start outperforming the one that had done well.
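The mean-reversion bet described above is often mechanized with a z-score on the price ratio. A minimal sketch, with made-up prices and rule-of-thumb thresholds (nothing below comes from the article, and it is not a recommendation):

```python
import statistics

# Mean-reversion signal behind pair trading: trade when the ratio of
# stock A to stock B strays far from its historical mean.

def zscore_signal(ratios, entry=2.0):
    """Given a history of price ratios (stock A / stock B), return a
    trade signal for the latest observation based on its z-score."""
    mean = statistics.fmean(ratios)
    sd = statistics.stdev(ratios)
    z = (ratios[-1] - mean) / sd
    if z > entry:
        return "short A / long B"   # A has outperformed: bet it reverts
    if z < -entry:
        return "long A / short B"   # A has underperformed
    return "no trade"

# Hypothetical ratio history hovering around 1.0, ending with a spike:
history = [1.00, 1.02, 0.99, 1.01, 0.98, 1.00, 1.03, 0.97, 1.01, 1.25]
print(zscore_signal(history))       # → short A / long B
```

The signal only says the spread is unusually wide; as the rest of the column argues, it says nothing about whether the divergence has a fundamental cause or how long reversion might take.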

In the oil sector, for example, think Exxon Mobil (XOM) vs. Royal Dutch (RDS.A), whilst in the health-care sector, it would be something like GlaxoSmithKline (GSK) vs. Pfizer (PFE).

It is a popular strategy, and the opportunity can be easily spotted on a chart where both stocks are plotted versus each other, i.e. a relative chart.

Here you can see the chart of consumer goods company Unilever (UN) vs. its peer Procter & Gamble (PG). This is a 3-year chart, and when the line has gone up it means Unilever outperformed Procter & Gamble, and when the line has gone down, Procter & Gamble outperformed Unilever.

They have been stuck in a tight range. They are two well-managed global companies in a very stable sector, so when one stock underperforms, the other company should catch up sooner or later. Seems easy enough!

Unfortunately, the reality is that I have seen a lot of people do this kind of pair trading over the past 20 years, but not met any individual traders who have consistently made money doing it. It might be different for computer programs, which trade intraday, but for people without that kind of computer power, it is a loss-making strategy as far as I am concerned.

Why do I think that is the case? Well, first of all, there is normally a good reason why a certain stock outperforms its competitor over a certain period. It might well be a fundamental change in the business, or maybe new management has arrived, or perhaps the two stocks weren’t as comparable as first thought.

Let me give you an example of a pair trade that went terribly wrong.

Here you see the price ratio between General Motors (GM) and Ford (F) between 2002 and 2012. You could argue that they were trading in a range between 2002 and 2008, and if you had enough patience a pair trading strategy would have made money.

However, it would have given you the position in 2008 of being long the underperformer General Motors vs. short Ford, at a ratio of between 2.5 and 3. That position would have lost you all your money as the ratio went to zero when General Motors went bankrupt in 2009. So it really would have been a bad strategy to bet on the underperformer being the place to put your money.
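A back-of-the-envelope P&L shows why a dollar-neutral version of that trade was ruinous. The prices below are hypothetical, chosen only to match the ratio of about 2.5 mentioned above, not the real 2008 quotes:

```python
# Long the underperformer (GM), short an equal dollar amount of Ford.
gm_entry, ford_entry = 25.0, 10.0     # hypothetical prices; ratio = 2.5
gm_exit, ford_exit = 0.0, 10.0        # GM equity wiped out; Ford flat

shares_gm = 1000                      # $25,000 long leg
shares_ford = 2500                    # $25,000 short leg

long_pnl = shares_gm * (gm_exit - gm_entry)          # -25,000: entire long leg
short_pnl = shares_ford * (ford_entry - ford_exit)   # 0: the hedge paid nothing
print(long_pnl + short_pnl)           # → -25000.0
```

The short leg hedges relative moves, not the absolute collapse of the long leg: when the ratio itself goes to zero, the "pair" offers no protection at all.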

Other issues with pair trading are that you pay a lot of commission to your broker, and that the time period of mean reversion might be much longer than you initially hoped for. Also, as the spread goes out further and further, more and more traders will put on this trade just as you did, leading to an enormous consensus position, where all the traders are on the same side of the trade and are all losing money and getting nervous.

The chances are that the spread will go out even further as these traders start to cut their positions.

If pair trading can drive a billionaire to suicide, I think that tells you that you should stay away as well. My recommendation: Keep your life simple — don’t do pair trading.