Sunday, September 14, 2014

Pure Magic

Nikola Tesla in his laboratory, 1899.

When people mention longevity, I always joke that I intend to live to 200, for one simple reason.

For all its destructiveness, selfishness, and arrogance, the human race is endowed with the most amazing capacity - its creative ability.

Human ingenuity has given rise to such glorious creations as Chartres Cathedral and Beethoven's Ninth, the Large Hadron Collider and the Hubble Space Telescope. To our early ancestors, all this would surely seem the fruits of powerful magic. And I, for one, can't wait to see our next magic trick.

Here are some of my current favorites:

1. The PC
Since the Dark Ages of DOS, I've earned my keep with a PC. It's my invaluable, multipurpose tool - a typesetter, photolab, publicist, accountant, recording studio, publisher, scout and personal secretary.

It's evolved significantly - my homebuilt PC now functions as an alarm clock, fitness coach, interpreter, tutor, stereo, home theater and game center. I can use it to design curricula, and write, illustrate, edit and publish books, blogs, brochures, business cards, or even songs and videos. And, when the workday is done, it lets me shed my cares like a dusty overcoat, and lose myself in a movie or game. I heart PCs.

2. The Internet
The Japanese animation Doraemon recounts the adventures of a boy and a robotic blue cat from the future equipped with an amazing bit of technology - the どこでも ドア dokodemo doa, the anywhere door. To me, this is a metaphor for the Internet.

Via the Internet's magic, I can walk the halls of the most magnificent edifices in history, browse the Louvre, speak in tongues, earn a degree from Harvard, Oxford, and Columbia, chat face-to-face with friends across the planet, trace my family history, or order up a gene sequence as easily as a fresh pizza.

I can learn from history's most brilliant minds, and read wisdom as ancient as the earliest Sumerian cuneiform tablets or as fresh as text streaming in real time across a CNN broadcast. I can learn to dance, publish a best seller, or attend a conference without ever leaving my home, thanks to the dokodemo door.

3. Google Maps
Feed it an address, and Google Maps will instantly give step-by-step directions for travel on foot or by public and private transport, with estimated arrival times and links to schedule, fare and route information. It's even possible to virtually walk the entire route via "Street View" images, which comes in handy when one is traveling abroad. Access it on a cell phone, and I can track my progress toward my destination in real time. It's a spectacular innovation.

4. Facebook
Facebook is the greatest party in history. I have literally found everybody who ever meant anything to me - from my very first girlfriend to my old school chums and workmates from across the planet. Via Facebook, we can chat any time of day or night, via text, voice or video - or play mindless video games till dawn together, should the mood strike us.

5. The inflatable bicycle tire
To save money, stay in shape and reduce pollution, I ride a bicycle everywhere. It's given me a deep appreciation for something beneath my notice for decades. Every time I ride, I'm a bit awed at the ingenuity that went into engineering the simple bicycle tire - mounted upon load-distributing spokes and sporting its clever little auto-sealing valve. It feels oddly marvelous to glide effortlessly across the ground upon a freshly inflated tire, and to feel the tremendous difference in energy expenditure. This wonderful innovation was Scottish-born inventor John Boyd Dunlop's 1888 gift to mankind.

6. The electric light bulb
I suspect that when Roald Dahl wrote Charlie and the Chocolate Factory, he was basing his main character, Mr. Willy Wonka, upon the Wizard of Menlo Park, who authored over a thousand patents, and gave the world its first electric power station, motion picture camera and sound recording.

Thomas Edison's amazing productivity was not just due to his brilliance, but also his Herculean patience. Forbes Magazine tells of how a reporter once asked him how it felt to spend years producing nothing but failed prototypes for a commercial light bulb, and his paradigm-shifting reply was reputedly: “I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”

Perhaps Edison can be forgiven for indulging in a bit of self-aggrandizement in the interest of marketing, but historians Robert Friedel and Paul Israel say that no fewer than twenty-two people invented various forms of electric lights before Edison filed his own 1878 patent. Chief among them was John W. Starr, who died shortly after filing an 1845 patent. And, says the Smithsonian, Edison and his team actually tested 1600 filaments of various types (including coconut fiber and human hair) before settling on carbonized fibers extracted from a folding bamboo fan he found in his factory.

7. Direct current
As if merely lighting up the world wasn't enough for Edison, he would soon create an entire electric power system to run his invention. The world's first power station, launched in 1882, would bring direct current power - and light! - to the masses.

Direct current is generally too expensive for large-scale, long-distance distribution, so it's now primarily used in electronic devices, including the integrated circuit chips which manipulate the on and off states of binary code - the magnificent lingua franca of the Digital Age.

8. Alternating current
One wonders if Edison's former employee and arch rival Nikola Tesla knew he was destined to reshape the world someday, by inventing the very lifeblood of our civilization. Try to visualize alternating current at work: electrons streaming across atoms, building speed to a climax, then slowing, and switching direction. The cycle repeats endlessly 50 to 60 times every second.
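For the mathematically inclined, that endless cycle is simply a sine wave. Here's a minimal sketch in Python - the 60 Hz frequency and the 170-volt peak (roughly a 120 V household supply) are illustrative values of my own choosing, not anything from Tesla's notebooks:

```python
import math

def ac_voltage(t, frequency_hz=60.0, v_peak=170.0):
    """Instantaneous AC voltage at time t (seconds).

    A v_peak of ~170 V corresponds to a 120 V RMS household supply;
    both numbers are illustrative.
    """
    return v_peak * math.sin(2 * math.pi * frequency_hz * t)

# One full cycle at 60 Hz lasts 1/60 of a second; the current reverses
# direction halfway through, when the sine wave crosses zero.
period = 1 / 60
assert abs(ac_voltage(0)) < 1e-9       # starts at zero
assert ac_voltage(period / 4) > 0      # surging one way
assert ac_voltage(3 * period / 4) < 0  # reversed direction
```

Plot that function over a few periods and you get the familiar oscillating waveform that pulses through every wall socket.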

Tesla used AC to power induction motors - essentially the same variety which run most household appliances, such as vacuum cleaners and blenders. AC flows through copper coils arranged around a cylindrical iron stator, generating a rotating magnetic field which induces the rotor inside to turn, running machines used by billions worldwide every day.

9. Binary code
The level of profound genius required to convert a machine's simple on or off state into a lingua franca that conveys everything from the entire works of Shakespeare to the launch control codes for artificial satellites is mind-boggling. Particularly when one realizes its inventor, the rather eccentric hypergenius Gottfried Leibniz, first devised the system over 300 years ago, in 1679 - inspired, no less, by the hexagrams of the ancient Chinese I Ching. Beyond cool.
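If you'd like to see the trick for yourself, a few lines of Python will render any text as the on-off states Leibniz dreamed up (here using the standard 8-bit character codes):

```python
def to_binary(text):
    """Render each character as its 8-bit binary code (UTF-8 bytes)."""
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

print(to_binary("Hi"))  # 01001000 01101001
```

Every photo, song and satellite command is, at bottom, a much longer string of exactly those two symbols.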

10. The internal combustion engine
This magic box harnesses the power of explosions to move cars, trucks and motorcycles. I remember first learning its inner workings in elementary school, daydreaming my way through the process - imagining spark plugs igniting a compressed fuel-air mixture, the explosions forcing pistons through cylinders hundreds of times every minute, rotating a crankshaft whose motion is ultimately transmitted via gears and axles to the wheels, propelling the vehicle forward.

All hail the late, great Nikolaus August Otto.

Thursday, September 11, 2014

On the Recent Silence....

Apologies, kind readers. I have been rather busy as of late, because I am trying to enter graduate school at Harvard Extension to earn my master's in psychology.

My original degree is in journalism, with several certifications in IT, but I spent a year preparing for the general GRE, and earned a near-perfect score for verbal reasoning (169 of a possible 170) and the median score (151) for quantitative reasoning. On the psychology GRE I scored in the 83rd percentile (90th for experimental psychology).

At any rate, if you're interested and/or able, you can get a tax write-off by sponsoring me. Or if you like, you can purchase one or both of my books. Right now, I'm trying to raise the funds to attend the obligatory first semester on campus. My target for the semester is $17,000. After getting my foot in the door, I would then be eligible for federal funding and/or Harvard scholarships and grants.

You can also help by steering me toward any useful information on private funding.

Thanks for your time.


Monday, September 1, 2014

Welcome to the Connectome

Why does everyone's brain function so differently? Some of us are extroverted, some not; some of us are experts at language, some not; some of us are afflicted with pathologies like schizophrenia, some not; some of us are compassionate, some not. 

According to MIT neuroscientist Dr. Sebastian Seung, author of Connectome: How the Brain's Wiring Makes Us Who We Are, the differences are all due to our neural wiring. Personality, IQ and memories are encoded by our connectomes - neural wiring which is every bit as individual as a fingerprint, but on a massively more complex scale.

The connectome describes both the brain's overall wiring, and how genes organize and express the proteins that form neural connections.

The brain's architecture gives rise to specific capabilities and tendencies, including perception, evaluation, behavioral selection, and personal traits. Much of it seems to be based upon a hierarchical system. 

For example, neurons related to perception comprise one such hierarchical network. Neurons at the bottom of the visual hierarchy respond to the simplest stimuli - individual spots of light. In your eye, each photoreceptor on your retina responds to a tiny spot of light at a specific location, much like the individual sensors in a digital camera, each of which detects a single pixel. Moving upward in the hierarchy, neurons process progressively more complex data, with those at the top detecting the most complex stimuli, such as a person's identity. The neurons which detect parts send excitatory signals to neurons which eventually perceive the entirety.

UCLA's Dr. Itzhak Fried discovered one such "top neuron" able to respond specifically to images of actress Halle Berry. Interestingly, the subject's Halle Berry neuron also responded to the actress' written name, suggesting this cell participates in both perceiving and thinking about the actress, meaning it corresponds to an abstract representation of her.

According to Dr. Seung, such neurons are interconnected in cell assemblies, which hold the associations used in forming thoughts. These cell assemblies also interconnect and overlap.

Philosophers say specific principles govern how these associations are learned. The main method is through coincidence detection, finding a contiguity (sequence or series) in time or place. For example, since you often see toast eaten with butter, you have learned to mentally connect the two. 

Repetition is another method for learning associations. As a baby, the first time you saw your parents buttering their toast, perhaps your brain didn't form a permanent association, but after you saw it every morning at the breakfast table, you eventually formed a permanent mental association - and a synaptic connection. 

Sequences are also important for building associations. Reciting the days of the week and the months of the year repeatedly eventually allowed you to learn them by heart. Since each day and each month always followed its predecessor, you learned to associate them in sequence. Episodic memories, for example, involve a sequence of events, so their synapses must be activated in one direction, allowing memories to be recalled in chronological order.

This type of association will be linear, but if you always see two things appearing together, the association will be bidirectional.

Perception may seem effortless, but memory often seems difficult. If your brain only contained one cell assembly with a single memory, recall would undoubtedly be simple, but since a huge number of these cell assemblies overlap, it creates the potential for memory errors. 

Imagine the first time you ever rode a ferris wheel. You were in the fairgrounds surrounded by the cacophony of rides, electronic games and screams of delight, the smells of smoke, cotton candy and hot dogs wafting on the air. 

If you have a second memory which includes hot dogs - perhaps a Fourth of July barbecue with your family - both memories will differ, but the cell assemblies will share hot dog neurons, so when one cell assembly is activated, it can trigger the second. Delighted squeals might trigger a mixup of both memories. Perhaps this is what leads to faulty memory retrieval.

Says Dr. Seung, a high firing threshold might prevent such haphazard activation spread: if a given neuron can't activate without two excitatory inputs, two cell assemblies which only share one neuron would not be able to have such indiscriminate firing. But such a protective measure becomes problematic because it makes memory recall more difficult. To trigger an entire memory would require a minimum of two cell assembly neurons firing, so recalling your ferris wheel ride might require both the ringing of electronic bells and the smell of hot dogs cooking together. 
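Dr. Seung's threshold idea can be captured in a toy model. In the sketch below, the neuron names, the two assemblies and the threshold of two are all invented for illustration - the point is only that a single shared "hot dog neuron" cannot, by itself, ignite the second assembly:

```python
def spread(active, assemblies, threshold=2, steps=5):
    """Repeatedly fire any inactive neuron that receives at least
    `threshold` active inputs from fellow members of a cell assembly."""
    active = set(active)
    for _ in range(steps):
        newly = {
            neuron
            for assembly in assemblies
            for neuron in assembly - active
            if len(assembly & active) >= threshold
        }
        if not newly:
            break
        active |= newly
    return active

ferris_wheel = {"hotdog", "bells", "screams"}
barbecue = {"hotdog", "grill", "fireworks"}

# Two cues together ignite the ferris-wheel assembly...
recalled = spread({"bells", "screams"}, [ferris_wheel, barbecue])
assert recalled == {"hotdog", "bells", "screams"}

# ...but the barbecue assembly stays quiet: its one shared neuron
# (hotdog) is below the threshold of two. And a single cue alone
# fails to recall anything at all - the cost of the protection.
assert spread({"bells"}, [ferris_wheel, barbecue]) == {"bells"}
```

The second assertion is the trade-off in miniature: the same threshold that blocks cross-talk also makes legitimate recall harder to trigger.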

This means sometimes your memory may not work even when you need it to, because memory requires a delicate balance:  if there's too much activity, your memories may be hazy, but if there's too little, you may not remember at all. 

In forming associations, synapses "reweight" - either strengthening or weakening - and this is the physical basis of memory in the brain. Strengthening (long-term potentiation) occurs as synapses grow more neurotransmitter-filled sacs (vesicles) on the transmitting neuron and more neurotransmitter-sensitive receptors on the receiving neuron. Synaptic weakening (long-term depression) or dendritic atrophy occurs when neural pathways fall into disuse. Synapses may also be synthesized or eliminated, in a process called reconnection.

When two neurons repeatedly fire in sequence, the connection from the first to the second will strengthen; if they repeatedly fire simultaneously, connections will strengthen in both directions. This strengthening is the long-lasting basis of memory. These "Hebbian principles" of synaptic plasticity are activity-dependent, because the change in synaptic strength (plasticity) is triggered by the repeated firing of neurons. The changes last for weeks or even a lifetime, depending upon repetition and the subjective importance of the information.

A collection of such strengthened synapses acts as a cell assembly, a group of excitatory neurons interconnected by strong synapses. There will also be a number of weak synapses, but they aren't part of the cell assembly, having never been repeatedly fired, and thus remaining unaltered. These weak synapses won't affect recollection: firing will spread among the neurons within the cell assembly, but not to unrelated neurons, because the weak synapses are unable to activate them.

Dr. Seung summarizes the idea thusly: "Ideas are represented by neurons, associations of ideas by connections between neurons, and a memory by a cell assembly or synaptic chain. Memory recall happens when activity spreads after ignition by a fragmentary stimulus. The connections of a cell assembly or synaptic chain are stable over time, which is how a childhood memory can persist into adulthood."

Monday, August 18, 2014

Making Memories

Neuroscience's Holy Grail has long been the engram - a neuron or set of neurons which physically hold a memory. To a large extent, that search has ended, thanks to scientists like Eric Kandel, who won the 2000 Nobel Prize for his discoveries in the neurochemistry of learning.

Dr. Kandel began his groundbreaking work by using Pavlov's conditioning techniques to study neural changes in the California sea snail Aplysia as it learned. Aplysia has only 20,000 neurons, and they are among the largest in the animal kingdom, visible to the naked eye. This makes the neural network easy to manipulate and identify, and makes Aplysia to neurobiologists what fruit flies are to geneticists and rats to behavioral psychologists.

He began by stimulating Aplysia’s sensory neurons with electrodes, then used the process of elimination, neuron by neuron, to map out the entire neural circuit controlling gill withdrawal - a simple behavior in which Aplysia adapts and learns from its environment.

Just as you would jerk your hand away after touching a hot stove, Aplysia reflexively withdraws its gills in response to aversive (unpleasant) stimulation. In comparison with yours, its brain is primitive, essentially two neural bundles called ganglia; but just as with more complex animals, it can learn. In Aplysia, just as in humans, "practice makes perfect"; repeating a stimulus converts a short-term memory - lasting minutes - into a long-term one, lasting days, weeks or a lifetime.

In Aplysia, the gill withdrawal reflex is controlled by just 24 sensory neurons which innervate (send signals to) six motor neurons. Between them are "middle managers" - interneurons which act as modulators, either excitatory (dialing up the likelihood of firing) or inhibitory (dialing down the likelihood of firing). Stimulating the tail activates these interneurons, which release the neurotransmitter serotonin. This triggers an excitatory response, and the motor neurons fire, causing muscular contractions. The end result is that the animal withdraws its gill in response to a shock.

Aplysia's neurons (like yours) are hardwired - physically and functionally fixed by instructions encoded in DNA - so they aren't capable of significant change, such as increasing in number or changing in function or location. However, the connections between them are extremely flexible. Learning changes their signalling efficiency.

Neurons communicate via these connections, across tiny gaps called synapses, by releasing chemical messengers called neurotransmitters.

Lasting memories are preserved by the growth and maintenance of these synapses. In other words, memories are encoded in the connections between neurons.

Dr. Kandel's early experiments involved training Aplysia in a learned fear response called sensitization, in which repeated exposure to an aversive stimulus makes a creature more sensitive to that stimulus. For example, a war veteran sensitized to sounds like gunfire might jump at the slamming of a car door. Similarly, an Aplysia snail which has become sensitized to shocks on its siphon (the fleshy spout it uses to expel seawater) will also respond to stimuli applied to its tail. This is because, simply stated, "practice makes perfect" - repeated exposure converts short-term memory into long-term memory, via physical growth (protein manufacture).

After tracing the specific neural circuits which control gill withdrawal, Dr. Kandel devised a technique for growing cell cultures in a petri dish from larval snails, creating the absolutely simplest learning circuit - just two live neurons, one sensory, one motor.

He and his colleagues could then substitute tail shocks with a squirt of serotonin, and investigate the specific molecular processes which lead to memory formation.

He discovered there are two separate chemical sequences, one for building short-term, and one for building long-term memories. Short-term memory - which lasts for seconds to minutes - comes from increasing a presynaptic neuron's ability to emit neurotransmitters.

This neurotransmitter increase develops from a six-step chemical sequence called a signalling cascade:
When the snail's tail is given a mild shock, the neurotransmitter serotonin is released by interneurons (intermediate neurons which amplify or dampen sensory neuron input to targets such as motor neurons). This neurotransmitter binds to protein receptors embedded in the membrane of a recipient (postsynaptic) neuron.

These serotonin-activated (serotonergic) receptors prompt the conversion of ATP, the cell's natural "fuel", into a special chemical signal - a secondary messenger called cAMP. cAMP is called a secondary messenger because it transfers an external signal from the membrane to molecular machines inside the cell.

In Aplysia, the cAMP signal activates a protein on-off switch called a kinase (PKA). This kinase migrates back to the membrane and attaches phosphate groups (phosphorylation) to special calcium channels - proteins embedded in the membrane which act like selective gateways - changing their shape.

The shape change temporarily opens these channels to calcium ions, which flow into the synaptic terminals of the neuron.

Calcium ions function as the chemical switch that triggers increased neurotransmitter release. This increased neurotransmitter release is the physical basis of a short-term memory formation.
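To keep the ordering straight, here is the six-step cascade rendered as a simple trace in Python - purely an illustrative mnemonic of the sequence described above, not a biochemical simulation:

```python
# The short-term memory cascade in Aplysia, as an ordered list of events.
CASCADE = [
    "tail shock -> interneurons release serotonin",
    "serotonin binds receptors on the sensory neuron's membrane",
    "receptors convert ATP into the second messenger cAMP",
    "cAMP activates the kinase PKA",
    "PKA opens membrane calcium channels; calcium ions flow in",
    "calcium triggers increased neurotransmitter release",
]

def short_term_memory(stimulus):
    """Run the cascade once for a stimulus; the end result is a
    transient boost in transmitter release (a short-term memory)."""
    return [f"{stimulus}: step {i + 1}: {step}"
            for i, step in enumerate(CASCADE)]

for event in short_term_memory("mild tail shock"):
    print(event)
```

Each step hands its product to the next, which is what makes it a cascade rather than a single chemical reaction.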

To convert this short-term memory into a long-term memory, repeating the stimulation triggers an additional chemical cascade which activates physical growth - protein manufacture.

This anatomical change is the sprouting of new synapses and of new neural branches called dendrites. Dendrites are tendril-like appendages which reach out to connect with neighboring neurons.

This anatomical change is called long-term potentiation, so called because, over the long term, the potential for a neuron to fire has been enhanced - the postsynaptic neuron has become more sensitive to stimuli and fires more frequently. This phenomenon involves the long-term modification of the synaptic connection.

In 2010, MIT's Gertler Lab created a spectacular film of this growth process in living, cultured mouse neurons. Here you can see the growth of neurites, neural buds which sprout into dendrites.

In Aplysia, repeating the stimulation (mild shocks) creates higher amounts of serotonin release, triggering a release of higher amounts of the secondary messenger cAMP. This more persistently activates the kinase (PKA) switch. Just as in short-term memory formation, this kinase migrates back to the cell membrane and opens protein channels, and calcium ions rush into the synaptic terminals, triggering neurotransmitter release.

However, the repeated stimulation also results in a second action, that ultimately results in physical growth of the neuron, through protein synthesis.

A subunit of the kinase moves into the neuron's cell nucleus, where it activates a special protein called a transcription factor (CREB protein, or cAMP Responsive Element Binding protein). This transcription factor binds to specific DNA sequences (genes), activating them to start manufacturing new synapses from protein building blocks like neurexin and neuroligin.

Genes are segments of the DNA molecule arranged in specific sequences to guide the synthesis of messenger RNA (mRNA).

mRNA is a molecule which copies the genetic code from DNA and uses it as a template to guide the manufacture of proteins - the most complex molecules on Earth, which carry out virtually all biological functions, from forming tissues to carrying out chemical processes vital for life.

To accomplish this, mRNA peels off from its parent DNA, then travels out of the cell nucleus to a special region of the cell body - a mazelike series of tubes called the endoplasmic reticulum. Here the mRNA is used by mini protein factories called ribosomes as a blueprint for assembling small building blocks (amino acids) into proteins.

Dr. Robert Singer and colleagues at the Albert Einstein College of Medicine developed a means of filming this neural mRNA synthesis. They attached harmless fluorescent chemical tags to mRNA molecules so they could be filmed within live mouse neurons. Here is the world's first film of a memory being formed in real time:

Mammals like mice have evolved a special memory-creating and storing brain structure called a hippocampus, named after the Greek word for seahorse, because of its curly shape.

Dr. Singer's team stimulated neurons in the mouse's hippocampus, where "episodic" and "spatial" memories are formed and stored (episodic memories are the conscious mental record of our life events and the sequences in which they occur, while spatial memories constitute navigational guides through an organism's environments).

The hippocampus acts as a sort of amplifying loop, receiving signals from the cortex - the area where conscious thought and ultimate brain control resides - and sending signals back. As memories are consolidated - stabilized into potentially lifelong memories - they are gradually transferred from the hippocampus to the cortex during sleep.

Dr. Singer's team targeted the mRNA which carries the code for a structural protein called beta-actin, central to long-term memory formation. In mammals, beta-actin proteins strengthen synaptic connections by building and altering dendritic spines.

Within 10 to 15 minutes of stimulation, beta-actin mRNA began to emerge, proving their neural stimulation had triggered transcription of the beta-actin gene. The film shows these fluorescently glowing beta-actin mRNA travelling from neural nuclei to their destinations in the dendrites, where they will be used to synthesize beta-actin protein.

Neural mRNA uses a unique mechanism the Einstein team calls "masking" and "unmasking", which allows beta-actin protein synthesis only when and where it is needed. Because neurons are comparatively long cells, the beta-actin mRNA molecules have to be guided to create beta-actin proteins only in specific regions at the ends of dendrite spines.

According to Dr. Singer, just after beta-actin mRNA forms in the nuclei of hippocampal neurons, it migrates out to the cells' inner gel (the cytoplasm). At this point, the molecules are packed into granules whose genetic code is inaccessible for protein synthesis.

Stimulating the neuron makes the beta-actin granules fall apart, unmasking the mRNA molecules, and making them available for beta-actin protein synthesis. When the stimulation stops, this protein synthesis shuts off: after synthesizing beta-actin proteins for only a few minutes, the mRNA molecules abruptly repack into granules, returning once again to their default inaccessible mode.

In this way, stimulated neurons activate protein synthesis to form memories, then shut it down. Frequent neural stimulation creates frequent, controlled bursts of messenger RNA, resulting in protein synthesis exactly when and where it's needed to strengthen synapses.
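The masking/unmasking mechanism works like a tiny state machine. Here is a sketch in Python - the class and method names are invented, and real timings are omitted; it captures only the on/off logic from the description above:

```python
# Toy state machine for beta-actin mRNA: packed in a granule (masked)
# by default, unmasked only while the neuron is stimulated, repacked
# when stimulation stops. Illustrative only.

class BetaActinMRNA:
    def __init__(self):
        self.masked = True        # default: packed in a granule
        self.proteins_made = 0

    def stimulate(self):
        self.masked = False       # granule falls apart; code accessible

    def stimulation_ends(self):
        self.masked = True        # mRNA repacks into a granule

    def tick(self):
        """One moment of time: synthesis happens only while unmasked."""
        if not self.masked:
            self.proteins_made += 1

mrna = BetaActinMRNA()
mrna.tick()                       # masked: nothing happens
mrna.stimulate()
mrna.tick(); mrna.tick()          # a brief burst of protein synthesis
mrna.stimulation_ends()
mrna.tick()                       # masked again: synthesis stops
assert mrna.proteins_made == 2
```

The assert at the end mirrors the paper's finding: proteins are made only during the stimulated window, never before or after.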

Of course, the process involves more than just a single gene - in fact, a dizzyingly huge number are involved. Learning involves large clusters of genes within huge numbers of cells.

University of Florida neuroscience professor Leonid Moroz has tracked specific gene expression in Aplysia neurons, and estimates that any memory-formation event will alter the expression of at least 200-400 genes. This is out of over 10,000 which are active at every moment of the simple marine snail's life.

Moroz's team zeroed in on genes associated with Aplysia's feeding and defensive reflexes, and found over 100 genes similar to those linked to every major human neurological illness, and over 600 similar genes which control development. This suggests these genes emerged in a common ancestor and have remained nearly unchanged through over half a billion years of both human and sea slug evolution.

Human brains contain about one hundred billion neurons, each expressing over 18,000 genes at varying levels of expression (protein synthesis rates), with over 100 trillion connections between them. Aplysia has a comparatively much simpler nervous system of large, easily identifiable neurons. However, it is still capable of learning, and its neurons communicate using the same general principles as human neurons.
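A bit of back-of-the-envelope arithmetic with those round numbers gives a feel for the scale: 100 trillion connections shared among 100 billion neurons works out to roughly a thousand connections per neuron, on average.

```python
# Using the post's round figures (illustrative, not precise counts):
neurons = 100e9          # ~100 billion human neurons
connections = 100e12     # ~100 trillion synaptic connections
print(connections / neurons)  # 1000.0 connections per neuron, on average
```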

According to Dr. Moroz, if genes use a chemical alphabet, there is a kind of molecular grammar, or set of rules controlling the coordinated activity of gene expression across multiple neural genes. To understand memory or neurological diseases at a cellular level, scientists need to learn these grammatical rules.

More sophisticated
Like the Einstein team, Dr. Eric Kandel also went on to study the mouse hippocampus, and found it makes use of a chemical cascade similar to that of Aplysia neurons. But in mouse - and human - hippocampi, the chemical sequence for memory formation is slightly different:

In the hippocampus the changes which lead to memory formation occur in the receiving (postsynaptic) neuron rather than the sending (presynaptic) neuron. This process occurs in two stages:

During Early LTP, the excitatory neurotransmitter glutamate is released. This glutamate acts upon two types of receptors: NMDA (N-methyl-D-aspartate) and AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptors. Normally, only the AMPA receptors respond, but repeated stimulation activates the NMDA receptors as well. These open ion channels in the membrane, allowing calcium ions to flow into the neuron, and this activates a (different) second-messenger kinase, CaMKII (calcium/calmodulin-dependent kinase). CaMKII triggers a chemical cascade that results in the growth of additional AMPA receptors.

In Late LTP, repeated stimulation activates a second system, which releases the modulatory neurotransmitter dopamine.

Like serotonin in Aplysia, dopamine acts upon the hippocampus like a sort of volume switch, dialing up neural activity. (Disruptions in the dopamine system are at the heart of disorders like schizophrenia and Parkinson's disease.)

Dopamine binds to its own receptors on the cell membrane, increasing production of cAMP, activating the PKA-CREB sequence which starts protein synthesis for new dendrite/synapse formation.

Mice are also capable of much more sophisticated memory feats than sea slugs, such as memorizing spatial layouts in a manner much like humans.

In his experiments with mice, Dr. Kandel found three principles at work in learning. First, when two or more neurons converge, stimulating a third neuron, it creates a logic circuit known as a “coincidence detector”. This circuit underlies associative learning - we link A with B, cause with effect, touching a hot stove with pain, etc. Variations of this circuit underlie many forms of mental computation.
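In computing terms, a coincidence detector is an AND gate: the downstream neuron fires only when both inputs arrive together. A minimal sketch, with an illustrative threshold of two:

```python
def coincidence_detector(input_a, input_b, threshold=2):
    """Fire (return True) only if enough inputs are active at once.

    Inputs are 1 (firing) or 0 (silent); with threshold=2 this is
    a logical AND, the basis of associative learning circuits.
    """
    return (input_a + input_b) >= threshold

assert coincidence_detector(1, 1)          # stove AND touch -> pain
assert not coincidence_detector(1, 0)      # one cue alone is not enough
assert not coincidence_detector(0, 0)
```

Chain enough of these together with adjustable thresholds and weights, and you have the raw material for the "many forms of mental computation" Kandel describes.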

Secondly, he found that mouse hippocampi use place cells - special pyramidal neurons which fire in response to specific locations. These place cells act as mental markers, firing at the sight of certain landmarks and allowing mice to encode internal navigational maps.

The permanency of spatial memory varies based upon how much attention is applied to navigation, and levels of attention appear to correspond to levels of the neurotransmitter dopamine. In other words, generally speaking, the more attention (and dopamine) that is applied, the more effectively a route will be memorized. Dopamine modulates the effects of learning - acting like a volume knob that controls how much data is converted from short-term into long-term memory.

Thirdly, Dr. Kandel's team studied motor learning, concentrating their search in the cerebellum (Latin for "little brain"), known to be the master coordinator of complex movements.

All animals practice complex sets of coordinated movements until they perfect them, from learning to drive a car (for humans) to catching flies with the tongue (for frogs). Their first attempts may be unsuccessful, but with perseverance, eventually there will be mastery. These are procedural memories, a form of motor learning which is central to animal behaviour. And just as with episodic and spatial memory, procedural memory is also dependent upon changes in synaptic strength.

With repeated practice, there will be growth (of new dendrites and synapses) in a "midline" (superior medial) strip atop the brain's outer surface, the motor cortex, which plans and issues commands for voluntary movements - sequences of muscle contractions.

Coordinating the process, the cerebellum sits behind the brainstem at the top of the spine. It acts as a kind of coach, comparing performance - the actual changing positions of limbs, trunk and head in space - with intentions. Through this "feed forward" mechanism, the cerebellum corrects motion, ensuring smooth timing and orchestration of the many muscle contraction signals in complex movements. Learning to orchestrate these movements depends to a large extent upon special Purkinje cells.

Neurons fire when special protein channels embedded in their cell membranes open, allowing electrically charged ions to flow inward. Among these, the HCN1 channel (Hyperpolarization-activated, Cyclic Nucleotide-gated nonselective cation channel 1) is key to motor learning in the Purkinje cells of the cerebellum.

To study the role of HCN1 ion channels in motor learning, Dr. Kandel and his team bred mutant mice with neurons that lacked HCN1 channels in different brain regions.

They then ran their mutant mice (along with a control group of normal mice for comparison) through a series of complex motor tests, which included swimming through water mazes and balancing on rods. These tests required complex, repetitive and coordinated motor output. The mice were also conditioned with simpler motor behaviours like eye blinking.

Mice with no HCN1 channels in their Purkinje cells could still perform simple movements like eye blinking, but had extreme difficulty in performing complex behaviours like swimming and balancing. In contrast, mice without HCN1 channels in their forebrain but with normal cerebella had no problem in performing complex behaviours. This shows that HCN1 channels in Purkinje cells are the key to complex motor learning.

When negative currents were applied to Purkinje cells, those lacking HCN1 channels took longer to return to normal firing levels than normal Purkinje cells did. This means HCN1 channels stabilize Purkinje cells, allowing them to recover quickly from activation and return to normal functioning. In complex, repeated behaviours, Purkinje cells receive repetitive bursts of input, and this ability to recover quickly is vital to influencing motor activity.

In the end, while it is good to understand the principles by which your brain manufactures memory, here is some more practical advice on how to make the most of what you've got:

1. Sleep at least seven to eight hours at a regular time every night

2. Take fish oil supplements

3. Exercise every day

4. Limit or eliminate your intake of alcohol, tobacco, and fatty or sugary foods

5. Pay full attention to what you want to remember

6. Space your learning out; cramming, especially in all-night study sessions, does not help long-term memory retention

7. Think of how the information you want to learn relates to you or to things you already know

8. Study a foreign language and a musical instrument

9. Study in the same environment, at the same time every day

10. Keep your study environment free from clutter and distractions

11. Get up and do something different about every 15 minutes before returning to your studies

12. Spend time in mentally stimulating environments where you have new experiences and interact with many different sorts of people from many cultures and backgrounds

13. Reduce your life stresses to a minimum

If you want to read about the Princeton, MIT, Oxford, Harvard, Yale, Tokyo University and other studies upon which these recommendations are based, I invite you to get a copy of my book, The Path Book II. It also explains the different systems of your body, along with the most effective science-supported advice on nutrition, exercise, love, happiness and success found in any book to date.

Sunday, August 17, 2014

Learn Physics at Yale, Irvine and MIT for Free!

I teach English to a wide range of students here in Tokyo, from age three to 70. While most are elementary, high school and college students, I also have a few businesspeople and scientists, including medical doctors and theoretical physicists. Since we use textbooks from their particular fields of specialization, this affords me the opportunity to learn a lot on my own.

For one of my PhD candidate students, I have compiled a list of free resources on the Internet for studying physics in English - at the best schools on the planet. Please enjoy:

Fundamentals of Physics 1 - Yale videos:

Course notes:

Fundamentals of Physics 2 - Yale videos:

Course notes:

MIT Quantum Physics I:

MIT Quantum Physics II:

The MIT audio courses are a bit more of a challenge, as they are only audio, with no transcripts:

Physics I: Classical Mechanics:

Physics II: Electricity and Magnetism:

From University of California Irvine. The videos are found by clicking "course lectures" on the left:

Physics I:

Physics II:

Physics III:

Math Methods in Physics:

Classical Physics:

Einstein's General Relativity and Gravitation:

TV series Manhattan (a lot of pop-up advertising you must close to watch):

Monday, August 11, 2014

On the Passing of Robin Williams: A Psychiatric Nurse Explains Suicide

Oregon psychiatric nurse Shauna Hahn shares the following insight into suicide:

RIP Robin Williams.

On the heels of another suicide, the hanging death of a local mother, I feel compelled to share something about the science of suicide. Too often, I have heard or read comments suggesting that the suicide victim was selfish or did not consider her own family. When I educate patients about this serious topic, I liken suicide to having a heart attack. We know the risks for coronary artery disease - smoking, obesity, hypertension, hyperlipidemia - yet a heart attack doubtless feels surprising to its sufferer. Suicide is a lot like this. We know the risks: depression, substance abuse, risk-taking, a history of other aggressions, etc., yet the great deficiency in serotonin (a "happy" neurotransmitter, or brain chemical, implicated in both depression and anxiety) actually happens quite precipitously.

How do we know this? We can measure levels of serotonin metabolites in the cerebrospinal fluid, and we find that, in individuals who have completed suicide, these levels are much lower than in individuals simply struggling with depression. There is also no difference between the serotonin metabolites of the mildly depressed and the seriously depressed. These dangerously low levels of serotonin mean not only despondency and despair, but also poor impulse control. What a lethal combination.

Individuals who have survived high lethality suicide attempts (jumping off the Golden Gate bridge, shooting themselves in the head) mostly remark that they "did not know what they were thinking" and allude to being "not in [their] right mind." Obviously, individuals affected by mental illness have serious problems thinking clearly.

Kant believed that suicide was *the* philosophical problem. (He was very punitive and unforgiving in his view). Certainly, I empathize with individuals not being able to "understand" suicide, but what I would definitely encourage would be to at least try.

Do not judge men by mere appearances; for the light laughter that bubbles on the lip often mantles over the depths of sadness, and the serious look may be the sober veil that covers a divine peace and joy. - Edwin Hubbel Chapin, 1845

Sunday, August 10, 2014

The Crystal Forest at the Center of the Earth

Dr. Kei Hirose spends his days in his Osaka laboratory creating Hell on Earth - heating metal to 4,500°C at pressures equivalent to three million Earth atmospheres, conditions found at our planet's core. Iron-nickel alloys at these extremes transform into something amazing - they crystallize, creating an entirely new form of metal.

Dr. Hirose's experiments show that Earth's inner core is a "forest" of such iron crystals, some likely as massive as 10 km long, all pointing toward magnetic north.

This is only one of many amazing advances geophysicists have made in recent years as they piece together Earth's birth. It's a fascinating story.

An excerpt from my book The Path: Origins:

Consider for a moment the staggering alignment of circumstances that have led to your existence: 13.7 billion years ago – give or take some pocket change – the universe was born in a primal blast of plasmic energy that rocketed outwards with incomprehensible force and speed, expanding trillions upon trillions of times from a single point within the space of 10⁻³² seconds. At the outset, the universe was so superhot that no stable particles could emerge, and fundamental forces such as gravity and electromagnetism were merged into a single, tremendously powerful unified force, which blasted all of space outward. Gradually the plasma cooled, allowing the separation of the fundamental forces, and the first particles began to form.

The cooling of this great plasma cloud allowed the formation of particles such as those which make up our present-day universe, primarily the simplest of elements, hydrogen. Over billions of years, giant molecular clouds of superhot gases called stellar nurseries would give birth to successive generations of supermassive primeval stars. Hot gases and stardust ejected from these blazing primeval stars clustered in great swirling clouds, forming galaxies and planets.

The Milky Way Galaxy is among them, 100,000 light years in diameter and about 1,000 light years thick on average, containing somewhere on the order of 100 billion stars, with possibly 300 billion more tiny dwarf stars we are not yet able to detect. At its core, a light-devouring supermassive black hole seems to lurk, some four and a half million times the mass of our Sun.

The Solar System itself lies on the edge of the galaxy’s spiral Orion arm, some two-thirds of the way out from the core. It formed 4.6 billion years ago, as a region of molecular cloud underwent gravitational collapse. Gravity, pressure, rotation and magnetic fields flattened and contracted its mass, creating a dense, hot protostar at its core. This core condensed and grew even hotter, causing hydrogen atoms to fuse together, giving birth to our Sun, while the outer reaches of the disk cooled and condensed into mineral grains. Dust and grains collided and clumped, growing ever-larger – forming chondrules, meteoroids, planetesimals, protoplanets and finally planets, as gravitational attraction swept up additional fragments encircling the Sun. Earth, now about a third the age of the universe itself, was born some 4.54 billion years ago, and once every 365 days faithfully orbits that tremendous ball of blazing hydrogen and helium 93 million miles away. Just 100 million years after its initial formation, the infant Earth was smashed by another planetary body, knocking off a massive chunk that remains trapped in orbit as our Moon.

Our home is a rather unlikely object – a massive, spinning sphere of liquid, rock, metal and gas, fluid layers and plates, all held in delicate suspension within a great void by the invisible, binding force of gravity. Deep under the crust, molten metal sloshes around a solid iron-nickel core, creating a geodynamo - a planet-enveloping magnetic field which deflects most of the sun’s solar wind, thus preventing it from blasting our atmosphere into space. It’s the interplay between this solar wind and our atmosphere that creates the breathtaking night-sky light show known as the Aurora Borealis or the Northern Lights.

Some nine billion years after the universe’s explosive birth, the first forms of life began to swim about this watery, star-born mass of rock. A very precise sequence of events occurred, allowing these fragile organisms to gradually grow in ever-greater complexity, to the point where they became self-aware, able for the first time to stare into the vast, cold reaches of the cosmos and wonder what miracle begat them.

Sources: The Path: Origins, copyright 2014, Eric A. Smith, Polyglot Studios 
"Earth's core far hotter than thought", April 26, 2013, Jason Palmer, BBC News
Video: "What is at the centre of the Earth?" August 31, 2011, BBC News

(Earth's surface is 71% water, nearly all of it [97.5%] salt water. The crust is just 50 km thick on average; below that lies the mantle, 2,900 km thick. Still deeper is a sea of molten metal some 2,266 km deep, sloshing about at a scorching 4,000-5,700 degrees C. The core is solid iron-nickel crystal, some 1,200 km in radius, at temperatures found on the surface of the sun - 6,000 degrees Celsius [10,832 F].)
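As a quick sanity check, the quoted layer thicknesses should roughly stack up to Earth's mean radius of about 6,371 km:

```python
# Sanity check: the quoted layer thicknesses should roughly sum to
# Earth's mean radius (~6,371 km). Figures are the article's own round
# numbers, so a small discrepancy is expected.
crust = 50         # km, average crustal thickness
mantle = 2900      # km
outer_core = 2266  # km, molten metal layer
inner_core = 1200  # km, solid inner-core radius

total = crust + mantle + outer_core + inner_core
print(total)  # 6416 km, within about 1% of the 6,371 km mean radius
```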

Thursday, August 7, 2014

Small Wonder, Big Blunder

Image: A comparison of the LB1 skull with that of a typical modern human, photo: David Ferguson, 2014, The Raw Story
“There is nothing like looking, if you want to find something. You certainly usually find something, if you look, but it is not always quite the something you were after.” 
― J.R.R. Tolkien, The Hobbit

The news was sensational, a jaw-dropping anthropological find: a mysterious new species of prehistoric humans, no bigger in stature than a modern child, had been found in the Indonesian island cave of Liang Bua. Named after Flores, the island where it was discovered, Homo floresiensis was a complete enigma. The mysterious specimen, LB1, included a complete skull and thighbones; all the other cave remains were fragments of several individuals.

LB1 was said to have a tiny cranium (only 380 milliliters, or 23.2 cubic inches), housing a brain under a third the size of the average modern human's. Its thighbones indicate it stood only 1.06 meters (3.5 feet) tall. When compared to Homo erectus and Australopithecus, it seemed utterly unique, surely an example of a previously unknown species.

Anthropologists have conclusively demonstrated that the first wave of prehistoric ancestors to leave Africa, some 1.8 million years ago, was Homo erectus, the so-called "wolves with knives", who crossed into Eurasia over the Levantine corridor and the Horn of Africa.

But Homo floresiensis seemed to have emerged with no known predecessors. How such a creature evolved completely outside documented human evolution was baffling, leaving anthropologists to surmise that limited island resources had led to dwarfism as an adaptation - just as it had with the island's indigenous pygmy mammoths, pygmy elephants, pygmy hippos and others.

It was a romantic idea, the notion of "hobbits" living just 15,000 years ago. Unfortunately, it was a matter of making the evidence fit the narrative rather than vice versa.

An international team of American, Chinese and Australian researchers has demonstrated a much more plausible explanation: Homo floresiensis never existed. Penn State evolutionary geneticist Dr. Robert B. Eckhardt, along with University of Adelaide anatomy and pathology professor Maciej Henneberg and Chinese geologist and paleoclimatologist Kenneth Hsu, examined LB1, the single known specimen. Instead of a new prehistoric human species with no ancestry, they found the "less strained explanation" was a typical human with Down syndrome.

The team immediately saw signs of such a developmental disorder, and further evidence supported this conclusion: a mismatch between the skull's left and right halves, the craniofacial asymmetry typical of Down syndrome. These characteristics were found in LB1 but in none of the other Liang Bua remains, further indicating how atypical LB1 was. The creature's cranial volume and stature had also been "markedly" underestimated: subsequent measurements consistently showed a cranial volume of 430 milliliters (26.2 cubic inches), a "significant" difference, and within the range of modern Indonesians with Down syndrome. Down syndrome patients are also comparatively short, consistent with the recovered thighbone.

Dr. Eckhardt's team concluded that the Liang Bua remains weren't sufficiently atypical to require the creation of an entirely new human species. Down syndrome, on the other hand, is one of the most common human developmental disorders, affecting more than one in a thousand babies worldwide.

Sources: Flores bones show features of Down syndrome, not a new 'hobbit' human, press release, David Pacchioli, August 4, 2014, Penn State University;
The Path Book I: Origins, 2014, Eric A. Smith 

Saturday, August 2, 2014

Than are dreamt of in your philosophy

  Analysis by UC Berkeley and University of Hawaii astronomers shows that one in five sun-like stars have potentially habitable, Earth-size planets. (Animation by UC Berkeley/UH-Manoa/Illumina Studios)

Exoplanets are planets beyond our solar system, usually orbiting some other star or stellar remnant. The first to be discovered was Gamma Cephei Ab, about one and a half times the size of Jupiter, within the Errai binary star system in the northern constellation of Cepheus (the King), approximately 45 light-years from Earth. It was spotted in 1988 by Canadian astronomers at the Universities of Victoria and British Columbia.

By 2014, over 1,700 more had been confirmed. A little under a third belong to multiple-planet systems, while some float freely, outside any stellar orbit. The nearest discovered so far is about 12 light-years away.

NASA's Kepler space telescope spent four years scouring our galaxy for potentially habitable planets, tracking 156,000 stars with snapshots every 30 minutes. In that time, Kepler detected 4,229 possible exoplanet candidates; though these are not yet confirmed, astronomers are confident that at least 90% of them are genuine.

Kepler went offline in 2013 when its stabilizing system failed, after having searched only a tiny fraction of our galaxy - a patch of sky which includes part of the constellation Cygnus (the Swan), also called the Northern Cross. The planets it revealed were limited to those which transited (crossed in front of) their host stars from Kepler's vantage point. Vastly more exoplanets are likely to be found in the future.

Astronomers estimate the Milky Way's star population by measuring relative mass or luminosity (brightness), and most set the number at 100 to 200 billion stars. However, our instruments aren't sensitive enough to measure many smaller dwarf stars, so the number could be as high as one trillion in our galaxy alone.

What's truly exciting is that Earth-like planets are "relatively common throughout the Milky Way," according to the University of Hawaii's Dr. Andrew Howard: each star hosts at least 1.6 planets on average, and about one in five sunlike stars hosts an Earth-sized planet within the "habitable" zone - a distance at which liquid water can exist, potentially hosting life.

The research team based their calculations on 42,000 G-type yellow stars similar to our sun in size and heat production.

The most conservative estimate of 100 billion stars (with 1.6 planets, one-fifth in the habitable zone), means our galaxy contains over 160 billion exoplanets, 32 billion of which orbit sunlike stars within the water-sustainable zone.
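The arithmetic behind these figures is simple enough to check directly; a quick sketch using the article's own numbers:

```python
# The back-of-the-envelope estimate above, spelled out. All figures are
# the article's own; this simply makes the multiplication explicit.
stars = 100e9               # most conservative Milky Way star count
planets_per_star = 1.6      # average planets per star
habitable_fraction = 1 / 5  # fraction taken to lie in the habitable zone

exoplanets = stars * planets_per_star        # 160 billion exoplanets
habitable = exoplanets * habitable_fraction  # 32 billion in the habitable zone

print(f"{exoplanets / 1e9:.0f} billion exoplanets, "
      f"{habitable / 1e9:.0f} billion potentially habitable")
# prints: 160 billion exoplanets, 32 billion potentially habitable
```

Scaling the star count up tenfold, as the next paragraph suggests, scales both results by the same factor.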

There could, however, be ten times as many, depending again upon the number of as-yet-undetectable smaller stars. The estimates also don't account for free-floating rogue planets, found outside any stellar orbit; by some estimates these outnumber stellar-bound exoplanets, with perhaps as many as two rogue planets for every star in the Milky Way.

That said, Earth-sized planets within the habitable zone may not necessarily be able to support life as we know it. Exoplanet diversity is, say researchers, "stunning": some are much bigger than Earth - solid rocky giants with thick atmospheres like Neptune, dense gas giants like Jupiter, or fantastically light, airy gas giants like Saturn (which could float in water, given a large enough ocean) - and some may even be dead stars hardened into solid diamond planets (though maybe not). Others may have dense atmospheres that cook their surfaces, like our irascible twin Venus. Still, it's likely that many are rocky and capable of supporting liquid water.

We inhabit, it seems, a very crowded galaxy indeed.

Sources: Number Of Alien Planets Confirmed Beyond Our Solar System Nears 1,000, Data Shows, September 29,2013, Mike Wall, 
Astronomers answer key question: How common are habitable planets?, Press release, November 4, 2013, Robert Sanders, Berkeley University
One in Five Stars Has Earth-sized Planet in Habitable Zone, News Release, November 4, 2013, Erik Petigura, W. M. Keck Observatory

Thursday, July 31, 2014

MIT lab demos rapid evolution in fish, driven by natural environmental change

Photo: Two varieties of the same species of Mexican Tetra, Astyanax mexicanus - in its surface and eyeless cave forms. Although the most obvious morphological traits (changes in body structure) are the loss of pigmentation and eyes in the cave variety, a number of other traits undergo rapid changes in response to the environment.
Image: Nicolas Rohner
The long-standing view of how evolution unfolds is that organisms experience spontaneous genetic mutations; these mutations can result in new traits which are either helpful or deleterious. Beneficial traits increase the odds an individual will survive, breed and pass its traits on to offspring.
Though this evolutionary process unquestionably occurs, it requires substantial time. Yet organisms under the pressure of extreme environmental change must adapt rapidly to survive, pointing to a much faster adaptive strategy that scientists knew must exist. An MIT-Harvard research team has discovered such a strategy in cavefish, one which makes use of a heat shock protein called HSP90.

Heat-shock proteins are a protein group carried by virtually every living creature, including bacteria, plants, animals and humans. They help control the folding and unfolding of other proteins. 

The number in each protein's name indicates its molecular weight in kilodaltons; HSP90, for example, weighs roughly 90 kDa. HSP90 controls the folding of key gene regulators of growth and development, and such folding must be very precise for proteins to perform their proper functions.

Several thousand years ago, a Mexican tetra (Astyanax mexicanus) population was transported from its native river habitat into the radically different environs of underwater caves. Forced to adapt to almost complete darkness, the fish lost their pigmentation, sharpened their sensitivity to nearby prey and to fluctuations in water pressure, and completely lost their eyes.

While the latter change may seem maladaptive, it is in fact advantageous, because maintaining a set of complex but pointless sense organs is biologically "expensive". Shedding unneeded eyes allowed Astyanax to reallocate limited biological resources to functions more appropriate for a cave environment.

According to lead author Dr. Nicolas Rohner, Astyanax mexicanus' striking adaptations are an example of standing genetic variation: populations carry a number of silent but potentially beneficial genetic mutations - genes which have been switched off. Under specific environmental stresses, these genes can switch on, guiding protein manufacture that results in visible phenotypes (observable traits arising from a creature's genes).

According to MIT biology professor and Howard Hughes Medical Institute investigator Dr. Susan Lindquist, HSP90 usually keeps such genetic variations dormant in a wide range of organisms, from primitive yeasts to plants and fruit flies.

Subjecting cells to heightened temperatures or other stresses reduces HSP production; in her research, Dr. Lindquist discovered that normally large reserves of cellular HSP90 dwindle during such physiologically stressful periods. These decreases in HSP90's suppressive control result in the rapid emergence of various phenotypic changes; some of these emergent traits are neutral or deleterious, while some are clearly beneficial.

Environmental changes alter protein folding, causing minor changes in the genome which can have major effects. Because HSP90 controls folding of important gene regulators of growth and development, it acts as a fulcrum for evolutionary change.

Dr. Rohner's research on the genetic changes behind Astyanax' eye loss caught Dr. Lindquist's attention, and they began to collaborate on researching HSP90's role in the process. 

Experiments on both cave and surface fish varieties of Astyanax yielded fascinating results. Raising surface fish with a drug that suppresses HSP90 - causing the same effect as rapid environmental changes - resulted in significant eye size variation, clearly showing HSP90's central role in the trait. 

While the cavefish variety is eyeless, their skulls retain ancestral orbital cavities. Cavefish raised under the same HSP90-suppressing conditions displayed no increase in variation of eye-orbit size, but they did grow smaller orbits, showing that orbit size, too, varies with HSP90 activity.

Because the team used artificial means to achieve their results, however, it was uncertain whether or not such (HSP90-altered) conditions would in fact naturally arise in the environment. 

To determine the answer, the team looked into the factors of the fish's two natural environments, including oxygen levels, temperatures, and pH (potential hydrogen - the tendency of water to be acidic, with more positively charged hydrogen ions, or alkaline, with more negatively charged hydroxide [OH-] ions).

The biggest difference between the surface and cave environments was in conductivity - the water's ability to carry electrical charge, which depends on its dissolved salts. The cave water's low salinity (and hence low conductivity) induced a heat shock response, naturally generating lower levels of HSP90 and thereby lifting the protein's constraints on growth and development regulators. Sure enough, surface fish raised in water with the same low salinity as the cave environment showed significant variations in eye size, demonstrating that a natural environmental stressor could induce the same effects as artificially suppressing HSP90.

This study expands upon Dr. Lindquist's previous experiments with HSP90-induced evolutionary changes in yeast, showing that the same mechanism comes into play throughout the plant and animal kingdoms.

Source: Rapid evolution of novel forms: Environmental change triggers inborn capacity for adaptation, press release, December 12, 2013, Matt Fearer, Whitehead Institute for Biomedical Research, Massachusetts Institute of Technology

Tuesday, July 29, 2014

Breathtaking Photorealism....

The first trailer for Assassin's Creed Unity was unveiled yesterday. And it is... pardon the cliche... revolutionary.

The Cat That Was and Wasn't Dead or A Short Trip Through the Multiverse

One of the most puzzling problems facing modern quantum physics theories is the central paradox of how elementary particles like photons can simultaneously exist in multiple states until the point of being observed.

In 1935, Austrian physicist Dr. Erwin Schrödinger illustrated the improbability of the situation with a famous thought experiment: a cat is placed into a box with a radioactive atom, a Geiger counter (which detects radiation from decaying, unstable atoms), and a vial of poisonous gas. If the atom decays, shedding radiation, the Geiger counter triggers the release of the poison gas, killing the cat. If there is no decay, the vial remains intact, and the cat survives. The question is, when the box is opened, do you find a live cat or a dead one?

According to accepted theories of physics, you could have both - at the same time. This absurd paradox illustrates the central problem with quantum mechanics - forcing us to either conclude that A) matter works much differently on the subatomic scale than it does in the observable world; or B) something is wrong with the theory itself, even though it has consistently held up to mathematical and experimental scrutiny and led to many scientific advances over the last century.

Most physicists resolve the paradox by saying that observation collapses particles into a single state, but this still raises troubling questions, such as: if there is more than one observer, which one is entitled to cause the collapse? and where does the border lie between the laws of the subatomic world, in which these paradoxical states (superpositions) can exist, and the observable world, in which they cannot?

In other words, at the quantum level, it seems that objects are not really definable until they're observed - it seems particles don't actually resolve themselves into a single, measurable state unless they're being observed, and return to a multiple set of potential states when they're not.

In 1996, Dr. Christopher Monroe and colleagues Dawn Meekhof, Brian King and David Wineland at the National Institute of Standards and Technology in Boulder, Colorado cooled a single beryllium ion to near absolute zero, trapping it in a magnetic field and bringing it to a near-motionless state. Stripped of one of its two outer electrons, the ion had a single remaining outer electron that could be in either of two quantum states, having an up or a down spin. Using lasers, the team applied a tiny force one way to induce the up state in the electron, then in the opposite direction to induce rapid oscillation between the up and down spins.

The electron was eventually induced to spin in both orientations simultaneously - in what physicists call a superposition. The team then used lasers to gently nudge the two states apart physically, without collapsing them to a single entity, so that the two states of the single electron were separated in space by 83 nanometers, 11 times the size of the original ion.

In other words, while the old proverb says You can't have your cake and eat it, too, it appears that, at least on the quantum scale, you certainly can.

Schrodinger's cat is alive and well... dead. Simultaneously.

We Can't Be Certain
In 2010, in a darkened laboratory at the University of California Santa Barbara, a tiny metal paddle the width of a human hair was refrigerated, and a vacuum created in a special bell jar. The paddle was then plucked like a tuning fork and observed, as it simultaneously moved and stood still. Superposition was being directly observed – objects in the visible world existing in multiple places and states simultaneously.

In quantum mechanics, there is an inherent uncertainty to reality; the location of an electron can never be precisely pinpointed at any moment in its orbit. Instead, its position can only be predicted in terms of probability. According to some physicists, if there are a thousand possibilities, eventually all thousand possibilities will occur, and so, at the quantum level, the outcome of experiments cannot be 100% predicted. All things are only based on probabilities.
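To make "outcomes only in terms of probability" concrete, here is a small simulation with made-up amplitudes, purely for illustration: each individual measurement of a superposed spin is unpredictable, but the long-run frequencies match the squared amplitudes (the Born rule).

```python
import random

# Quantum indeterminacy in miniature: for a spin prepared in a
# superposition of "up" and "down", no single measurement outcome is
# predictable, but long-run frequencies follow the squared amplitudes
# (the Born rule). The amplitudes here are invented for illustration.
amp_up, amp_down = 0.6, 0.8  # |0.6|^2 + |0.8|^2 = 1, a valid state
p_up = amp_up ** 2           # probability of measuring "up" = 0.36

random.seed(0)
trials = 100_000
ups = sum(random.random() < p_up for _ in range(trials))
print(ups / trials)  # close to 0.36: predictable in aggregate, never per trial
```

No amount of extra information about the preparation sharpens the prediction for any one trial; only the statistics are lawful.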

A great deal of modern technology works upon the paradoxical principles of quantum physics, but this seems to contradict everyday common sense - when we observe objects in the real world, they always exist in only one place and one state at a time.

Quantum physicists call this paradox the measurement problem, and have long resolved it by saying that particles collapse into single states at the time of observation. But in the 1950s, some physicists first began to hypothesize that in fact all possible states exist, separating into different realities at the moment of change.

Infinite Realities
Wavefunction collapse is the point at which a particle existing in several potential states resolves into a single one. Some physicists now believe that at this moment reality branches into every one of those potential states, and that we simply continue along the branch we've observed. This Many-Worlds theory implies that an infinite number of alternate realities exist.

Quantum physicist Hugh Everett proposed that this is because a quantum particle doesn't collapse into a measurable state, but actually causes a split in reality, with a universe existing for every possible state of the object. Superposition, he asserted, actually means parallel worlds exist - one arising anew out of each of a particle's states.

For example, photons beamed through a double slit strike the surface behind it either in two narrow bands or in a spread-out wave pattern, depending upon whether or not they're being observed. When a photon is observed, it behaves as a particle; when it isn't, it behaves like a wave. At the moment of observation, according to Dr. Everett's Many-Worlds theory, the universe splits, and both results occur, but in different realities.
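The two behaviors can be mimicked numerically. In this toy sketch (the units, distances, and unit amplitudes are illustrative assumptions, not the real experiment's parameters), the "unobserved" case adds the two slits' complex amplitudes before squaring, producing interference fringes; the "observed" case adds the probabilities instead, washing the fringes out:

```python
import cmath
import math

wavelength = 1.0     # illustrative units throughout
slit_sep = 5.0       # distance between the two slits
screen_dist = 100.0  # distance from slits to screen

def intensity(x, observed):
    """Relative light intensity at screen position x."""
    # Each slit contributes a unit amplitude whose phase is set by its path length.
    d1 = math.hypot(screen_dist, x - slit_sep / 2)
    d2 = math.hypot(screen_dist, x + slit_sep / 2)
    a1 = cmath.exp(2j * math.pi * d1 / wavelength)
    a2 = cmath.exp(2j * math.pi * d2 / wavelength)
    if observed:
        # Which-slit information known: probabilities add, so no interference.
        return abs(a1) ** 2 + abs(a2) ** 2
    # Unobserved: amplitudes add first, producing interference fringes.
    return abs(a1 + a2) ** 2

fringes = [intensity(x, observed=False) for x in range(50)]
print(min(fringes), max(fringes))      # oscillates between ~0 and ~4
print(intensity(10.0, observed=True))  # flat 2.0 across the screen
```

The only difference between the two branches is *when* the squaring happens, which is exactly the particle-versus-wave distinction the experiment reveals.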

Objects you can observe, he hypothesized, can thus exist simultaneously in parallel universes: the reality in which you continue, and the alternate ones in which the other yous exist. This implies that whenever alternative actions can be taken, the universe splits into alternate realities, in each of which a different choice was made.

According to Berkeley's Dr. Raphael Bousso and Stanford's Dr. Leonard Susskind (among others), this means an infinitely expanding range of multiple universes exists, some with distinctly different laws of physics than the ones governing ours. Our reality is one "causal patch" among an infinite number of others.

The theory suggests that every conceivable possibility eventually comes to pass in one of these infinite realities, and that information might even leak between separate causal patches. Although parallel universes remain much debated, some recent cosmological models are consistent with their existence, with reality splitting into an infinite number of paths for every event occurring in space-time.

More, if you're interested, can be seen in this pair of MIT lectures:

Peeping into the Box
Cutting-edge technology pioneered in 2011 at the National Research Council of Canada is allowing researchers to quickly measure a complex, high-dimensional quantum state of a photon (in this case, a 27-dimensional state) in a single step. This is a huge leap in efficiency over conventional quantum tomography, which requires multiple measurement stages and a significant amount of processing time, analogous to photographing a series of 2D images from different angles and assembling them into a 3D model.

Such a rapid, accurate, and efficient means of measuring high-dimensional states is likely to be critical for advancing our knowledge of quantum mechanics, not to mention for developing high-security quantum communications technology, which in theory will be impossible for interceptors to decode.

Image: This diagram illustrates the setup for the experiment, which incorporates a HeNe laser, beam-splitters, lenses, a special "fan-out hologram", wave-plates, and additional equipment. One can conceive of light as a spiral, and the degree of "twist" in that spiral is called the orbital-angular-momentum quantum number. The spiral is "untwisted" before being measured. Illustration: M. Malik, Nature Communications.

According to Dr. Robert Boyd, Optics and Physics Professor at the University of Rochester and Canada Excellence Research Chair at the University of Ottawa, this new type of direct particle measurement is likely to play an increasingly important role in future quantum communications technology.

A cooperative effort among the Universities of Rochester, Ottawa, and Glasgow demonstrated how direct measurement can be used to sidestep Heisenberg's uncertainty principle, measuring two aspects of a quantum state at once without sacrificing accuracy in either. Ordinarily this is impossible, because the act of measuring a quantum state disturbs either the particle's position or its motion, "collapsing the wave function".

But direct measurement splits the job into two stages: an initial "weak" measurement followed by a "strong" one. The weak measurement is gentle enough to disturb the particle's state only slightly, avoiding wave function collapse.

According to the study's lead author, the University of Vienna's Dr. Mehul Malik, this experiment allows physicists to peek into Schrödinger's box without fully opening it. A single weak measurement is noisy, he explains, leaving great uncertainty about a particle's state; but because it doesn't destroy that state, it can be followed by a "strong" measurement of the second variable. This alternating sequence of weak and strong measurements is then repeated over many identically prepared particles, until the wave function is known to the required precision.
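The statistical trade-off Malik describes can be illustrated with a toy simulation. This is only a sketch of the averaging idea, with made-up numbers, not a model of the actual optical setup: each weak readout barely disturbs the particle but is dominated by noise, and only the average over many identically prepared particles pins down the underlying value.

```python
import random

random.seed(1)
true_value = 0.3  # hypothetical quantity to recover (e.g. one component of a state)
noise = 5.0       # a single weak readout is mostly noise

def weak_readout():
    # Gentle measurement: huge per-shot uncertainty, negligible disturbance.
    return true_value + random.gauss(0.0, noise)

for n in (10, 1_000, 100_000):
    estimate = sum(weak_readout() for _ in range(n)) / n
    print(n, round(estimate, 3))
# The running average converges toward 0.3 as n grows:
# the standard error shrinks like noise / sqrt(n).
```

One readout tells you almost nothing; a hundred thousand of them, each too gentle to collapse the state, tell you nearly everything.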

Sources: The Path Book I: Origins, Eric A. Smith, Polyglot Studios, KK

"Peeking into Schrodinger's Box", press release, January 20, 2014, Leonor Sierra, University of Rochester