Monday, August 18, 2014

Making Memories




Neuroscience's Holy Grail has long been the engram - a neuron or set of neurons that physically holds a memory. To a large extent, that search has ended, thanks to scientists like Eric Kandel, who won the 2000 Nobel Prize for his discoveries in the neurochemistry of learning.

Dr. Kandel began his groundbreaking work by using Pavlov's conditioning techniques to study neural changes in the California sea snail Aplysia as it learned. Aplysia has only 20,000 neurons, and they are among the largest in the animal kingdom, visible to the naked eye. This makes its neural network easy to identify and manipulate, and makes Aplysia to neurobiologists what fruit flies are to geneticists and rats are to behavioral psychologists.

He began by stimulating Aplysia’s sensory neurons with electrodes, then used the process of elimination, neuron by neuron, to map out the entire neural circuit controlling gill withdrawal - a simple behavior in which Aplysia adapts and learns from its environment.

Just as you would jerk your hand away after touching a hot stove, Aplysia reflexively withdraws its gills in response to aversive (unpleasant) stimulation. In comparison with yours, its brain is primitive, essentially two neural bundles called ganglia; but just as with more complex animals, it can learn. In Aplysia, just as in humans, "practice makes perfect"; repeating a stimulus converts a short-term memory - lasting minutes - into a long-term one, lasting days, weeks or a lifetime.

In Aplysia, the gill withdrawal reflex is controlled by just 24 sensory neurons which innervate (send signals to) six motor neurons. Between them are "middle managers" - interneurons which act as modulators, either excitatory (dialing up the likelihood of firing) or inhibitory (dialing down the likelihood of firing). Stimulating the tail activates these interneurons, which release the neurotransmitter serotonin. This triggers an excitatory response, and the motor neurons fire, causing muscular contractions. The end result is that the animal withdraws its gill in response to a shock.

Aplysia's neurons (like yours) are hardwired - physically and functionally fixed by instructions encoded in DNA - so they aren't capable of significant change, such as increasing in number or changing in function or location. However, the connections between them are extremely flexible, and learning changes their signaling efficiency.

Neurons communicate via these connections, across tiny gaps called synapses, by releasing chemical messengers called neurotransmitters.

Lasting memories are preserved by the growth and maintenance of these synapses. In other words, memories are encoded in the connections between neurons.

Sensitized
Dr. Kandel's early experiments involved training Aplysia in a learned fear response called sensitization, in which repeated exposure to an aversive stimulus makes a creature more sensitive to that stimulus. For example, a war veteran sensitized to sounds like gunfire might jump at the slamming of a car door. Similarly, an Aplysia snail which has become sensitized to shocks on its siphon (a fleshy breathing tube) will also respond to stimuli applied to its tail. This is because, simply stated, "practice makes perfect" - repeated exposure converts short-term memory into long-term memory, via physical growth (protein manufacture).

After tracing the specific neural circuits which control gill withdrawal, Dr. Kandel devised a technique for growing cell cultures in a petri dish from larval snails, creating the simplest possible learning circuit - just two live neurons, one sensory, one motor.

He and his colleagues could then replace tail shocks with a squirt of serotonin, and investigate the specific molecular processes which lead to memory formation.

He discovered there are two separate chemical sequences, one for building short-term, and one for building long-term memories. Short-term memory - which lasts for seconds to minutes - comes from increasing a presynaptic neuron's ability to emit neurotransmitters.

This neurotransmitter increase develops from a six-step chemical sequence called a signaling cascade:
When the snail's tail is given a mild shock, the neurotransmitter serotonin is released by interneurons (intermediate neurons which amplify or dampen sensory neuron input to targets such as motor neurons). This serotonin binds to protein receptors embedded in the membrane of the sensory neuron - the presynaptic cell in the sensory-to-motor connection.

These serotonin-activated (serotonergic) receptors prompt the conversion of ATP, the cell's natural "fuel," into a special chemical signal: a second messenger called cAMP. cAMP is called a second messenger because it relays an external signal from the membrane to molecular machines inside the cell.

In Aplysia, the cAMP signal activates a protein on-off switch called a kinase (PKA). This kinase migrates back to the membrane, where it chemically modifies (phosphorylates) special calcium channels - proteins embedded in the membrane which act like selective gateways - changing their shape.

The shape change temporarily opens these channels to calcium ions, which flow into the synaptic terminals of the neuron.

Calcium ions function as the chemical switch that triggers increased neurotransmitter release. This increased release is the physical basis of short-term memory formation.
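The six-step sequence above can be sketched as a toy state model - a loose illustration of the cascade's order, not real biochemistry. All quantities, multipliers and thresholds here are invented purely for clarity:

```python
# Toy model of the short-term sensitization cascade in an Aplysia
# sensory neuron. Step names follow the text; all numbers are
# invented illustrative units, not real concentrations.

def short_term_cascade(tail_shock: bool) -> dict:
    state = {"serotonin": 0.0, "cAMP": 0.0, "PKA_active": False,
             "calcium": 0.0, "transmitter_release": 0.0}
    if not tail_shock:
        return state
    state["serotonin"] = 1.0                      # 1. interneurons release serotonin
    state["cAMP"] = state["serotonin"] * 2.0      # 2-3. receptors convert ATP to cAMP
    state["PKA_active"] = state["cAMP"] > 0.0     # 4. cAMP switches on the kinase PKA
    if state["PKA_active"]:
        state["calcium"] = 1.0                    # 5. phosphorylated channels admit Ca2+
    state["transmitter_release"] = state["calcium"] * 3.0  # 6. Ca2+ triggers release
    return state

print(short_term_cascade(True)["transmitter_release"])   # 3.0 (enhanced release)
print(short_term_cascade(False)["transmitter_release"])  # 0.0 (no shock, no release)
```

The point of the sketch is simply that each step gates the next: remove any link in the chain and the final neurotransmitter release never happens.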

Repeating the stimulation converts this short-term memory into a long-term one by triggering an additional chemical cascade, which activates physical growth - protein manufacture.

This anatomical change is the sprouting of new synapses and/or synaptic branches called dendrites - tendril-like appendages which reach out to connect with neighboring neurons.

This change is called long-term potentiation (LTP), so named because, over the long term, the neuron's potential to fire is enhanced - the postsynaptic neuron becomes more sensitive to stimuli and fires more frequently. The phenomenon involves long-term modification of the synaptic connection.

In 2010, MIT's Gertler Lab created a spectacular film of this growth process in living, cultured mouse neurons. Here you can see the growth of neurites, neural buds which sprout into dendrites.

In Aplysia, repeating the stimulation (mild shocks) triggers greater serotonin release, which in turn produces more of the second messenger cAMP. This activates the kinase (PKA) switch more persistently. Just as in short-term memory formation, the kinase migrates back to the cell membrane and opens protein channels, and calcium ions rush into the synaptic terminals, triggering neurotransmitter release.

However, the repeated stimulation also triggers a second action, one that ultimately results in physical growth of the neuron through protein synthesis.

A subunit of the kinase moves into the neuron's cell nucleus, where it activates a special protein called a transcription factor (CREB, or cAMP Response Element Binding protein). This transcription factor binds to specific DNA sequences (genes), switching them on to manufacture the components of new synapses from protein building blocks like neurexin and neuroligin.

Genes are segments of the DNA molecule arranged in specific sequences to guide the synthesis of messenger RNA (mRNA).

mRNA is a molecule which copies the genetic code from DNA and uses it as a template to guide the manufacture of proteins - the most complex molecules on Earth, which carry out virtually all biological functions, from forming tissues to carrying out chemical processes vital for life.

To accomplish this, mRNA peels off from its parent DNA, then travels out of the cell nucleus to a special region of the cell body - a mazelike series of tubes called the endoplasmic reticulum. Here the mRNA is used by mini protein factories called ribosomes as a blueprint for assembling small building blocks (amino acids) into proteins.

Dr. Robert Singer and colleagues at the Albert Einstein College of Medicine developed a means of filming this neural mRNA synthesis. They attached harmless fluorescent chemical tags to mRNA molecules so they could be filmed within live mouse neurons. Here is the world's first film of a memory being formed in real time: https://www.youtube.com/watch?v=6MCf-6It0Zg

Mammals like mice have evolved a special memory-creating and storing brain structure called a hippocampus, named after the Greek word for seahorse, because of its curly shape.

Dr. Singer's team stimulated neurons in the mouse's hippocampus, where "episodic" and "spatial" memories are formed and stored (episodic memories are the conscious mental record of our life events and the sequences in which they occur, while spatial memories constitute navigational guides through an organism's environments).

The hippocampus acts as a sort of amplifying loop, receiving signals from the cortex - the area where conscious thought and ultimate brain control resides - and sending signals back. As memories are consolidated - stabilized into potentially lifelong memories - they are gradually transferred from the hippocampus to the cortex during sleep.

Dr. Singer's team targeted the mRNA which carries the code for a structural protein called beta-actin, central to long-term memory formation. In mammals, beta-actin proteins strengthen synaptic connections by building and altering dendritic spines.

Within 10 to 15 minutes of stimulation, beta-actin mRNA began to emerge, proving the neural stimulation had triggered transcription of the beta-actin gene. The film shows these fluorescently glowing beta-actin mRNA molecules traveling from neural nuclei to their destinations in the dendrites, where they will be used to synthesize beta-actin protein.

Neural mRNA uses a unique mechanism the Einstein team calls "masking" and "unmasking", which allows beta-actin protein synthesis only when and where it is needed. Because neurons are comparatively long cells, the beta-actin mRNA molecules have to be guided to create beta-actin proteins only in specific regions at the ends of dendrite spines.

According to Dr. Singer, just after beta-actin mRNA forms in the nuclei of hippocampal neurons, it migrates out to the cells' inner gel (the cytoplasm). At this point, the molecules are packed into granules whose genetic code is inaccessible for protein synthesis.

Stimulating the neuron makes the beta-actin granules fall apart, unmasking the mRNA molecules, and making them available for beta-actin protein synthesis. When the stimulation stops, this protein synthesis shuts off: after synthesizing beta-actin proteins for only a few minutes, the mRNA molecules abruptly repack into granules, returning once again to their default inaccessible mode.

In this way, stimulated neurons activate protein synthesis to form memories, then shut it down. Frequent neural stimulation creates frequent, controlled bursts of messenger RNA, resulting in protein synthesis exactly when and where it's needed to strengthen synapses.

Of course, the process involves more than just a single gene - in fact, a dizzyingly huge number are involved. Learning involves large clusters of genes within huge numbers of cells.

University of Florida neuroscience professor Leonid Moroz has tracked specific gene expression in Aplysia neurons, and estimates that any memory-formation event alters the expression of at least 200 to 400 genes - out of more than 10,000 which are active at every moment of the simple marine snail's life.

Moroz's team zeroed in on genes associated with Aplysia's feeding and defensive reflexes, and found over 100 genes similar to those linked to every major human neurological illness, and over 600 similar genes which control development. This suggests these genes emerged in a common ancestor and have remained nearly unchanged through over half a billion years of both human and sea slug evolution.

Human brains contain about one hundred billion neurons, each expressing over 18,000 genes at varying levels of expression (protein synthesis rates), with over 100 trillion connections between them. Aplysia has a comparatively much simpler nervous system, with only about 20,000 easily identifiable, large neurons. However, it is still capable of learning, and its neurons communicate using the same general principles as human neurons.

According to Dr. Moroz, if genes use a chemical alphabet, there is a kind of molecular grammar, or set of rules controlling the coordinated activity of gene expression across multiple neural genes. To understand memory or neurological diseases at a cellular level, scientists need to learn these grammatical rules.

More sophisticated
Like the Einstein team, Dr. Kandel went on to study the mouse hippocampus, and found it uses a chemical cascade similar to that of Aplysia neurons. But in mouse - and human - hippocampi, the chemical sequence for memory formation is slightly different:

In the hippocampus the changes which lead to memory formation occur in the receiving (postsynaptic) neuron rather than the sending (presynaptic) neuron. This process occurs in two stages:

During Early LTP, the excitatory neurotransmitter glutamate is released. Glutamate acts upon two types of receptors: NMDA (N-methyl-D-aspartate) and AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptors. Normally, only the AMPA receptors respond, but repeated stimulation activates the NMDA receptors as well. These open ion channels in the membrane, allowing calcium ions to flow into the neuron, which activates a (different) second-messenger kinase, CAMK2A (calcium/calmodulin-dependent kinase). CAMK2A triggers a chemical cascade that results in the addition of more AMPA receptors.

In Late LTP, repeated stimulation activates a second system, which releases the modulatory neurotransmitter dopamine.

Like serotonin in Aplysia, dopamine acts upon the hippocampus like a sort of volume switch, dialing up neural activity. (Disruptions in the dopamine system are at the heart of disorders like schizophrenia and Parkinson's disease.)

Dopamine binds to its own receptors on the cell membrane, increasing production of cAMP, activating the PKA-CREB sequence which starts protein synthesis for new dendrite/synapse formation.

Mice are also capable of much more sophisticated memory feats than sea slugs, such as memorizing spatial layouts in a manner much like humans.

In his experiments with mice, Dr. Kandel found three principles at work in learning. First, when two or more neurons converge on a third, jointly stimulating it, they create a logic circuit known as a "coincidence detector". This circuit underlies associative learning - we link A with B, cause with effect, touching a hot stove with pain, and so on. Variations of this circuit underlie many forms of mental computation.
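A coincidence detector of this kind can be sketched in a few lines - a hypothetical toy model, not Kandel's actual circuitry. The 10-millisecond window and the spike times are invented for illustration:

```python
# Toy "coincidence detector": a target neuron fires only when two
# converging inputs spike within a short time window of each other.
# Spike times are in milliseconds; the window value is invented.

def coincidence_detector(spikes_a, spikes_b, window_ms=10.0):
    """Return the times at which the target neuron fires,
    i.e. when an A-spike coincides with some B-spike."""
    firings = []
    for ta in spikes_a:
        if any(abs(ta - tb) <= window_ms for tb in spikes_b):
            firings.append(ta)
    return firings

# A and B coincide near 100 ms but not at 300 ms:
print(coincidence_detector([100.0, 300.0], [104.0, 500.0]))  # [100.0]
```

This is the logic of association in miniature: the third neuron responds only to A *and* B together, so its firing comes to represent the link between the two inputs.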

Secondly, he found that mouse hippocampi use place cells - special pyramidal neurons which fire in response to specific locations. These place cells act as mental markers, firing at the sight of certain landmarks and allowing mice to encode internal navigational maps.

The permanence of spatial memory depends upon how much attention is applied to navigation, and levels of attention appear to correspond to levels of the neurotransmitter dopamine. Generally speaking, the more attention (and dopamine) applied, the more effectively a route is memorized. Dopamine modulates the effects of learning, acting like a volume knob that controls how much data is converted from short-term into long-term memory.
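The "volume knob" idea can be expressed as a toy calculation - purely illustrative, with invented numbers; it is not a model drawn from Kandel's work:

```python
# Toy model of dopamine as a "volume knob" on memory consolidation:
# the chance that a short-term trace becomes long-term scales with
# an attention/dopamine level between 0 and 1. Values are invented.

def consolidation_probability(base_p: float, dopamine: float) -> float:
    """Scale a baseline consolidation probability by dopamine level,
    capped at certainty (1.0)."""
    return min(1.0, base_p * (1.0 + dopamine))

low  = consolidation_probability(0.2, 0.1)   # inattentive: ~0.22
high = consolidation_probability(0.2, 0.9)   # attentive:   ~0.38
```

The only behavior the sketch is meant to capture is monotonicity: more dopamine, more of the short-term trace survives as long-term memory, up to a ceiling.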

Thirdly, Dr. Kandel's team studied motor learning, concentrating their search in the cerebellum (Latin for "little brain"), known to be the master coordinator of complex movements.

All animals practice complex sets of coordinated movements until they perfect them, from learning to drive a car (for humans) to catching flies with the tongue (for frogs). First attempts may be unsuccessful, but with perseverance comes mastery. These are procedural memories, a form of motor learning central to animal behaviour. And just as with episodic and spatial memory, procedural memory depends upon changes in synaptic strength.

With repeated practice, there will be growth (of new dendrites and synapses) in a "midline" (superior medial) strip atop the brain's outer surface, the motor cortex, which plans and issues commands for voluntary movements - sequences of muscle contractions.

Coordinating the process, the cerebellum sits behind the brainstem at the top of the spine. It acts as a kind of coach, comparing performance - the actual changing positions of limbs, trunk and head in space - with intentions. Through this "feed forward" mechanism, the cerebellum corrects motion, ensuring smooth timing and orchestration of the many muscle contraction signals in complex movements. Learning to orchestrate these movements depends to a large extent upon special Purkinje cells.

Neurons fire when special protein channels embedded in the cell membranes open, allowing electrically charged ions to flow inward. Among these, the HCN1 channel (hyperpolarization-activated, cyclic nucleotide-gated nonselective cation channel) is key to motor learning in the Purkinje cells of the cerebellum.

To study the role of HCN1 ion channels in motor learning, Dr. Kandel and his team bred mutant mice with neurons that lacked HCN1 channels in different brain regions.

They then ran their mutant mice (along with a control group of normal mice for comparison) through a series of complex motor tests, which included swimming through water mazes and balancing on rods. These tests required complex, repetitive and coordinated motor output. The mice were also conditioned with simpler motor behaviours like eye blinking.

Mice with no HCN1 channels in their Purkinje cells could still perform simple movements like eye blinking, but had extreme difficulty in performing complex behaviours like swimming and balancing. In contrast, mice without HCN1 channels in their forebrain but with normal cerebella had no problem in performing complex behaviours. This shows that HCN1 channels in Purkinje cells are the key to complex motor learning.

When negative currents were applied to Purkinje cells, those lacking HCN1 channels took longer to return to normal firing levels than normal Purkinje cells. This means HCN1 channels stabilize Purkinje cells, allowing them to recover quickly from activation and return to normal functioning. In complex, repeated behaviours, Purkinje cells receive repetitive bursts of input, and this ability to recover quickly is vital to influencing motor activity.

In the end, while it is good to understand the principles by which your brain manufactures memory, here is some more practical advice on how to make the most of what you've got:

1. Sleep at least seven to eight hours at a regular time every night

2. Take fish oil supplements

3. Exercise every day

4. Limit or eliminate your intake of alcohol, tobacco, and fatty or sugary foods

5. Pay full attention to what you want to remember

6. Space your learning out; cramming does not help long-term memory retention, especially all-night study sessions.

7. Think of how the information you want to learn relates to you or to things you already know

8. Study a foreign language and a musical instrument

9. Study in the same environment, at the same time every day

10. Keep your study environment free from clutter and distractions

11. Get up and do something different about every 15 minutes before returning to your studies

12. Spend time in mentally stimulating environments where you have new experiences and interact with many different sorts of people from many cultures and backgrounds

13. Reduce your life stresses to a minimum

If you want to read about the Princeton, MIT, Oxford, Harvard, Yale, Tokyo University and other studies upon which these recommendations are based, I invite you to get a copy of my book, The Path Book II. It also explains the different systems of your body, along with the most effective, science-supported advice on nutrition, exercise, love, happiness and success found in any book to date.

Sunday, August 17, 2014

Learn Physics at Yale, Irvine and MIT for Free!

I teach English to a wide range of students here in Tokyo, from age three to 70, and, while most are elementary, high school and college students, I also have a few businesspersons and scientists, including medical doctors and theoretical physicists. Because we use textbooks from their particular fields of specialization, teaching affords me the opportunity to learn a lot on my own.

For one of my PhD candidate students, I have compiled a list of free resources on the Internet for studying physics in English - at the best schools on the planet. Please enjoy:

Fundamentals of Physics 1 - Yale videos:

Course notes:

Fundamentals of Physics 2 - Yale videos:

Course notes:

MIT Quantum Physics I:

MIT Quantum Physics II:

The MIT audio courses are a bit more of a challenge, as they are only audio, with no transcripts:

Physics I: Classical Mechanics:

Physics II: Electricity and Magnetism:

From University of California Irvine. The videos are found by clicking "course lectures" on the left:

Physics I:

Physics II:

Physics III:

Math Methods in Physics:

Classical Physics:

Einstein's General Relativity and Gravitation:

TV series Manhattan (a lot of pop-up advertising you must close to watch):

Monday, August 11, 2014

On the Passing of Robin Williams: A Psychiatric Nurse Explains Suicide

Oregon psychiatric nurse Shauna Hahn shares the following insight into suicide:

RIP Robin Williams.

On the heels of another suicide, the hanging death of a local mother, I feel compelled to share something about the science of suicide. Too often, I have heard or read comments suggesting that the suicide victim was selfish or did not consider her own family. When I educate patients about this serious topic, I liken suicide to having a heart attack. We know the risks for coronary artery disease - smoking, obesity, hypertension, hyperlipidemia - yet a heart attack doubtless feels surprising to its sufferer. Suicide is a lot like this. We know the risks: depression, substance abuse, risk-taking, history of other aggressions, and so on, yet the great deficiency in serotonin (a "happy" neurotransmitter, or brain chemical, implicated in both depression and anxiety) actually happens quite precipitously.

How do we know this? We can measure levels of serotonin metabolites in the cerebrospinal fluid, and we find that levels in individuals who have completed suicide are much lower than in individuals simply struggling with depression. And there is no difference in serotonin metabolites between the lightly depressed and the seriously depressed. These dangerously low levels of serotonin mean not only despondency and despair but also poor impulse control. What a lethal combination.

Individuals who have survived high lethality suicide attempts (jumping off the Golden Gate bridge, shooting themselves in the head) mostly remark that they "did not know what they were thinking" and allude to being "not in [their] right mind." Obviously, individuals affected by mental illness have serious problems thinking clearly.

Kant believed that suicide was *the* philosophical problem. (He was very punitive and unforgiving in his view). Certainly, I empathize with individuals not being able to "understand" suicide, but what I would definitely encourage would be to at least try.

Do not judge men by mere appearances; for the light laughter that bubbles on the lip often mantles over the depths of sadness, and the serious look may be the sober veil that covers a divine peace and joy. - Edwin Hubbel Chapin, 1845

Sunday, August 10, 2014

The Crystal Forest at the Center of the Earth

Dr. Kei Hirose spends his days in his Osaka laboratory creating Hell on Earth - heating metal to 4,500°C at pressures equivalent to three million Earth atmospheres, conditions found at our planet's core. At these extremes, iron-nickel alloys transform into something amazing: they crystallize, creating an entirely new form of metal.

Dr. Hirose's experiments show that Earth's inner core is a "forest" of such iron crystals, some likely as much as 10 km long, all pointing toward magnetic north.

This is only one of many amazing advances geophysicists have made in recent years as they piece together Earth's birth. It's a fascinating story.



An excerpt from my book The Path: Origins:

Consider for a moment the staggering alignment of circumstances that have led to your existence: 13.7 billion years ago – give or take some pocket change – the universe was born in a primal blast of plasmic energy that rocketed outwards with incomprehensible force and speed, expanding trillions upon trillions of times from a single point within the space of 10⁻³² seconds. At the outset, the universe was so superhot that no stable particles could emerge, and fundamental forces such as gravity and electromagnetism were merged into a single, tremendously powerful unified force, which blasted all of space outward. Gradually the plasma cooled, allowing the separation of the fundamental forces, and the first particles began to form.

The cooling of this great plasma cloud allowed the formation of particles such as those which make up our present-day universe, primarily the simplest of elements, hydrogen. Over billions of years, giant molecular clouds of superhot gases called stellar nurseries would give birth to successive generations of supermassive primeval stars. Hot gases and stardust ejected from these blazing primeval stars clustered in great swirling clouds, forming galaxies and planets.

The Milky Way Galaxy is among them, 100,000 light years in diameter and about 1,000 light years thick on average, containing somewhere on the order of 100 billion stars, with possibly 300 billion more tiny dwarf stars we are not yet able to detect. At its core, a light-devouring supermassive black hole seems to lurk, some four and a half million times the mass of our Sun.

The Solar System itself lies on the edge of the galaxy’s spiral Orion arm, some two-thirds of the way out from the core. It formed 4.6 billion years ago, as a region of molecular cloud underwent gravitational collapse. Gravity, pressure, rotation and magnetic fields flattened and contracted its mass, creating a dense, hot protostar at its core. This core condensed and grew even hotter, causing hydrogen atoms to fuse together, giving birth to our Sun, while the outer reaches of the disk cooled and condensed into mineral grains. Dust and grains collided and clumped, growing ever-larger – forming chondrules, meteoroids, planetesimals, protoplanets and finally planets, as gravitational attraction swept up additional fragments encircling the Sun. Earth - now about a third the age of the universe itself - was born some 4.54 billion years ago, and once every 365 days faithfully orbits that tremendous ball of blazing hydrogen and helium 93 million miles away. Just 100 million years after its initial formation, the infant Earth was smashed by another planetary body, knocking off a massive chunk to be trapped in eternal orbit as our Moon.

Our home is a rather unlikely object – a massive, spinning sphere of liquid, rock, metal and gas, fluid layers and plates, all held in delicate suspension within a great void by the invisible, binding force of gravity. Deep under the crust, molten metal sloshes around a solid iron-nickel core, creating a geodynamo - a planet-enveloping magnetic field which deflects most of the Sun’s solar wind, thus preventing it from blasting our atmosphere into space. It’s the interplay between this solar wind and our atmosphere that creates the breathtaking night-sky light show known as the Aurora Borealis or the Northern Lights.

Some nine billion years after the universe’s explosive birth, the first forms of life began to swim about this watery, star-born mass of rock. A very precise sequence of events occurred, allowing these fragile organisms to gradually grow in ever-greater complexity, to the point where they became self-aware, able for the first time to stare into the vast, cold reaches of the cosmos and wonder what miracle begat them.

Sources: The Path: Origins, copyright 2014, Eric A. Smith, Polyglot Studios 
"Earth's core far hotter than thought", April 26, 2013, Jason Palmer, BBC News
Video: "What is at the centre of the Earth?" August 31, 2011, BBC News

(Earth's surface is 71% water, nearly all of it [97.5%] salt water. The crust is just 50 km thick on average; below that is a mantle 2,900 km thick. Still deeper is a sea of molten metal some 2,266 km deep, sloshing about at a scorching 4,000-5,700 degrees C. The core is solid iron-nickel crystal, some 1,200 km in radius, at temperatures found on the surface of the Sun - 6,000 degrees Celsius [10,832 F].)
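As a quick sanity check, the layer figures quoted above are rounded averages, so they need not sum exactly to Earth's mean radius of about 6,371 km - but they come close:

```python
# Back-of-envelope check on the quoted layer thicknesses (km).
# All values are the approximate averages given in the text.
crust, mantle, outer_core, inner_core = 50, 2900, 2266, 1200
total = crust + mantle + outer_core + inner_core
print(total)  # 6416 km - within about 1% of Earth's ~6,371 km mean radius
```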

Thursday, August 7, 2014

Small Wonder, Big Blunder

Image: A comparison of the LB1 skull with that of a typical modern human, photo: David Ferguson, 2014, The Raw Story
“There is nothing like looking, if you want to find something. You certainly usually find something, if you look, but it is not always quite the something you were after.” 
― J.R.R. Tolkien, The Hobbit

The news was sensational, a jaw-dropping anthropological find: a mysterious new species of prehistoric humans, no bigger in stature than a modern child, had been found in the Indonesian island cave of Liang Bua. Named after Flores, the island where it was discovered, Homo floresiensis was a complete enigma. The mysterious specimen, LB1, included a complete skull and thighbones; all the other cave remains were fragments of several individuals.

LB1 was said to have a tiny cranium (only 380 milliliters or 23.2 cubic inches), housing a brain under a third the size of the average modern human's. Its thighbones indicate it stood only 1.06 meters (3.5 feet) tall. When compared to Homo erectus and Australopithecus, it seemed utterly unique, surely an example of a previously unknown species.

Anthropologists have conclusively demonstrated that the first wave of prehistoric ancestors to leave Africa, some 1.8 million years ago, were Homo erectus - the so-called "wolves with knives" - who crossed into Eurasia via the Levantine corridor and the Horn of Africa.

But Homo floresiensis seemed to have emerged with no known predecessors. How such a creature evolved completely outside documented human evolution was baffling, leaving anthropologists to surmise that limited island resources had led to dwarfism as an adaptation - just as it had with the island's indigenous pygmy mammoths, pygmy elephants, pygmy hippos and others.

It was a romantic idea, the notion of "hobbits" living just 15,000 years ago. Unfortunately, it was a matter of making the evidence fit the narrative rather than vice versa.

An international team of American, Chinese and Australian researchers has demonstrated a much more plausible explanation: Homo floresiensis never existed. Penn State evolutionary geneticist Dr. Robert B. Eckhardt, along with University of Adelaide anatomy and pathology professor Maciej Henneberg and Chinese geologist and paleoclimatologist Kenneth Hsu, examined LB1, the single known specimen. Instead of a new prehistoric human species with no ancestry, they found the "less strained explanation" was a typical human with Down syndrome.

The team immediately saw signs of such a developmental disorder, and further evidence supported this conclusion: a mismatch between the skull's left and right halves, the craniofacial asymmetry typical of Down syndrome. These Down syndrome characteristics were found only in LB1 and in none of the other Liang Bua remains, further indicating how atypical LB1 was.

The creature's cranial volume and stature had also been "markedly" underestimated: subsequent measurements consistently showed a cranial volume of 430 milliliters (26.2 cubic inches), a "significant" difference, and within the range of modern Indonesians with Down syndrome. Down syndrome patients are also comparatively short, consistent with the recovered thighbone.

Dr. Eckhardt's team ultimately concluded the Liang Bua remains weren't sufficiently atypical to require creating an entirely new human species. Down syndrome, by contrast, is one of the most common human developmental disorders, affecting more than one in a thousand babies worldwide.

Sources: Flores bones show features of Down syndrome, not a new 'hobbit' human, press release, David Pacchioli, August 4, 2014, Penn State University;
The Path Book I: Origins, 2014, Eric A. Smith 

Saturday, August 2, 2014

Than are dreamt of in your philosophy

  Analysis by UC Berkeley and University of Hawaii astronomers shows that one in five sun-like stars have potentially habitable, Earth-size planets. (Animation by UC Berkeley/UH-Manoa/Illumina Studios)

Exoplanets are planets beyond our solar system, usually orbiting some other star or stellar remnant. The first to be discovered was Gamma Cephei Ab, about one and a half times the size of Jupiter, within the Errai binary star system in the northern constellation of Cepheus (the King), approximately 45 light-years from Earth. It was spotted in 1988 by Canadian astronomers at the Universities of Victoria and British Columbia.

By 2014, over 1,700 more had been confirmed. A little less than a third are part of multiple planetary systems, while some are free-floating, outside of any stellar orbit. The nearest so far discovered is about 12 light-years away.

NASA's Kepler space telescope spent four years scouring our galaxy for potentially habitable planets, tracking 156,000 stars with snapshots every 30 minutes. In its four-year mission, Kepler detected 4,229 possible exoplanet candidates, and, though not yet confirmed, astronomers are confident that at least 90% of them are genuine.

Kepler went offline in 2013 when its stabilizing system failed, after having searched only a tiny fraction of our galaxy - a patch of sky which includes part of the constellation Cygnus (the Swan), also called the Northern Cross. The planets it revealed were limited to those which transited (crossed in front of) their host stars from Kepler's vantage point. Vastly more exoplanets are likely to be found in the future.

Astronomers estimate the Milky Way's star population by measuring relative mass or luminosity (brightness), and most set the number at 100 to 200 billion stars. However, our instruments aren't sensitive enough to measure many smaller dwarf stars, so the number could be as high as one trillion in our galaxy alone.

What's truly exciting is that Earth-like planets are "relatively common throughout the Milky Way," according to the University of Hawaii's Dr. Andrew Howard: each star hosts at least 1.6 planets on average. A little over one out of five stars is of the same class as our sun and hosts an Earth-sized planet within the "habitable" zone - a distance at which liquid water can exist, potentially hosting life.

The research team based their calculations on 42,000 G-type yellow stars similar to our sun in size and heat production.

The most conservative estimate of 100 billion stars (with 1.6 planets each, one-fifth of those planets in the habitable zone) means our galaxy contains over 160 billion exoplanets, 32 billion of which orbit sunlike stars within the water-sustainable zone.

There could, however, be ten times as many, depending again upon the number of as-yet-undetectable smaller stars. The estimates also don't account for free-floating rogue planets, found outside any stellar orbit; these may outnumber stellar-bound exoplanets by as much as 50 percent, or two rogue planets for every star in the Milky Way.
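As a sanity check on the arithmetic above, here's a quick back-of-the-envelope sketch in Python, using only the figures stated in this post (the conservative star count, the 1.6-planets-per-star average, and the one-in-five habitable-zone fraction):

```python
# Back-of-the-envelope galaxy planet count, using only the article's figures.
STARS = 100_000_000_000        # conservative Milky Way star count
PLANETS_PER_STAR = 1.6         # Kepler-derived average
HABITABLE_FRACTION = 1 / 5     # share orbiting sunlike stars in the habitable zone

total_planets = STARS * PLANETS_PER_STAR
habitable_planets = total_planets * HABITABLE_FRACTION

print(f"~{total_planets:,.0f} planets, ~{habitable_planets:,.0f} habitable-zone candidates")
```

If the true star count is ten times higher, as some estimates allow, every figure above simply scales by ten.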

That said, Earth-sized planets within the habitable zone may not necessarily be able to support life as we know it. Exoplanet diversity is, say researchers, "stunning": some exoplanets are much bigger than Earth; some are solid rocky giants with thick atmospheres like Neptune; some are dense gas giants like Jupiter, or fantastically light, airy gas giants like Saturn (which could float in water, given a large enough ocean); a few may even be dead stars that hardened into solid diamond planets (though maybe not). Some may also have dense atmospheres that cook their surfaces, like our irascible twin Venus. It's likely, however, that many are rocky and capable of supporting liquid water.

We inhabit, it seems, a very crowded galaxy indeed.

Sources: Number Of Alien Planets Confirmed Beyond Our Solar System Nears 1,000, Data Shows, September 29, 2013, Mike Wall, Space.com 
Astronomers answer key question: How common are habitable planets?, press release, November 4, 2013, Robert Sanders, UC Berkeley
One in Five Stars Has Earth-sized Planet in Habitable Zone, News Release, November 4, 2013, Erik Petigura, W. M. Keck Observatory

Thursday, July 31, 2014

MIT lab demos rapid evolution in fish, driven by natural environmental change

Photo: Two varieties of the same species of Mexican Tetra, Astyanax mexicanus - in its surface and eyeless cave forms. Although the most obvious morphological traits (changes in body structure) are the loss of pigmentation and eyes in the cave variety, a number of other traits undergo rapid changes in response to the environment.
Image: Nicolas Rohner
The long-standing view of how evolution unfolds is that organisms experience spontaneous genetic mutations; these mutations can result in new traits which are either helpful or deleterious. Beneficial traits increase the odds an individual will survive, breed and pass its traits on to offspring.
 
Though this evolutionary process unquestionably occurs, it requires substantial time. Yet organisms under the pressure of extreme environmental change must adapt rapidly to survive - pointing to a much faster adaptive strategy which scientists knew must exist. An MIT-Harvard research team has discovered such a strategy in cavefish, which make use of a heat shock protein called HSP90.

Heat-shock proteins are a protein group carried by virtually every living creature, including bacteria, plants, animals and humans. They help control the folding and unfolding of other proteins. 

The number in each name indicates that heat shock protein's molecular weight in kilodaltons - HSP90, for example, weighs roughly 90 kilodaltons. HSP90 controls the folding of key regulators of growth and development genes. Such folding must be very precise for proteins to perform their proper functions.

Several thousand years ago, a Mexican tetra (Astyanax mexicanus) population was transported from its native river habitat into the radically different environs of underwater caves. Forced to adapt to almost complete darkness, the fish lost their pigmentation, sharpened their sensitivity to nearby prey and to water pressure fluctuations, and completely lost their eyes.

While the latter change may seem maladaptive, it is in fact advantageous, because maintaining a set of complex but pointless sense organs is biologically "expensive". Shedding unneeded eyes allowed Astyanax to reallocate limited biological resources to functions more appropriate for a cave environment.

According to lead author Dr. Nicolas Rohner, Astyanax mexicanus' striking adaptations are an example of standing genetic variation: the idea that populations carry a number of silent but potentially beneficial genetic mutations - genes which have been switched off. Under specific environmental stresses, these genes can switch on, guiding protein manufacture that results in visible phenotypes (observable traits arising from a creature's genes).

Says MIT biology professor and Howard Hughes Medical Institute investigator Dr. Susan Lindquist, HSP90 usually keeps such genetic variations dormant in organisms ranging from primitive yeasts to plants and fruit flies.

Subjecting cells to heightened temperatures or other stresses reduces HSP production; in her research, Dr. Lindquist discovered that normally large reserves of cellular HSP90 dwindle during such physiologically stressful periods. These decreases in HSP90's suppressive control result in the rapid emergence of various phenotypic changes; some of these emergent traits are neutral or deleterious, while some are clearly beneficial.

Environmental changes alter protein folding, allowing minor existing variations in the genome to produce major effects. Because HSP90 controls the folding of important gene regulators of growth and development, it acts as a fulcrum for evolutionary change.

Dr. Rohner's research on the genetic changes behind Astyanax' eye loss caught Dr. Lindquist's attention, and they began to collaborate on researching HSP90's role in the process. 

Experiments on both cave and surface fish varieties of Astyanax yielded fascinating results. Raising surface fish with a drug that suppresses HSP90 - causing the same effect as rapid environmental changes - resulted in significant eye size variation, clearly showing HSP90's central role in the trait. 

While the cavefish variety is eyeless, their skulls retain ancestral orbital (eye-socket) cavities. Cavefish raised under the same drug treatment displayed no increase in orbit-size variation, but they grew smaller orbits, showing that eye size also varies with HSP90 activity.

Because the team used artificial means to achieve their results, however, it was uncertain whether or not such (HSP90-altered) conditions would in fact naturally arise in the environment. 

To determine the answer, the team examined the fish's two different natural environments, comparing factors including oxygen levels, temperatures, and pH - the tendency of water to be acidic (containing more positively charged hydrogen ions) or alkaline (containing more negatively charged hydroxide, OH-, ions).
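In standard notation, pH is simply the negative base-10 logarithm of the hydrogen-ion concentration:

```latex
\mathrm{pH} = -\log_{10}\left[\mathrm{H^{+}}\right]
```

Values below 7 are acidic, values above 7 are alkaline, and 7 is neutral.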

The biggest difference between the surface and cave water environments was in conductivity - the ability of dissolved salts to carry electrical charge. The cave environment had low salinity, and thus low conductivity, which induced a heat shock response, naturally lowering levels of HSP90 and thereby lifting the protein's constraints on growth and development regulators. Sure enough, surface fish raised in water with the same low salinity as the cave environment showed significant variations in eye size, demonstrating that a natural environmental stressor could induce the same effects as artificially suppressing HSP90 activity.

This study expands upon Dr. Lindquist's previous experiments with HSP90-induced evolutionary changes in yeast, showing that the same mechanism comes into play throughout the plant and animal kingdoms.

Source: Rapid evolution of novel forms: Environmental change triggers inborn capacity for adaptation, press release, December 12, 2013, Matt Fearer, Whitehead Institute for Biomedical Research, Massachusetts Institute of Technology

Tuesday, July 29, 2014

Breathtaking Photorealism....

The first trailer for Assassin's Creed Unity was unveiled yesterday. And it is... pardon the cliche... revolutionary.

http://www.ign.com/articles/2014/07/29/new-assassins-creed-unity-trailer-introduces-elise?watch

The Cat That Was and Wasn't Dead or A Short Trip Through the Multiverse

One of the most puzzling problems facing modern quantum physics is its central paradox: elementary particles like photons can simultaneously exist in multiple states until the moment they are observed.

In 1935, Austrian physicist Dr. Erwin Schrödinger illustrated the improbability of the situation with a famous thought experiment: a cat is placed into a box with a radioactive particle, a Geiger counter (which detects the radiation shed by decaying, unstable atoms), and a vial of poisonous gas. If the particle decays, shedding radioactive energy, the Geiger counter triggers a release of poison gas, killing the cat. If there is little or no nuclear decay, the poison-gas vial remains intact, and the cat survives. The question is, when the box is opened, do you find a live cat or a dead one?

According to accepted theories of physics, you could have both - at the same time. This absurd paradox illustrates the central problem with quantum mechanics - forcing us to either conclude that A) matter works much differently on the subatomic scale than it does in the observable world; or B) something is wrong with the theory itself, even though it has consistently held up to mathematical and experimental scrutiny and led to many scientific advances over the last century.

Most physicists resolve the paradox by saying that observation collapses particles into a single state, but this still raises troubling questions, such as: if there is more than one observer, which one is entitled to cause the collapse? and where does the border lie between the laws of the subatomic world, in which these paradoxical states (superpositions) can exist, and the observable world, in which they cannot?

In other words, at the quantum level, objects are not really definable until they're observed: particles don't resolve into a single, measurable state unless they're being observed, and return to a set of multiple potential states when they're not.

In 1996, Dr. Christopher Monroe and colleagues Dawn Meekhof, Brian King and David Wineland at the National Institute of Standards and Technology in Boulder, Colorado cooled a single beryllium ion to near absolute zero, trapping it in a magnetic field and bringing it to a near motionless state. Stripped of one of its two outer electrons, the ion's single remaining outer electron could be in two quantum states, having either an up or down spin. Using lasers, the team applied a tiny force one way to induce an up state in the electron, then in the opposite direction to induce rapid oscillation between the up and down spins.

The electron was eventually induced to spin in both orientations simultaneously - in what physicists call a superposition. The team then used lasers to gently nudge the two states apart physically, without collapsing them to a single entity, so that the two states of the single electron were separated in space by 83 nanometers, 11 times the size of the original ion.

In other words, while the old proverb says You can't have your cake and eat it, too, it appears that, at least on the quantum scale, you certainly can.

Schrodinger's cat is alive and well... dead. Simultaneously.

We Can't Be Certain
In 2010, in a darkened laboratory at the University of California Santa Barbara, a tiny metal paddle the width of a human hair was refrigerated, and a vacuum created in a special bell jar. The paddle was then plucked like a tuning fork and observed, as it simultaneously moved and stood still. Superposition was being directly observed – objects in the visible world existing in multiple places and states simultaneously.

In quantum mechanics, there is an inherent uncertainty to reality; the location of an electron can never be precisely pinpointed at any moment in its orbit. Instead, its position can only be predicted in terms of probability. According to some physicists, if there are a thousand possibilities, eventually all thousand will occur; at the quantum level, the outcome of an experiment can never be fully predicted, only assigned probabilities.

A great deal of modern technology works upon the paradoxical principles of quantum physics, but this seems to contradict everyday common sense - when we observe objects in the real world, they always exist in only one place and one state at a time.

Quantum physicists call this paradox the measurement problem, and have long resolved it by saying that particles collapse into single states at the time of observation. But in the 1950s, some physicists first began to hypothesize that in fact all possible states exist, separating into different realities at the moment of change.

Infinite Realities
Wavefunction collapse is the point at which a particle existing in several potential states resolves into a single state. Some physicists now believe that at this moment, reality branches off into every one of these potential states, while we simply continue along the branch we've observed. This Many-Worlds theory means that an infinite number of alternate realities exist.

Quantum physicist Hugh Everett proposed that this is because a quantum particle doesn't collapse into a measurable state, but actually causes a split in reality, with a universe existing for every possible state of the object. Superposition, he asserted, actually means parallel worlds exist - one arising anew out of each of a particle's states.

For example, photons beamed through double-slits strike an opposing surface in both single streams and in spread-out wave patterns, depending upon whether or not they're being observed. When the particle is observed, it functions as a particle; but when it's not observed, it acts like a wave. At the moment of observation, according to Dr. Everett's Many-Worlds theory, the universe splits, and both results occur, but in different realities.

Objects you can observe, he hypothesizes, can thus exist simultaneously in parallel universes - the reality in which you continue, and those alternate ones in which the other yous exist. This implies that anytime alternative actions can be taken, the universe splits into alternate realities, where each decision was taken.

According to Berkeley's Dr. Raphael Bousso and Stanford's Dr. Leonard Susskind (among others), this means an infinitely expanding range of multiple universes exists, some with distinctly different laws of physics than the ones governing ours. Our reality is one "causal patch" among an infinite number of others.

The theory suggests that every conceivable possibility eventually comes to pass in one of these infinite realities, and that information from separate causal patches may even leak between universes. Although parallel universes are still much debated, recent cosmological models support their existence, with reality splitting into an infinite number of paths for every event occurring in space-time.

More, if you're interested, can be seen in this pair of MIT lectures:
https://www.youtube.com/watch?v=ANCN7vr9FVk
https://www.youtube.com/watch?v=4OinSH6sAUo

Peeping into the Box
Cutting-edge technology developed in 2011 at the National Research Council of Canada is allowing researchers to quickly measure the position and momentum of photons - a complex, 27-dimensional quantum state - in a single step. This is a huge leap in efficiency over former quantum tomography, which required multiple measurement stages and a significant amount of time, analogous to photographing a series of 2D images from different angles and assembling them into a 3D image.

Such a precise, rapid, accurate and efficient means of measuring states with multiple dimensions is likely to be critical for advancing our knowledge of quantum mechanics, not to mention the development of advanced-security quantum communications technology, which will in theory be impossible for interceptors to decode.









Image: This diagram illustrates the setup for the experiment, which incorporates a HeNe laser, beam-splitters, lenses, a special "fan-out hologram", wave-plates, and additional equipment. One can conceive of light as a spiral and the degree of "twist" to that spiral is called the orbital-angular-momentum quantum number. The spiral is "untwisted" before being measured. Illustration: M. Malik, Nature Communications.





According to Dr. Robert Boyd, Optics and Physics Professor at the University of Rochester and Canada Excellence Research Chair at the University of Ottawa, this new type of direct particle measurement is likely to play an increasingly important role in future quantum communications technology.

A cooperative effort between the Universities of Rochester, Ottawa and Glasgow demonstrated how direct measurement can be used to circumvent Heisenberg's uncertainty principle, simultaneously measuring two aspects of a quantum state without sacrificing accuracy in either measurement. Ordinarily this is impossible, because the act of measuring a quantum state alters either the particle's position or its motion, "collapsing the wave function".

But direct measurement performs two different measurements one after the other: an initial "weak" measurement followed by a "strong" one. The initial measurement is gentle enough to only slightly disturb the particle's state, avoiding wave function collapse.

According to the study's main author, the University of Vienna's Dr. Mehul Malik, this experiment allows physicists to peek into Schrödinger's box without fully opening it. The weak measurement is imprecise, leaving uncertainty about the particle's state; but because it doesn't destroy that state, it permits a subsequent "strong" measurement of the second variable. The alternating sequence of weak and strong measurements can then be repeated for several identically-prepared particles, until a measurement of the wave function at the required precision is derived.
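As a loose illustration of why repetition tames the weak measurement's imprecision, here is a statistical toy in Python - not a simulation of any real quantum experiment, and every number in it is invented. Each "weak" reading is the true value buried in large random noise; averaging many readings recovers the value:

```python
import random

# Statistical toy: each "weak" measurement returns the true value plus
# large Gaussian noise, standing in for a gentle, imprecise reading.
# Averaging many such readings converges on the underlying value.
random.seed(42)

TRUE_VALUE = 0.7   # hypothetical state parameter
NOISE = 5.0        # individual weak readings are wildly imprecise

def weak_measure():
    return TRUE_VALUE + random.gauss(0, NOISE)

few = sum(weak_measure() for _ in range(10)) / 10
many = sum(weak_measure() for _ in range(100_000)) / 100_000

print(f"10-shot error: {abs(few - TRUE_VALUE):.3f}, "
      f"100,000-shot error: {abs(many - TRUE_VALUE):.4f}")
```

The error of the running average shrinks roughly with the square root of the number of readings - the same statistical logic that lets many gentle measurements pin down a state no single gentle measurement could.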


Sources: The Path Book I: Origins, Eric A. Smith, Polyglot Studios, KK

"Peeking into Schrodinger's Box", press release, January 20, 2014, Leonor Sierra, University of Rochester

Monday, July 28, 2014

Canadian Team Discovers Master Gene for Motor Development

This illustration shows the cerebellum's location in the human brain. Illustration: Alan Hoffring, National Cancer Institute, public domain image.





A Canadian research team announced in 2014 they had discovered a master control gene - Snf2h - critical for development of the cerebellum. The cerebellum (Latin for "little brain"), at the base of the skull, coordinates muscle contractions for smooth movement, maintains muscle tone at rest, and helps maintain balance.

About half your entire brain's neurons are found in the cerebellum; it's a tightly-packed twin-hemisphere hindbrain region, with much denser surface folds than the outer surface of the main brain - the cerebral cortex (Latin for "brain bark"). Its billions of neurons primarily relay information between the muscles of the body and the motor control regions of the cerebral cortex. It acts as a comparator, using mismatches between intentions and actual movement as an error-correction guide to produce fine, complex movements.

Most information about the cerebellum's function is derived from examining humans and animals with damage to the region. Subjects with cerebellar damage suffer from major motor control problems, ipsilateral to (on the same side of the body as) the damaged cerebellar lobe. While they can still produce movement, it is uncoordinated, mistimed, or erratic. Coupled with fMRI scans, this indicates the region fine-tunes and coordinates movement rather than initiating or selecting it. Recent findings also indicate the cerebellum is not solely motor-related, as it also activates during functions related to mental imagery, attention, motor learning and language.

The cerebellum develops in response to external stimuli and practice, switching specific genes or gene groups on and off, and strengthening circuits to consolidate and perfect complex tasks. Here Snf2h coordinates this complex, ongoing process.

Master genes like Snf2h are epigenetic (above the gene) regulators, which can adapt according to environmental cues such as diet or stress, then adjust which genes are switched on or off.

Dr. David Picketts of the Ottawa Hospital Research Institute and University of Ottawa led the team which discovered Snf2h along the fourth chromosome (DNA strand) in the nuclei of neural stem cells. According to Dr. Picketts, master neural epigenetic regulators like Snf2h affect memory, behavior and learning. It is highly conserved (found unchanged) in animals including mice, indicating it existed at least 75 million years ago, when the common ancestor to mice and humans first evolved. 

Silencing Snf2h in developing mice embryos results in cerebella one-third the normal size, with consequent errors in walking, balance and coordinated movement. Such impairment, cerebellar ataxia, is common among sufferers of neurodegenerative diseases.

When cerebellar stem cells divide, en route to specializing as specific neurons, Snf2h determines which genes will be activated and which genes will be curled up inaccessibly along the DNA strands. Without its guidance, some vital genes remain inactivated, while others which hinder development remain switched on. The result is sparser neurons, which don't respond and adapt to external signals efficiently. Gene expression in the cerebellum becomes progressively more disorganized, leading to cerebellar ataxia and premature death for the afflicted.

Source: Researchers find gene critical for development of brain motor centre, News Release, June 20, 2014, Paddy Moore, Ottawa Hospital Research Institute

Sunday, July 27, 2014

The Oldest Primate Skeleton

An artistic rendering of Archicebus achilles' skeleton. Grey regions represent the fossilized bones found in the specimen. Illustration: Mat Severson, Northern Illinois University

Researchers at Beijing's Chinese Academy of Sciences say the world's oldest known primate is a tiny, 55-million-year-old tree climber dubbed Archicebus achilles. 

A farmer discovered the creature's fossil last year in a quarry near Jingzhou City in central China. The site, now famous for its Eocene-era bird and fish fossils, had been an ancient lake. According to Dr. Xijun Ni, the region had once been covered in lush tropical forests and lakes.

Dr. Dan Gebo of Northern Illinois University says Archicebus is one of the earliest primates ever discovered, and its fossil is the most complete primate skeleton from the Eocene epoch ever recovered.

Archicebus was tiny, weighing under an ounce, with slender limbs and a long tail. Its anatomy was suited to leaping among the trees, feeding primarily upon insects during daylight hours. It seems to have been a hybrid, with the feet of a small monkey, the arms, legs and teeth of a much more primitive primate, and surprisingly tiny eyes.

The creature's name is derived from the Greek words archi, meaning "first", and cebus, meaning "long-tailed monkey", while achilles refers to the legendary Greek warrior with the special heel. This is because its most unusual features are in its feet, which combine the strong big toes of a grasper, the long, nailed toes of ancient tree dwellers, and monkey-like heel bones and metatarsals (the long bones of the midfoot) - advanced features not normally found among early Eocene primates. This unique blend of anatomical characteristics places it near the evolutionary split between tarsiers - tiny, nocturnal tree-dwellers - and anthropoids, the lineage which includes modern monkeys, apes, and humans.

Dr. Gebo believes that Archicebus and similar recent discoveries place the origin of primates in Asia, rather than Africa.

Source: Oldest primate skeleton discovered, press release, June 5th, 2013, Northern Illinois University

Thursday, July 24, 2014

Heisenberg's Uncertainty Principle

In 1687, Cambridge mathematics professor Sir Isaac Newton published the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). It was a stunning achievement, destined to change the world forever. The Principia outlined the principles of classical mechanics - the laws governing motion and gravity, from the movements of astronomical bodies to the tides and the progression of the seasons.

The work would shape our view of the natural world for two and a half centuries. Its formulae describe an orderly universe, where objects follow strict, predictable laws of behavior with clockwork-like precision. The Principia helped usher in a new Age of Reason - a breathtakingly optimistic view that we could understand the deepest secrets of reality if we used the proper yardstick. For the first time in human history, we seemed to be the utter masters of the universe; incredibly complex phenomena like the movement of the planets and progression of the seasons could now be predicted using the simplest of mathematical formulae.

Then, in 1927, a brilliant young physicist working in Copenhagen released a paper that would rip away the illusion of comfortable predictability we had so long enjoyed. Matter, he demonstrated, acted in the strangest of ways.

With "On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics" and subsequent writings, young Werner Heisenberg propelled the new field forward, helping physicists understand how stars ignite, why atoms don't implode, and that so-called "empty" space never truly is. His research would eventually net its author a Nobel Prize.

Over the previous ten years, renowned physicists like Heisenberg's mentor Niels Bohr and contemporary Erwin Schrödinger had been working out the principles of this new field of quantum theory, which explains the behavior of matter on a subatomic scale.

Heisenberg would go on to demonstrate that nature is inherently uncertain or "fuzzy", and that there are limits to what we can learn about quantum particles. At best, we can only calculate subatomic particle behavior and location in terms of probabilities. We would also learn that energy does not flow continuously, but instead takes the form of discrete packets called quanta, and these quanta can act like either a wave or a stream of particles, depending upon the way they are measured.

Heisenberg had discovered a problem with quantum particle measurement: we can measure either the position (x) or the momentum (p) of a subatomic particle with relative precision, but never both. In fact, the more precisely we determine one property, the less precisely we can measure the other.
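In symbols, this trade-off is expressed by the uncertainty relation, where Δx and Δp are the uncertainties in position and momentum, and ħ is the reduced Planck constant:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Pinning down the position more tightly (shrinking Δx) forces the momentum uncertainty Δp to grow, and vice versa; the product of the two can never fall below ħ/2.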

Objects in the macroscopic world become visible when photons bounce off them and travel to the retinal surfaces at the rear of your eyeballs. There, a photon's energy stimulates a molecular change in a special protein called rhodopsin; when enough of these molecular changes accumulate, they trigger a neural impulse that is delivered to the visual cortex of your brain.

However, to "see" the position of an electron or other subatomic particle, we must bounce a photon off it, which may alter its path, making it impossible to precisely predict the direction in which it will travel. Similarly, an electron's rapid movement will change its position immediately after measurement. In either case, either the position or the momentum is inaccurately measured, disturbed by the very act of measuring it.

It's impossible to predict an electron's momentum and position with complete accuracy, so we speak of electron orbits in terms of "probability clouds" - regions around the nucleus, with its positively-charged protons, where a negatively-charged electron is likely to be found.

Source: What is Heisenberg's Uncertainty Principle? Alok Jha, November 10, 2013, Guardian News and Media