The human brain is the most complex machine in existence. Every brain is loaded with some 100 billion nerve cells, each connecting to thousands of others, giving around 100 trillion connections. Mapping those connections, or synapses, could enable scientists to decipher what causes neurological disease and mental illness. It's an immense, daunting task.
The best way to tackle it? Use large amounts of data, says Michael I. Miller, a professor in the Department of Biomedical Engineering and a University Gilman Scholar. Miller and an interdisciplinary team of Johns Hopkins researchers are merging their neuroscience, computing, and data science expertise to unravel the brain's mysteries and give other scientists tools to do the same.
"We're calling it data-intensive brain science," Miller says.
In early 2016, the team launched the Kavli Neuroscience Discovery Institute at Johns Hopkins. Miller co-directs the institute with Richard Huganir, director of the Department of Neuroscience at the School of Medicine. They hope to make the KNDI a focal point of brain research, pulling together researchers from across Johns Hopkins Institutions.
In September, 400 international researchers, ethicists, and government officials gathered in New York for a meeting dubbed the United Nations of Brain Projects. Researchers there called for an International Brain Station, modeled after the International Space Station, that would create a digital, cloud-based storehouse of neuroscience data accessible to researchers worldwide.
At Johns Hopkins, a trio of engineering researchers—Miller; Joshua Vogelstein, assistant professor of biomedical engineering; and Randal Burns, professor of computer science—is submitting a grant proposal to create a major U.S. Brain Hub under the domestic BRAIN Initiative launched by President Barack Obama in 2013. The hub would focus on digital, cloud-based data to support the international effort.
The JHU team is uniquely positioned to create such a hub. Miller—working with Susumu Mori, a professor of radiology at the School of Medicine—is already creating an immense cloud-based library of MRI brain scans taken from children with normal and abnormal brains. Computer software sorts and classifies the images. Doctors can search the library for images that match their patient's most recent scan, helping them diagnose diseases and potentially treat them earlier. The researchers have been adding 3,000 brain scans a month to the databank, Miller says.
Meanwhile, Vogelstein has teamed with Burns to map the brain's connections, collectively called the connectome. The Open Connectome Project stores hundreds of terabytes of brain imaging data—mouse brains, in this case. The cloud-based data is free for scientists and the public around the world to access. They can view images, analyze them to identify neurons and synapses using special image-processing tools, and then help annotate them.
"People have charted the Earth from coarse to fine scale," Miller says. "We're developing neurocartography tools that allow you to chart the brain at different scales, in the same way as a road atlas."
Prof. Sergio Palomares-Ruiz of the University of Valencia will give a seminar on neutrino astronomy. More information can be found below.
Time: 10th of Aban (31st of October), 11 am
Venue: Lecture room A, ground floor
Speaker: Prof. Sergio Palomares-Ruiz, University of Valencia
Title: On the high-energy IceCube neutrinos
Abstract: The observation of the first high-energy neutrinos in the IceCube detector at the South Pole has signaled the beginning of neutrino astronomy. After four years of data taking, 53 neutrino events with energies between 20 TeV and 2 PeV have provided the first evidence for the existence of an extraterrestrial neutrino flux at more than 6 sigma. The discovery of this flux has motivated a large number of studies in the literature to unravel its origin, from different scenarios within standard cosmic-ray sources to more exotic possibilities. In this talk, I will describe the evolution of these data and their main features, and I will present the results of statistical analyses of different scenarios.
Nature Index 2016 Australia and New Zealand highlights these countries’ high-quality natural science research. The supplement focuses on the cities and institutions that contribute the lion’s share, and examines the factors that drive their success. In some instances, local partnerships have played a significant role.
The Nature Index is a powerful tool to probe and compare research performance and collaboration within countries, and among them. Explore the data available on our website and discover your institution’s research strengths and partnerships.
Nicky Phillips
Editor, Nature Index
The Nature Index tracks the affiliations of high-quality scientific articles and presents recent research outputs by institution and country.
The Max Planck Institute for Gravitational Physics (Albert Einstein Institute) in Potsdam-Golm invites applications for two postdoctoral positions in gauge-gravity duality (holography). The researchers will join the newly formed independent research group “Gauge-Gravity Duality” led by Michal P. Heller and generously supported by the Humboldt Foundation through the Sofja Kovalevskaja Award. The group is primarily focused on exploring quantum information approaches to quantum field theories and gravity in the context of holography and beyond. Other topics of interest include non-equilibrium physics of gauge theories, foundations of relativistic hydrodynamics, numerical holography, and various aspects of black hole physics.
The initial appointment will be for two years, with a possibility of extension for one more year upon exceptional performance. In outstanding cases, a three-year contract offer can be made. The positions come with very competitive research budgets. The group members will also benefit from an active visitor and seminar agenda, several topical workshops organized in the Berlin area, as well as strong ties with the Division of Quantum Gravity and Unified Theories. The starting date is negotiable, with the default date of appointment being September 2017. The group is expected to grow to at least five researchers by the end of 2018.
Interested applicants are invited to send their materials through the regular application system of the Division of Quantum Gravity and Unified Theories, available at:
https://jobs.itp.phys.ethz.ch/postdoc/
The deadline for applying is 1 December 2016. See the link above for the details of the application procedure.
For more information, please send an e-mail to michal.p.heller@aei.mpg.de.
Do not seek speed in a task but seek to perform it well, for people do not ask how long the work took; rather, they ask about its quality.
Imam Ali a.s
A beautiful saying of Imam Ali (AS) from Nahjul Balagha:
Original Arabic, with its Persian translation, both rendering the same saying: “Do not seek speed in work, but seek to perfect it; for people do not ask how quickly the work was finished, they ask about the quality of its craftsmanship.”
Advanced cancer research is calling on techniques used by NASA scientists who analyze satellite imagery to find commonalities among stars, planets and galaxies in space.
Scientists from NASA's Jet Propulsion Laboratory (JPL) use complex machine learning algorithms to identify similarities among galaxies that may otherwise be overlooked, NASA officials said in a statement. Using similar techniques, medical professionals are able to analyze a lung sample for common cancer biomarkers.
However, analyzing a biopsy specimen for biomarkers is not the only way in which JPL's complex machine learning algorithms can be used in the medical field. Cancer researchers can also use the space exploration tools to identify common chemical or genetic signatures related to specific cancers, which could revolutionize strategies for early cancer detection.
In a continuing effort to advance both space and medical knowledge, JPL and the National Cancer Institute (NCI) renewed a research partnership on Sept. 6, which they expect will carry through 2021. By compiling their collective findings into a searchable network, researchers from the NCI-supported Early Detection Research Network (EDRN) hope to improve early diagnosis of cancer or cancer risk by distributing their findings across the world, much like astronomers have done, NASA officials said in the statement.
"From a NASA standpoint, there are significant opportunities to develop new data science capabilities that can support both the mission of exploring space and cancer research using common methodological approaches," Dan Crichton, the head of JPL's Center for Data Science and Technology, said in the statement. "We have a great opportunity to perfect those techniques and grow JPL's data science technologies, while serving our nation."
Using the algorithms designed for space exploration, EDRN researchers have already discovered six new chemical and genetic signatures of cancer, known as biomarkers, that have been approved by the Food and Drug Administration, as well as nine biomarkers approved for use in Clinical Laboratory Improvement Amendments labs, according to the statement from NASA.
In addition to the EDRN, the renewed partnership will assist other NCI-funded programs, such as the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions and the Informatics Technology for Cancer Research Initiative.
Making medical data accessible across the globe will solve a common problem: lack of uniformity. Previously, medical data such as patient age, cancer type, and other characteristics were not labeled and stored the same way everywhere, so they could not be shared and studied widely, NASA officials said.
"We didn't know if they were early-stage or late-stage specimens, or if any level of treatment had been tried," Sudhir Srivastava, chief of NCI's Cancer Biomarkers Research Group and head of EDRN, said in the statement. "And JPL told us, 'We do this type of thing all the time! That's how we manage our Planetary Data System.'"
In the years to come, the NCI plans to incorporate image-recognition technology to help archive images of cancer specimens from the EDRN. Then, much like how computer algorithms comb through images of star clusters, these images could be analyzed for early signs of cancer based on a patient's age, ethnic background and other demographics, Crichton said in the statement.
"As we develop more automated methods for detecting and classifying features in images, we see great opportunities for enhancing data discovery," Crichton added. "We have examples where algorithms for detection of features in astronomy images have been transferred to biology and vice versa."
Image: A lung specimen analyzed for common cancer biomarkers. Credit: Early Detection Research Network/University of Colorado
Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.
But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.
Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. These guys say the reason why mathematicians have been so embarrassed is that the answer depends on the nature of the universe. In other words, the answer lies in the domain of physics rather than mathematics.
First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog.
Such an image consists of a million pixels that can each take one of 256 grayscale values. So in theory, there can be 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or dog. And yet neural networks, with merely thousands or millions of parameters, somehow manage this classification task with ease.
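As a quick back-of-envelope check (our arithmetic, not the paper's), a few lines of Python confirm the scale of that count:

import math

# 256 grayscale values for each of a million pixels gives 256**1,000,000
# possible images; count its decimal digits without building the number.
n_pixels = 1_000_000
n_levels = 256
digits = math.floor(n_pixels * math.log10(n_levels)) + 1
print(f"256**1,000,000 has {digits:,} digits")  # about 2.4 million digits

No conceivable enumeration could ever touch more than a vanishing fraction of that space.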
In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent.
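Here is a minimal sketch of such a function, assuming random placeholder weights and a two-class cat/dog output (our illustration; the article describes no specific architecture):

import numpy as np

def classify(pixels, W, b):
    # Map a million grayscale pixels to a probability distribution
    # over {cat, dog}: a linear map followed by a softmax.
    logits = W @ pixels + b
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=1_000_000) / 255.0  # toy "image"
W = rng.normal(scale=1e-3, size=(2, 1_000_000))        # placeholder weights
b = np.zeros(2)
print(classify(pixels, W, b))  # e.g. [0.48 0.52]

A real network would interpose many nonlinear layers and learn W from data, but the type of the function is exactly this: a million numbers in, a probability distribution out.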
The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.
Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties.

So deep neural networks don’t have to approximate any possible mathematical function, only a tiny subset of them.
To put this in perspective, consider the order of a polynomial function, which is the size of its highest exponent. So a quadratic equation like y = x^2 has order 2, the equation y = x^24 has order 24, and so on.
Obviously, the number of orders is infinite and yet only a tiny subset of polynomials appear in the laws of physics. “For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order,” say Lin and Tegmark. Typically, the polynomials that describe laws of physics have orders ranging from 2 to 4.
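A standard example (ours, not from the paper) is the harmonic oscillator, whose Hamiltonian H = p^2/2m + kx^2/2 is a polynomial of order 2 in x and p; a couple of lines of sympy verify the order:

import sympy as sp

x, p, m, k = sp.symbols("x p m k", positive=True)
H = p**2 / (2 * m) + k * x**2 / 2       # harmonic-oscillator Hamiltonian
print(sp.Poly(H, x, p).total_degree())  # 2: low order, as Lin and Tegmark note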
The laws of physics have other important properties. For example, they are usually symmetrical when it comes to rotation and translation. Rotate a cat or dog and it is still a cat or dog; translate it by 10 meters or 100 meters or a kilometer and it will look the same. That also simplifies the task of approximating the process of cat or dog recognition.
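A toy demonstration of what translation symmetry buys (a random array stands in for a real photo here): shift the pixels, and any translation-invariant summary of them, such as a grayscale histogram, is untouched even though every pixel moves.

import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(100, 100))  # stand-in for a grayscale image
shifted = np.roll(img, shift=10, axis=1)     # translate 10 pixels to the right

def histogram(a):
    # A translation-invariant summary of the image.
    return np.bincount(a.ravel(), minlength=256)

print(np.array_equal(img, shifted))                        # False: pixels moved
print(np.array_equal(histogram(img), histogram(shifted)))  # True: summary unchanged

A network that respects the symmetry effectively has to search a far smaller space of functions.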
These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones.
There is another property of the universe that neural networks exploit. This is the hierarchy of its structure. “Elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc.,” say Lin and Tegmark. And complex structures are often formed through a sequence of simpler steps.
This is why the structure of neural networks is important too: the layers in these networks can approximate each step in the causal sequence.
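In code, that observation is just function composition. Here is a sketch (toy sizes and random weights of our choosing) of a three-layer network written as f3 ∘ f2 ∘ f1, one layer per causal step:

import numpy as np

def layer(W, b):
    # One step: an affine map followed by a ReLU nonlinearity.
    return lambda h: np.maximum(0.0, W @ h + b)

rng = np.random.default_rng(0)
f1 = layer(rng.normal(size=(64, 128)), np.zeros(64))  # e.g. pixels -> edges
f2 = layer(rng.normal(size=(32, 64)), np.zeros(32))   # e.g. edges -> parts
f3 = layer(rng.normal(size=(2, 32)), np.zeros(2))     # e.g. parts -> cat/dog scores

x = rng.normal(size=128)  # toy input
scores = f3(f2(f1(x)))    # the network is literally the composition
print(scores.shape)       # (2,)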
Lin and Tegmark give the example of the cosmic microwave background radiation, the echo of the Big Bang that permeates the universe. In recent years, various spacecraft have mapped this radiation in ever higher resolution. And of course, physicists have puzzled over why these maps take the form they do.
Tegmark and Lin point out that whatever the reason, it is undoubtedly the result of a causal hierarchy. “A set of cosmological parameters (the density of dark matter, etc.) determines the power spectrum of density fluctuations in our universe, which in turn determines the pattern of cosmic microwave background radiation reaching us from our early universe, which gets combined with foreground radio noise from our galaxy to produce the frequency-dependent sky maps that are recorded by a satellite-based telescope,” they say.
Each of these causal layers contains progressively more data. There are only a handful of cosmological parameters but the maps and the noise they contain are made up of billions of numbers. The goal of physics is to analyze the big numbers in a way that reveals the smaller ones.
And when phenomena have this hierarchical structure, neural networks make the process of analyzing them significantly easier.
“We have shown that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark.
That’s interesting and important work with significant implications. Artificial neural networks are famously based on biological ones. So not only do Lin and Tegmark’s ideas explain why deep learning machines work so well, they also explain why human brains can make sense of the universe. Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe.
This work opens the way for significant progress in artificial intelligence. Now that we finally understand why deep neural networks work so well, mathematicians can get to work exploring the specific mathematical properties that allow them to perform so well. “Strengthening the analytic understanding of deep learning may suggest ways of improving it,” say Lin and Tegmark.
Deep learning has taken giant strides in recent years. With this improved understanding, the rate of advancement is bound to accelerate.
Four years into its travels across Mars, NASA’s Curiosity rover faces an unexpected challenge: wending its way safely among dozens of dark streaks that could indicate water seeping from the red planet’s hillsides.
Although scientists might love to investigate the streaks at close range, strict international rules prohibit Curiosity from touching any part of Mars that could host liquid water, to prevent contamination. But as the rover begins climbing the mountain Aeolis Mons next month, it will probably pass within a few kilometres of a dark streak that grew and shifted between February and July 2012 in ways suggestive of flowing water.
NASA officials are trying to determine whether Earth microbes aboard Curiosity could contaminate the Martian seeps from a distance. If the risk is too high, NASA could shift the rover’s course — but that would present a daunting geographical challenge. There is only one obvious path to the ancient geological formations that Curiosity scientists have been yearning to sample for years (see ‘All wet?’).
“We’re very excited to get up to these layers and find the 3-billion-year-old water,” says Ashwin Vasavada, Curiosity’s project scientist at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. “Not the ten-day-old water.”
The streaks — dubbed recurring slope lineae (RSLs) because they appear, fade away and reappear seasonally on steep slopes — were first reported [1] on Mars five years ago in a handful of places. The total count is now up to 452 possible RSLs. More than half of those are in the enormous equatorial canyon of Valles Marineris, but they also appear at other latitudes and longitudes. “We’re just finding them all over the place,” says David Stillman, a planetary scientist at the Southwest Research Institute in Boulder, Colorado, who leads the cataloguing.
Dark marks
RSLs typically measure a few metres across and hundreds of metres long. One leading idea is that they form when the chilly Martian surface warms just enough to thaw an ice dam in the soil, allowing water to begin seeping downhill. When temperatures drop, the water freezes and the hillside lightens again until the next season. But the picture is complicated by factors such as potential salt in the water; brines may seep at lower temperatures than fresher water [2].
Other possible explanations for the streaks include water condensing from the atmosphere, or the flow of bone-dry debris. “They have a lot of behaviours that resemble liquid water,” says Colin Dundas, a planetary geologist at the US Geological Survey in Flagstaff, Arizona. “But Mars is a strange place, and it’s worth considering the possibility there are dry processes that could surprise us.”
Sources for the ‘All wet?’ map: Route: NASA; Terrain: ASU; RSLs: Ref. 4
A study published last month used orbital infrared data to suggest that typical RSLs contain no more than 3% water [3]. And other streaky-slope Martian features, known as gullies, were initially thought to be caused by liquid water but are now thought to be formed mostly by carbon dioxide frost.
Dundas and his colleagues have counted 58 possible RSLs near Curiosity’s landing site in Gale Crater [4]. Many of them appeared after a planet-wide dust storm in 2007 — possibly because the dust acted as a greenhouse and temporarily warmed the surface, Stillman says.
Since January, mission scientists have used the ChemCam instrument aboard the rover — which includes a small telescope — to photograph nearby streaks whenever possible.
So far, the rover has taken pictures of 8 of the 58 locations and seen no changes. The features are lines on slopes, but they have not yet recurred. “We’ve got two of the three letters in the acronym,” says Ryan Anderson, a geologist at the US Geological Survey who leads the imaging campaign.
Curiosity is currently about 5 kilometres away from the potential RSLs; on its current projected path, it would never get any closer than about 2 kilometres, Vasavada says. The rover could not physically drive up and touch the streaks if it wanted to, because it cannot navigate the slopes of 25 degrees or greater on which they appear.
But the rover’s sheer unexpected proximity to RSLs has NASA re-evaluating its planetary-protection protocols. Curiosity was only partly sterilized before going to Mars, and experts at JPL and NASA headquarters in Washington DC are calculating how long the remaining microbes could survive in Mars’s harsh atmosphere — as well as what weather conditions could transport them several kilometres away and possibly contaminate a water seep. “That hasn’t been well quantified for any mission,” says Vasavada.
The work is an early test for the NASA Mars rover slated to launch in 2020, which will look for life and collect and stash samples for possible return to Earth. RSLs exist at several of the rover’s eight possible landing sites.
For now, Curiosity is finishing exploring the Murray Buttes. These spectacular rock towers formed from sediment at the bottom of ancient lakes — the sort of potentially life-supporting environment the rover was sent to find. Curiosity’s second extended mission begins on 1 October.
Barring disaster, the rover’s lifespan will be set by its nuclear-power source, which will continue to dwindle in coming years through radioactive decay. Curiosity still has kilometres to scale on Aeolis Mons as it moves towards its final destination, a sulfate-rich group of rocks.