INSIDE each one of us lies a mystery. An analysis of genes from the human gut has found DNA so unusual it could belong to microbes unlike anything that science has encountered before.
Life as we know it is split into three major groups or domains. Plants, animals and fungi are all classed as eukaryotes, whose defining feature is their nucleus. Less complex cells fall into two different divisions - bacteria and archaea (see diagram, below).
But some biologists suspect new forms of life are still to be discovered - the equivalent of dark matter - not least because more than 99 per cent of microbes can’t actually be grown in the lab.
Until about 25 years ago, we had virtually no way of studying them. Since then, genomic tools have enabled us to sequence microbial DNA and get an idea of the range of different species.
Even with these techniques it is hard to identify completely new types of life. One problem is drawing the evolutionary dividing line between different groups of microbes. Because they can swap genes, the divisions between them become blurred, and difficult to detect. And when a DNA analysis does identify gene sequences unlike any others, researchers don’t know how to interpret them - precisely because they are so unusual.
Philippe Lopez and Eric Bapteste at the Pierre and Marie Curie University in Paris have come up with a solution. Working with Sebastien Halary at the University of Montreal, they have developed a new method for identifying particularly unusual genes.
They reasoned that if they could find genes from a set of 86 gene families that don’t obviously belong to bacteria, archaea or eukaryotes, this might hint at gaps in our three domains of life.
In their quest, the team has turned to our guts, because the human gut microbiome is the best studied of all microbial communities, and hosts a diverse range of species.
They analysed microbiome samples, recovering about 230,000 DNA sequences that are related to known sequences in those 86 gene families. They then used these sequences as the starting point for a second analysis - a little like digging deeper into your ancestry by using your parents’ DNA rather than your own to guide the search. This revealed an additional 80,000 stretches of microbial DNA that belonged in the 86 gene families. But the sequence of bases was highly unusual in about one-third of the DNA - it shared just 60 per cent or less of its identity with any known gene sequences. That degree of difference is what you might expect to separate different domains of life, such as bacteria and archaea.
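The identity threshold at the heart of this finding can be sketched in a few lines. This is an illustrative toy, not the team’s pipeline (real analyses use dedicated alignment tools); the sequences here are invented, and percent identity is computed over a pre-aligned pair:

```python
# Toy illustration: flag DNA sequences whose best percent identity to any
# known sequence is 60% or less - the degree of difference the article says
# you might expect between domains of life.

def percent_identity(a: str, b: str) -> float:
    """Percentage of matching bases between two equal-length aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def is_highly_unusual(query: str, known: list[str], threshold: float = 60.0) -> bool:
    """True if the query shares <= threshold % identity with every known sequence."""
    return all(percent_identity(query, k) <= threshold for k in known)

known = ["ACGTACGTAC", "ACGTTCGTAA"]   # made-up "known" gene fragments
query = "TTGTACATCA"                    # made-up, mostly mismatching fragment
print(is_highly_unusual(query, known))
```

In the real study the comparison runs over hundreds of thousands of sequences against entire databases, but the decision rule is the same: anything at or below roughly 60 per cent identity to everything known is set aside as unusual.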
One explanation is that genes are more variable in known organisms than we thought, says Lopez. But there is an alternative. “It’s as if they belonged to unknown lineages of microbes that diverged very early in the history of life,” he says. That might mean they belong to an as-yet-unidentified fourth domain (Biology Direct, doi.org/82c).
Bapteste stresses that we know nothing about the microbes that carry the strange genes, and only more research will allow us to be confident that the human gut really does play host to life forms that defy classification. “Let’s wait to see how unusual the organisms are,” he says. In particular, it would help to know about their size and their internal structure, including the ribosomes they use to make proteins. The cells’ metabolic processes might be unusual too, says Bapteste.
But there’s no reason why we couldn’t find a new domain of life inside us. “Scientists have found a huge diversity of microbes in the human gut, so I would not expect it to be necessarily hostile to different life forms,” says Bapteste. Last year, Dusko Ehrlich at the French National Institute for Agricultural Research in Jouy-en-Josas was part of a consortium that updated the catalogue of genes in the gut microbiome, expanding our estimate from 3.3 million to 9.9 million. “The gut microbiome is not such a well-known playground,” he says.
But the big question remains: could it really play host to microbes from a fourth domain? “The evidence is suggestive and indeed tantalising, but would need to be confirmed,” says Ehrlich. “I am not entirely sure what the results really mean,” says James McInerney at the University of Manchester, UK, whose research looks at the origins of the three known domains of life. He suspects that the unusual genes will turn out to belong to fairly ordinary microbes - particularly since researchers are beginning to appreciate that gene diversity might be more extreme than once thought. That would still be an exciting find, he says. “It might hint at new metabolic processes at work in our guts.” McInerney also points out that the proposed fourth domain of life still eludes us even after 25 years of sampling DNA from the environment. But Lopez and Bapteste argue this is because recognising completely new microbes is, by definition, a challenge.
All can agree that the true significance of the new findings won’t become clear until they have actual cells containing the unusual genes - the next task on the agenda. “The good news is we now know something about them that could help us to fish them out,” says Bapteste. Like most microbes, they probably can’t be grown in the lab, but Bapteste says developments in a technology called single-cell genomics should soon offer a way to sequence the genome of individual microbes, even if they can’t be cultured.
If the results reveal microbes as unconventional as their genes appear to be, biology might change - just as it did 30 years ago when researchers first realised that the archaea formed a distinct third domain of life.
“The discovery of archaea revolutionised our fundamental knowledge in biology,” says Bapteste. For now, he says, we should remain cautious. “These deep lineages, if they exist, still need to be captured.” ■
Mystery microbes in our gut could belong to a fourth domain of life.

Tasks
Task 1. Find words/expressions meaning the following.
1. different from;
2. to meet;
3. to exchange;
4. to think, consider;
5. instead of;
6. to provide the things that are needed for;
7. unfriendly, antagonistic;
8. wholly;
9. to develop or end in a particular way;
10. to value, treasure, admire;
11. problem, difficulty.
Task 2. Say if the following is true, false or not mentioned.
1. According to scientists there exist three groups of life.
2. The most recent hypothesis points out that the ancestors of the three groups were cells with RNA genomes and that they gained DNA by three independent acquisitions from DNA viruses.
3. Some scientists suggest that not all forms of life have been discovered, there may be more to find.
4. Since microbes exchange genes, it is very hard to tell them apart.
5. After examining about 230,000 DNA sequences that are related to known sequences in those 86 gene families, scientists immediately found that about a third of them do not match any known sequences.
6. The only way to account for this fact is that there are more variations of genes in our organism than we could imagine.
7. To probe life’s dark matter, scientists have resorted to a relatively new technique called metagenomics.
8. McInerney doubts whether the fourth domain really exists.
9. Most scientists are sure that the new group has not yet been identified as it is extremely hard.
10. If their existence is ever proved, it will lead to a revolution in biology.
Task 3. Answer the following questions.
1. Why are plants, animals and fungi placed in one group when they are so different?
2. Why are new forms of life compared to dark matter?
3. What makes them so hard to identify?
4. What method did the Pierre and Marie Curie University scientists suggest?
5. What did they choose for their research?
6. Why do scientists think that there is nothing to prevent us from finding a new life form inside us?
7. Do all scientists agree that the unusual genes are a new life form? Why? Why not?
8. What makes the new findings really important?
9. How can the genome of individual microbes be sequenced even if they cannot be cultured?
Intelligent without design
Evolution’s random workings have a lot in common with that most elegant problem solver, the human brain.
A FEATHER isn’t just pretty: it’s pretty useful. Strong, light and flexible, with tiny barbs to zip each filament to its neighbours, it is fantastically designed for flight. The mammalian eye, too, is a marvel of complex design, with its pupil to regulate the amount of light that enters, a lens to focus it onto the retina, and rods and cones for low light and colour vision - all linked to the brain through the optic nerve. And these are just the tip of the iceberg of evolution’s incredible prowess as a designer.
For centuries, the apparent perfection of such designs was taken as self-evident proof of divine creation. Charles Darwin himself expressed amazement that natural selection could produce such variety and complexity. Even today, creationism and intelligent design thrive on intuitive incredulity that an unguided, unconscious process could produce such intricate contraptions.
We now know that intuition fails us, with feathers, eyes and all living things the product of an entirely natural process. But at the same time, current ways of thinking about evolution give a less-than-complete picture of how that works. Any process built purely on random changes has a vast space of potential changes to try.
So how does natural selection come up with such good solutions to the problem of survival so quickly, given population sizes and the number of generations available?
A traditional answer is through so-called massive parallelism: living things tend to have a lot of offspring, allowing many potential solutions to be tested simultaneously. But a radical new addition to the theory of evolution provides a new perspective on this question and more - while turning ideas of intelligent design on their head. It seems that, added together, evolution’s simple processes form an intricate learning machine that draws lessons from past successes to improve future performance. Get to grips with this idea, and we could have a raft of new tools with which to understand evolution. That could allow us to better preserve the diversity of life on Earth - and perhaps even harness evolution’s power.

Successful strategies
Despite its huge reach, the theory of evolution is simple. It rests on three pillars: variation, selection and inheritance. Variation stems from random genetic mutations, which create genetically distinct individuals.
Natural selection favours “fitter” individuals, with those better suited than others to a particular environment prospering and producing the most offspring. Inheritance means that these well-adapted individuals pass their characteristics down the generations. All this eventually leads to new adaptations and new species.
At first glance, there is no need for learning in this process. In fact, to invoke it at all risks violating one of evolution’s most important principles. When we learn, we in some way anticipate the future, combining solutions from past experience with knowledge of the present to develop a strategy for what we think will come next. But evolution can’t see the future: its exploration is born out of random mutations selected or rejected by current circumstances, so it is blind to the challenges to come.
But then again, learning organisms can’t actually see the future. When we cross a road, we can’t anticipate all traffic movements, but we have a memory bank of solutions that have worked before. We develop a strategy based on those - and if it proves successful, we call on that newly learned experience next time. That’s not too dissimilar to what natural selection does when it reuses successful variants from the past, such as the flowers of bee orchids that are unusually good at attracting bees, or the mouthparts of mosquitoes that work like hypodermic syringes and are particularly effective at sucking blood.
Well-stocked toolkit
Some now think the similarities between learning and evolution go more than skin-deep - and that our understanding of one could help understand the other. Since the early days of computer science, researchers have been developing algorithms - iterative rules - that allow computers to combine banked knowledge with fresh information to create new outputs, and so mimic processes involved in learning and intelligence. In recent years, such learning algorithms have come to underlie much technology that we take for granted, from Google searches to credit-scoring systems. Could that well-stocked toolkit now prise open the secrets of evolution? “The analogy between evolution and learning has been around for a long time,” says Richard Watson of the University of Southampton,
UK. “But the thing that’s new is the idea of using learning theory to radically expand our understanding of how evolution works.”
A pioneer of this approach is Leslie Valiant, a computational theorist at Harvard University. In his 2013 book Probably Approximately Correct, he described how the workings of evolution equate to a relatively simple learning algorithm known as Bayesian updating. Used to model everything from celestial mechanics to human decision-making computationally, this type of learning entails starting with many hypotheses and pinpointing the best ones using new information as it becomes available. Replace the hypotheses you want to test with the organisms in a population, Valiant showed, and natural selection amounts to incorporating new information from the surrounding environment to home in on the best-adapted organisms.
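A toy version of Bayesian updating makes the analogy concrete. The coin-bias hypotheses and the observation stream below are invented for illustration; the point is that each new observation multiplies every hypothesis’s weight by how well it predicted that observation, much as each generation re-weights organisms by their fitness:

```python
# Minimal Bayesian updating sketch: start with several hypotheses, re-weight
# them as data arrives, and watch the best-supported one come to dominate.

hypotheses = {"fair": 0.5, "biased": 0.8, "very_biased": 0.95}  # P(heads) under each
posterior = {h: 1 / 3 for h in hypotheses}                      # uniform prior

observations = [1, 1, 1, 0, 1, 1]  # 1 = heads; a hypothetical data stream

for obs in observations:
    # Multiply each hypothesis's weight by the likelihood of the observation...
    for h, p_heads in hypotheses.items():
        posterior[h] *= p_heads if obs == 1 else (1 - p_heads)
    # ...then renormalise so the weights remain a probability distribution.
    total = sum(posterior.values())
    posterior = {h: w / total for h, w in posterior.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

In Valiant’s reading, the hypotheses are the organisms in a population and the incoming observations are the environment; the renormalised weights play the role of population frequencies.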
That could be just coincidence. But in 2014, Erick Chastain at Rutgers University in New Brunswick, New Jersey, and his colleagues found a similar equivalence between evolution in a sexually reproducing population and another learning model called the multiplicative weights update algorithm.
This presumes there may be many potential solutions to a problem, and the key to finding the best lies in weighting their promise on the basis of past performance. Applying this algorithm and assuming that natural selection gives more weight to previously successful solutions was enough to reproduce how, over generations, evolution homes in on the gene variants with the highest overall fitness.
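A minimal sketch of the multiplicative weights update, with invented payoff numbers standing in for fitness scores, shows how past success compounds:

```python
# Multiplicative weights update sketch: each "expert" (think gene variant)
# has its weight multiplied every round by a factor reflecting its payoff
# (think fitness), so consistently successful variants come to dominate.

def multiplicative_weights(payoffs_per_round, n_experts, eta=0.5):
    """Return final normalised weights after multiplicative updates.

    payoffs_per_round: list of per-round payoff vectors with values in [0, 1].
    eta: learning rate controlling how strongly payoffs reshape the weights.
    """
    weights = [1.0] * n_experts
    for payoffs in payoffs_per_round:
        weights = [w * (1 + eta * p) for w, p in zip(weights, payoffs)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical fitness scores for three gene variants over four generations.
rounds = [
    [0.9, 0.5, 0.1],
    [0.8, 0.6, 0.2],
    [0.9, 0.4, 0.1],
    [0.7, 0.5, 0.3],
]
final = multiplicative_weights(rounds, n_experts=3)
print([round(w, 3) for w in final])  # the consistently fit variant dominates
```

The normalised weights here behave like gene-variant frequencies in a population: nothing ever looks ahead, yet the distribution shifts generation by generation towards the variants with the best track record.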
Such parallels left Watson wondering how a model that more closely follows the genetic changes underpinning evolution might look. Not so long ago, we naively talked about genes “for” particular traits, and assumed for example that humans, being so complex, would have lots of genes. When in the 1990s two groups were vying to sequence the human genome, they believed they would identify some 100,000 genes. To everyone’s surprise, they discovered we have fewer than 25,000. The reason, we now know, is that genes are team players: their activity is regulated by other genes, creating a network of connections. The whole is thus capable of much more than the sum of its parts.
These connections mean that mutations, whether caused by spontaneous chemical changes or faulty DNA repair processes, don’t just alter single genes. When a mutation changes one gene, the activity of many others in the network can change in concert. The network’s organisation is itself a product of past evolution, because natural selection rewards gene associations that increase fitness. This allows your genotype (the set of genes you inherit from your parents) to solve the problem of creating a well-adapted phenotype (the set of outward characteristics that adds up to you). “In evolution, the problem is to produce a phenotype that is fit in a given environment, and the way to do it is to make connections between genes - to learn what goes together,” says Watson.
Watson’s insight was to realise that this whole process has a lot in common with the workings of one of the cleverest learners we know - the human brain. Our brains consist of neurons connected via synapses. Connections between two neurons are strengthened when they are activated at the same time or by the same stimulus, a phenomenon encapsulated by the phrase “neurons that fire together wire together”. When we learn, we alter the strengths of connections, making networks of associations capable of problem-solving. This is known as Hebbian learning, after neuropsychologist Donald Hebb, who first described it in the mid-20th century. Simple models based on these networks can do surprisingly clever things, such as recognising and classifying objects, generalising behaviour from examples, and learning to solve optimisation problems. If evolution works in equivalent ways, Watson realised, that could explain why it is such a good problem-solver, creating all that complexity in such short order.
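The “fire together, wire together” rule itself is only a line of arithmetic. In this sketch (made-up activity patterns, +1 for firing and -1 for silent) the weight matrix is built purely from the correlations between co-active units - the same “what goes together” bookkeeping that Watson argues evolution performs on gene networks:

```python
# Hebbian learning sketch: connection strengths are accumulated from the
# outer products of observed activity patterns, so units that are active
# together (or silent together) end up positively wired.

import numpy as np

patterns = np.array([
    [1, 1, -1, -1],   # hypothetical stimulus A (+1 = firing, -1 = silent)
    [-1, -1, 1, 1],   # hypothetical stimulus B
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)       # Hebbian update: co-active units wire together
np.fill_diagonal(W, 0)        # no self-connections

print(W)
```

Units 0 and 1 (and likewise 2 and 3) were always active together, so they end up strongly positively connected; units from the two opposing groups end up negatively connected.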
Spontaneous solutions
Working with Gunter Wagner from Yale University and others, Watson built a model network in which genes can either increase or reduce each other’s activity, as they do in nature. Each network configuration controls how the genes within it interact to give rise to a different phenotype, presented in the form of a pixelated image on a screen. The modellers evolved the network by randomly changing the gene connections, one mutation at a time, and selecting those networks that produced an image with a closer resemblance to one deemed to be the optimal phenotype - a picture of Darwin’s face. Thus guided, the evolving system eventually reproduced this image, at which point the team used the same process to teach it to reproduce Hebb’s face.
But here came the surprise. The modellers then removed the selection pressure guiding the system towards mugshots of Darwin or Hebb. Any old mutation that arose was allowed to survive. But the system did not produce a random image, or a Darwin-Hebb mash-up. Instead, it produced one or the other face - and as little as a single gene mutation was enough to trigger a flip between the two. In other words, a model that simply took account of genes’ networked nature showed that when the genotype had learned solutions, it could remember them and reproduce them in different environments - as indeed our brains can.
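That one-mutation flip between whole faces is the signature of attractor dynamics, familiar from Hopfield networks. The toy below - a Hopfield-style sketch, not the team’s actual gene-network model - stores two invented patterns by the Hebbian rule and then recovers a complete stored pattern from a “mutated” starting state:

```python
# Hopfield-style attractor sketch: patterns stored via Hebbian weights become
# stable states, and the dynamics pull any nearby state back to a whole
# stored pattern - analogous to the model flipping to one face or the other.

import numpy as np

stored = np.array([
    [1, 1, 1, -1, -1, -1],   # stand-in for the "Darwin" phenotype
    [-1, -1, -1, 1, 1, 1],   # stand-in for the "Hebb" phenotype
])

W = sum(np.outer(p, p) for p in stored).astype(float)
np.fill_diagonal(W, 0)       # no self-connections

state = np.array([1, 1, -1, -1, -1, -1])   # a "mutated" version of pattern 0
for _ in range(5):                          # synchronous updates until settled
    state = np.sign(W @ state)

print(state.tolist())  # the network settles on a complete stored pattern
```

Despite starting from a corrupted state, the network snaps back to the nearest stored pattern in full - a memory recalled, not a blend of the two.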
Evidence for learning in this sense is often seen in the natural world, for instance in the way a crocodile genome can produce a male or female crocodile depending on the temperature at which the egg is incubated.
But learning the way our brains do it is not just about remembering and reproducing past solutions. “A real learning system also has to be able to generalise - to produce good solutions even in new situations it hasn’t encountered before,” says Watson. Think crossing a road you’ve never crossed before versus crossing a familiar one.
This generalisation ability rests in recognising similarities between new and old problems, so as to combine the building blocks of past solutions to tackle the problem at hand. And as another model created by Watson and his colleagues showed last year, this kind of learning is also what a gene network does under the pressure of natural selection. The cost associated with making gene connections - proteins must be produced and energy expended - favours networks with fewer connections. Subsets of connections that work well together become bound tightly in blocks that themselves are only loosely associated. Just as our brains do, natural selection memorises partial solutions - and these building blocks are embedded in the structure of the gene network (arxiv.org/abs/1508.06854).
This way of working allows genotypes to generate phenotypes that are both complex and flexible. “If past selection has shaped the building blocks well, it can make solving new problems look easy,” says Watson. Instead of merely making limbs longer or shorter, for example, evolution can change whether forelimbs and hindlimbs evolve independently or together. A single mutation that changes connections in the network can lengthen all four legs of a giraffe, or allow a bat to increase its wingspan without getting too leggy. And a feather or an eye needn’t be generated from scratch, but can evolve by mixing and matching building blocks that have served well in the past.
This ability to learn needs no supernatural intervention - it is an inevitable product of random variation and selection acting on gene networks. “Far from being blind or dumb, evolution is very smart,” says Watson.
Watson’s idea has caught the attention of respected evolutionary theorists, among them Eors Szathmary of the Parmenides Foundation in Munich, Germany. “It is absolutely new,” he says. “I thought that the idea was so fascinating and so interesting that I should put some support behind it.” Earlier this year, he and Watson collaborated on a paper called “How Can Evolution Learn?” to discuss some of its implications (Trends in Ecology and Evolution, vol 31, p 146).
For a start, if evolution learns, by definition it must get better at what it does. It will not only evolve new adaptations, but improve its ability to do so. This notion, known as the evolution of evolvability, has been around for some time, but is contentious because it seems to require forethought. No longer. “If you can do learning, then you are able to generalise from past experience and generate potentially useful and novel combinations,” says Szathmary. “Then you can get evolvability.”
Applying similar ideas might also begin to explain how ecosystems evolve (see “Eco-learning”, below). More speculatively, Watson and Szathmary suggest that the marriage between learning theory and evolutionary theory could throw light on the giant leaps made by evolution in the past 3.8 billion years. These “major transitions”, an idea first formulated by Szathmary and John Maynard Smith in the 1990s, include the jumps from replicating molecules to cellular organisms, from single-celled to multicellular organisms and from asexual to sexual reproduction. Szathmary and Watson think the key might lie in a model known as deep learning.
This was how Google DeepMind beat the world’s top player at the ancient and fiendish game of Go earlier this month. It is based on Hebbian learning, with the difference that it “freezes” successive levels of a network once it has learned as much as it can, using the information acquired as the starting point for the next level. “It’s intriguing that evolutionary algorithms exploiting deep learning can solve problems that single-level evolution cannot,” says Watson - although he admits the details of the parallel are still to be worked out. If we could tease out the circumstances required to produce a major transition, that might suggest where evolution is heading next - or even how to engineer a transition. For example, says Watson, it might show us how to transform a community of microbes into a true multicellular organism.
Other evolution researchers are also intrigued. “Watson and Szathmary are right in recognising that a species’ evolutionary history structures its genes in much the same way that an individual’s learning history structures its mind,” says Mark Pagel of the University of Reading, UK. David Sloan Wilson at Binghamton University, New York, thinks it could be an important step forward too.
“In the past, it has been heretical to think about evolution as a forward-looking process, but the analogy with learning - itself a product of evolution - is quite plausible,” he says.
Szathmary thinks we can fruitfully see that analogy from both ends. If evolution and cognitive learning are based on the same principles, we can use our understanding of either to throw new light on the other. With that in mind, he is now co-opting evolutionary theory to investigate the long-standing puzzle of how infants learn language so easily with no formal teaching and little other input.
Those infants may now grow up with a better grasp on the processes underlying that greatest of theories, evolution by natural selection. If evolution looks smart, that’s because it is, says Watson. “The observation that evolutionary adaptations look like the product of intelligence isn’t evidence against Darwinian evolution - it’s exactly what you should expect.” ■