
Cover Story

Achuthsankar S Nair*, Sunitha P., Rani J. R. and Aswathi B. L.**


* Editor, CSI Communications ** Lecturers at the State Inter-University Centre of Excellence in Bioinformatics, University of Kerala

Bioinspired Computing: The Evolving Scenario


This article aims to give a very generic introduction to bioinspired computing, not only from the established viewpoint but also from the perspective of the area's own evolution. Biological inspiration in computing is of course nothing new. That we are so often forced to compare our desk-tops, lap-tops and palm-tops with the unbeatable (in the tasks that really matter) neck-top tells the whole story. Subconsciously, the whole idea of a digital computer is bio-inspired. The metaphor of the brain as CPU is as old as the modern digital computer, and that of the input/output devices as the indriyas of the computer is also well known. Less well-known metaphors include the RAM as the mind and the operating system as culture. In earlier days, possibly due to the insulation between disciplines, bio-inspiration in the computing field was not so profound. The Dartmouth Summer Research Project on Artificial Intelligence (1956) is considered by many a seminal event in the field. Its proposal, authored by giants including John McCarthy, Marvin Minsky and Claude Shannon, recommended that a "2 month, 10 man study of artificial intelligence" be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study was to proceed

on the basis of the conjecture "that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." Needless to say, after half a century of summers, we may find the Dartmouth assessment a highly overambitious one. The journey, however, has continued unbroken: many techniques drawing their inspiration from nature and life have been developed, and some have established themselves as sought-after tools for a wide variety of applications. If we take a bird's-eye view of bio-inspired computing, we can clearly see two paths: one in which algorithms mimicking some aspect of life or nature are simulated on modern digital computers, and another in which attempts are made to replace the basic hardware of computers itself with biological phenomena. In this article we look at both these paths.

CSI Communications | December 2011 | 6


The human brain is of course a very natural metaphor for the computer. Modelling of the human brain can be traced back thousands of years, to the model of the sahasrara padma, the thousand-petalled lotus. After the initial excitement over perceptrons, there was a period of setback following the realisation that single-layer perceptrons could not solve even simple non-linearly-separable problems such as XOR. Interest was rekindled after David Rumelhart (who passed away early this year) and his colleagues popularised the error back-propagation algorithm for multi-layer perceptrons in the 1980s. The area of artificial neural networks (ANNs) was then firmly established, with a number of paradigms popping up, such as Kohonen's self-organising map and the Boltzmann machine. They all basically try to capture some essential behaviours of the human brain, specifically the distributed and interconnected nature of the neuronal cells. Learning in neural networks was abstracted as the adjustment of weights associated with synaptic connections. Various fine-tunings of these methodologies have appeared during the last two decades; support vector machines are one successful refinement, ensuring that pattern classification is characterised by an optimal margin of separation rather than an arbitrary separating plane. One of the greatest ideas in biology is the theory of evolution put forward by Charles Darwin. Today, nothing in biology makes sense except in the light of evolution. When Darwin spoke of the survival of the fittest, he was, in effect, describing evolution as a natural optimisation process. By the 1980s, mathematicians and computer scientists

had evolved a rugged optimisation algorithm, the genetic algorithm (pioneered by John Holland), in which candidate solutions are coded as chromosomes that are randomly crossed over to simulate reproduction, creating new generations. To simulate survival of the fittest, these algorithms use a fitness ("goodness") function so that better chromosomes reproduce more often. The algorithm is well known for being robust and for not getting stuck in local minima easily, and it is widely used today wherever optimisation is called for.
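The recipe above can be sketched in a few lines. The fitness function, population size and mutation rate below are all invented for illustration (a "one-max" toy problem, in which the fittest chromosome is all ones):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Toy "one-max" goodness function: the fittest chromosome is all ones.
def fitness(chromosome):
    return sum(chromosome)

def crossover(a, b):
    """Single-point crossover: children swap tails at a random cut."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chromosome, rate=0.01):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

def genetic_algorithm(n_bits=20, pop_size=30, generations=60):
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: only the better half reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            for child in crossover(a, b):
                children.append(mutate(child))
        population = children[:pop_size]
    return max(population, key=fitness)

best = genetic_algorithm()
```

After a few dozen generations the population converges on (or very near) the all-ones chromosome; swapping in a real cost function turns the same skeleton into a practical optimiser.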

Insects represent more than half of all known living organisms, and close to 10 million insect species are believed to be extant. They are well known for social behaviour, with caste hierarchies and division of labour. It is even known that the ants entrusted with waste disposal go around the ant colony and locate

places where waste is piling up, and then dump their load on the pile (very much reminiscent of human behaviour in metro cities!). It has been observed that ants are able to circumvent obstructions to their march in a near-optimal way. They do this by depositing chemicals known as pheromones and then following the thickest trail. Computer scientists haven't lost any time in abstracting this ant behaviour into ant-colony optimisation (ACO) algorithms; similarly, the swarming behaviour of birds and bees has been captured in swarm-intelligence algorithms. While abstractions such as these have been going on, a totally different kind of bio-inspiration has also been taking shape. Don't be surprised if you one day hear the name "DNA computer" among the new brands of computers on the market. The technology is still in its development phase, though the concept was put forward more than a decade ago. Leonard M. Adleman, a mathematician and computer scientist at the University of Southern California, is considered the pioneer of DNA computing. A DNA computer, unlike silicon computers, uses a technology based on DNA, the biological macromolecule, but it bears little resemblance to the modern personal computer. The idea of using living matter as machine components dates back to the late 1950s, when Richard Feynman described sub-microscopic computers in his famous lecture "There's Plenty of Room at the Bottom". But the concept of using the biomolecule DNA for computing was put forward, and made a reality, by Adleman, when he used DNA to solve an instance of an NP-complete problem.
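The trail-following mechanism just described can be sketched as a toy simulation. The trail names, lengths and evaporation constant below are invented for illustration; at each step the colony splits between two trails in proportion to pheromone, every trail evaporates a little, and each trail is reinforced in inverse proportion to its length, so the shorter route ends up with the thicker trail:

```python
# Toy "mean-field" ant-colony simulation: two alternative trails around an
# obstacle. The fraction of ants taking a trail is proportional to its
# pheromone level; deposits are inversely proportional to trail length.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1

for _ in range(100):
    total = sum(pheromone.values())
    shares = {r: pheromone[r] / total for r in pheromone}  # ants per trail
    for r in pheromone:
        # Evaporation weakens every trail; fresh deposits favour short ones.
        pheromone[r] = (1 - EVAPORATION) * pheromone[r] + shares[r] / lengths[r]
```

The positive feedback is the whole trick: a slightly stronger trail attracts slightly more ants, which strengthen it further, until the colony has collectively "chosen" the short route.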


He realised the potential of the DNA molecule for storing information while reading the classic textbook The Molecular Biology of the Gene, co-authored by James D. Watson, co-discoverer of the structure of DNA. He recognised that biology is, to a large extent, the study of the information stored in the DNA molecule, information which controls the activities of the complex organisms we see on Earth today. DNA (deoxyribonucleic acid) is the molecular basis of heredity in living organisms (some viruses use RNA instead). It is a very large, double-stranded molecule with the shape of a twisted ladder, made up of long chains of subunits called nucleotides. Each nucleotide combines a sugar (deoxyribose) and a phosphate group with one of four nitrogenous bases: adenine (A), thymine (T), guanine (G) and cytosine (C). The nucleotides are joined by phosphodiester bonds; the sugar-phosphate chains on the outside form the backbone, while the bases form the rungs joining the two strands, giving the ladder-like appearance. The two strands are held together by complementary base pairs: adenine always pairs with thymine through two hydrogen bonds, whereas guanine always pairs with cytosine through three. While reading about the properties of DNA and the DNA-synthesising enzyme DNA polymerase, Adleman was struck by their similarity to the notion of computability described by Alan M. Turing in 1936. Inspired by the Turing machine, Adleman thought of using DNA polymerase in place of the machine's finite control. For building a computer, two things were necessary: a method of storing information, and a few simple operations for handling that information.
Just as an electronic computer stores information as sequences of zeros and ones in memory and manipulates it with the operations available in the microprocessor chip, a DNA computer can be made to store information as sequences of letters and to manipulate it using the properties of its "finite control", here the DNA polymerase enzyme. As the cell has been using DNA to store the blueprint of life, this property can be

used for storing the information in the computer. DNA computing was made a reality by the end of 1994, when Adleman used DNA molecules to solve an instance of the Hamiltonian path problem, an NP-complete problem closely related to (though not the same as) the travelling salesman problem. He chose a 7-node instance: given a set of one-way routes between seven cities, find a path that starts at a designated city, ends at another, and passes through every city exactly once.
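The Watson-Crick complementarity described earlier (A pairing with T, G with C) is what lets strands find and stick to their partners in the test tube. A minimal sketch in software:

```python
# Watson-Crick pairing: adenine with thymine, guanine with cytosine.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the strand that would anneal to `strand` in the test tube."""
    return "".join(PAIR[base] for base in reversed(strand))

# A strand and its reverse complement zip together into a double helix.
partner = reverse_complement("GATTACA")
```

This pairing rule is the "glue" Adleman exploited: strands encoding compatible cities and routes anneal, and incompatible ones do not.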

Fig.: A 7-node Hamiltonian path instance, from a designated Start city to an End city.
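In software, the generate-and-filter spirit of the test-tube approach can be mimicked by brute force: produce every candidate ordering of the cities, then discard those that use a non-existent route. The edge set below is an invented 7-city instance, not Adleman's actual graph:

```python
from itertools import permutations

# A hypothetical 7-city instance (Adleman's actual graph differed):
# one-way routes between cities 0..6, with 0 the start and 6 the end.
EDGES = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4),
         (3, 4), (3, 5), (4, 5), (4, 6), (5, 6)}

def hamiltonian_paths(n=7, start=0, end=6):
    """Generate every candidate ordering, then filter out the invalid
    ones, mimicking the generate-and-filter test-tube strategy."""
    found = []
    for middle in permutations(range(1, n - 1)):
        path = (start,) + middle + (end,)
        if all(step in EDGES for step in zip(path, path[1:])):
            found.append(path)
    return found

paths = hamiltonian_paths()
```

The difference, of course, is that the computer tries the 120 candidate orderings one after another, while the chemistry forms all of them at once.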


Even though computer scientists have tried hard, no efficient algorithm is known that solves the problem in realistic time as the number of cities grows, and Adleman solved his 7-city instance with a DNA "test tube computer". He assigned random DNA sequences (combinations of A, G, C and T) to each of the seven cities and to each possible route. These molecules were then mixed in a test tube, and complementary strands annealed to one another; a chain of such strands represented a candidate answer. Within moments, the massively parallel chemistry generated strands for essentially all possible combinations, i.e. all candidate answers; the wrong molecules (answers) were then eliminated through a series of chemical steps, which left behind only strands encoding a path through all seven cities (for further reading on how this was done, see [1, 3]). Hence, in a DNA computer, certain combinations of DNA molecules are read out as the result of the problem encoded into the original molecules. A DNA computer can be much smaller than a modern PC. Another advantage of the technology is that it is inexpensive compared to the modern

day PCs, since DNA can be obtained from living cells, and as long as life exists the raw material will not run out. Also, as DNA is nature's own medium for storing information in cells, the same property can serve as memory storage in such computers. DNA computers are non-toxic, being built from biological components, and could ease the present-day problem of e-waste. The capacity for massively parallel processing is yet another advantage. Even though DNA computers perform well on hard problems, there is no guarantee that the solution produced will be the absolute best one; it may merely be a good one. Another disadvantage is that a DNA computer is not programmable in the usual sense: it requires human assistance to run and to interpret the result in a useful form, and narrowing down the candidate answers takes time and effort. Meanwhile, a group of scientists from Imperial College London has realised a biological digital device, raising hopes of a living computer based on the digital-computer paradigm. They genetically engineered the human gut bacterium E. coli to function as logic gates, the basic components used to build electronic circuits; without logic gates we cannot process digital information. The scientists effectively replicated these logic gates using bacteria, and these are claimed to be the first biological logic gates proven to function like electronic ones.

Fig.: A genetic AND gate: inputs A and B activate genes 1 and 2, whose protein products jointly activate gene 3, producing the output.
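A minimal software analogue of such a two-input gate (the function and variable names below are invented for illustration): the output appears only when both input chemicals are present.

```python
# Hypothetical model of a two-input genetic AND gate: each input chemical
# switches on its own gene, and the two gene products must both be
# present before the output gene is triggered.
def and_gate(chemical_a, chemical_b):
    protein_1 = chemical_a   # gene 1 expressed only if chemical A present
    protein_2 = chemical_b   # gene 2 expressed only if chemical B present
    return protein_1 and protein_2   # gene 3 fires only with both proteins

truth_table = {(a, b): and_gate(a, b)
               for a in (False, True) for b in (False, True)}
```

The bacterial gate implements exactly this truth table, with inducer chemicals as inputs and gene expression as the output signal.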

The team says that their biological logic gates behave more like their electronic counterparts than earlier biological gates did. The new gates can be assembled into more complex components, in much the same way that electronic components are. The researchers demonstrated how biological logic gates can replicate the way that

electronic logic gates function, by switching between ON and OFF states. They effectively constructed an AND gate from E. coli by modifying its DNA, reprogramming the bacterium to carry out the same ON/OFF switching as its electronic equivalent when stimulated by chemicals. To activate this gate, two input signals must be present, stimulating two independent genes; the proteins expressed by these two genes then jointly trigger a third gene, which produces the desired output. A key feature of these gates is that they function independently of both the genetic background of the host and the external environment. The Imperial team suggests that although there is still a long way to go in the design of these living, microscopic biological computers, the potential applications are fascinating. They could be used with sensors to detect and destroy cancerous cells within the human body. Other possible applications include devices with sensors that swim inside arteries, detecting the accumulation of

plaque and helping to prevent heart attacks, or speedily delivering medication to an affected area. Others include detecting toxins in the environment and neutralising them. One practical problem the team is addressing is how to connect many individual bacterial components into a working system. Since these are living cells, they cannot be wired together like electronic components. "They will probably need to be immobilized by encapsulation in some sort of micro-fluidic device," says Martin Buck, the biology professor on the Imperial team. Inspired by Adleman's work, another set of researchers, at the University of Rochester, have developed DNA logic gates [4]. In 2003, researchers at the Weizmann Institute of Science in Israel developed a DNA computer that can perform 330 trillion operations per second, reportedly 100,000 times the speed of an electronic PC; this research was considered a giant step in DNA computing, and in the year 2004 Guinness World Records

recognised this computer as the smallest biological computing device ever constructed. In the first half of 2011, a new DNA computer was developed by researchers at the California Institute of Technology which can calculate the square root of integers up to 15 [2]. Heard melodies are sweet, and those unheard are sweeter! In this era of interdisciplinary and multidisciplinary studies, we need to keep our eyes set on the horizon, on the line where the science of life meets the science of technology. In the near future, we may see complex living computers processing information using chemicals to perform specific jobs, much in the same way that our bodies use them to process and store information.
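To see what that last molecular circuit computed: the square root of a four-bit number, rounded down to the nearest integer (the rounding-down is our reading of the strand-displacement result in [2]; treat it as an assumption). A one-function software equivalent:

```python
import math

def four_bit_sqrt(n):
    """Square root of a four-bit input (0..15), rounded down,
    as a software stand-in for the DNA strand-displacement circuit."""
    assert 0 <= n <= 15
    return math.isqrt(n)  # needs Python 3.8+; floor of the square root

results = {n: four_bit_sqrt(n) for n in range(16)}
```

A single line of Python against roughly 130 DNA strands, which is precisely the point: the achievement is not the arithmetic but carrying it out in chemistry.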

References:
1. L. M. Adleman, "Computing with DNA", Scientific American, vol. 279, 1998, pp. 54-61.
2. L. Qian and E. Winfree, "Scaling up digital circuit computation with DNA strand displacement cascades", Science, vol. 332, 2011, pp. 1196-1201.
3. J. Macdonald, D. Stefanovic, and M. N. Stojanovic, "DNA computers for work and play", Scientific American, vol. 299, 2008, pp. 84-91.
4. M. B. Ruiz-Perez, "Logic gates made with DNA", Oligonucleotides, 2002.
