
ABSTRACT

Bioinformatics is the new niche in biotechnology, the new investment opportunity for
venture capitalists, and the new business opportunity for entrepreneurs; bioinformatics is
simply the new, new thing!
Attempts to provide a simple definition of bioinformatics have produced a broad band of
interpretations: software companies, for example, plan to revive declining businesses by
diversifying from writing algorithms for banking into bioinformatics as a logical
extension of what they already do. At its core, bioinformatics is the symbiotic
relationship between the computational and biological sciences.
Computational simulation of experimental biology is another important
application of bioinformatics, aptly referred to as “in silico” testing. This is
an area that is likely to expand prolifically, given the need for a greater degree
of predictability in animal and human clinical trials.

Quantum computing is the field in which systems governed by quantum physics are used
to solve general problems, including complex problems that can also be posed to a
digital computer.
Almost all of today’s computers are based on the simple Turing model and employ
Boolean logic built on binary mathematics. Even “parallel” computers are really complex
Turing engines employing multiple computing modules which deal with pieces of the
incoming data (chunks of bytes, instructions, etc.). There has been some research into
biological computing using enzymes or large-molecule systems as memories, shift
registers and so on, but this has not proven to be very practical. Quantum computing is
based on a different physics from digital computing. Instead of each element holding one
definite state, 0 or 1, as in a digital computer, a quantum element (a qubit) can exist in a
superposition of both states at the same time.


BIOINFORMATICS:

INTRODUCTION

The science of bioinformatics or computational biology is increasingly being used to improve the
quality of life as we know it.

Bioinformatics has developed out of the need to understand the code of life, DNA. Massive
DNA sequencing projects have evolved and aided the growth of the science of bioinformatics.
DNA, the basic molecule of life, directly controls the fundamental biology of life. It codes for
genes, which code for proteins, which in turn determine the biological makeup of humans or of
any other living organism. It is variations and errors in the genomic DNA which ultimately define
the likelihood of developing diseases or resistance to these same disorders.

The ultimate goal of bioinformatics is to uncover the wealth of biological information hidden in
the mass of sequence data and obtain a clearer insight into the fundamental biology of organisms
and to use this information to enhance the standard of life for mankind.

It is being used now, and will be for the foreseeable future, in molecular medicine to help
produce better and more customised medicines to prevent or cure diseases; it has environmental
benefits in identifying waste-cleanup bacteria; and in agriculture it can be used to produce
high-yield, low-maintenance crops. These are just a few of the many benefits bioinformatics will
help develop.

The genomic era has seen a massive explosion in the amount of biological information
available due to huge advances in the fields of molecular biology and genomics.

Bioinformatics is the application of computer technology to the management and analysis


of biological data. The result is that computers are being used to gather, store, analyse
and merge biological data.

Bioinformatics is an interdisciplinary research area at the interface between the
biological and computational sciences. As noted above, its ultimate goal is to uncover
the wealth of biological information hidden in the mass of data and to obtain a clearer
insight into the fundamental biology of organisms. This new knowledge could have
profound impacts on fields as varied as human health, agriculture, the environment,
energy and biotechnology.


Biological applications:

Once all of the biological data is stored consistently and is easily available to the
scientific community, the requirement is then to provide methods for extracting the
meaningful information from the mass of data. Bioinformatic tools are software programs
that are designed to carry out this analysis step.

Factors that must be taken into consideration when designing these tools are:
• The end user (the biologist) may not be a frequent user of computer technology
• These software tools must be made available over the internet given the global
distribution of the scientific research community
The EBI provides a wide range of biological data analysis tools that fall into the
following four major categories:
• Homology and Similarity Tools
• Protein Function Analysis
• Structural Analysis
• Sequence Analysis
Homology and Similarity Tools:
Homologous sequences are sequences that are related by divergence from a common
ancestor. Thus, while the degree of similarity between two sequences can be measured,
their homology is a question of being either true or false.
This set of tools can be used to identify similarities between novel query sequences of
unknown structure and function and database sequences whose structure and function
have been elucidated.
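
As a toy illustration of the pairwise comparison underlying such tools, the following Python
sketch computes a Needleman-Wunsch global alignment score between two short sequences; the
sequences and scoring values are illustrative assumptions, not any particular tool's defaults.

    # Minimal Needleman-Wunsch global alignment score: a toy version of the
    # pairwise comparison that homology/similarity tools perform at scale.
    # Scoring values below are illustrative, not any tool's defaults.

    def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        # dp[i][j] = best score for aligning a[:i] with b[:j]
        dp = [[0] * cols for _ in range(rows)]
        for i in range(1, rows):
            dp[i][0] = dp[i - 1][0] + gap
        for j in range(1, cols):
            dp[0][j] = dp[0][j - 1] + gap
        for i in range(1, rows):
            for j in range(1, cols):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[-1][-1]

    if __name__ == "__main__":
        # Hypothetical query and database sequences.
        print(global_alignment_score("GATTACA", "GCATGCA"))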

Protein Function Analysis:

This group of programs allows you to compare your protein sequence with the secondary (or
derived) protein databases that contain information on motifs, signatures and protein
domains. Highly significant hits against these pattern databases allow you to
approximate the biochemical function of your query protein.
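
As a simple illustration of how such a motif or signature scan works, the Python sketch below
searches a protein sequence for the classic PROSITE-style N-glycosylation pattern N-{P}-[ST]-{P};
the query sequence is invented.

    import re

    # PROSITE-style N-glycosylation motif N-{P}-[ST]-{P} expressed as a regex:
    # Asn, anything but Pro, Ser or Thr, anything but Pro.
    MOTIF = re.compile(r"N[^P][ST][^P]")

    def find_motifs(protein_seq):
        """Return (1-based position, matched text) for every motif hit."""
        return [(m.start() + 1, m.group()) for m in MOTIF.finditer(protein_seq)]

    if __name__ == "__main__":
        # Hypothetical query protein sequence.
        query = "MKNVSALNGTWPQRSTNPSA"
        print(find_motifs(query))   # two hits: NVSA and NGTW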

Structural Analysis:
This set of tools allows you to compare structures against the known structure databases. The
function of a protein is more directly a consequence of its structure than of its
sequence, with structural homologues tending to share functions. The determination of a
protein's 2D/3D structure is therefore crucial in the study of its function.
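
As a small illustration of one measure used when comparing structures, the Python sketch below
computes the root-mean-square deviation (RMSD) between two already-superposed sets of C-alpha
coordinates; the coordinates are invented and the superposition step is assumed to have been
done elsewhere.

    import numpy as np

    def rmsd(coords_a, coords_b):
        """Root-mean-square deviation between two equally sized,
        already-superposed coordinate sets (N x 3 arrays, in angstroms)."""
        a = np.asarray(coords_a, dtype=float)
        b = np.asarray(coords_b, dtype=float)
        assert a.shape == b.shape, "both structures need the same number of atoms"
        return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

    if __name__ == "__main__":
        # Invented C-alpha coordinates for a tiny three-residue fragment.
        model = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]]
        target = [[0.1, 0.2, 0.0], [3.9, -0.1, 0.1], [7.5, 0.3, -0.2]]
        print(f"RMSD = {rmsd(model, target):.2f} A")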

Sequence Analysis:
This set of tools allows you to carry out further, more detailed analysis on your query
sequence including evolutionary analysis, identification of mutations, hydropathy
regions, CpG islands and compositional biases. The identification of these and other
biological properties provides clues that aid the search to elucidate the specific function of
your sequence.
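
As an illustration of this kind of compositional analysis, the Python sketch below scans a DNA
sequence with a sliding window for CpG-island-like regions; the window size and thresholds are
commonly cited rules of thumb rather than any specific tool's settings, and the sequence is
artificial.

    # Sliding-window scan for CpG-island-like regions: a toy version of the kind
    # of compositional analysis a sequence-analysis tool performs.  The window
    # size and thresholds (GC > 50%, observed/expected CpG > 0.6) are commonly
    # cited rules of thumb, used here purely as illustrative defaults.

    def cpg_scan(dna, window=200, step=50):
        hits = []
        dna = dna.upper()
        for start in range(0, max(len(dna) - window + 1, 1), step):
            win = dna[start:start + window]
            g, c = win.count("G"), win.count("C")
            gc_fraction = (g + c) / len(win)
            cpg_observed = win.count("CG")
            cpg_expected = (c * g) / len(win) if c and g else 0.0
            ratio = cpg_observed / cpg_expected if cpg_expected else 0.0
            if gc_fraction > 0.5 and ratio > 0.6:
                hits.append((start, gc_fraction, ratio))
        return hits

    if __name__ == "__main__":
        # Toy sequence: a CG-rich stretch embedded in AT-rich flanks.
        seq = "AT" * 100 + "CG" * 150 + "TA" * 100
        for start, gc, ratio in cpg_scan(seq):
            print(f"window at {start}: GC={gc:.2f}, CpG obs/exp={ratio:.2f}")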


Biological databases:
Biological databases are archives of consistent data that are stored in a uniform and efficient
manner. These databases contain data from a broad spectrum of molecular biology areas.
Primary or archived databases contain information and annotation of DNA and protein
sequences, DNA and protein structures and DNA and protein expression profiles.

Secondary or derived databases are so called because they contain the results of analysis on the
primary resources including information on sequence patterns or motifs, variants and mutations
and evolutionary relationships. Information from the literature is contained in bibliographic
databases, such as Medline.

It is essential that these databases are easily accessible and that an intuitive query system is
provided to allow researchers to obtain very specific information on a particular biological
subject. The data should be provided in a clear, consistent manner with some visualisation tools
to aid biological interpretation.

Specialist databases have been set up for particular subjects, for example the EMBL database
for nucleotide sequence data, the Swiss-Prot protein database and PDB, a database of 3D protein
structures.

Scientists also need to be able to integrate the information obtained from the underlying
heterogeneous databases in a sensible manner in order to be able to get a clear overview of their
biological subject. SRS (the Sequence Retrieval System) is a powerful querying tool provided by
the EBI that links information from more than 150 heterogeneous resources.
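
As a minimal illustration of programmatic access to such data, the Python sketch below parses
records in FASTA format, the flat format in which resources such as EMBL and Swiss-Prot commonly
distribute sequences; the records themselves are invented.

    # Minimal FASTA parser: a sketch of the simplest possible programmatic access
    # to sequence data exported from resources such as EMBL or Swiss-Prot.
    # The records below are invented for illustration.

    import io

    def read_fasta(handle):
        """Yield (header, sequence) pairs from a FASTA-formatted text stream."""
        header, chunks = None, []
        for line in handle:
            line = line.strip()
            if not line:
                continue
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            else:
                chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)

    if __name__ == "__main__":
        fake_export = io.StringIO(
            ">seq1 hypothetical protein\nMKTAYIAKQR\nQISFVKSHFS\n"
            ">seq2 hypothetical protein\nMADEEKLPPG\n"
        )
        for header, seq in read_fasta(fake_export):
            print(f"{header}: {len(seq)} residues")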

Why is bioinformatics important?


The greatest challenge facing the molecular biology community today is to make sense of
the wealth of data that has been produced by the genome sequencing projects.
Traditionally, molecular biology research was carried out entirely at the experimental
laboratory bench but the huge increase in the scale of data being produced in this
genomic era has seen a need to incorporate computers into this research process.

Sequence generation, and its subsequent storage, interpretation and analysis, are entirely
computer-dependent tasks. However, the molecular biology of an organism is a very
complex issue, with research being carried out at different levels including the genome,
proteome, transcriptome and metabolome levels. Following on from the explosion in the
volume of genomic data, similar increases in data have been observed in the fields of
proteomics, transcriptomics and metabolomics.

The first challenge facing the bioinformatics community today is the intelligent and
efficient storage of this mass of data. It is then their responsibility to provide easy and

reliable access to this data. The data itself is meaningless before analysis and the sheer
volume present makes it impossible for even a trained biologist to begin to interpret it
manually. Therefore, incisive computer tools must be developed to allow the extraction
of meaningful biological information.

There are three central biological processes around which bioinformatics tools must be
developed:
• DNA sequence determines protein sequence
• Protein sequence determines protein structure
• Protein structure determines protein function
The integration of information learned about these key biological processes should allow
us to achieve the long term goal of the complete understanding of the biology of
organisms.
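
To make the first of these processes concrete, the Python sketch below translates a DNA coding
sequence into a protein sequence; the codon table is deliberately abbreviated and the sequence is
invented, so this is an illustration rather than a complete translator.

    # Sketch of the first central process: DNA sequence determines protein
    # sequence.  The codon table is deliberately abbreviated; a real translator
    # would carry all 64 codons.

    CODON_TABLE = {
        "ATG": "M", "TTT": "F", "TTC": "F", "GGC": "G", "AAA": "K",
        "GAT": "D", "TGC": "C", "TAA": "*", "TAG": "*", "TGA": "*",
    }

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):
            amino_acid = CODON_TABLE.get(dna[i:i + 3].upper(), "X")  # X = not in table
            if amino_acid == "*":          # stop codon ends translation
                break
            protein.append(amino_acid)
        return "".join(protein)

    if __name__ == "__main__":
        print(translate("ATGTTTGGCAAAGATTGCTAA"))  # -> MFGKDC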

APPLICATIONS
Real world applications of bioinformatics

The science of bioinformatics has many beneficial uses in the modern day world.

These include the following:


1. Gene therapy

In the not too distant future, the potential for using genes themselves to treat disease may
become a reality. Gene therapy is the approach used to treat, cure or even prevent disease
by changing the expression of a person's genes. Currently, this field is in its infancy,
with clinical trials for many different types of cancer and other diseases ongoing.

2. Climate change

Increasing levels of carbon dioxide emission, mainly through the expanding use of fossil
fuels for energy, are thought to contribute to global climate change. Recently, the DOE
(Department of Energy, USA) launched a program to decrease atmospheric carbon
dioxide levels. One method of doing so is to study the genomes of microbes that use
carbon dioxide as their sole carbon source.

3. Biotechnology

The archaeon Archaeoglobus fulgidus and the bacterium Thermotoga maritima have
potential for practical applications in industry and government-funded environmental
remediation. These microorganisms thrive in water temperatures above the boiling point
and therefore may provide the DOE, the Department of Defence, and private companies
with heat-stable enzymes suitable for use in industrial processes.
Other industrially useful microbes include Corynebacterium glutamicum, which is of
high industrial interest because the chemical industry uses it for the biotechnological
production of the amino acid lysine. Lysine is one of the essential amino acids in animal


nutrition. Biotechnologically produced lysine is added to feed concentrates as a source of


protein, and is an alternative to soybeans or meat and bonemeal.

Xanthomonas campestris pv. is grown commercially to produce the exopolysaccharide


xanthan gum, which is used as a viscosifying and stabilising agent in many industries.

Lactococcus lactis is one of the most important micro-organisms involved in the dairy
industry: it is a non-pathogenic bacterium that is critical for manufacturing
dairy products like buttermilk, yogurt and cheese. This bacterium, Lactococcus lactis
ssp., is also used to prepare pickled vegetables, beer, wine, some breads, sausages and
other fermented foods. Researchers anticipate that understanding the physiology and
genetic make-up of this bacterium will prove invaluable for food manufacturers as well
as the pharmaceutical industry, which is exploring the capacity of L. lactis to serve as a
vehicle for delivering drugs.

4. Evolutionary studies

The sequencing of genomes from all three domains of life (Eukaryota, Bacteria and
Archaea) means that evolutionary studies can be performed in a quest to determine the tree
of life and the last universal common ancestor.
For more interesting stories, check the archive at the Genome News Network.

For information on structural, functional and comparative analysis of genomes and genes
from a wide variety of organisms, see The Institute for Genomic Research (TIGR).

5. Agriculture
The sequencing of the genomes of plants and animals should have enormous benefits for
the agricultural community. Bioinformatic tools can be used to search for the genes
within these genomes and to elucidate their functions. This specific genetic knowledge
could then be used to produce stronger, more drought-, disease- and insect-resistant crops
and to improve the quality of livestock, making them healthier, more disease resistant and
more productive.


6. Comparative studies

Analysing and comparing the genetic material of different species is an important method
for studying the functions of genes, the mechanisms of inherited diseases and species
evolution. Bioinformatics tools can be used to make comparisons between the numbers,
locations and biochemical functions of genes in different organisms.

Organisms that are suitable for use in experimental research are termed model organisms.
They have a number of properties that make them ideal for research purposes, including
short life spans, rapid reproduction, ease of handling, low cost and the fact that they can be
manipulated at the genetic level.

An example of a model organism for humans is the mouse. Mouse and human are very
closely related (>98%), and for the most part we see a one-to-one correspondence between
genes in the two species. Manipulation of the mouse at the molecular level, together with
genome comparisons between the two species, can and does reveal detailed information on the
functions of human genes, the evolutionary relationship between the two species and the
molecular mechanisms of many human diseases.

What is a Quantum Computer?


Behold your computer. Your computer represents the culmination of years of
technological advancements beginning with the early ideas of Charles Babbage (1791-
1871) and eventual creation of the first computer by German engineer Konrad Zuse in
1941. Surprisingly, however, the high-speed modern computer sitting in front of you is
fundamentally no different from its gargantuan 30 ton ancestors, which were equipped
with some 18000 vacuum tubes and 500 miles of wiring! Although computers have
become more compact and considerably faster in performing their task, the task remains
the same: to manipulate and interpret an encoding of binary bits into a useful
computational result. A bit is a fundamental unit of information, classically represented
as a 0 or 1 in your digital computer. Each classical bit is physically realized through a
macroscopic physical system, such as the magnetization on a hard disk or the charge on a
capacitor. A document, for example, comprising n characters stored on the hard drive
of a typical computer is accordingly described by a string of 8n zeros and ones. Herein
lies a key difference between your classical computer and a quantum computer. Where a
classical computer obeys the well-understood laws of classical physics, a quantum
computer is a device that harnesses physical phenomena unique to quantum mechanics
(especially quantum interference) to realize a fundamentally new mode of information
processing.
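
To make the 8n figure concrete, the short Python sketch below expands a tiny five-character
"document" into its 40-bit classical representation.

    # A document of n characters is stored as a string of 8n zeros and ones.
    # The short sketch below makes that explicit for a tiny "document".

    document = "qubit"                                  # n = 5 characters
    bits = "".join(f"{byte:08b}" for byte in document.encode("ascii"))

    print(len(document), "characters ->", len(bits), "bits")   # 5 characters -> 40 bits
    print(bits)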
In a quantum computer, the fundamental unit of information (called a quantum bit or
qubit) is not simply binary in nature. This property arises as a
direct consequence of its adherence to the laws of quantum mechanics, which differ
radically from the laws of classical physics. A qubit can exist not only in a state
corresponding to the logical state 0 or 1 as in a classical bit, but also in states
corresponding to a blend or superposition of these classical states. In other words, a qubit
can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient
representing the probability for each state. This may seem counterintuitive because


everyday phenomena are governed by classical physics, not quantum mechanics, which
takes over at the atomic level. This rather difficult concept is perhaps best
explained through a simple example.
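
The idea can be illustrated numerically: in the Python sketch below, a qubit is represented by
its two complex amplitudes, and the probabilities of measuring 0 or 1 follow from them. This is
only a classical simulation of the arithmetic, not a real quantum device.

    import numpy as np

    # A single qubit represented by its two complex amplitudes (alpha, beta),
    # with |alpha|^2 + |beta|^2 = 1.  |alpha|^2 is the probability of measuring
    # 0, |beta|^2 the probability of measuring 1.  This is a classical
    # simulation of the arithmetic, not a real quantum device.

    def measure(qubit, shots=10000, rng=np.random.default_rng(0)):
        probabilities = np.abs(qubit) ** 2
        return rng.choice([0, 1], size=shots, p=probabilities)

    if __name__ == "__main__":
        # Equal superposition of 0 and 1 (the state a Hadamard gate makes from |0>).
        plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
        outcomes = measure(plus)
        print("fraction of 0s:", np.mean(outcomes == 0))  # ~0.5
        print("fraction of 1s:", np.mean(outcomes == 1))  # ~0.5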

Introduction:
The science of physics seeks to ask, and find precise answers to, basic questions about
why nature is as it is. Historically, the fundamental principles of physics have been
concerned with questions such as ``what are things made of?'' and ``why do things move
as they do?'' In his Principia, Newton gave very wide-ranging answers to some of these
questions. By showing that the same mathematical equations could describe the motions
of everyday objects and of planets, he showed that an everyday object such as a teapot is
made of essentially the same sort of stuff as a planet: the motions of both can be
described in terms of their mass and the forces acting on them. Nowadays we would say
that both move in such a way as to conserve energy and momentum. In this way, physics
allows us to abstract from nature concepts such as energy or momentum which always
obey fixed equations, although the same energy might be expressed in many different
ways: for example, an electron in the large electron-positron collider at CERN, Geneva,
can have the same kinetic energy as a slug on a lettuce leaf.
Another thing which can be expressed in many different ways is information. For
example, the two statements ``the quantum computer is very interesting'' and
``l'ordinateur quantique est très intéressant'' have something in common, although they
share no words. The thing they have in common is their information content. Essentially
the same information could be expressed in many other ways, for example by substituting
numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99 and so on, in which case
the English version of the above statement becomes 116 104 101 32 113 117 97 110 116
117 109... . It is very significant that information can be expressed in different ways
without losing its essential nature, since this leads to the possibility of the automatic
manipulation of information: a machine need only be able to manipulate quite simple
things like integers in order to do surprisingly powerful information processing, from
document preparation to differential calculus, even to translating between human
languages. We are familiar with this now, because of the ubiquitous computer, but even
fifty years ago such a widespread significance of automated information processing was
not foreseen.
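
The letter-to-number scheme above is simply the familiar ASCII character code; the short Python
sketch below makes the round trip explicit and reproduces the digits quoted above.

    # The a -> 97, b -> 98, ... scheme above is simply the ASCII/Unicode code
    # point of each character; encoding and decoding are lossless round trips.

    message = "the quantum computer is very interesting"
    codes = [ord(ch) for ch in message]          # text -> numbers
    restored = "".join(chr(n) for n in codes)    # numbers -> text

    print(codes[:11])   # [116, 104, 101, 32, 113, 117, 97, 110, 116, 117, 109]
    print(restored == message)  # True: the information content is unchanged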
However, there is one thing that all ways of expressing information must have in
common: they all use real physical things to do the job. Spoken words are conveyed by
air pressure fluctuations, written ones by arrangements of ink molecules on paper, even
thoughts depend on neurons (Landauer 1991). The rallying cry of the information
physicist is ``no information without physical representation!'' Conversely, the fact that
information is insensitive to exactly how it is expressed, and can be freely translated from
one form to another, makes it an obvious candidate for a fundamentally important role in
physics, like energy and momentum and other such abstractions. However, until the
second half of this century, the precise mathematical treatment of information, especially
information processing, was undiscovered, so the significance of information in physics
was only hinted at in concepts such as entropy in thermodynamics. It now appears that
information may have a much deeper significance. Historically, much of fundamental


physics has been concerned with discovering the fundamental particles of nature and the
equations which describe their motions and interactions. It now appears that a different
programme may be equally important: to discover the ways that nature allows, and
prevents, information to be expressed and manipulated, rather than particles to move. For
example, the best way to state exactly what can and cannot travel faster than light is to
identify information as the speed-limited entity. In quantum mechanics, it is highly
significant that the state vector must not contain, whether explicitly or implicitly, more
information than can meaningfully be associated with a given system. Among other
things this produces the wavefunction symmetry requirements which lead to Bose
Einstein and Fermi Dirac statistics, the periodic structure of atoms, and so on.
The programme to re-investigate the fundamental principles of physics from the
standpoint of information theory is still in its infancy. However, it already appears to be
highly fruitful, and it is this ambitious programme that I aim to summarise.

A Brief History of Quantum Computing:


The idea of a computational device based on quantum mechanics was first explored in the
1970's and early 1980's by physicists and computer scientists such as Charles H.
Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne
National Laboratory in Illinois, David Deutsch of the University of Oxford, and the
late Richard P. Feynman of the California Institute of Technology (Caltech). The idea
emerged when scientists were pondering the fundamental limits of computation. They
understood that if technology continued to abide by Moore's Law, then the continually
shrinking size of circuitry packed onto silicon chips would eventually reach a point where
individual elements would be no larger than a few atoms. Here a problem arose because
at the atomic scale the physical laws that govern the behavior and properties of the circuit
are inherently quantum mechanical in nature, not classical. This then raised the question
of whether a new kind of computer could be devised based on the principles of quantum
physics.
Feynman was among the first to attempt to provide an answer to this question by
producing an abstract model in 1982 that showed how a quantum system could be used to
do computations. He also explained how such a machine would be able to act as a
simulator for quantum physics. In other words, a physicist would have the ability to carry
out experiments in quantum physics inside a quantum mechanical computer.
Later, in 1985, Deutsch realized that Feynman's assertion could eventually lead to a
general purpose quantum computer and published a crucial theoretical paper showing that
any physical process, in principle, could be modeled perfectly by a quantum computer.
Thus, a quantum computer would have capabilities far beyond those of any traditional
classical computer. After Deutsch published this paper, the search began to find
interesting applications for such a machine.
Unfortunately, all that could be found were a few rather contrived mathematical
problems, until Shor circulated in 1994 a preprint of a paper in which he set out a method
for using quantum computers to crack an important problem in number theory, namely
factorization. He showed how an ensemble of mathematical operations, designed


specifically for a quantum computer, could be organized to enable such a machine to


factor huge numbers extremely rapidly, much faster than is possible on conventional
computers. With this breakthrough, quantum computing transformed from a mere
academic curiosity directly into a national and world interest.
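
The number-theoretic core of Shor's method can be sketched classically: once the period r of
a^x mod N is known (the step a quantum computer performs exponentially faster), the factors of N
follow from greatest-common-divisor arithmetic. In the Python sketch below a brute-force search
stands in for the quantum period-finding step, so it only works for tiny numbers such as 15.

    from math import gcd

    # Classical sketch of the number theory behind Shor's algorithm.  The
    # quantum computer's job is to find the period r of a^x mod N efficiently;
    # here a brute-force search stands in for that step.

    def find_period(a, n):
        """Smallest r > 0 with a^r = 1 (mod n), found by brute force."""
        value, r = a % n, 1
        while value != 1:
            value = (value * a) % n
            r += 1
        return r

    def shor_classical(n, a=7):
        assert gcd(a, n) == 1, "a must be coprime to n"
        r = find_period(a, n)
        if r % 2 == 1:
            raise ValueError("odd period; retry with another a")
        candidate = pow(a, r // 2, n)
        return gcd(candidate - 1, n), gcd(candidate + 1, n)

    if __name__ == "__main__":
        print(shor_classical(15, a=7))   # (3, 5)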

Obstacles and Research


The field of quantum information processing has made numerous promising
advancements since its conception, including the building of two- and three-qubit
quantum computers capable of some simple arithmetic and data sorting. However, a few
potentially large obstacles still remain that prevent us from "just building one," or more
precisely, building a quantum computer that can rival today's modern digital computer.
Among these difficulties, error correction, decoherence, and hardware architecture are
probably the most formidable. Error correction is rather self-explanatory, but what errors
need correction? The answer is primarily those errors that arise as a direct result of
decoherence: the tendency of a quantum computer to decay from a given quantum
state into an incoherent state as it interacts, or entangles, with the state of the
environment. These interactions between the environment and the qubits are unavoidable,
and they induce the breakdown of the information stored in the quantum computer, and thus
errors in computation. Before any quantum computer will be capable of solving hard problems,
research must devise a way to keep decoherence and other potential sources of error
at an acceptable level.
correction, first proposed in 1995 and continually developed since, small scale quantum
computers have been built and the prospects of large quantum computers are looking up.
Probably the most important idea in this field is the application of error correction in
phase coherence as a means to extract information and reduce error in a quantum system
without actually measuring that system. In 1998, researchers at Los Alamos National
Laboratory and MIT led by Raymond Laflamme managed to spread a single bit of
quantum information (qubit) across three nuclear spins in each molecule of a liquid
solution of alanine or trichloroethylene molecules. They accomplished this using the
techniques of nuclear magnetic resonance (NMR). This experiment is significant because
spreading out the information actually made it harder to corrupt. Quantum mechanics
tells us that directly measuring the state of a qubit invariably destroys the superposition of
states in which it exists, forcing it to become either a 0 or 1. The technique of spreading
out the information allows researchers to utilize the property of entanglement to study the
interactions between states as an indirect method for analyzing the quantum information.
Rather than a direct measurement, the group compared the spins to see if any new
differences arose between them without learning the information itself. This technique
gave them the ability to detect and fix errors in a qubit's phase coherence, and thus
maintain a higher level of coherence in the quantum system. This milestone has provided
argument against skeptics, and hope for believers. Currently, research in quantum error
correction continues with groups at Caltech (Preskill, Kimble), Microsoft, Los Alamos,
and elsewhere.
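
The idea of spreading one piece of information across several carriers so that errors can be
detected and corrected has a simple classical analogue, sketched below in Python as a three-bit
repetition code with majority-vote decoding; real quantum codes must also protect phase and
cannot simply copy a qubit, so this is an analogy rather than the actual scheme.

    import random

    # Classical analogue of the redundancy idea behind quantum error correction:
    # a 3-bit repetition code.  Real quantum codes must also protect phase and
    # cannot copy the qubit outright, so this is an analogy, not the real thing.

    def encode(bit):
        return [bit, bit, bit]

    def noisy_channel(codeword, flip_probability=0.1, rng=random.Random(42)):
        return [b ^ 1 if rng.random() < flip_probability else b for b in codeword]

    def decode(codeword):
        """Majority vote corrects any single bit flip."""
        return 1 if sum(codeword) >= 2 else 0

    if __name__ == "__main__":
        errors = 0
        for _ in range(10000):
            sent = random.getrandbits(1)
            received = decode(noisy_channel(encode(sent)))
            errors += (received != sent)
        print(f"residual error rate: {errors / 10000:.4f}")  # well below the raw 0.1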
At this point, only a few of the benefits of quantum computation and quantum computers
are readily obvious, but before more possibilities are uncovered theory must be put to the
test. In order to do this, devices capable of quantum computation must be constructed.
Quantum computing hardware is, however, still in its infancy. As a result of several

significant experiments, nuclear magnetic resonance (NMR) has become the most
popular component in quantum hardware architecture. Only within the past year, a group
from Los Alamos National Laboratory and MIT constructed the first experimental
demonstrations of a quantum computer using nuclear magnetic resonance (NMR)
technology. Currently, research is underway to discover methods for battling the
destructive effects of decoherence, to develop an optimal hardware architecture for
designing and building a quantum computer, and to further uncover quantum algorithms
to utilize the immense computing power available in these devices. Naturally this pursuit
is intimately related to quantum error correction codes and quantum algorithms, so a
number of groups are doing simultaneous research in a number of these fields. To date,
designs have involved ion traps, cavity quantum electrodynamics (QED), and NMR.
Though these devices have had mild success in performing interesting experiments, the
technologies each have serious limitations. Ion trap computers are limited in speed by the
vibration frequency of the modes in the trap. NMR devices have an exponential
attenuation of signal to noise as the number of qubits in a system increases. Cavity QED
is slightly more promising; however, it still has only been demonstrated with a few
qubits. Seth Lloyd of MIT is currently a prominent researcher in quantum hardware. The
future of quantum computer hardware architecture is likely to be very different from what
we know today; however, the current research has helped to provide insight as to what
obstacles the future will hold for these devices.

Future Outlook
At present, quantum computers and quantum information technology remain in their
pioneering stage. At this very moment, obstacles are being surmounted that will provide
the knowledge needed to thrust quantum computers up to their rightful position as the
fastest computational machines in existence. Error correction has made promising
progress to date, nearing a point now where we may have the tools required to build a
computer robust enough to adequately withstand the effects of decoherence. Quantum
hardware, on the other hand, remains an emerging field, but the work done thus far
suggests that it will only be a matter of time before we have devices large enough to test
Shor's and other quantum algorithms. Quantum computers will then emerge as superior
computational devices at the very least, and may one day make today's
modern computer obsolete. Quantum computation has its origins in highly specialized
fields of theoretical physics, but its future undoubtedly lies in the profound effect it will
have on the lives of all mankind.

BIBLIOGRAPHY:
www.ebi.ac.uk


References:
1. D. Deutsch, Proc. Roy. Soc. London, Ser. A 400, 97 (1985).
2. R. P. Feynman, Int. J. Theor. Phys. 21, 467 (1982).
3. J. Preskill, "Battling Decoherence: The Fault-Tolerant Quantum Computer," Physics Today, June 1999.
4. P. W. Shor, "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, IEEE Computer Society Press (1994).
5. M. Nielsen, "Quantum Computing," unpublished notes (1999).
6. QUIC on-line, "Decoherence and Error Correction" (1997).
7. D. G. Cory et al., Physical Review Letters, 7 September 1998.
8. J. Preskill, "Quantum Computing: Pro and Con," quant-ph/9705032 v3, 26 August 1997.
9. I. L. Chuang, R. Laflamme and Y. Yamamoto, "Decoherence and a Simple Quantum Computer" (1995).
10. D. Deutsch and A. Ekert, "Quantum Computation," Physics World, March 1998.

