
Chapter-1 ABSTRACT

A new quantum information technology (QIT) could emerge in the future, based on current research in the fields of quantum information processing and communication (QIPC).1-3 In contrast to conventional IT, where quantum mechanics plays a supporting role in improving the building blocks, fundamental quantum phenomena play a central role in QIPC: information is stored, processed and communicated according to the laws of quantum physics. This additional freedom could enable future QIT to perform tasks we will never achieve with ordinary IT. This article provides an introduction to QIPC, some indication of the state of play today, and some comments on the future.

Quantum information technology promises powerful information processing based on the control of the dynamics of individual particles. Research on single-electron dynamics, aiming at the coherent electrical manipulation of single-electron charge and spin in semiconductor quantum dots, is expected to provide practical quantum information technology. A single-electron charge quantum bit (qubit), which has recently been realized in a double quantum dot, is advantageous for flexible control of the quantum state by a high-speed electrical signal, while a single-electron spin qubit is expected to have a sufficiently long decoherence time. This paper reviews our research on single-electron dynamics for quantum information processing.

Moore's Law has set great expectations that the performance/price ratio of commercially available semiconductor devices will continue to improve exponentially at least until the end of the next decade. Although the physics of nanoscale silicon transistors alone would allow these expectations to be met, the physics of the metal wires that connect these transistors will soon place stringent limits on the performance of integrated circuits. We will describe a Si-compatible global interconnect architecture, based on chip-scale optical wavelength division multiplexing and on similar fabrication techniques and technologies, that could precipitate an "optical Moore's Law" and allow exponential performance gains until the transistors themselves become the bottleneck.

CHAPTER-2 INTRODUCTION

Today many people are familiar with at least the consequences of Moore's Law: the fastest computer in the shops doubles in speed about every 18 months to two years. This is because electronic component devices are shrinking. The smaller they get, the faster they work, and the closer they can be packed on a silicon chip. This exponential progress, first noted4 by Gordon Moore, a co-founder and former CEO of Intel, in 1965, has continued ever since. But it cannot go on forever. Hurdles exist. For example: silicon will hit problems with oxide thinness, track width, or whatever;5 new materials or even new paradigms, such as self-assembled nanodevices or molecular electronics, will be needed; and lots of dollars will be needed, as Moore's second law tells us that fabrication costs are also growing exponentially. However, even if all the hurdles can be overcome, we will eventually run into Nature. Very small things do not behave the same way as big ones: they begin to reveal their true quantum nature. Following Moore's Law, an extrapolation of the exponentially decaying number of electrons per elementary device on a chip gets to one electron per device around 2020. This is clearly too naive, but it gives us a hint. Eventually we will get to scales where quantum phenomena rule, whether we like it or not. If we are unable to control these effects, then data bits in memory or processors will suffer errors from quantum fluctuations, and devices will fail. Clearly this alone makes a strong case for investment in research into quantum devices and quantum control. The results should enable us to push Moore's Law to the limit, evolving conventional information technology (IT) as far as it can go. However, such quantum research has already shown that the potential exists to do much more: revolution! Instead of playing the support act to make better conventional devices, let quantum mechanics take centre stage in new technology that stores, processes and communicates information according to the laws of quantum mechanics.
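The one-electron-per-device extrapolation above can be reproduced with a toy calculation. The starting values below (roughly 10,000 electrons per device around the year 2000, halving every 18 months) are illustrative assumptions of ours, not industry data:

```python
import math

def year_of_one_electron(start_year=2000, start_electrons=10_000, halving_years=1.5):
    # Solve start_electrons * 2**(-(y - start_year) / halving_years) = 1 for y.
    # The defaults are hypothetical round numbers chosen for illustration.
    return start_year + halving_years * math.log2(start_electrons)

print(round(year_of_one_electron()))  # lands around 2020 with these assumptions
```

Changing the assumed starting count or halving time shifts the crossover year, which is why the text calls the extrapolation naive; the point is only that the one-electron scale is decades, not centuries, away.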

CHAPTER-3 LITERATURE REVIEW


"Tomorrow's computer might well resemble a jug of water." This, for sure, is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement. Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new quantum technology must replace or supplement what we have now.

GENERAL CONCEPT OF INFORMATION

In quantum information technology, a quantum state is a carrier of information. It is transformed from an initial input state to a final output state by applying an external field, e.g., an electromagnetic field, for a certain period. The fundamental transformation of the quantum state is called a quantum logic gate, and it should be a unitary transformation to keep coherency. The quantum state is changed by applying a series of quantum logic gates from the start of quantum computation, when classical digital information is input to the quantum state, until the end of the computation, when the output states are finally measured as classical information. One may not measure the intermediate state, because that would cause the quantum state to collapse. Quantum computing requires that a high degree of quantum coherence be maintained for a long enough time to complete the computation.

One can design an algorithm for quantum computation in such a way that a series of data processing steps is performed simultaneously in a parallel fashion (quantum parallelism). Quantum computation is therefore expected to provide extremely efficient calculations for specific problems that cannot be solved efficiently with conventional classical computers. To construct a quantum computer, a bunch of single quantum states must be (i) prepared physically, (ii) initialized, (iii) manipulated coherently, (iv) preserved for a long enough time, and (v) measured individually.

Each of these requirements needs further development. Below, we briefly summarize our strategies for realizing quantum computers using single-electron dynamics.
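The unitarity requirement on quantum logic gates mentioned above can be checked numerically. A minimal sketch with plain 2x2 matrices, using the Hadamard gate as the example (the helper names are ours, not from any library):

```python
import math

# The Hadamard gate, a standard single-qubit quantum logic gate.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def dagger(U):
    # Conjugate transpose of a 2x2 matrix.
    return [[complex(U[j][i]).conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_unitary(U, tol=1e-12):
    # A gate preserves coherence only if U-dagger times U is the identity.
    P = matmul(dagger(U), U)
    I = [[1, 0], [0, 1]]
    return all(abs(P[i][j] - I[i][j]) < tol for i in range(2) for j in range(2))

print(is_unitary(H))                 # True
print(is_unitary([[1, 1], [0, 1]]))  # False: not a valid quantum gate
```

Unitarity also guarantees reversibility: applying the inverse gate (here H itself) undoes the transformation, which is why intermediate states need never be measured.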

The building blocks: quantum bits


Most information manipulation these days is done digitally, so data is processed, stored and communicated as bits. The two states of a conventional data bit (here written in suggestive quantum notation as |0> and |1>) take many forms: two different voltages across a transistor on a chip, two different orientations of a magnetic domain on a disc or tape, two different voltages propagating down a wire, two different light pulses travelling down an optical fibre, and so on, depending upon what is being done with the data. At any time a bit is always in state |0> or state |1>, hence the name, although bits get flipped as data is processed or memory is rewritten. However, the quantum analogue of a conventional bit, a qubit, has rather more freedom. It can sit anywhere in a two-dimensional Hilbert space; picture it as the surface of a sphere, with a general state of the form

|ψ> = cos a |0> + e^{iφ} sin a |1>   (1)

parametrized by two angles. A conventional bit only has the choice of the poles, but a qubit can live anywhere on the surface of the sphere. States such as (1) are superposition states; they have amplitudes for, and thus carry information about, the states |0> and |1> at the same time. Similarly, a collection, or register, of N qubits can have exponentially many (2^N) amplitudes, whereas the analogous conventional data register can only hold one of these states at any given time. Clearly if it is possible to operate, or compute, simultaneously with all the amplitudes of a quantum register, there is the possibility of massively parallel computation based on quantum superpositions.1,2

We can read ordinary information without noticeably changing it: you can read a book without harming it, and your telephone calls can be tapped without you knowing. The same is simply not so for quantum information. If a qubit in state (1) is measured to determine its bit value, it will always give the answer 0 or 1. This is a truly random and irreversible process, with respective probabilities of cos^2 a and sin^2 a, and afterwards the qubit is left in the corresponding bit state |0> or |1> (if it isn't destroyed). It is thus impossible to read, or similarly copy or clone,6,7 unknown quantum information without generally leaving evidence of the intrusion. This unavoidable disturbance through quantum measurement can be used to detect eavesdropping on quantum communications,3 and provides the basis for guaranteed security.8,9

Many types of usable qubit exist, or in some cases reasonable approximations, where two orthogonal quantum states (used to represent |0> and |1>) are or can be separated from the rest of the space. Examples include: two adjacent energy eigenstates of atoms10 or ions11 (separated by a microwave or an optical transition); the vacuum or single-photon state of a mode in a small optical or superconducting microwave cavity;12 two orthogonal linear or circular polarizations of a travelling photon or weak light pulse;3 the which-path label of a photon3 or atom in an interferometer; the energy eigenstates (up or down) of a spin-1/2 in a magnetic field;13 two adjacent energy eigenstates of an electron or exciton in a quantum dot;14 two charge states of a tiny superconducting island15 or flux states of a superconducting ring;16,17 and so on. This list is not at all exhaustive, and many more candidate qubits have been proposed and are under investigation. As with realisations of conventional data bits, the most appropriate choice is defined by the application.

CONCEPT OF INFORMATION IN QUANTUM COMPUTERS - THE QUBIT

In quantum computers also, the basic unit of information is a bit. The concept of quantum computing first arose when the use of an atom as a bit was suggested.
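The measurement statistics of state (1) can be simulated classically: the sketch below draws 0 or 1 with the Born-rule probabilities cos^2 a and sin^2 a. The angle pi/3 and the fixed seed are arbitrary choices for the demonstration:

```python
import math, random

def measure(a, rng=random):
    """Measure the qubit cos(a)|0> + e^{i phi} sin(a)|1> in the 0/1 basis.
    Returns 0 with probability cos^2(a) and 1 with probability sin^2(a);
    afterwards the qubit would be left in that basis state (the phase phi
    does not affect these probabilities)."""
    p0 = math.cos(a) ** 2
    return 0 if rng.random() < p0 else 1

a = math.pi / 3          # illustrative angle: p0 = 0.25, p1 = 0.75
rng = random.Random(42)  # fixed seed so the run is reproducible
n = 100_000
ones = sum(measure(a, rng=rng) for _ in range(n))

# Observed frequencies approach the Born-rule probabilities.
print(abs(ones / n - math.sin(a) ** 2) < 0.01)  # True
```

Note that each individual outcome is irreducibly random; only the frequencies over many identically prepared qubits reveal the angle a, which is why a single unknown qubit cannot simply be read out.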
If we choose an atom as a physical bit, then quantum mechanics tells us that apart from the two distinct electronic states (the excited state and the ground state), the atom can also be prepared in what is known as a coherent superposition of the two states. This means that the atom can be both in state 0 and state 1 simultaneously. It is at this point that the concept of a quantum bit or qubit arises. This concept is the backbone of the idea of quantum computing. For the same reason, let's see in detail what coherent superposition actually is.

COHERENT SUPERPOSITION

In any quantum mechanical system, a particular state of the system is represented by a mathematical function called the wave function of that state. A wave function is a complex exponential which includes all possible phases of existence of that particular state. Considering any quantum mechanical system, let ψ1 and ψ2 be two wave functions that represent any two independent states of the system. Then quantum mechanics tells us that there exists a state of the same system that can be represented by the wave function c1ψ1 + c2ψ2. This state is called a superposition of the two states represented by ψ1 and ψ2. This would mean that the system would be in both the states ψ1 and ψ2 simultaneously. Not all superpositions of two quantum states of a system need be stable. If the superposition is to be stable, then there should be some sort of coherence between the two states that are being superposed. Such a superposition is called a coherent superposition.

There can be more than one coherent superposition for a particular pair of states of a quantum mechanical system. So in our talk, the term 'coherent superposition' will refer to the superposition that is the most stable one.

ADVANTAGE OF USING COHERENT-SUPERPOSITIONED MEMORY

The importance of coherent-superpositioned storage can be understood from the following example. Consider a register composed of three physical bits. Any classical register of that type can store at a given moment of time only one out of eight different numbers, i.e. the register can be in only one out of eight possible configurations such as 000, 001, 010, ..., 111. Now consider a quantum register in its place. Since a qubit can store both the values 0 and 1 simultaneously, a quantum register composed of three qubits can store at a given moment of time all eight numbers in a quantum superposition. The key point is that the memory capacity grows exponentially when additional qubits are added, compared to the linear growth in classical computers. Also, once the register is prepared in such a superposition, operations can be performed on all the numbers simultaneously.
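The eight-number superposition above can be pictured as a vector of 2^3 amplitudes, one per basis configuration; a minimal sketch:

```python
# An N-qubit register is described by 2**N complex amplitudes, whereas a
# classical N-bit register holds just one of those 2**N values at a time.
N = 3
dim = 2 ** N                     # 8 basis states: |000>, |001>, ..., |111>

# Equal superposition of all eight numbers: every amplitude is 1/sqrt(8).
amp = (1 / dim) ** 0.5
state = [amp] * dim

# The squared amplitudes are probabilities, so they must sum to 1.
print(sum(a * a for a in state))     # ~1.0 (up to floating-point rounding)
print(len(state), 2 ** 10, 2 ** 20)  # 8; and how fast 2**N grows with N
```

Already at N = 20 the classical description needs about a million amplitudes, which is the exponential growth the text refers to.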

ROLE OF COHERENT SUPERPOSITION IN COMPUTING OPERATIONS

We have seen that once a register is prepared in a superposition of different numbers, we can perform operations on all of them. For example, if the qubits are atoms then suitably tuned laser pulses affect the atomic electronic states and evolve initial superpositions of encoded numbers into different superpositions. During such evolution each number in the superposition is affected, and as a result we generate a massive parallel computation, albeit in one piece of quantum hardware. This means that a quantum computer can, in only one computational step, perform the same mathematical operation on 2^L different input numbers encoded in coherent superpositions of L qubits. In order to accomplish the same task, any classical computer has to repeat the same computation 2^L times, or one has to use 2^L different processors working in parallel. In other words, a quantum computer offers an enormous gain in the use of computational resources such as time and memory.

But this, after all, sounds like yet another purely technological advance. It looks as if classical computers can do the same computations as quantum computers but simply need more time or more memory. The catch is that classical computers need exponentially more time or memory to match the power of quantum computers, and this is really asking for too much, because an exponential increase is really fast and we run out of available time or memory very quickly.

SINGLE-ELECTRON DYNAMICS

A quantum dot is a small conductive island (less than 100 nm in diameter) that contains a tunable number of electrons occupying discrete orbitals. Figure 1 shows scanning electron micrographs of representative samples containing from one to four quantum dots (schematically illustrated by circles). Since the electron filling in a quantum dot resembles that in normal atoms, quantum dots are often referred to as artificial atoms. For example, as is true for some atoms, a quantum dot containing a few electrons exhibits magnetic properties depending on the electron filling. Actually, quantum dots with a few electrons show spin-polarized states even at zero magnetic field. The great advantage of artificial atoms (quantum dots) is that the element of the artificial atom (H, He, Li, and so on) can be changed just by changing external voltages.

Fig. 1. Scanning electron micrographs of quantum dot devices. (a) Double quantum dot, schematically shown by two circles, fabricated by dry etching and fine metal gate patterning. (b) Two sets of double quantum dots (prototype device). (c) A vertical single quantum dot, in which the source and drain electrodes are located above and below the dot.

NTT Basic Research Laboratories have been studying the dynamical response of a single electron in a quantum dot, which is called single-electron dynamics. For instance, a single electron can be injected into or extracted from a quantum dot by applying a high-speed electrical signal with time accuracy better than 1 ns. The injection and extraction can be detected as a change in the charge on the dot with time accuracy better than 1 μs. Moreover, an electron in a quantum dot can be excited by applying microwaves, which corresponds to optical excitation in atoms. This research promises to provide novel semiclassical electronic devices that can precisely control and measure an electronic current. The more challenging and effective application of this research, though, is in quantum information technology, in which single-electron charge and spin are controlled quantum mechanically.

Single-electron dynamics in quantum dots is expected to satisfy the requirements for quantum information technology and lead to useful quantum computers.

CHARGE QUANTUM BIT IN A DOUBLE QUANTUM DOT

Consider a double quantum dot, in which two quantum dots are separated by a tunneling barrier. The double quantum dot can be regarded as a diatomic molecule: the two dots are coupled electrostatically (corresponding to an ionic bond) and quantum mechanically (corresponding to a covalent bond). The coupling strengths can also be controlled by external voltages. Figure 1(a) shows a typical double quantum dot device fabricated in an AlGaAs/GaAs modulation-doped heterostructure. The upper and lower dark regions are etched, and the adjacent region is depleted of conductive electrons. The bright vertical lines are metal gate electrodes used to deplete the underlying region. The resultant conductive islands (double quantum dot) are schematically shown by circles. The double quantum dot is attached to the source and drain electrodes and can be investigated by measuring the tunneling current.

Now, consider the situation where an excess electron is injected into the double quantum dot (see Fig. 2(a)). In the classical picture, the electron occupies either the left or the right dot at any given time. This is analogous to the logical 0 and 1 of a classical bit. In the quantum mechanical picture, however, an electron behaves as if it occupies the left and right dots simultaneously (a superposition of 0 and 1), and the probability of finding the electron in one of the dots is determined quantum mechanically. In quantum information technology, this probability is the quantum information itself, and should be manipulated or preserved during the calculation. We have succeeded in controlling the quantum state of a double quantum dot by applying a high-speed electrical pulse (from 100 ps to 2 ns) to one of the electrodes [4].
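This pulsed control produces a coherent (Rabi-type) oscillation of the electron between the two dots. A sketch of the ideal zero-detuning case follows; the 0.5 ns oscillation period is an assumed illustrative value, not the measured one:

```python
import math

def p_right(t_ns, rabi_period_ns=0.5):
    """Probability of finding the electron in the right dot after a pulse of
    length t_ns, for an electron that starts in the left dot. Ideal two-level
    Rabi formula at zero detuning; the 0.5 ns period is an illustrative
    assumption, not a measured device parameter."""
    return math.sin(math.pi * t_ns / rabi_period_ns) ** 2

print(p_right(0.0))   # 0.0: the electron is still in the left dot
print(p_right(0.25))  # 1.0: a half-period pulse acts as a NOT gate
print(p_right(0.5))   # ~0 : after a full period the electron is back
```

The half-period pulse is exactly the NOT-gate waveform described below: it takes the occupation probability from 0% to 100% (or back).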
The electron initially injected into the left dot moves back and forth between the two dots during the pulse, which corresponds to a sinusoidal oscillation of the probability between 0% and 100%. The probability can be controlled by tailoring the pulse waveform, and it is obtained from the tunneling current under the repetition of many pulses. A specific pulse waveform that changes the probability from 0% to 100%, or from 100% to 0%, can be used as a NOT gate, which is the most fundamental quantum logic gate.

Fig. 2. (a) Quantum state of a single electron in a double quantum dot. When an excess electron is injected, it can occupy the left dot (0), the right dot (1), or both dots simultaneously (0+1). (b) The probability of finding the electron in the right dot.

The next experiment would be two-qubit operation on two sets of double quantum dots (see Fig. 1(b) and Fig. 3(a)). The two double dots are coupled electrostatically to correlate the electron occupation in one double quantum dot with that in the other. For instance, the NOT-gate operation at one double quantum dot can be performed only when the electron is located in the left dot of the other double quantum dot. This conditional NOT operation (controlled-NOT gate) is another fundamental gate operation in quantum information technology. If it is performed coherently, the controlled-NOT gate should work for any superposition state as well. One can design any quantum algorithm by combining a few types of fundamental quantum logic gates (a universal set of logic gates). Another important requirement for quantum information technology is the single-shot measurement, in which the electron occupation in the double quantum dot is determined by a single measurement. In this case, the measurement outcome is either 0 or 1, whose probability can be controlled by quantum logic gates. The electron occupation in a double quantum dot can be measured with a single-electron transistor (SET), which can be fabricated near the double quantum dot by the same technique. We are developing a high-speed version of the SET, operated with a radio-frequency signal (RF-SET), and expect it to work as a single-shot measurement device [5]. So far, the basic technologies (NOT gate, controlled-NOT gate, and single-shot measurement) for a charge qubit have been summarized (Fig. 3(a)). However, a serious problem for the success of quantum information technology is decoherence, the loss of the quantum information: the whole computation must be finished within the decoherence time. For a charge qubit in a double quantum dot, the decoherence time is not very long, about 1 ns at present. Solving this problem requires further investigation and refinement.
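The conditional behaviour of the controlled-NOT can be written out directly on the four two-qubit amplitudes; a minimal sketch (the basis ordering |00>, |01>, |10>, |11> is our convention, with the control qubit first):

```python
# Controlled-NOT on two qubits (control first, target second): the target
# bit flips only when the control bit is 1. Basis order: |00>, |01>, |10>, |11>.
def cnot(state):
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]  # exchange the amplitudes of |10> and |11>

print(cnot([0, 0, 1, 0]))          # [0, 0, 0, 1]: |10> -> |11>
print(cnot([1, 0, 0, 0]))          # [1, 0, 0, 0]: control 0, nothing happens
print(cnot([0.5, 0.5, 0.5, 0.5]))  # superposition inputs are handled branch by branch
```

Because the gate is linear, the same permutation of amplitudes covers every superposition state, which is what "performed coherently" demands.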

Fig. 3. Summary of quantum information technologies (one- and two-qubit operations and single-shot measurement) for (a) a charge qubit in a double quantum dot.


Fig. 4. Summary of quantum information technologies (one- and two-qubit operations and single-shot measurement) for (b) a spin qubit in a single quantum dot.

SPIN QUBIT IN A SINGLE QUANTUM DOT

The spin degree of freedom, which originates from the spinning of a charged particle, is an alternative way to construct a qubit. While the charge qubit is an artificial qubit, the electron spin is a natural one. The spin decoherence time of conductive electrons in bulk GaAs crystal can be longer than 100 ns, and an electron spin bound to a donor in silicon shows a decoherence time of about 300 μs. Electron-spin-based quantum computation is motivated by this long decoherence time. Coherent manipulation of electron spins has been studied in many systems. The easiest way is to use electron spin resonance, which involves applying a microwave magnetic field under a static magnetic field. However, in contrast to the countless studies on ensembles of spins in many materials, little work has been done on the manipulation of a single electron spin. A quantum dot containing a single electron spin provides flexible quantum information storage (see Fig. 1(c) and Fig. 4(b)). We have developed a novel electrical pump-and-probe experiment for a quantum dot, and investigated the inelastic spin relaxation time in a quantum dot containing a few electrons (Fig. 1(c)) [6]. We found that the energy relaxation time, which sets the time available to complete the quantum computation including the last measurement, can be longer than 200 μs for a realistic quantum dot structure. This result is very encouraging. The NOT-gate operation, which reverses the spin direction from up (0) to down (1) or vice versa, can be performed using microwave irradiation (electron spin resonance). To address each electron spin (qubit) in many quantum dots, single-spin manipulation and measurement techniques are essential. The effective g-factor of each electron spin can be made different for different quantum dots through g-factor engineering, or a moderate magnetic field gradient can be applied to the quantum dots, so that each qubit is addressed by a corresponding microwave frequency. However, a typical one-qubit operation using electron spin resonance requires a relatively long period of time, 100 ns for instance, because of the weak magnetic dipole interaction. Alternative approaches, i.e., using the optical Stark effect in a specific band structure or exchange coupling among three electron spins constituting one qubit, are suitable for much faster operations.
Two-qubit operation, which is required to construct a universal set of logic gates, is expected to be achieved by connecting two quantum dots with a tunable tunneling barrier. The exchange coupling between the two spins can be used to swap the two spins (SWAP gate). The controlled-NOT gate can be constructed by combining a NOT gate and a SWAP gate. Single-shot spin measurement is a challenging technique for quantum information technology. One proposal is based on spin-dependent tunneling between two quantum dots combined with an RF-SET. When each of the dots possesses one electron spin before the measurement, tunneling from one dot to the other is allowed only if the two electron spins can form a spin pair (spin singlet state). This spin-dependent tunneling could be measured with an RF-SET in a short time.
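The SWAP action produced by the exchange coupling is, on the four two-spin amplitudes, a simple permutation; a sketch using our own basis ordering |00>, |01>, |10>, |11>:

```python
# SWAP exchanges the states of two spin qubits. In the basis |00>, |01>,
# |10>, |11>, it permutes the middle two amplitudes.
def swap(state):
    a00, a01, a10, a11 = state
    return [a00, a10, a01, a11]  # exchange the |01> and |10> amplitudes

print(swap([0, 1, 0, 0]))  # [0, 0, 1, 0]: the spin pair |0>|1> becomes |1>|0>
```

Two applications of SWAP restore the original pair, and states with equal spins (|00>, |11>) are untouched, matching the picture of simply exchanging the two electrons' spin states.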

QUANTUM COMPUTING

The seeds of quantum computing were sown by Richard Feynman2 and others in the early 1980s, and it was David Deutsch28 who first considered in detail the implications of quantum physics for the theory of computation. In 1992 Deutsch and Richard Jozsa came up with an algorithm29 that showed a clear quantum advantage, and the subject really took off in the mid-1990s with the factoring algorithm of Peter Shor30 and the searching algorithm of Lov Grover,31 which give significant advantage over their classical counterparts. Much of the secure communication in the world today uses public-key cryptography, which relies on factoring a very large number into its two component primes (or related problems) being practically unbreakable. The construction of a many-thousand-qubit quantum computer would thus trash the world's communications infrastructure: certainly dramatic, whether you approve or not. Quantum computers could clearly also do a much better job of simulating quantum systems32 than conventional IT, and so would open up new research capabilities in many fields. The search is still on for more quantum algorithms; open problems exist because not everything is amenable to a naive quantum speed-up. The quantum computational advantage arises because (in principle exponentially) many calculations can run in parallel during the evolution stage. However, quantum measurements have to be made to get answers, so simple number crunching doesn't get an exponential advantage. Rather, it is problems that utilise the parallelism through interference that can gain. The factoring algorithm uses the exponential resources and a Fourier transform to find the (very large) periods of oscillatory functions, and the search algorithm offers a square-root reduction in time by effectively searching in amplitude rather than in probability.

Implementation research today has progressed to the few-qubit and simple-algorithm level. The first two-qubit gate was done with an ion trap,33 and work has now progressed to four-ion entanglement34 and realisation of the Deutsch-Jozsa algorithm.35 Atom-cavity interactions in the optical36,37 and microwave38 domains have achieved three-qubit entanglement.39 Use of nuclear spin qubits in a molecule, in an ensemble nuclear magnetic resonance approach,40,41 has demonstrated a number of simple algorithms,42-46 most recently the factoring of fifteen.47 Single superconducting qubits based on charge or phase have been constructed48-51 (see figure 1). Many other approaches to quantum computing hardware have been proposed.52,53 Examples include photons,54 charge14 or spin13 in quantum dots, dopant nuclear (or electronic) spins in the solid state,55,56 spins in fullerene cages,57 trapped electrons58 (see figure 2), quantum Hall systems,59 magnetic molecules or nano-crystals60 (see figure 3) and electrons on liquid helium.61 From the perspective of scalability in qubit number, solid-state approaches, which build on the wealth of existing fabrication techniques, have much appeal. It is certainly also the case that most qubit successes to date do not seem to be easily scalable. The flip side is that solid-state systems generally suffer more decoherence, so it will be a very big challenge to reduce this to the level required for error correction and fault-tolerant operation.

ALGORITHMS FOR QUANTUM COMPUTERS

In order to solve a particular problem, computers follow a precise set of instructions that can be mechanically applied to yield the solution to any given instance of the problem. A specification of this set of instructions is called an algorithm. Examples of algorithms are the procedures taught in elementary schools for adding and multiplying whole numbers; when these procedures are mechanically applied, they always yield the correct result for any pair of whole numbers. Some algorithms are fast (e.g. multiplication); others are very slow (e.g. factorization). Consider, for example, the following factorization problem: ? × ? = 29083. How long would it take you, using paper and pencil, to find the two whole numbers which should be written into the two boxes (the solution is unique)? Probably about one hour. At the same time, solving the reverse problem, 127 × 229 = ?, again using the paper-and-pencil technique, takes less than a minute. All because we know fast algorithms for multiplication but we do not know equally fast ones for factorization.
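The asymmetry in the worked example can be checked directly: multiplying 127 by 229 is a single operation, while recovering the factors of 29083 by trial division (the simplest method mentioned in the text) requires a search:

```python
# Trial division: the simplest classical factoring method. Multiplication is
# one step, but finding the factors means testing candidate divisors, and the
# number of candidates grows rapidly with the number of digits.
def trial_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d  # first (smallest) factor found
        d += 1
    return n, 1               # n is prime

print(127 * 229)             # 29083: the "easy" direction
print(trial_factor(29083))   # (127, 229): the "hard" direction, by search
```

For a small number like 29083 the search is instant, but the number of candidate divisors scales with the square root of n, i.e. exponentially in the number of digits, which is the scaling discussed next.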
What really counts for a "fast" or a "usable" algorithm, according to the standard definition, is not the actual time taken to multiply a particular pair of numbers but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The standard textbook method of multiplication requires little extra work when we switch from two 3-digit numbers to two 30-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial division method is about 10^13 times more time- or memory-consuming than factoring a three-digit number. The use of computational resources is enormous when we keep increasing the number of digits. The largest number that has been factorized as a mathematical challenge, i.e. a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians, had 129 digits. No one can even conceive of how one might factorize, say, thousand-digit numbers; the computation would take much more than the estimated age of the universe.

Apart from the standard definitions of a fast or a usable algorithm, computer scientists have a rigorous way of defining what makes an algorithm fast (and usable) or slow (and unusable). For an algorithm to be fast, the time it takes to execute the algorithm must increase no faster than a polynomial function of the size of the input. Informally, think of the input size as the total number of bits needed to specify the input to the problem; for example, the number of bits needed to encode the number we want to factorize. If the best algorithm we know for a particular problem has an execution time (viewed as a function of the size of the input) bounded by a polynomial, then we say that the problem belongs to class P. Problems outside class P are known as hard problems. Thus we say, for example, that multiplication is in P whereas factorization is not in P, and that is why it is a hard problem. Hard does not mean "impossible to solve" or "non-computable": factorization is perfectly computable using a classical computer; however, the physical resources needed to factor a large number are such that, for all practical purposes, it can be regarded as intractable. Purely technological progress can only increase the computational speed by a fixed multiplicative factor, which does not help to change the exponential dependence between the size of the input and the execution time. Such a change requires inventing new, better algorithms. Although quantum computation requires new quantum technology, its real power lies in new quantum algorithms which exploit quantum superposition, which can contain an exponential number of different terms.
Quantum computers can be programmed in a qualitatively new way. For example, a quantum program can incorporate instructions such as "... and now take a superposition of all numbers from the previous operations ..."; this instruction is meaningless for any classical data-processing device but makes perfect sense to a quantum computer. As a result we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorization, into easy ones!

DEMONSTRATING QUANTUM COMPUTING Owing to technical obstacles, a practical quantum computer has not yet been realized, but the concepts and ideas of quantum computing have been demonstrated using various methods. Here we discuss four of the most important technologies used to demonstrate quantum computing: i. Nuclear Magnetic Resonance, ii. Ion Trap, iii. Quantum Dot, iv. Optical Methods. While reading the following "top four technologies", two things should be kept in mind. The first is that the list will change over time. Some of the approaches valuable for exploring quantum computing in the laboratory are fundamentally un-scalable, and so will drop out of contention over the next few years. The second thing to keep in mind is that although there is a bewildering number of proposed methods for demonstrating quantum computing (a careful search will yield many more options than what is listed here), all of them are variations on three central themes: (a) manipulating the spin of a nucleus or subatomic particle, (b) manipulating electrical charge, (c) manipulating the polarization of a photon. In variation (a) a qubit is derived from a superposition of up and down spins. In variation (b) a qubit is derived from a superposition of two or more discrete locations of the charge. In the last variation, a qubit is derived from a superposition of polarization angles. Of the three, the manipulation of spin is generally viewed as the most promising for practical large-scale application. Let's now see each of these techniques in detail.

i. Nuclear Magnetic Resonance Using nuclear magnetic resonance (NMR) techniques, invented in the 1940s and widely used in chemistry and medicine today, nuclear spins can be manipulated, initialized and measured. Most NMR applications treat spins as little "bar magnets", whereas in reality the naturally well-isolated nuclei are non-classical objects. A Nuclear Magnetic Resonance (NMR) quantum computer is based on control of nuclear spin. In demonstrations to date this has been achieved by manipulating the nuclear spins of several atoms making up a molecule - the most recent effort using a molecule of five fluorine atoms and two carbon atoms. The spin manipulation is accomplished by applying magnetic pulses within the magnetic field produced by the NMR chamber, and the quantum behavior of the spins can be exploited to perform quantum computation. The entanglement of spins required to establish a qubit is created by the chemical bonds between neighboring atoms within a molecule - within about 10^18 molecules, to be more precise, since a measurable signal (relative to the background noise from kinetic energy) is achieved by using a test tube of "processing liquid" rather than a single molecule. For example, consider the carbon and hydrogen nuclei in a chloroform molecule as representing two qubits. Applying a radio-frequency pulse to the hydrogen nucleus addresses that qubit, and causes it to rotate from a |0> state to a superposition state. Interactions through chemical bonds allow multiple-qubit logic to be performed. In this manner, applying newly developed techniques that allow bulk samples with many molecules to be used, small-scale quantum algorithms have been experimentally demonstrated with molecules such as alanine, an amino acid.
This includes the quantum search algorithm, and a predecessor to the quantum factoring algorithm. The major drawback of this method is scalability; the signal strength of the answer decreases exponentially with the number of qubits.
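The radio-frequency pulse described above acts on the spin qubit as a rotation. The sketch below is illustrative state-vector arithmetic, not a physical NMR model: a pi/2 pulse, modelled here as a rotation about the x-axis, takes the |0> state to an equal superposition.

```python
# Sketch: an RF pulse in NMR acts as a rotation on a spin qubit.
# A "pi/2 pulse" takes |0> to an equal superposition of |0> and |1>.
import math

def rx(theta, state):
    """Rotate a qubit (amplitude pair) about the x-axis by angle theta."""
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    # R_x(theta) = [[cos(t/2), -i sin(t/2)], [-i sin(t/2), cos(t/2)]]
    return (c * a - 1j * s * b, -1j * s * a + c * b)

state = (1 + 0j, 0 + 0j)             # spin "up": the |0> state
state = rx(math.pi / 2, state)       # apply a pi/2 pulse
probs = [abs(x) ** 2 for x in state] # measurement probabilities for 0 and 1
print([round(p, 3) for p in probs])  # [0.5, 0.5]: an equal superposition
```

A second pulse of the same length (a full pi rotation in total) would complete the flip to |1>, which is how selective pulses implement single-qubit logic.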

ii. Ion Trap An Ion Trap quantum computer is also based on control of nuclear spin (although using vibrational modes or "phonons" has also been considered). In this approach the individual ions are, as the name implies, trapped or isolated by an electromagnetic field produced inside an electromagnetic chamber. Ordinarily, the energy difference between different spin states in nuclei is so small relative to the kinetic energy of the ions that it is not measurable. In the prior technique (NMR) this problem was overcome by operating simultaneously on a large number of atoms. Here the solution is a bit different: the trapped ions are cooled to the point where motion is essentially eliminated. They are then manipulated by laser pulses, and a qubit arises from the superposition of lower- and higher-energy spin states. This technique is potentially scalable, but a great disadvantage is that it requires a cryogenic environment - not to mention that to date no more than single-qubit systems have been demonstrated. iii. Quantum Dot A quantum dot is a particle of matter so small that the addition or removal of an electron changes its properties in some useful way. All atoms are, of course, quantum dots, but multi-molecular combinations can have this characteristic. In biochemistry, quantum dots are called redox groups. Quantum dots typically have dimensions measured in nanometers, where one nanometer is 10^-9 meter, or a millionth of a millimeter. A Quantum Dot quantum computer can involve manipulation of electrical charge, spin, or energy state - the Australians have a patent on a spin-based version. The idea is that a small number of electrons, or possibly an individual electron, is confined within a quantum dot, the quantum dot typically being a small "hill" of molecules grown on a silicon substrate. A computer would be made up of a regular array of such dots.
As with the prior two methods, the most popular approach is to have spin up counted as zero, spin down counted as one, and use a superposition of spin states to create the qubit. Techniques for self assembly of large arrays of quantum dots have already been demonstrated and can be done using the industry standard
silicon substrate. Thus, of all the approaches listed here, this one seems to have the highest potential for commercial scalability. iv. Optical As the name indicates, an optical quantum computer uses two different polarizations of a light beam to represent two logical states. As an example, we can take the polarization of a light beam in the vertical plane to represent a logical 1 and the polarization of the beam in the horizontal plane to represent a logical 0. An optical quantum computer would be based on manipulating the polarization of individual photons. Entanglement is achieved by coincident creation of identical photons; identical photons in this context means photons having the same energy as well as the same polarization. The superposition of polarization or phase states is manipulated using polarizing lenses, phase shifters, and beam splitters. It was originally believed that non-linear optics would be required in order to create a working quantum computer, and this was considered to be the major technical obstacle to a photon-based approach. However, recent theoretical advances indicate that linear optics is sufficient. Several laboratories are working on a practical demonstration. The new stumbling block has been creating beam splitters of sufficient accuracy. The entire process can be carried out at room temperature, and there is no reason, in principle, that it is not scalable to large numbers of qubits. ADVANTAGES OF QUANTUM COMPUTING Quantum computing exploits the principle of coherent superposition for storage. As stated in the above example, it is quite remarkable that all eight numbers are physically present in the register, but it should be no more surprising than a qubit being both in state 0 and 1 at the same time. If we keep adding qubits to the register we increase its storage capacity exponentially, i.e.
three qubits can store 8 different numbers at once, four qubits can store 16 different numbers at once, and so on; in general L qubits can store 2^L numbers at once. Once the register is prepared in a superposition of different numbers, we can perform operations on all of them. For example, if the qubits are atoms then suitably tuned laser pulses affect the atomic electronic states and evolve initial superpositions of encoded numbers into different superpositions. During such evolution each number in the superposition is affected, and as a result we generate a massively parallel
computation, albeit in one piece of quantum hardware. This means that a quantum computer can, in only one computational step, perform the same mathematical operation on 2^L different input numbers encoded in coherent superpositions of L qubits. In order to accomplish the same task, any classical computer has to repeat the same computation 2^L times, or one has to use 2^L different processors working in parallel. In other words, a quantum computer offers an enormous gain in the use of computational resources such as time and memory. It looks like classical computers can do the same computations as quantum computers but simply need more time or more memory. The catch is that classical computers need exponentially more time or memory to match the power of quantum computers, and this is really asking for too much, because an exponential increase is really fast and we run out of available time or memory very quickly. WHAT WILL QUANTUM COMPUTERS BE GOOD AT These are the most important applications currently known: Cryptography: perfectly secure communication. Searching, especially unstructured searching (Grover's algorithm). Factorizing large numbers very rapidly (Shor's algorithm). Simulating quantum-mechanical systems efficiently. OBSTACLES AND RESEARCH The field of quantum information processing has made numerous promising advancements since its conception, including the building of two- and three-qubit quantum computers capable of some simple arithmetic and data sorting. However, a few potentially large obstacles still remain that prevent us from "just building one", or more precisely, building a quantum computer that can rival today's modern digital computer. Among these difficulties, error correction, decoherence, and hardware architecture are probably the most formidable. i. Decoherence We have seen that if a superposition of any two states of a quantum-mechanical system is to be stable over a period of time, there should be some sort of coherence between the states that are
being superposed. But still, no superposition of any pair of given states is perfectly stable. This is because of a phenomenon called decoherence, which forces a quantum-mechanically superposed system to decay from a given coherent superposition state into an incoherent state as it entangles, or interacts, with the surroundings. The final effect is that the coherence between the two superposed states is gradually lost. For the same reason, no quantum memory can at present be used to hold data for operations that take a long time. The time taken by the system to lose the coherence between the two states is known as the decoherence time. ii. Error Correction Error correction is rather self-explanatory, but what errors need correction? The answer is primarily those errors that arise as a direct result of decoherence, the tendency of a quantum computer to decay from a given quantum state into an incoherent state as it interacts, or entangles, with the state of the environment. These interactions between the environment and qubits are unavoidable, and induce the breakdown of information stored in the quantum computer, and thus errors in computation. Before any quantum computer will be capable of solving hard problems, research must devise a way to keep decoherence and other potential sources of error at an acceptable level. Thanks to the theory (and now reality) of quantum error correction, first proposed in 1995 and continually developed since, small-scale quantum computers have been built and the prospects of large quantum computers are looking up. Probably the most important idea in this field is the application of error correction in phase coherence as a means to extract information and reduce error in a quantum system without actually measuring that system.
In 1998, researchers at Los Alamos National Laboratory and MIT managed to spread a single bit of quantum information (a qubit) across three nuclear spins in each molecule of a liquid solution of alanine or trichloroethylene molecules, using the techniques of nuclear magnetic resonance (NMR). This experiment is significant because spreading out the information actually made it harder to corrupt. Quantum mechanics tells us that directly measuring the state of a qubit invariably destroys the superposition of states in which it exists, forcing it to become either a 0 or a 1.

The technique of spreading out the information allows researchers to utilize the property of entanglement to study the interactions between states as an indirect method for analyzing the quantum information. Rather than making a direct measurement, the group compared the spins to see whether any new differences arose between them, without learning the information itself. This technique gave them the ability to detect and fix errors in a qubit's phase coherence, and thus maintain a higher level of coherence in the quantum system. This milestone has provided an argument against skeptics, and hope for believers. At this point, only a few of the benefits of quantum computation and quantum computers are readily obvious, but before more possibilities are uncovered, theory must be put to the test. In order to do this, devices capable of quantum computation must be constructed. Quantum computing hardware is, however, still in its infancy. As a result of several significant experiments, nuclear magnetic resonance (NMR) has become the most popular component in quantum hardware architecture. Only within the past year, a group from Los Alamos National Laboratory and MIT carried out the first experimental demonstrations of a quantum computer using NMR technology. Currently, research is underway to discover methods for battling the destructive effects of decoherence, to develop optimal hardware architectures for designing and building a quantum computer, and to further uncover quantum algorithms that utilize the immense computing power available in these devices. Naturally this pursuit is intimately related to quantum error-correction codes and quantum algorithms, so a number of groups are doing simultaneous research across several of these fields. To date, designs have involved ion traps, cavity quantum electrodynamics (QED), and NMR. Though these devices have had mild success in performing interesting experiments, each of the technologies has serious limitations.
Ion trap computers are limited in speed by the vibration frequency of the modes in the trap. NMR devices have an exponential attenuation of signal to noise as the number of qubits in a system increases. Cavity QED is slightly more promising; however, it still has only been demonstrated with a few qubits. The future of quantum computer hardware architecture is likely to be very different from what we know today; however, the current research has helped to provide insight as to what obstacles the future will hold for these devices.

iii. Lack of a Reliable Reading Mechanism The techniques that exist to date have a big problem: trying to read from a superposed qubit invariably makes it lose its superposition and behave just as a classical bit - i.e. it stores only one of the values 0 and 1. Also, if we are given a quantum register comprising n bits of which m bits are in superposition, none of the reading mechanisms available today is able to determine which value from the superposition is to be read out; i.e. if we are given a 3-bit register that contains, say, 4 values in a particular superposition (let them be 4, 5, 6, 7), the reading mechanisms available today are unable to access a specific value from the superposition. QUANTUM CRYPTOGRAPHY Around 1970 Stephen Wiesner [62] realised that quantum mechanics could be useful for cryptography, and in 1984 Charles Bennett and Gilles Brassard proposed the well-known BB84 scheme [63] for quantum key distribution. Many developments and new protocols have followed [64]. The basic idea is for Alice and Bob to share a secret key and to use this as a one-time pad to communicate securely; quantum mechanics guarantees the security of the key. In BB84 Alice sends to Bob photons chosen randomly from the four states of two overlapping qubit bases (e.g. two orthogonal linear polarisations and right and left circular polarisations) and Bob measures in one of the two bases, chosen at random. After accumulating data, using public communication [65] and sacrificing some of the bits, they can then identify what to keep (the raw key, where Bob used the correct basis), locate and correct errors, and scramble and reduce their correct bits (privacy amplification) to distil a shared secret key. Like Bob, any eavesdropper (Eve) who measures [66] the qubits has to play "guess the basis" and so cannot avoid introducing errors into the raw key. If Eve reads the lot, Alice and Bob know this and bin the raw key; if Eve reads only a fraction, they can use the rest to distil some guaranteed secure bits. The first prototype system [67] ran in 1989. Since then, many developments have taken quantum cryptography out of the laboratory and towards actual technology, using qubits embodied in weak laser pulses or photons, sent from Alice to Bob through standard telecommunications optical fibres or even free space. Fibre examples work over useful distances [68-72], can operate alongside conventional communications through multiplexing [73], can use multiple Bobs [74], have
used entangled photons [75-79] (see figure 4) and have shared a secret distributed between Bob and Charlie [80]. The working distance is now up to 67 km with a plug & play system [81] (see figure 5). Free-space systems have also been developed [82-87], with the aim of secure communications to and via satellites. The distance is currently up to 23.4 km at altitude in the Alps [88], which makes quantum communication to near-Earth-orbit satellites look feasible. Research continues to improve sources and detectors, which would enhance all forms of quantum cryptosystem; for example, systems are now operating with on-demand single photons [89]. On the theory and protocols side, research continues to see just what can and can't be done securely by quantum means. Clearly key distribution can; for example, it is known that bit commitment can't [90,91], and open problems in between remain. QUANTUM TELEPORTATION The theory for quantum teleportation was laid out [92] in 1993 by Charles Bennett, Gilles Brassard, Claude Crepeau, Richard Jozsa, Asher Peres and Bill Wootters. The basic idea is that if Alice and Bob share a pair of entangled qubits (as in (2)), they can use this as a resource to offer a teleportation service. Alice takes an unknown qubit from a customer, performs a two-qubit gate on this and qubit A, and then measures both. She transmits the results (two bits) to Bob by a conventional communication channel. The results uniquely identify one of four single-qubit operations to Bob, one of which is "do nothing"! Once he has performed the identified operation on qubit B, it is left in the state of the qubit supplied by the customer! There is no instantaneous signalling, as the two bits have to be sent to Bob, so relativity is happy; and no quantum copy has been made, as all record of the state is destroyed at Alice's end. Amusingly, and of course very much in principle, Alice doesn't have to know where Bob is, provided that she broadcasts her bits!

Also, if the customer supplies half of an entangled pair, the outcome is entanglement between two qubits that have never met! From 1997 a number of experiments demonstrating the principles of teleportation have been performed [93-98]. The details differ, but their basis is to distribute entanglement using photon qubits (or light pulses), and to use this for the teleportation of a quantum state from A to B. Currently it is not possible to teleport the unknown state of a customer qubit (for example, another photon) with complete success, because of the difficulty in realising the required two-qubit gate at A on demand. Research continues towards this goal. There is certainly an incentive, because teleportation underpins the concept of a quantum repeater [99], which could be used to extend the working distance of quantum cryptosystems. A recent step towards this has been the demonstration of teleportation through 2 km of optical fibre [100] (see figure 6). It isn't on demand, but it does show that teleportation is progressing beyond the confines of a single laboratory.
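The protocol described above can be checked with a small state-vector simulation. This is a sketch using NumPy; the qubit ordering and the example amplitudes 0.6 and 0.8 are arbitrary illustrative choices. For each of Alice's four possible measurement outcomes (mC, mA), Bob's correction Z^mC X^mA recovers the customer's state exactly.

```python
# State-vector sketch of teleportation (qubit order: customer C, Alice A, Bob B).
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
# CNOT acting on (control, target) in a 2-qubit space
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

psi = np.array([0.6, 0.8])                      # the customer's "unknown" qubit
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)  # entangled pair shared by A and B

state = np.kron(psi, bell)                  # 8 amplitudes, index = 4*C + 2*A + B
state = np.kron(CNOT, I2) @ state           # Alice: CNOT with C as control, A as target
state = np.kron(H, np.kron(I2, I2)) @ state # Alice: Hadamard on C

# Alice measures C and A, getting bits (mC, mA); Bob applies Z^mC X^mA.
for mC in (0, 1):
    for mA in (0, 1):
        bob = state[[4*mC + 2*mA, 4*mC + 2*mA + 1]]  # Bob's conditional amplitudes
        bob = bob / np.linalg.norm(bob)              # renormalize after measurement
        bob = np.linalg.matrix_power(Z, mC) @ np.linalg.matrix_power(X, mA) @ bob
        assert np.allclose(bob, psi)                 # the customer's state reappears
print("all four measurement outcomes teleport the state", psi)
```

Note that the simulation also makes the no-signalling point visible: until Bob receives the two classical bits and applies the correction, his conditional state differs from the customer's qubit.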

Prospects for QIT


Quantum cryptography works, around Ipswich (UK), under Lake Geneva and between mountains in the Alps (Europe), in the Los Alamos desert (USA), and in numerous other places worldwide. You can buy a working fibre system [101], and secure satellite communications may well emerge over the next few years. Few-qubit demonstration quantum computers exist. However, useful many-qubit machines are still a long way off and we don't yet know what form they might take. If QIT develops, it is unlikely to displace conventional IT and more likely to work with it, addressing specific tasks. In the future you are more likely to buy a PC with some quantum chips in it than to chuck your existing machine in the bin (or recycler) in favour of a wholly quantum one! Teleportation of a single qubit works down about 2 km of optical fibre. However, I very much doubt whether any of us will ever walk into a teleport and utter the immortal words: "Beam me up, Scotty." That said, simpler teleportation could play a very important future role in distributing quantum information between processors, or effectively stringing out entanglement for long-distance quantum communications. Present-day IT companies measure their annual revenue in billions of dollars. If such mass-market-scale or consumer QIT is to emerge in the future, new quantum applications, software and protocols will be needed. Hardware development is certainly necessary, but certainly also not sufficient. The development of large-scale quantum processors will likely be very expensive, so this investment will need the promise of a market. This means the quantum algorithms, theory and protocols folk cannot now put their feet up and simply leave things to the hardware scientists and engineers. Much further research is needed in all aspects of the field if QIT is to become a reality.
QUANTUM ALGORITHMS The main quantum algorithms are: quantum-circuit-based algorithms (the Deutsch oracle, the Deutsch-Jozsa oracle, the Simon oracle, Shor's algorithm, Grover's algorithm), the adiabatic algorithm, measurement-based algorithms, and topological quantum field theory (TQFT) algorithms.
1.3 ENEMIES OF QUANTUM COMPUTING There are two known enemies of quantum computing: a) Decoherence If we keep on putting quantum gates together into circuits we will quickly run into some serious practical problems. The more interacting qubits are involved, the harder it tends to be to engineer the interactions that would display the quantum interference. Apart from the technical difficulties of working at single-atom and single-photon scales, one of the most important problems is that of preventing the surrounding environment from being affected by the interactions that generate the quantum superposition. The more components there are, the more likely it is that the quantum computation will spread outside the computational unit and irreversibly dissipate useful information to the environment. This process is called decoherence. Even though we try to isolate the quantum system from the environment as much as we can, we cannot achieve total isolation. The interaction of the quantum system with the environment therefore results in decoherence of the quantum state, which is equivalent to a partial measurement of the state by the environment. b) Gate Inaccuracies Decoherence is not the only problem with quantum computing. Gates, whether they are classical or quantum, are not perfect. Gates are usually combined together, so small errors in individual gates can accumulate during a computation and eventually cause failure, and it is not obvious how to correct these small errors. The simplest example of an error-correcting code is a repetition code: replace the bit we want to protect by three copies of the bit, 0 -> 000, 1 -> 111. Now an error may occur that causes one of the three bits to flip; if it is the first bit, say, 000 -> 100 and 111 -> 011. In spite of the error, the bit can still be decoded correctly, by majority voting.
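The repetition code just described is classical and takes only a few lines to implement. This is a minimal sketch; the quantum version cannot simply copy a qubit (by the no-cloning theorem) and instead spreads the information across entangled qubits, but the majority-vote idea is the same.

```python
# The 3-bit repetition code from the text: encode by copying, decode by majority vote.

def encode(bit: int) -> list[int]:
    return [bit] * 3                       # 0 -> 000, 1 -> 111

def decode(codeword: list[int]) -> int:
    return 1 if sum(codeword) >= 2 else 0  # majority voting

def flip(codeword: list[int], i: int) -> list[int]:
    out = list(codeword)
    out[i] ^= 1                            # a single bit-flip error at position i
    return out

# Any single bit-flip error is corrected; two or more flips would defeat the code.
for bit in (0, 1):
    for i in range(3):
        assert decode(flip(encode(bit), i)) == bit
print("all single bit-flip errors corrected")
```

The code corrects any one flipped bit but fails if two of the three bits flip, which is why real error-correction schemes must keep the per-component error rate low before encoding helps at all.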

CHAPTER- 4 TITLE OF WORK

Quantum information science combines two of the great scientific and technological revolutions of the 20th century: quantum mechanics and information theory. According to the National Science and Technology Council's 2008 report A Federal Vision for Quantum Information Science, quantum information science will enable a range of exciting new possibilities, including greatly improved sensors with potential impact for mineral exploration and improved medical imaging, and a revolutionary new computational paradigm that will likely lead to the creation of computing devices capable of efficiently solving problems that cannot be solved on a classical computer. One of the fundamentally important research areas involved in quantum information science is quantum communications, which deals with the exchange of information encoded in quantum states of matter, or quantum bits (qubits), between both nearby and distant quantum systems. Our Quantum Communication project performs core research on the creation, transmission, processing and measurement of optical qubits (the quantum states of photons), with particular attention to applications in future information technologies.

Single photons at telecommunication wavelengths can be detected with higher efficiency with our frequency up-conversion detector.
In the past few years, we have undertaken an intensive study of quantum key distribution (QKD) systems for secure communications. Specifically, we demonstrated high-speed QKD systems that generate secure keys for encryption and decryption of information using a one-time-pad cipher, and extended them into a 3-node quantum communications network. We have demonstrated the strengths and observed the limitations of QKD systems and networks. One such limitation is the effective communication distance of a point-to-point QKD system, which is about 100 km. Quantum repeaters represent a promising solution to this distance limitation: they would enable quantum information exchange between two distant quantum systems, including quantum computers. Though quantum repeaters are conceptually feasible, there are tremendous challenges to their development. Our goal in this area is to identify the problems, find potential solutions and evaluate their capabilities and limitations for future quantum communication applications.
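QKD systems of the kind described here are based on protocols such as BB84. The following sketch simulates only the sifting step, with no eavesdropper, no channel noise, and no privacy amplification (all of which a real system must handle): Alice's and Bob's bases agree on roughly half the positions, and on those positions their bits match.

```python
# Sketch of BB84 sifting (no eavesdropper, no noise, no privacy amplification).
import random

random.seed(1)
n = 1000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# Same basis -> Bob recovers Alice's bit; different basis -> a 50/50 random outcome.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: over a public channel they compare bases (not bits!) and keep only
# the positions where the bases matched.
sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]

assert all(a == b for a, b in sifted)  # with no Eve, the sifted bits agree exactly
print(len(sifted), "of", n, "positions survive sifting")
```

An eavesdropper who measured in a guessed basis would disturb some of the transmitted states, so errors would appear in the sifted key; that error rate is exactly what Alice and Bob estimate by sacrificing a fraction of their bits.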

In summary, we perform research and development (R&D) in quantum communication and related measurement areas with an emphasis on applications in information technology. Our R&D is aimed at promoting US innovation and industrial competitiveness and enhancing the nation's security. This website shows the footprint of our R&D efforts in the past few years. For more information concerning this program, please contact project leader Dr. Xiao Tang (xiao.tang@nist.gov). Keywords: quantum communication, quantum measurement science, entangled photons, quantum teleportation and repeaters, free-space optics, quantum cryptography, photon sources/detectors. The History of Quantum Money

But can one actually exploit the No-Cloning Theorem to achieve classically impossible cryptographic tasks? This question was first asked by Wiesner [39], in a remarkable paper written around 1970 (but only published in 1983) that arguably founded quantum information science. In that paper, Wiesner proposed a scheme for quantum money that would be physically impossible to clone. In Wiesner's scheme, each banknote would consist of a classical serial
number s, together with a quantum state |ψs> consisting of n unentangled qubits, each one |0>, |1>, (|0> + |1>)/√2, or (|0> - |1>)/√2 with equal probability. The issuing bank would maintain a giant database, which stored a classical description of |ψs> for each serial number s. Whenever someone wanted to verify a banknote, he or she would take it back to the bank, whereupon the bank would use its knowledge of how |ψs> was prepared to measure each qubit in the appropriate basis, and check that it got the correct outcomes. On the other hand, it can be proved [31] that someone who did not know the appropriate bases could copy the banknote with success probability at most (3/4)^n. Though historically revolutionary, Wiesner's money scheme suffered from at least three drawbacks: (1) The Verifiability Problem: the only entity that can verify a banknote is the bank that printed it. (2) The Online Attack Problem: a counterfeiter able to submit banknotes for verification, and get them back afterward, can easily break Wiesner's scheme. (3) The Giant Database Problem: the bank needs to maintain a database with an entry for every banknote in circulation.
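The (3/4)^n figure can be illustrated with a simple measure-and-resend attack (a Monte Carlo sketch, not the optimal-attack proof of [31]): for each qubit the counterfeiter guesses the preparation basis; a right guess always passes the bank's verification, while a wrong guess still passes half the time, so each qubit passes with probability 1/2 · 1 + 1/2 · 1/2 = 3/4.

```python
# Sketch: a naive counterfeiter attacking Wiesner's money succeeds with
# probability (3/4)^n -- right basis guess always verifies, wrong basis
# guess verifies half the time.
import random

random.seed(7)

def forge_passes(n: int) -> bool:
    """Does a measure-and-resend forgery of an n-qubit note pass verification?"""
    for _ in range(n):
        if random.random() < 0.5:
            continue                  # guessed the right basis: qubit verifies
        if random.random() < 0.5:     # wrong basis: the resent copy fails half the time
            return False
    return True

trials, n = 20_000, 8
rate = sum(forge_passes(n) for _ in range(trials)) / trials
print(round(rate, 3), "vs (3/4)^n =", round(0.75 ** n, 3))
```

Even for a modest n = 8 the forgery passes only about one time in ten, and the success probability shrinks exponentially as the bank adds qubits to each note.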

In follow-up work in 1982, Bennett, Brassard, Breidbart, and Wiesner [14] (henceforth BBBW) at least showed how to eliminate the giant database problem: namely, by generating the state |ψs> from fk(s), where fk is a pseudorandom function with key k known only to the bank. Unlike Wiesner's original scheme, the BBBW scheme is no longer information-theoretically secure: a counterfeiter can recover k given exponential computation time. On the other hand, a counterfeiter cannot break

CHAPTER- 5 MATHEMATICAL ANALYSIS

Quantum information science is an interdisciplinary research endeavour that brings together computer scientists, mathematicians, physicists, chemists, and engineers to develop revolutionary information processing and communication technologies that are infeasible without exploiting the principles of quantum mechanics. The importance of quantum information was first widely recognized in 1982, when Feynman conjectured that a quantum computer could efficiently simulate quantum systems whereas a universal Turing machine (classical computer) could not. In the mid-1990s, Shor showed that a quantum computer could efficiently determine the factors of large numbers, whereas this problem is believed to be intractable on a classical computer. Even earlier, in 1984, Bennett and Brassard proposed an information-theoretically secure key distribution technique over public channels, as opposed to standard methods that are only computationally secure. Originally proposed in 1984, quantum cryptography has since become a commercial technology. Quantum information technology is thus disruptive both technically and at a fundamental level, both for physics and for computer science. Quantum information leads to a violation of the strong Church-Turing thesis and could enable information-theoretic security over public channels. Moreover, quantum computing and quantum cryptography respectively damage and ameliorate information security. Theoretical research in quantum information relies on sophisticated mathematical methods, and advances rely on concomitant developments of new mathematics; hence the need for strong mathematical research in quantum information.


Areas of interest for the PIMS CRG on MQI: This CRG is ideally suited to address and make significant progress in areas such as the three noted below.

Models of quantum computing: Quantum computing is technologically challenging, so different models for implementing quantum algorithms are of great interest. The first model treated unitary gates on single qubits (quantum binary digits) and on pairs of qubits; a few gates of these types can be used to construct any quantum circuit efficiently with bounded error. Subsequently, remarkably different models were devised that serve the same end, such as adiabatic quantum computing, topological quantum computing, quantum computing with oscillators (so-called continuous-variable quantum computing), and one-way quantum computing. Proving that these models are equivalent, especially under the fault-tolerant conditions of error correction, is generally difficult. Not only are models of quantum computing valuable in exploring different physical realizations, but these models are also conceptually quite different and have inspired new quantum algorithms by forcing new ways to think about circuits. In addition, for one-way quantum computation, beautiful and unexpected connections with graph theory are beginning to emerge, and they require further exploration.

Error correction: The theory of quantum error correction showed that quantum computing errors could be efficiently corrected despite Heisenberg limits on quantum measurements. Quantum error correction was then shown to follow easily from classical error correction theory, which made devising error correction protocols relatively straightforward. Quantum error correction was also shown to underlie the security of quantum key distribution, by proving that classical cryptographic privacy amplification of quantum keys protects against quantum eavesdroppers. Although quantum error correction is accepted as a strategy against errors and faults, its overhead is typically high, and new strategies are sought to reduce the resource cost. For example, quantum error correction can be assisted by providing consumable entanglement resources, thereby reducing the number of extra qubits and gates. Also, more
sophisticated methods, such as belief propagation, can be used to correct errors. This CRG will develop significantly better quantum error correction protocols.

Quantum algorithms: Quantum computers are of revolutionary importance because they would make efficiently solvable some computational problems that are regarded as intractable on a classical computer. So far the class of problems that are converted from intractable to tractable by a quantum computer is small, and this CRG will seek to expand it by developing new quantum algorithms. The graph isomorphism problem is of special interest to members of the CRG, and its development will link to topological models of quantum computing. In summary, members of the proposed CRG have expertise and active research in many areas of quantum information, including the three important topics listed above: models of quantum computing, error correction, and algorithms. The CRG will pursue these areas by bringing together their complementary expertise and holding workshops with the world's leading scientists in the field.

These early ideas about quantum money inspired the field of quantum cryptography [13]. But strangely, the subject of quantum money itself lay dormant for more than two decades,

even as interest in quantum computing exploded. However, the past few years have witnessed a quantum money renaissance. Some recent work has offered partial solutions to the

verifiability problem: for example, Mosca and Stebila [32] suggested that the bank use a blind quantum computing protocol to offload the verification of banknotes to local merchants, while Gavinsky [23] proposed a variant of Wiesner's scheme that requires only classical communication between the merchant and bank. However, most of the focus today is on a more ambitious goal: namely, creating what Aaronson [3] called public-key quantum money, or quantum money that anyone could authenticate, not just the bank that printed it. As with public-key cryptography in the 1970s, it is far from obvious a priori whether public-key quantum money is possible at all. Can a bank publish a description of a quantum circuit that lets people feasibly recognize a state |ψ⟩, but does not let them feasibly prepare or even copy |ψ⟩? Aaronson [3] gave the first formal treatment of
public-key quantum money, as well as related notions such as copy-protected quantum software. He proved that there exists a quantum oracle relative to which secure public-key quantum money is possible. Unfortunately, that result, though already involved, did not lead in any obvious way to an explicit (or real-world) quantum money scheme. He raised as an open problem whether secure public-key quantum money is possible relative to a classical oracle. In the same paper, Aaronson also proposed an explicit scheme, based on random stabilizer states, but could not offer any evidence for its security. And indeed, the scheme was broken about a year afterward by Lutomirski et al. [30], using an algorithm for finding planted cliques in random graphs due to Alon, Krivelevich, and Sudakov [7]. Recently, Farhi et al. [22] took a completely different approach to public-key quantum money. They proposed a quantum money scheme based on knot theory, where each banknote is a superposition over exponentially many oriented link diagrams. Within a given banknote, all the link diagrams L have the same Alexander polynomial p(L) (a certain knot invariant). This p(L), together with a digital signature of p(L), serves as the banknote's classical serial number. Besides the unusual mathematics employed, the work of Farhi et al. [22] (building on [30]) also developed an idea that will play a major role in our work. That idea is to construct public-key quantum money schemes by composing two simpler ingredients: first, objects that we call mini-schemes; and second, classical digital signature schemes. The main disadvantage of the knot-based scheme, which it shares with every previous scheme, is that no one can say much about its security, other than that it has not yet been broken and that various known counterfeiting strategies fail.
Indeed, even characterizing which quantum states Farhi et al.'s verification procedure accepts remains a difficult open problem, on which progress seems likely to require major advances in knot theory! In other words, there might be states that look completely different from legitimate banknotes, but are still accepted with high probability. In follow-up work, Lutomirski [29] proposed an abstract version of the knot scheme, which gets rid of the link diagrams and Alexander polynomials and simply uses a classical oracle to achieve the same purposes.


In talks beginning in 2002,¹ the author often raised the following question: Suppose a function f : [n] → [n] is a permutation, rather than far from a permutation. Is there a small (polylog(n)-qubit) quantum proof |φ_f⟩ of that fact, which can be verified using polylog(n) quantum queries to f?

In this paper, we will answer the above question in the negative. As a consequence, we will obtain an oracle A such that SZK^A ⊄ QMA^A. This implies, for example, that any QMA protocol for graph non-isomorphism would need to exploit something about the problem structure beyond its reducibility to the collision problem. Given that the relativized SZK versus QMA problem remained open for eight years, our solution is surprisingly simple. We first use the in-place amplification procedure of Marriott and Watrous [12] to eliminate the witness, and reduce the question to one about quantum algorithms with extremely small acceptance probabilities. We then use a relatively minor adaptation of the polynomial degree argument that was used to prove the original collision lower bound. Our proof actually yields an oracle A such that SZK^A ⊄ A0PP^A, where A0PP is a class defined by Vyalyi [15] that sits between QMA and PP. Despite the simplicity of our result, to our knowledge it constitutes the first nontrivial lower bound on QMA query complexity, where "nontrivial" means that it doesn't follow immediately from earlier results unrelated to QMA.² We hope it will serve as a starting point for stronger results in the same vein.

2 Preliminaries

We assume familiarity with quantum query complexity, as well as with complexity classes such as QMA (Quantum Merlin-Arthur), QCMA (Quantum Merlin-Arthur with classical witnesses), and SZK (Statistical Zero-Knowledge). See Buhrman and de Wolf [9] for a good introduction to quantum query complexity, and the Complexity Zoo³ for definitions of complexity classes. We now define the main problem we will study.

Problem 1 (Permutation Testing Problem, or PTP) Given black-box access to a function f : [n] → [n], we are promised that either (i) f is a permutation (i.e., is one-to-one), or (ii) f differs from every permutation on at least n/8 coordinates.


The problem is to accept if (i) holds and reject if (ii) holds.

¹See for example: Quantum Lower Bounds, www.scottaaronson.com/talks/lower.ppt; The Future (and Past) of Quantum Lower Bounds by Polynomials, www.scottaaronson.com/talks/future.ppt; The Polynomial Method in Quantum and Classical Computing.
²From the BBBV lower bound for quantum search [6], one immediately obtains an oracle A such that coNP^A ⊄ QMA^A: for if there exists a witness state |φ⟩ that causes a QMA verifier to accept the all-0 oracle string, then that same |φ⟩ must also cause the verifier to accept some string of Hamming weight 1. Also, since QMA ⊆ PP relative to all oracles, the result of Vereshchagin [14] that there exists an oracle A such that AM^A ⊄ PP^A implies an A such that AM^A ⊄ QMA^A as well.

In the above definition, the choice of n/8 is arbitrary; it could be replaced by cn for any constant 0 < c < 1. As mentioned earlier, Aaronson [1] defined the collision problem as that of deciding whether f is one-to-one or two-to-one, promised that one of these is the case. In this paper, we are able to prove a QMA lower bound for PTP, but not for the original collision problem. Fortunately, however, most of the desirable properties of the collision problem carry over to PTP. As an example, we now observe a simple SZK protocol for PTP.

Proposition 2 PTP has an (honest-verifier) Statistical Zero-Knowledge proof protocol, requiring O(log n) time and O(1) queries to f.

Proof. The protocol is the following: to check that f : [n] → [n] is one-to-one, the verifier picks an input x ∈ [n] uniformly at random, sends f(x) to the prover, and accepts if and only if the prover returns x. Since the verifier already knows x, it is clear that this protocol has the zero-knowledge property. If f is a permutation, then the prover can always compute f⁻¹(f(x)), so the protocol has perfect completeness. If f is n/8-far from a permutation, then with at least 1/8 probability, the verifier picks an x such that f(x) has no unique preimage, in which case the prover can find x with probability at most 1/2. So the protocol has constant soundness.
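The protocol in the proof above is easy to simulate classically. The sketch below is an illustration (the particular permutation and two-to-one function are chosen arbitrarily): completeness is perfect on a permutation, while on a two-to-one function even an honest best-effort prover can only guess the right preimage about half the time per round.

```python
import random

def szk_verify_round(f, n, prover):
    """One round of the honest-verifier SZK protocol for permutation testing.

    f maps {0,...,n-1} -> {0,...,n-1}; the prover maps a value y to a
    claimed preimage. The verifier accepts iff the prover returns its x.
    """
    x = random.randrange(n)
    return prover(f[x]) == x

def honest_prover(f):
    """A prover that returns a uniformly random preimage of y under f."""
    pre = {}
    for x, y in f.items():
        pre.setdefault(y, []).append(x)
    return lambda y: random.choice(pre[y])

n = 64
perm = {i: (5 * i + 3) % n for i in range(n)}     # a permutation (gcd(5, 64) = 1)
collapsed = {i: i // 2 for i in range(n)}         # two-to-one, far from a permutation

p_perm, p_bad = honest_prover(perm), honest_prover(collapsed)
accept_perm = sum(szk_verify_round(perm, n, p_perm) for _ in range(200))
accept_bad = sum(szk_verify_round(collapsed, n, p_bad) for _ in range(200))
print(accept_perm, accept_bad)   # 200 accepts vs roughly 100
```

On the permutation every preimage is unique, so the prover never fails; on the two-to-one function each value has two preimages, matching the 1/2 soundness bound in the proof.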


2.1 Upper Bounds

To build intuition, we now give a simple QMA upper bound for the collision problem. Indeed, this will actually be a QCMA upper bound, meaning that the witness is classical, and only the verification procedure is quantum.

Theorem 3 For all w ∈ [0, n], there exists a QCMA protocol for the collision problem (i.e., for verifying that f : [n] → [n] is one-to-one rather than two-to-one) that uses a w log n-bit classical witness and makes O(min{√(n/w), n^(1/3)}) quantum queries to f.

Proof. If w = O(n^(1/3)), then the verifier V can just ignore the witness and solve the problem in O(n^(1/3)) queries using the Brassard-Høyer-Tapp algorithm [8]. So assume w ≥ Cn^(1/3) for some suitable constant C. The witness will consist of claimed values f*(1), ..., f*(w) for f(1), ..., f(w) respectively. Given this witness, V runs the following procedure.

(Step 1) Choose a set of indices X ⊆ [w] with |X| = O(1) uniformly at random. Query f(x) for each x ∈ X, and reject if there is an x ∈ X such that f(x) ≠ f*(x).

(Step 2) Choose a set of indices Y ⊆ {w + 1, ..., n} with |Y| = n/w uniformly at random. Use Grover's algorithm to look for a y ∈ Y such that f(y) = f*(x) for some x ∈ [w]. If such a y is found, then reject; otherwise accept.

Clearly this procedure makes O(√(n/w)) quantum queries to f. For completeness, notice that if f is one-to-one, and the witness satisfies f*(x) = f(x) for all x ∈ [w], then V accepts with probability 1. For soundness, suppose that Step 1 accepts. Then with high probability, we have f*(x) = f(x) for at least (say) a 2/3 fraction of x ∈ [w]. However, as in the analysis of Brassard et al. [8], this means that, if f is two-to-one, then with high probability, a Grover search over n/w randomly chosen indices y ∈ {w + 1, ..., n} will succeed at finding a y such that f(y) = f(x) = f*(x) for some x ∈ [w]. So if Step 2 does not find such a y, then V has verified to within constant soundness that f is one-to-one.

For the Permutation Testing Problem, we do not know whether there is a QCMA protocol that satisfies both T = o(n^(1/3)) and w = o(n log n). However, notice that if w = Ω(n log n), then the witness can just give claimed values f*(1), ..., f*(n) for f(1), ..., f(n) respectively. In that case, the verifier simply needs to check that f* is indeed a permutation, and that f*(x) = f(x) for O(1) randomly chosen values x ∈ [n]. So if w = Ω(n log n), then the QMA, QCMA, and MA query complexities are all T = O(1).

3 Main Result
In this section, we prove a lower bound on the QMA query complexity of the Permutation Testing Problem. Given a QMA verifier V for PTP, the first step will be to amplify V's success probability. For this, we use the by-now standard procedure of Marriott and Watrous [12], which amplifies without increasing the size of the quantum witness.

Lemma 4 (In-Place Amplification Lemma [12]) Let V be a QMA verifier that uses a w-qubit quantum witness, makes T oracle queries, and has completeness and soundness errors at most 1/3. Then for all s ≥ 1, there exists an amplified verifier V_s that uses a w-qubit quantum witness, makes O(Ts) oracle queries, and has completeness and soundness errors at most 1/2^s.

Lemma 4 has a simple consequence that will be the starting point for our lower bound.

Lemma 5 (Guessing Lemma) Suppose a language L has a QMA protocol, which makes T queries and uses a w-qubit quantum witness. Then there is also a quantum algorithm for L (with no witness) that makes O(Tw) queries, accepts every x ∈ L with probability at least 0.75/2^w, and accepts every x ∉ L with probability at most 0.25/2^w.

Proof. Let V_s be the amplified verifier from Lemma 4. Set s := w + 2, and consider running V_s with the w-qubit maximally mixed state I_w in place of the QMA witness |φ_x⟩. Then given any yes-instance x ∈ L,

Pr[V_s(x, I_w) accepts] ≥ (1/2^w) Pr[V_s(x, |φ_x⟩) accepts] ≥ (1 − 1/2^s)/2^w ≥ 0.75/2^w,

while given any no-instance x ∉ L,

Pr[V_s(x, I_w) accepts] ≤ 1/2^s = 0.25/2^w.

Now let Q be a quantum algorithm for PTP, which makes T queries to f. Then just like in the collision lower bound proofs of Aaronson [1], Aaronson and Shi [4], and Kutin [11], the crucial fact we will need is the so-called Symmetrization Lemma: namely, Q's acceptance probability can be written as a polynomial, of degree at most 2T, in a small number of integer parameters characterizing f. In more detail, call an ordered pair of integers (m, a) valid if (i) 0 ≤ m ≤ n, (ii) 1 ≤ a ≤ n − m, and (iii) a divides n − m.
Then for any valid (m, a), let S_{m,a} be the set of all functions f : [n] → [n] that are one-to-one on m coordinates and a-to-one on the remaining n − m coordinates (with the two ranges not intersecting, so that |Im f| = m + (n − m)/a). The following version of the Symmetrization Lemma is a special case of the version proved by Kutin [11].

Lemma 6 (Symmetrization Lemma [1, 4, 11]) Let Q be a quantum algorithm that makes T queries to f : [n] → [n]. Then there exists a real polynomial p(m, a), of degree at most 2T, such that

p(m, a) = E_{f ∈ S_{m,a}}[Pr[Q^f accepts]]

for all valid (m, a).

Finally, we will need a standard result from approximation theory, due to Paturi [13].

Lemma 7 (Paturi [13]) Let q : R → R be a univariate polynomial such that 0 ≤ q(j) ≤ δ for all integers j ∈ [a, b], and suppose that |q(x) − q(⌈x⌉)| = Ω(δ) for some real x ∈ [a, b]. Then deg(q) = Ω(√((x − a + 1)(b − x + 1))).

Intuitively, Lemma 7 says that deg(q) = Ω(√(b − a)) if x is close to one of the endpoints of the range [a, b], and that deg(q) = Ω(b − a) if x is close to the middle of the range. We can now prove the QMA lower bound for PTP.

Theorem 8 (Main Result) Let V be a QMA verifier for the Permutation Testing Problem, which makes T quantum queries to the function f : [n] → [n], and which takes a w-qubit quantum witness |φ_f⟩ in support of f being a permutation. Then Tw = Ω(n^(1/3)).

Proof. Assume without loss of generality that n is divisible by 4. Let ε := 0.25/2^w. Then by Lemma 5, from the hypothesized QMA verifier V, we can obtain a quantum algorithm Q for PTP that makes O(Tw) queries to f, and that satisfies the following two properties:

(i) Pr[Q^f accepts] ≥ 3ε for all permutations f : [n] → [n].
(ii) Pr[Q^f accepts] ≤ ε for all f : [n] → [n] that are at least n/8-far from any permutation.

Now let p(m, a) be the real polynomial of degree O(Tw) from Lemma 6, such that p(m, a) = E_{f ∈ S_{m,a}}[Pr[Q^f accepts]] for all valid (m, a). Then p satisfies the following two properties:

(i) p(m, 1) ≥ 3ε for all m ∈ [n]. (For in this case, every f ∈ S_{m,1} is one-to-one on its entire domain.)
(ii) 0 ≤ p(m, a) ≤ ε for all integers 0 ≤ m ≤ 3n/4 and a ≥ 2 such that a divides n − m. (For in this case, (m, a) is valid and every f ∈ S_{m,a} is at least n/8-far from a permutation.)
So to prove the theorem, it suffices to show that any polynomial p satisfying properties (i) and (ii) above has degree Ω(n^(1/3)). Let g(x) := p(n/2, 2x), and let k be the least positive integer such that |g(k)| > 2ε (such a k must exist, since g is a non-constant polynomial). Notice that g(1/2) = p(n/2, 1) ≥ 3ε, that g(1) = p(n/2, 2) ≤ ε, and that |g(i)| ≤ 2ε for all i ∈ [k − 1]. By Lemma 7, these facts together imply that deg(g) = Ω(√k). Now let c := 2k, and let h(i) := p(n − ci, c). Then for all integers i ∈ [n/(4c), n/c], we have 0 ≤ h(i) ≤ ε, since (n − ci, c) is valid, n − ci ≤ 3n/4, and c ≥ 2. On the other hand, we also have h(n/(2c)) = p(n/2, c) = p(n/2, 2k) = g(k) > 2ε. By Lemma 7, these facts together imply that deg(h) = Ω(n/c) = Ω(n/k). Clearly deg(g) ≤ deg(p) and deg(h) ≤ deg(p). So combining,

deg(p) = Ω(max{√k, n/k}) = Ω(n^(1/3)).
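The final balancing step, trading off Ω(√k) against Ω(n/k), can be checked numerically. The snippet below is an illustration (the choice n = 10^6 is arbitrary): over integer k, max(√k, n/k) is minimized at k = n^(2/3), where both terms equal n^(1/3).

```python
# Confirm the balancing step of the proof: over integer k,
# max(sqrt(k), n/k) is minimized at k = n^(2/3), with value n^(1/3).
n = 10 ** 6
best_k = min(range(1, n + 1), key=lambda k: max(k ** 0.5, n / k))
print(best_k, max(best_k ** 0.5, n / best_k))   # 10000 100.0
```

For k below n^(2/3) the n/k term dominates and exceeds n^(1/3); above it, the √k term does, so no choice of k can beat the n^(1/3) bound.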


CHAPTER-6 FUTURE OF QUANTUM INFORMATION TECHNOLOGY

For some time now we have been immersed in the information age - if you're reading this online, you're proving the point. Our immersion in the information age is reflected in almost every social setting; it has become hard not to find at least one face, if not several, aglow with the cool blue light of a smartphone screen. Google, Facebook, and Twitter have changed the way we learn about the world and each other. The digitisation of information has brought about Marshall McLuhan's "global village", in which nearly everyone is - or soon will be - fully connected. The explosive growth of information technology has transformed our world into one that is now draped in a network of fiber-optic cables, dotted with cell-phone towers and encircled by a drone army of communication satellites, all of which enable the flow of digital information around the globe.

Privacy, security, knowledge and power

A major unresolved question is the role, or even the possibility, of private communication and information security in this brave new digital world - a topic in which governments, corporations and private citizens all have a vested interest. This issue has come sharply into focus as various governments attempt to monitor the communications of their citizens, and even threaten to ban some services, such as Blackberry's BBM, because of the incredibly secure encryption such services provide to end users. But it's important to emphasise that private and secure communication is broadly relevant to users of the internet. For example, if I want to do some online banking, I want to be sure my financial data is protected from online prying eyes. A fundamental, scientific question is this: how much privacy is even possible over networks controlled and monitored by others? Under what physical conditions can Person A communicate privately with Person B over a public network?
Fortunately, we currently have efficient encryption systems to reliably protect online activities such as personal banking. But practical systems can be cracked, depending on the resources the would-be eavesdroppers have at their disposal.

One of the most widely used encryption schemes on the internet today is the RSA scheme. The basic idea is that one can encode information with a key - which is some very large number - that is made publicly available. With RSA, the encoded information can only be decoded by someone who knows the two prime factors that, when multiplied together, produce this very large number. While it is easy to multiply two numbers together, it turns out to be extremely difficult to find the two prime factors that are multiplied to create a large product. This difficulty is what enables privacy. I make my locking-key publicly available and anyone who wants to send me information privately encodes that information with this locking-key. If I do not disclose the prime factors to anyone - that is, if I keep my unlocking key private - then only I can decipher the encoded message. To anyone else, the message looks like random binary gibberish. How secure is this? The answer depends on a number of practical considerations, but fundamentally, RSA remains only as secure as the difficulty of finding the two prime factors for the locking-key. For large enough locking-key numbers this problem is believed to be unfeasibly hard to solve - even with vast amounts of conventional computing power - because finding the prime factors gets exponentially more difficult as you increase the number of digits in the key.
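The locking/unlocking asymmetry described above can be shown with a toy example using deliberately tiny primes (these particular numbers are for illustration only; real RSA keys use primes hundreds of digits long, which is what makes factoring infeasible):

```python
def rsa_toy():
    """A toy RSA round with tiny, insecure parameters (illustration only)."""
    p, q = 61, 53                 # the private prime factors
    n = p * q                     # public modulus ("locking-key" number): 3233
    e = 17                        # public exponent
    phi = (p - 1) * (q - 1)       # computable only if you know p and q
    d = pow(e, -1, phi)           # private "unlocking" exponent
    message = 1234
    cipher = pow(message, e, n)   # anyone can encrypt with the public (n, e)
    return pow(cipher, d, n)      # only the holder of d can decrypt

print(rsa_toy())   # 1234
```

Encrypting needs only the public pair (n, e); decrypting needs d, which in turn needs the factors p and q. With a 4-digit n anyone can factor by hand, but each extra digit multiplies the classical factoring effort, which is the asymmetry RSA's security rests on. (The modular inverse via `pow(e, -1, phi)` requires Python 3.8+.)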


A multinational research group is trying to transmit quantum bits to an orbiting satellite [GALLO/GETTY]

What can quantum physics do for you ... or to you? Information is an abstract concept - it can comprise names, numbers, dates, places, almost anything. The important point is that any information is ultimately represented as some physical quantity - it is always encoded in some physical medium, whether the physical medium involves sound waves from one person's mouth to another's ear, blotches of ink on paper, pulses of light in a fiber-optic cable or the magnetised regions of a hard drive. When the physical medium is manipulated according to the laws of classical physics - for example, the laws of classical electromagnetism which completely describe conventional computers - then these laws imply certain physical limits on how the encoded information can be manipulated and accessed. And
when the information is encoded in physical media that obey the laws of quantum physics, then a different set of rules describes how the information can be manipulated and accessed. The laws of quantum physics are now well established as the appropriate rules that govern the way the world works. However, the special features of the quantum laws that make them different from the classical laws are typically manifest only when we manipulate objects at the level of individual atoms and photons. So the idea of quantum information technology is based on the possibility of encoding information at this tiny scale. But before we discuss the practical issues associated with this technological challenge, let's first address the following question: how do the unique features of the quantum laws - and any quantum technology based upon them - affect information privacy?

The quantum information age

The quantum information age was born about 20 years ago from a somewhat unexpected union between quantum physicists and computing scientists. One of the major insights that brought quantum information to the forefront of science was the discovery by Peter Shor that a "quantum computer" (a computer built out of components that can be manipulated according to the full extent allowed by the laws of quantum physics) could easily solve the factoring problem. Hence a quantum computer, if one could be built, would spell an end to the security of the most practical encryption method used for private communication today. Shor's algorithm, and a host of other quantum algorithms discovered subsequently, have stimulated a major global research effort investigating practical ways to build a large-scale quantum computer. However, there are major technological obstacles to realising large-scale quantum computers. It turns out that the same special features of quantum mechanics that give quantum computers their power also make them tricky to build.
Quantum systems are fragile, fickle and tough to control. While small-scale quantum computers consisting of up to a dozen quantum bits, or "qubits", have been realised in the most advanced research labs, currently there is no known technological pathway to building a large-scale quantum computer with thousands or even tens of thousands of qubits, which would be required to crack present-day encryption.


The quantum world taketh ... but also giveth

Quantum technology creates a threat to the possibility of private communication using current encryption methods - but, interestingly, it also provides a new and more secure solution for achieving private communication. While a quantum computer would break current practical encryption schemes, quantum technology also enables a new means of establishing unconditionally secure private communication, through a protocol known as quantum key distribution, which was actually discovered a decade before Shor's algorithm. Quantum key distribution exploits one of the fundamental features of quantum mechanics, known as the Heisenberg uncertainty principle. This principle holds that, when dealing with quantum systems, it is impossible to observe one property of a system without disturbing some other property. The significance of this for private communication is this: if a Sender A transmits some (random) data to a Receiver B using quantum bits encoded in the right way, then the receiver can always detect whether an eavesdropper has snooped on the transmission. If no eavesdropper is detected, then B is certain that the random data is private, and this private random data can then be used to establish a secure communication channel over a regular (classical) network. Unlike the RSA scheme currently in use, private communication with quantum key distribution remains secure even if an adversary has access to a quantum computer. The technological threshold for creating and using quantum communication in practice is much lower than that for creating a practical quantum computer. In fact, we already have the technology - researchers have shown that it is possible to transmit quantum bits over hundreds of kilometers using commercial-grade fiber-optic cables. Moreover, there are already private companies offering quantum cryptographic systems.
For example, quantum key distribution was used to establish secure communication during the federal election in Switzerland in 2007. Moreover, a multi-national research group led by one of my colleagues, Thomas Jennewein at the Institute for Quantum Computing, is now undertaking a research program to transmit quantum bits to an orbiting communications satellite, which would enable quantum key distribution on a truly global scale.
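The eavesdropper-detection mechanism described above can be sketched with a toy simulation of a BB84-style protocol (an illustrative classical model of the statistics, not real quantum hardware): an eavesdropper who must measure each qubit in a guessed basis disturbs roughly a quarter of the sifted key bits, which the legitimate parties notice when they publicly compare a sample.

```python
import random

def bb84(n_bits=512, eavesdrop=False):
    """Toy BB84 run: returns (sifted key length, observed error rate)."""
    a_bits = [random.randint(0, 1) for _ in range(n_bits)]
    a_bases = [random.randint(0, 1) for _ in range(n_bits)]
    sent = list(zip(a_bits, a_bases))

    if eavesdrop:
        # Eve measures each qubit in a random basis; a wrong guess
        # randomizes the bit and re-prepares it in her basis.
        sent = []
        for bit, basis in zip(a_bits, a_bases):
            e_basis = random.randint(0, 1)
            e_bit = bit if e_basis == basis else random.randint(0, 1)
            sent.append((e_bit, e_basis))

    # Bob measures in his own random bases; a basis mismatch gives a random bit.
    b_bases = [random.randint(0, 1) for _ in range(n_bits)]
    b_bits = [bit if basis == b else random.randint(0, 1)
              for (bit, basis), b in zip(sent, b_bases)]

    # Sift: keep positions where Alice's and Bob's bases agree,
    # then estimate the error rate on the kept bits.
    kept = [i for i in range(n_bits) if a_bases[i] == b_bases[i]]
    errors = sum(a_bits[i] != b_bits[i] for i in kept)
    return len(kept), errors / len(kept)

print(bb84())                 # no Eve: error rate 0.0
print(bb84(eavesdrop=True))   # with Eve: error rate near 0.25
```

Without Eve, the sifted bits agree perfectly; with Eve, she guesses the wrong basis half the time, and each wrong guess flips Bob's bit with probability 1/2, so about 25% of the sifted sample disagrees and the intrusion is detected.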


Of course, the extra security afforded by quantum key distribution is currently unnecessary for most applications. However, information protected by current encryption methods could be decrypted in the future by an adversary who eventually gains access to a quantum computer. Although for most applications this level of security is not relevant, for others the threat of this future technology can be a serious security concern. We live at a time when rapid developments in conventional information technology have led to an equally rapidly adapting social and political landscape surrounding private communication over public networks. The advent of quantum information technology will further shape the future of communication privacy in our expanding global village.


CHAPTER-7 CONCLUSION
The field of quantum information has typically concerned itself with the manipulation of discrete systems such as quantum bits, or qubits. However, many quantum variables, such as position, momentum or the quadrature amplitudes of electromagnetic fields, are continuous, leading to the concept of continuous quantum information. Initially, quantum information processing with continuous variables seemed daunting at best, ill-defined at worst. Nonetheless, the first real success came with the experimental realization of quantum teleportation for optical fields. This was soon followed by a flood of activity to understand the strengths and weaknesses of this type of quantum information and how it may be processed. The next major breakthrough was the successful definition of a notion of universal quantum computation over continuous variables, suggesting that such variables are as powerful as conventional qubits for any class of computation. In some ways continuous-variable computation may not be so different from qubit-based computation. In particular, limitations due to finite precision make quantum floating-point operations, like their classical counterparts, effectively discrete. Thus we might expect a continuous-variable quantum computer to perform no better than a discrete quantum computer. However, for some tasks continuous-variable quantum computers are nonetheless more efficient. Indeed, many protocols, especially those relating to communication, require only linear operations together with classical feedforward and detection. This, together with the large bandwidths naturally available to continuous (optical) variables, appears to give them the potential for a significant advantage.
However, notwithstanding these successes, the very practical optical cv approach, when based solely upon Gaussian transformations such as beam-splitter and squeezing transformations, feed-forward and homodyne detections, is not sufficient for implementing more advanced or genuine quantum information protocols. Any more sophisticated quantum protocol that is truly superior to its classical counterpart requires a non-Gaussian element. This may be included at the level of the measurements, for example via state preparation conditioned upon the number of photons detected in a subset of the Gaussian modes. Alternatively, one may
directly apply a non-Gaussian operation, which involves a highly nonlinear optical interaction described by a Hamiltonian at least cubic in the mode operators. Though a significant first step, communication protocols in which this non-Gaussian element is missing cannot fully exploit the advantages offered by quantum mechanics. For example, the goals of the Gaussian protocols of cv quantum teleportation and dense coding are the reliable transfer of quantum information and an increase in classical capacity, respectively. However, in both cases, preshared entanglement is required. Using this resource, via teleportation, fragile quantum information can be conveyed through a classical communication channel without being subject to decoherence in a noisy quantum channel. In entanglement-based dense coding, using an ideal quantum channel, more classical information can be transmitted than directly through a classical channel. For transferring quantum information over long distances, however, entanglement must be distributed through increasingly noisy quantum channels. Hence entanglement distillation is needed, and for this, Gaussian resources and Gaussian operations alone do not suffice. Similarly, true quantum coding would require a non-Gaussian decoding step at the receiving end. In general, any cv quantum computation that is genuinely quantum, and hence not efficiently simulatable by a classical computer, must contain a non-Gaussian element. Among the communication protocols, cv quantum key distribution appears in some sense exceptional, because even a purely Gaussian implementation may well enhance security compared to classical key distribution schemes. The experiments accomplished so far in cv quantum information reflect the observations of the preceding paragraphs. Gaussian state preparation, including (multiparty) entangled states, and Gaussian state manipulation are techniques well understood and implemented in many laboratories around the globe. However, in order to come closer to real applications, both for long-distance quantum communication and for quantum computation, a new generation of experiments is needed, crossing the border between the Gaussian and non-Gaussian worlds. Beyond this border, techniques from the more traditional single-photon-based discrete-variable domain will have to be incorporated into the cv approaches. In fact, a real-world application of optical quantum communication and computation, possibly including atom-light quantum interfaces and atomic quantum memories, will most likely combine the assets of both approaches: the continuous-variable one and that based on discrete variables.


BIBLIOGRAPHY

[1] S. Aaronson. Quantum lower bound for the collision problem. In Proc. ACM STOC, pages 635-642, 2002. quant-ph/0111102.

[2] S. Aaronson. Quantum computing, postselection, and probabilistic polynomial-time. Proc. Roy. Soc. London, A461(2063):3473-3482, 2005. quant-ph/0412187.

[3] S. Aaronson and G. Kuperberg. Quantum versus classical proofs and advice. Theory of Computing, 3(7):129-157, 2007. Previous version in Proceedings of CCC 2007. quant-ph/0604056.


REFERENCES

The links referred to are:


[1] www.ieee.com
[2] www.google.com
[3] http://w3.antd.nist.gov/qin/index.shtml
[4] http://www.aljazeera.com/indepth/opinion/2012/02/20122237159922635.html
[5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, (Cambridge, 2000).
[6] H. Takayanagi, H. Ando, and T. Fujisawa, "From Quantum Effects to Quantum Circuits, Toward the Quantum Computer," NTT REVIEW, Vol. 12, No. 1, pp. 17-25, 2000.
[7] T. Fujisawa, "Single electron dynamics," to be published in Encyclopedia of Nanoscience and Nanotechnology.

