
Classical simulation of boson sampling with sparse output

Wojciech Roga1 and Masahiro Takeoka1


1 National Institute of Information and Communications Technology, Koganei, Tokyo 184-8795, Japan
E-mail: wojciech.roga@nict.go.jp
(Dated: April 12, 2019)
Boson sampling can simulate physical problems for which classical simulations are inefficient.
However, not all problems simulated by boson sampling are classically intractable. We consider a
situation in which it is known that the outcome from boson sampling is sparse. This can be determined
from a few marginal distributions, which are classically calculable. Still, recovering the joint
distribution can be of high complexity. We show classically efficient methods of recovery assuming
high sparsity of the joint distribution. Various extensions are discussed, including a version involving
quantum annealing.
arXiv:1904.05494v1 [quant-ph] 11 Apr 2019

Background and motivation.– Simulating complicated quantum chemistry systems on classical or quantum simulators is an interesting problem with industrial impact. It is simply cheaper to test many, e.g., molecular configurations in simulators than to synthesize the molecules and test their crucial properties experimentally. However, some problems are believed to be of a complexity for which classical computers are inefficient. For instance, Huh et al. [1, 2] showed that the statistics of Franck-Condon (FC) factors [3, 4] for vibronic transitions in large molecules [5, 6] is equivalent to the statistics of samples in a version of boson sampling [7–15]. This observation has two serious consequences. First, finding FC factors may be computationally hard, and the complexity may scale exponentially with the number of atoms in a molecule. Second, FC factors can be simulated by boson-sampling-based quantum simulators. It is widely accepted that boson sampling from interferometers described by average-case Haar- or Gaussian-random unitary transformations is classically computationally inefficient [7]. However, it is not clear whether the problems related to quantum chemistry belong to this class, as the related matrices are typically not Haar- or Gaussian-random [16]. In particular, if these matrices were of low rank, there would exist an efficient algorithm for computing their permanents [17] and thus for finding the boson sampling statistics. In this paper we discuss the case when we a priori know that the statistics of outputs from a boson sampling setup is sparse. This knowledge can be based on some symmetries or other physical properties of the problem. Then boson sampling can be at least approximately classically simulated. Let us summarize our argumentation. We observe that although the full coincidence statistics in ideal boson sampling experiments cannot be efficiently simulated classically, the marginal distributions can be. This is implied by the result of Gurvits, cited together with its proof in the Aaronson and Arkhipov paper [7], as follows:

Theorem (Gurvits's k-Photon Marginal Algorithm). There exists a deterministic classical algorithm that, given a unitary matrix U ∈ C^{M×M}, indices i_1, ..., i_k ∈ [M], and occupation numbers j_1, ..., j_k ∈ {0, ..., N}, computes the joint probability

    Pr_{S=(s_1,...,s_M) ∼ D_U} [s_{i_1} = j_1 ∧ ... ∧ s_{i_k} = j_k]

in N^{O(k)} time. Here, S = (s_1, ..., s_M) ∼ D_U means that the occupation numbers S are sampled according to the probability distribution D_U over possible outputs of U.

As commented by the authors, this result in general does not change the complexity of the full boson sampling problem. To have the complete joint statistics we would need marginal distributions with k ≥ N. This implies a complexity scaling exponentially with the number of photons. However, if we invest the knowledge about the sparsity, the complexity may indeed be reduced. We discuss a scheme that allows us to recover the joint sparse distribution from the marginal ones. We will use ideas from compressive sensing, i.e., sparse signal recovery from undersampled measurements [18–24]. The incoherent measurement in our compressive sampling scheme consists of marginal distributions. The observation about the computability of marginal distributions and the existence of compressive sensing algorithms do not yet guarantee that we have an efficient classical algorithm for boson sampling with sparse outputs. We need to show additionally that there exists an efficient algorithm which recovers the sparse joint distribution from compressively sensed data. In this paper we show that such an algorithm exists. The arguments are inspired by works from the field of quantum compressive sensing [25–30]. In particular, Cramer et al. [26] proposed an algorithm to reconstruct a low-rank quantum state from its reduced density matrices known from quantum tomography of subsystems. They used a modified singular value thresholding algorithm (MSVT). The essential part of it is finding the ground states of matrices that can be thought of as Hamiltonians. This operation itself is inefficient, but if the Hamiltonian is constructed from local nearest-neighbor interactions, the best so-called matrix product state approximations of the ground states can be found efficiently. Then MSVT converges to the right solution in time polynomial in the size of the system. The application of the matrix product state representation also allows for a significant reduction of the required memory. The procedure we provide is a simplified version of this idea.
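As a toy illustration of the quantity Gurvits's theorem computes (not of the algorithm itself, which crucially avoids ever constructing the exponential joint distribution), the following sketch marginalizes a brute-force joint distribution over two chosen modes; the array names and the random toy distribution are ours.

```python
import numpy as np

# Toy joint distribution over occupation patterns of M modes,
# each mode holding 0..N-1 photons (N**M outcomes in total).
M, N = 4, 2
rng = np.random.default_rng(0)
joint = rng.random(N**M)
joint /= joint.sum()                      # normalized joint distribution

# Marginal Pr[s_i1 = j1 and s_i2 = j2] by summing out the other modes.
probs = joint.reshape((N,) * M)           # index order: (s_1, ..., s_M)
i1, i2 = 0, 2                             # modes kept in the marginal
axes = tuple(a for a in range(M) if a not in (i1, i2))
marginal = probs.sum(axis=axes)           # shape (N, N): Pr[s_i1 = j1, s_i2 = j2]
assert np.isclose(marginal.sum(), 1.0)    # a marginal is itself a distribution
```

The theorem says exactly these N^2 numbers (and more generally N^k of them) are classically computable in N^{O(k)} time without the N^M-entry joint vector.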

We show that the complexity bottleneck of a compressive sensing reconstruction from marginal distributions lies in localizing the largest element of a long list (the counterpart of finding the ground state). This procedure typically requires a number of steps and bits of memory which is O(d), where d is the length of the list. However, we show that in our case the list may be written as the local nearest-neighbor interaction Hamiltonian of a classical spin chain. Localizing the maximum-energy configuration is efficient in this problem both in terms of the number of operations and of memory [31, 32].

Marginal coincidence distributions.– Let us start by considering the following scenario. For some unitary transformation U which reflects features of a physical system we want to simulate the transition probabilities x = |⟨1, 1, ..., 1, 0, ..., 0|U|n_1, n_2, ..., n_M⟩|^2 from a given (N − 1)-photon occupation-number state in M modes, |1, 1, ..., 1, 0, ..., 0⟩, to all occupation-number states |n_1, n_2, ..., n_M⟩ (the left-hand side of Fig. 1). The searched vector x is of length N^M. This corresponds to 0, 1, ..., N − 1 possible photons in each mode. We claim that it is possible to efficiently find x by calculating the statistics of marginal distributions from a smaller number of detectors measuring only chosen modes and ignoring the remaining ones (the right-hand part of Fig. 1) if x is sparse, i.e., if it contains at most s non-zero entries, where s ≪ N^M.

FIG. 1. Red or blue bars: the statistics of coincidences of different numbers of photons at the output of an interferometer. Left-hand side: the distributions measured directly by M photon-number-resolving detectors. All combinations of possible numbers of photons in M modes form an N^M-dimensional vector. Right-hand side: marginal distributions of the occupation numbers in two chosen modes.

To analyze the problem in detail let us introduce the following notation. Vector x can be decomposed in the basis of measured sets of occupation numbers as follows

    x = Σ_{n_1,...,n_M} α_{n_1,...,n_M} |n_1, ..., n_M).    (1)

Here, the numbers α_{n_1,...,n_M} are non-negative, sum up to one, and only s of them are non-zero. We will use the following convention

    |n_1, n_2, ...)^T = (0, ..., 0, 1, 0, ..., 0) ⊗ (0, ..., 0, 1, 0, ..., 0) ⊗ ...,    (2)

where in the m-th factor the single 1 stands at position n_m + 1 and there is an appropriate number of 0s in place of the dots. In this explanation we will use a notation for two-detector coincidences; however, the formalism can be extended to coincidences of different numbers of detectors if necessary. The measurement of modes i and j leads to the marginal probability distribution

    y_{n_i,n_j} = A_{n_i,n_j} x,    (3)

where y_{n_i,n_j} is the sum of all entries α_{n_1,...,n_M} with fixed n_i and n_j. We get this distribution by observing frequencies of outcomes from the given modes independently of what happens in the remaining modes. In the chosen convention the rows of the so-called measurement matrix A are binary patterns as follows

    A_{n_i,n_j} = γ_{n_i} ∘ γ_{n_j},    (4)

where ∘ means entry-wise multiplication and

    γ_{n_i} = 1^{⊗(i−1)} ⊗ (0, ..., 0, 1, 0, ..., 0) ⊗ 1^{⊗(M−i)},    (5)

where 1 = (1, 1, 1, ...), the single 1 in the middle factor stands at position n_i + 1, and both 1 and (0, ..., 0, 1, 0, ..., 0) are N-dimensional vectors. The entry-wise multiplication preserves the tensor product structure and can be executed as entry-wise multiplication in each part of the tensor product.

The results of k measurement outputs form a k-dimensional vector

    y = Ax,    (6)

where A is a k × N^M binary matrix. If x is sparse in a basis incoherent with the measurement, then it can be determined from a number of measurements k ≪ N^M as the most sparse solution of y = Ax'. This problem can be seen as constrained l1-norm minimization, which is a convex optimization problem. It is solvable by many known algorithms [23]. In the next part we analyze the complexity of particular algorithms adapted to minimize the computational costs and memory requirements.

Complexity analysis.– Assume that we use only 2 detectors measuring neighboring modes. For boson sampling the marginal distributions can be calculated in time polynomial in the number of modes. Gurvits's Algorithm from [7] does it in time N^{O(2)} N^2 M. The number of measurements is k = N^2 M, while the dimensionality of the sparse vector is d = N^M. We allow for sub-linear scaling of the sparsity s, i.e., the number of non-zero elements in x. Therefore, we keep O(s) and O(d) well separated. Moreover, we assume that problems of complexity O(s) = O(k) are efficiently tractable. From the measurements, we want to recover the most sparse joint distribution knowing the measurement matrix. So, our goal is to solve the underdetermined problem y = Ax, where y is a k-dimensional measurement vector of marginal distributions and x is the d-dimensional sparse vector which is searched for. In our case, rows of A are well-structured orthogonal patterns. This implies that it is easy to multiply A by any sparse vector.
3
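The rows of Eqs. (4)–(5) are easy to build explicitly for a small instance. The following sketch (our own variable names; a brute-force construction, not the operational representation used later) assembles the γ rows via Kronecker products and checks that A applied to a distribution x indeed returns the two-mode marginals of Eq. (3).

```python
import numpy as np

M, N = 3, 2            # 3 modes, photon numbers 0..N-1; d = N**M = 8

def gamma(mode, n):
    """Row gamma_{n_mode} of Eq. (5): all-ones blocks on every mode
    except `mode`, where only photon number n is selected."""
    row = np.ones(1)
    for m in range(M):
        block = np.ones(N)
        if m == mode:
            block = np.zeros(N)
            block[n] = 1.0
        row = np.kron(row, block)
    return row

# Rows of A for the coincidence of modes i and j: entry-wise product, Eq. (4).
i, j = 0, 1
A = np.array([gamma(i, ni) * gamma(j, nj) for ni in range(N) for nj in range(N)])

# Check: A x reproduces the marginal over modes (i, j) of any distribution x.
rng = np.random.default_rng(1)
x = rng.random(N**M); x /= x.sum()
marg = x.reshape((N,) * M).sum(axis=2).reshape(-1)   # sum out the third mode
assert np.allclose(A @ x, marg)
```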

For instance, assume that we know that an entry κ of a vector is non-zero. We decompose κ as an N-ary number, which gives us immediately its tensor product representation as in (2), consistent with the structure of A. For example, assuming that N = 2, position 14 is represented as 01101 (corresponding to binary 13, as 0 occupies the first position), which corresponds to

    (10) ⊗ (01) ⊗ (01) ⊗ (10) ⊗ (01).    (7)

This vector has just one non-zero element, in the 14th entry. It is immediate to find its overlap with a particular row of A, which, for instance, can be of the form

    (11) ⊗ (01) ⊗ (10) ⊗ (11) ⊗ (11).    (8)

We need to check only the relevant modes.

To solve the underdetermined problem we discuss in detail two first-order greedy algorithms [33]. We consider the matching pursuit method, which is simple but less accurate and slower, and the gradient pursuit, which is more accurate and faster but slightly more computationally demanding. These two algorithms are enough to discuss the bottleneck for the memory and computational costs, and to show how to overcome these problems.

The matching pursuit protocol finds the s-sparse solution. It is summarized as follows: I. (Initialization) At step 0: the residual r^0 = y and the approximate solution is x^0 = 0. II. (Support detection) In step i recognize the column of A, denoted by A_t, which is the most similar to the current residual r^{i−1}, by solving t = argmax_{t'} |(A^T r^{i−1})_{t'}|. III. (Updating) Update the solution only in index t, i.e., x^i_t = x^{i−1}_t + (A^T r^{i−1})_t, and update the residual using the t-th column of A as follows: r^i = r^{i−1} − (A^T r^{i−1})_t A_t. IV. Continue iterating until a stopping criterion is matched.

The support detection step dominates the complexity. Without exploiting the structure of A we require O(kd) bits of memory to store the matrix, and the same for the operational cost of the multiplication A^T r^{i−1}. Moreover, without smart tricks we would typically need at least O(d) steps to find the maximum value in the list A^T r^{i−1}. The same amount of memory is needed to store the list.

Considering specific features of the problem we can overcome the bottleneck. Let us notice that the first and the third part of the algorithm can strongly benefit from the sparsity of the vectors involved. Moreover, for any t, the vector A_t can be generated operationally (multiplication of A and a sparse vector). So A_t does not need to be stored beforehand, and the memory and computational costs of these parts are O(k), the size of r^i. The entire procedure can be iterated until a given sparsity s of the solution is achieved. Finally, let us consider the operational costs of the inefficient support detection part. As for finding the index of the largest element of a list, we notice that it is equivalent to finding the leading eigenvector of the diagonal matrix with the list on the diagonal. We know that there are computationally efficient methods for finding the eigenvectors under some conditions. Indeed, we may apply the matrix product state (MPS) formalism, which is a powerful tool, for instance, for finding the ground state of a Hamiltonian of a one-dimensional quantum spin chain with nearest-neighbor interactions (Ising model). This Hamiltonian is of the form

    H = H_{1,2} ⊗ 1_{3,...,M} + 1_1 ⊗ H_{2,3} ⊗ 1_{4,...,M} + ...,    (9)

where H_{i,i+1} acts only on modes i and i+1 in an ordered set of modes with open boundary conditions, and we explicitly write identities on the other modes. Let us notice that in our problem A^T r in the support detection part can be written exactly as a diagonal local Hamiltonian. To simplify the explanation let us consider an example with the readouts from a single detector only. A row of the measurement matrix A corresponding to n_m photons in mode m is given as γ_{n_m} in (5). Observing all n_m from only mode m we have

    A^T_{[m]} r_{[m]} = 1^{⊗(m−1)} ⊗ (r_{0_m}, r_{1_m}, r_{2_m}, ...) ⊗ 1^{⊗(M−m)},    (10)

where A_{[m]} is the submatrix of A corresponding to the different photon numbers recorded in mode m. Here, r_{[m]} is the part of the residual vector that corresponds to the rows of A_{[m]}. Measurements of other modes also have the form of local Hamiltonians if understood as diagonal matrices. So the same holds for A^T r. For coincidence measurements of more than one mode, given that the measurements are taken on neighboring modes, we have

    A^T_{[m,m+1]} r_{[m,m+1]} = 1^{⊗(m−1)} ⊗ (r_{0_m,0_{m+1}}, r_{0_m,1_{m+1}}, r_{0_m,2_{m+1}}, ...) ⊗ 1^{⊗(M−m−1)},    (11)

which, written in diagonal form, is a typical nearest-neighbor Hamiltonian for a one-dimensional spin chain. It is clear from this notation that the matrix-vector multiplication requires negligible operational costs and O(k) bits of memory to store the d-long vector in its compressed representation as a sum of local terms.

Using these observations, the support index in the matching pursuit algorithm can be determined much faster than in O(d) time. If we consider coincidences of just two neighboring modes, our problem can be described in terms of the classical spin chain formalism, and we can use an explicit strategy from, e.g., [32] to find the optimal configuration of the classical spin chain, which is equivalent to finding the position of the maximal value in our A^T r list. Indeed, h_{m,m+1}(i_m, i_{m+1}) from the algorithm of [32] is equivalent to r_{i_m,i_{m+1}}. This procedure reduces the computational cost of the support detection to 2MN^2 (the factor 2 comes from the necessity to repeat the procedure for −A^T r, as we are looking for the largest element in absolute value).

The matching pursuit (MP) method, although computationally simple, is not the fastest one, as the same support index may need to be used many times. Also, it does not guarantee an accurate solution, as the solution is expressed by means of a relatively small number of columns of A.
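The matching pursuit steps I–IV described above can be written compactly. This generic dense version deliberately uses the O(kd) products that form the bottleneck discussed in the text; it fixes the algorithm itself, not its efficient spin-chain variant, and the usage example (orthonormal columns, our own toy data) is only a sanity check.

```python
import numpy as np

def matching_pursuit(A, y, s, n_iter=50):
    """Steps I-IV: greedily build an s-sparse x with y ~ Ax.
    Dense implementation; support detection costs O(kd) per step."""
    d = A.shape[1]
    x = np.zeros(d)                       # I. initial solution x^0 = 0
    r = y.copy()                          # I. initial residual r^0 = y
    for _ in range(n_iter):
        c = A.T @ r                       # correlations with all columns
        t = np.argmax(np.abs(c))          # II. support detection
        x[t] += c[t]                      # III. update solution at index t
        r = r - c[t] * A[:, t]            # III. update residual
        if np.count_nonzero(x) >= s and np.linalg.norm(r) < 1e-10:
            break                         # IV. stopping criterion
    return x

# Sanity check with orthonormal columns, where MP recovers x exactly.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
x_true = np.zeros(20); x_true[[3, 7]] = [1.0, -0.5]
y = Q @ x_true
x_hat = matching_pursuit(Q, y, s=2)
assert np.allclose(x_hat, x_true)
```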
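The key reduction above is that for neighboring-mode coincidences the list A^T r has, at multi-index (i_1, ..., i_M), the value of a sum of nearest-neighbor terms r_{i_m,i_{m+1}}, so its maximum can be found by a standard dynamic-programming sweep in O(MN^2) instead of scanning all N^M entries. A sketch in the spirit of [32], with made-up term tables:

```python
import numpy as np

def max_chain(h):
    """Maximize sum_m h[m][i_m, i_{m+1}] over all configurations
    (i_1, ..., i_M), i_m in {0..N-1}.  h is a list of M-1 matrices
    of shape (N, N); the sweep costs O(M N^2)."""
    N = h[0].shape[0]
    best = np.zeros(N)              # best[i]: max value of a chain ending in state i
    back = []                       # backpointers for reconstruction
    for hm in h:
        scores = best[:, None] + hm          # extend a chain ending at i by term (i, j)
        back.append(scores.argmax(axis=0))
        best = scores.max(axis=0)
    # Reconstruct the optimal configuration right-to-left.
    config = [int(best.argmax())]
    for bp in reversed(back):
        config.append(int(bp[config[-1]]))
    return best.max(), config[::-1]

# Sanity check against brute force over all N**M configurations.
rng = np.random.default_rng(3)
M, N = 5, 3
h = [rng.standard_normal((N, N)) for _ in range(M - 1)]
val, cfg = max_chain(h)
brute = max(sum(h[m][c[m], c[m + 1]] for c in [c] for m in range(M - 1))
            for c in np.ndindex(*(N,) * M))
assert np.isclose(val, brute)
```

Running the sweep once on A^T r and once on −A^T r gives the support index with the 2MN^2 cost quoted in the text.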

Some modifications of the method lead more efficiently to more accurate results. Among them there is the orthogonal matching pursuit algorithm (OMP), in which in each step the support is updated and kept in memory. The solution is approximated by the vector of coefficients in the best 2-norm approximation of the measurement vector y in terms of the selected columns of A. The gradient pursuit (GP) updates the solution by correcting it in the direction of the largest gradient of the 2-norm mentioned above by a given step. As discussed in [33], the only additional cost over what we have in the matching pursuit algorithm is the cost of calculating the step size. This can however be done efficiently in our case due to the easy way of finding the product of a sparse vector and the matrix A. For MP and GP the convergence rates discussed in [33] depend on contracting properties of A.

Accuracy of the reconstruction.– For a successful compressive sensing reconstruction, the so-called coherence between the rows of the measurement matrix and the sparsity basis must be low. Roughly speaking, it means that a particular measurement is sensitive to changes in a large part of the measured vector. For this reason random unstructured matrices are of particular interest, as they are incoherent with almost any basis. However, random matrices are problematic for large-scale problems. They are expensive in terms of storage and operational costs, as inefficient matrix-vector multiplication is usually needed [34]. Therefore, structured matrices which can be defined operationally and do not need to be stored are also desired. In our case the measurement matrix is structured and of low coherence with respect to the measured space basis. However, the usefulness of a particular matrix depends as well on the reconstruction algorithm. Therefore, we tested the performance of the chosen GP algorithm with measurement matrix A numerically. We measured 1000 randomly chosen distributions of sparsity s = 4, 5, 6 for a problem with M = 6 output modes and N = 4 different events measured in each mode. Matrix A was associated with all coincidence measurements of 2 neighboring modes. So A is an 80 × 4096 matrix. We tested how often X = x^T_{test} x_{solved} / |x_{test}|^2 is larger than a given threshold, where x_{test} and x_{solved} are the randomly chosen measured signal used in the simulation and the reconstructed signal, respectively. We iterated the algorithm not more than 50 times. For s = 4 we observe that in about 80% of cases X > 0.9 and in 74% of cases X > 0.99. For s = 5, we have X > 0.9 and X > 0.99 in 64% and 56% of situations, respectively. Finally, for s = 6 we observe X > 0.9 and X > 0.99 in 47% and 37% of cases, respectively. As there are many possible more complicated and more efficient algorithms which can still share the reduced-complexity feature we observe in this paper, these results can be improved. However, testing them further is out of the scope of this paper.

Conclusions.– In this paper we show that if we a priori know that an output from a boson sampling experiment is sparse enough, we can calculate the first-order approximation efficiently on a classical computer. The crucial steps are to calculate the marginal distributions using Gurvits's k-Photon Marginal Algorithm and then to use an algorithm for compressive sensing sparse signal recovery. Many of these algorithms converge quickly assuming that the support localization is efficient. We show that due to the specific form of the marginal measurements the problem can be reduced to finding the optimal configuration of a one-dimensional classical spin chain. The latter is known to be efficient.

In this paper we have investigated the situation with only nearest-neighbor mode measurements. This restricts the tolerable sparsity for the method. When the sparsity increases, a larger amount of data is needed. Implementing measurements of three or more neighboring modes is one of the solutions. We could as well think about relaxing the nearest-modes requirement and investigating the situation with non-local coincidence distributions. Then more advanced techniques from the theory of spin glasses could also be applied [35, 36]. Finally, our function (11) to be minimized, when restricted to nearest binary modes, is equivalent to the Hamiltonian H = Σ_{i>j} H_{i,j}, where

    H_{i,j} = h_{i,j} 1_{i,j} + h'_{i,j} σ^z_i ⊗ σ^z_j + h''_{i,j} σ^z_i ⊗ 1_j + h'''_{i,j} 1_i ⊗ σ^z_j.    (12)

Here σ^z_i is the Pauli z matrix in mode i. The coefficients can be chosen to correspond to r_{n_i,n_j} from (11). The ground state of the Hamiltonian H can be found efficiently by simulated or quantum annealing under some conditions on the spectrum of H and its correspondence to the connections in the annealer [37–39]. In our approach we have some freedom in choosing pairs of modes to guarantee that these conditions are satisfied. So simulated or quantum annealing could be used to extend our approach. However, detailed study of this matter is a subject for separate work.

An important impact of this paper is that we implicitly suggest a new method for finding Franck-Condon factors under the condition that their distribution is sparse. This problem is in general related to calculating permanents of a unitary transformation [1]. In this paper we show that it is enough to calculate permanents of submatrices, due to Gurvits's algorithm, and to apply the compressive sensing reconstruction.

Our technique can be applied to marginal distributions measured in realistic experiments or calculated assuming losses [40, 41]. The joint distribution after losses may be interpreted as a lossless distribution multiplied by a stochastic matrix L describing the process of losses. L is often in the form of a tensor product. The measurement matrix of our technique is now the product of A as previously and L. The new measurement matrix inherits the features allowing for keeping the complexity tractable. The impact of losses may be a direction of further studies.

ACKNOWLEDGMENTS

This work was supported by JST CREST Grant No. JPMJCR1772.

[1] J. Huh, G. G. Guerreschi, B. Peropadre, J. R. McClean, and A. Aspuru-Guzik, Boson sampling for molecular vibronic spectra, Nature Photonics 9, 615 (2015).
[2] J. Huh and M.-H. Yung, Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature, Sci. Rep. 7, 1 (2017).
[3] J. Franck, Elementary processes of photochemical reactions, Trans. Faraday Soc. 21, 536 (1925).
[4] E. U. Condon, Nuclear Motions Associated with Electron Transitions in Diatomic Molecules, Phys. Rev. 54, 858 (1928).
[5] E. V. Doktorov, I. A. Malkin, and V. I. Man'ko, Dynamical symmetry of vibronic transitions in polyatomic molecules and Franck-Condon principle, J. Mol. Spectrosc. 64, 302 (1977).
[6] H.-C. Jankowiak, J. L. Stuber, and R. Berger, Vibronic transitions in large molecular systems: Rigorous prescreening conditions for Franck-Condon factors, J. Chem. Phys. 127, 234101 (2007).
[7] S. Aaronson and A. Arkhipov, The Computational Complexity of Linear Optics, in Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing (ACM, New York, 2011), 333 (2011).
[8] J. B. Spring et al., Boson Sampling on a Photonic Chip, Science 339, 798 (2013).
[9] M. Tillmann, B. Dakic, R. Heilmann, S. Nolte, A. Szameit, and P. Walther, Experimental boson sampling, Nature Photonics 7, 540 (2013).
[10] V. S. Shchesnovich, Sufficient condition for the mode mismatch of single photons for scalability of the boson-sampling computer, Phys. Rev. A 82, 022333 (2014).
[11] M. C. Tichy, K. Mayer, A. Buchleitner, and K. Molmer, Stringent and Efficient Assessment of Boson-Sampling Devices, Phys. Rev. Lett. 113, 020502 (2014).
[12] A. P. Lund, A. Laing, S. Rahimi-Keshari, T. Rudolph, J. L. O'Brien, and T. C. Ralph, Boson Sampling from a Gaussian State, Phys. Rev. Lett. 113, 100502 (2014).
[13] K. R. Motes, A. Gilchrist, J. P. Dowling, and P. P. Rohde, Scalable Boson Sampling with Time-Bin Encoding Using a Loop-Based Architecture, Phys. Rev. Lett. 113, 120501 (2014).
[14] S. Rahimi-Keshari, T. C. Ralph, and C. M. Caves, Sufficient Conditions for Efficient Classical Simulation of Quantum Optics, Phys. Rev. X 6, 021039 (2016).
[15] B. T. Gard, K. R. Motes, J. P. Olson, P. P. Rohde, and J. P. Dowling, An Introduction to Boson-Sampling, in From Atomic to Mesoscale: The Role of Quantum Coherence in Systems of Various Complexities, edited by S. A. Malinovskaya and I. Novikova (WSPC, New Jersey, 2015).
[16] Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferová, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. D. Sawaya, S. Sim, L. Veis, and A. Aspuru-Guzik, Quantum Chemistry in the Age of Quantum Computing, arXiv:1812.09976 (2018).
[17] A. I. Barvinok, Two algorithmic results for the traveling salesman problem, Math. Oper. Res. 21, 65 (1996).
[18] D. L. Donoho, For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution, Commun. Pure Appl. Math. 59, 0907 (2006).
[19] E. Candès and T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies, IEEE Trans. Inf. Theory 52, 5406 (2006).
[20] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, Constr. Approx. 28, 253 (2008).
[21] E. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory 51, 4203 (2005).
[22] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, Berlin, 2013).
[23] A. Draganic, I. Orovic, and S. Stankovic, On some common compressive sensing algorithms and applications - Review paper, Facta Universitatis, Series: Electronics and Energetics 30, 477 (2017).
[24] E. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory 52, 489 (2006).
[25] D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert, Quantum State Tomography via Compressed Sensing, Phys. Rev. Lett. 105, 150401 (2010).
[26] M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D. Bartlett, O. Landon-Cardinal, D. Poulin, and Y.-K. Liu, Efficient quantum state tomography, Nature Comm. 1, 1 (2010).
[27] Y.-K. Liu, Universal low-rank matrix recovery from Pauli measurements, Adv. Neural Inf. Process. Syst. (NIPS) 24, 1638 (2011).
[28] S. T. Flammia, D. Gross, Y.-K. Liu, and J. Eisert, Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators, New J. Phys. 14, 095022 (2012).
[29] A. Shabani et al., Efficient Measurement of Quantum Dynamics via Compressive Sensing, Phys. Rev. Lett. 106, 100401 (2011).
[30] A. Shabani, M. Mohseni, S. Lloyd, R. L. Kosut, and H. Rabitz, Estimation of many-body quantum Hamiltonians via compressive sensing, Phys. Rev. A 84, 012107 (2011).
[31] E. Ising, Beitrag zur Theorie des Ferromagnetismus, Z. Phys. 31, 253 (1925).
[32] N. Schuch and J. I. Cirac, Matrix product state and mean-field solutions for one-dimensional systems can be found efficiently, Phys. Rev. A 82, 012314 (2010).
[33] T. Blumensath and M. Davies, Gradient Pursuit, IEEE Trans. Sig. Proc. 56, 2370 (2008).
[34] V. Cevher, S. Becker, and M. Schmidt, Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics, IEEE Signal Processing Magazine 31, 32 (2014).
[35] M. M. Rams, M. Mohseni, and B. Gardas, Heuristic optimization and sampling with tensor networks for quasi-2D spin glass problems, arXiv:1811.06518 (2018).
[36] K. Jalowiecki, M. M. Rams, and B. Gardas, Brute-forcing spin-glass problems with CUDA, arXiv:1904.03621 (2019).
[37] A. Lucas, Ising formulations of many NP problems, Front. Phys. 2, 1 (2014).
[38] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Optimization by simulated annealing, Science 220, 671 (1983).
[39] T. Kadowaki and H. Nishimori, Quantum annealing in the transverse Ising model, Phys. Rev. E 58, 5355 (1998).
[40] J. Renema, V. Shchesnovich, and R. Garcia-Patron, Classical simulability of noisy boson sampling, arXiv:1809.01953 (2018).
[41] M. Oszmaniec and D. J. Brod, Classical simulation of photonic linear optics with lost particles, New J. Phys. 20, 092002 (2018).
