
Source: Report

Bindel, D. (2014). High Productivity vs. High Performance. Cornell University.


This lecture discusses how different languages suit the requirements of the problem at
hand, with a focus on classic C and Python, whose standard implementation is itself
written in C. C, a low-level language, is fast for most arithmetic computation, but writing
efficient code for operations on large matrices in it is laborious, while a high-level
language like Python or MATLAB can express such operations easily and, with the right
library support, run them just as fast or faster. Bindel's analysis reaches the conclusion
that the best system is one that uses several languages, each applied to the operations
that match its abilities and strengths. He concedes that such mixed-language
programming is difficult to manage due to compatibility issues, but languages like Julia,
which combine high-level and low-level features, could provide a solution. This source is
useful because it describes the proper use of mixed-language systems, focusing on the
same languages that my research does.
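As a rough illustration of this trade-off (a minimal sketch, not taken from Bindel's lecture), the Python fragment below compares a hand-written triple-loop matrix multiply with NumPy, whose operators dispatch to optimized C/BLAS routines; the matrix size and timings are arbitrary.

```python
# Minimal sketch: pure-Python matrix multiply vs. NumPy, whose @ operator
# calls compiled C/BLAS code. Matrix size is arbitrary and illustrative.
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply in pure Python."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

t0 = time.perf_counter()
naive_matmul(a, b)
t1 = time.perf_counter()
a @ b                      # NumPy dispatches to an optimized C/BLAS routine
t2 = time.perf_counter()
print(f"pure Python: {t1 - t0:.2f} s, NumPy/BLAS: {t2 - t1:.4f} s")
```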
Source: Report
Blumenthal, M. (2007). Encryption: Strengths and weaknesses of public-key
cryptography. Villanova, PA: Villanova University.
Public key encryption is a method of ensuring confidentiality in communications
through the exchange of keys. A user who wants to communicate with someone else
encrypts the message with the recipient's public key, and the result can then only be
decrypted with the recipient's private key. Authenticity is also verified with digital
signatures. This paper addresses the weaknesses of key exchange and presents solutions
for improving confidentiality. The most glaring weakness of public key encryption is its
vulnerability to a man-in-the-middle attack, along with the possibility of a brute-force
attack. The author concedes that brute-force attacks cannot be prevented, but he proposes
that public keys be split among their intended recipients so that a failure at one point
does not compromise the whole system. This source is helpful because it explains the
nature and weaknesses of public key encryption, which is used in PIR schemes.
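For reference, the toy Python sketch below walks through textbook RSA with tiny primes to show the encrypt-with-public-key, decrypt-with-private-key pattern and the reversed roles used for signatures; it is purely illustrative, is not drawn from Blumenthal's paper, and is far too small to be secure.

```python
# Toy textbook RSA with tiny primes -- illustration only, not secure.
# Anyone can encrypt with the public key (n, e); only the holder of the
# private exponent d can decrypt.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse)

message = 42
ciphertext = pow(message, e, n)    # sender uses the recipient's public key
recovered = pow(ciphertext, d, n)  # recipient uses the private key
assert recovered == message

# A digital signature reverses the roles: sign with the private key,
# verify with the public key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```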
Source: Report
Cullinan, C., Wyant, C., & Frattesi, T. (2010, June). Computing performance benchmarks
among CPU, GPU, and FPGA. MathWorks.
This report seeks to benchmark the performance of three different computational
processors: CPUs, GPUs, and Field-Programmable Gate Arrays (FPGAs). These three
processors have different uses and specializations, and the authors devised tests spanning
a variety of computations that stress both computational speed and communication cost.
Their experiments determined that GPUs, when optimized correctly, have the fastest
execution time of the three processors; that CPUs have the best overall performance once
transfer time is taken into account; and that FPGAs execute fastest on fixed algorithms
that can take advantage of streaming. This article, while slightly outdated, offers insight
into the use of other hardware such as FPGAs as well as the overall uses of each
processor.
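To make the transfer-time point concrete, here is a minimal sketch with made-up timings (not the report's measurements) showing how the processor with the fastest kernel can still lose once the cost of moving data to it is counted.

```python
# Hypothetical timings, not the report's data: including transfer time
# can change which processor "wins" a benchmark.
def total_time(transfer_s, compute_s):
    """Wall-clock cost of one offloaded job: move the data, then compute."""
    return transfer_s + compute_s

cpu = total_time(transfer_s=0.0,   compute_s=0.050)  # data already in main memory
gpu = total_time(transfer_s=0.060, compute_s=0.005)  # fast kernel, slow PCIe copy

print(f"CPU total: {cpu * 1e3:.0f} ms, GPU total: {gpu * 1e3:.0f} ms")
# The GPU kernel alone is 10x faster, yet the copy makes its total slower;
# with larger workloads the transfer is amortized and the GPU pulls ahead.
```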
Source: Report
Gasarch, W. (2004). A survey on private information retrieval. University of Maryland at
College Park.
In this paper, Gasarch builds upon classic Private Information Retrieval (PIR)
knowledge with a survey of the performance of various theoretical PIR schemes. Most
theoretical PIR schemes assume an idealized scenario in which the database exists in
only one copy and is bounded in its computational ability. Gasarch searches for PIR
models that allow no error, avoid these assumptions, and cost less than the trivial
solution of simply downloading the entire database. Through his survey, Gasarch
develops theorems on the behavior of realistic PIR schemes and bounds on their
computational cost. This article is very old, but the information given through Gasarch's
research is valuable to my research in PIR.
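To show the flavor of the schemes such a survey covers, the sketch below implements the classic two-server XOR-based PIR idea in Python; the database contents are made up, and this is only an illustration of the general technique, not a scheme taken from Gasarch's paper.

```python
# Toy two-server information-theoretic PIR (illustrative, not from the paper).
# Each server holds a full copy of the database and sees only a random-looking
# subset of indices, so neither learns which record the client wants.
import secrets

database = [5, 17, 42, 99, 7, 31, 8, 64]   # both servers hold this
n = len(database)

def server_answer(indices):
    """XOR together the requested records (what each server computes)."""
    result = 0
    for i in indices:
        result ^= database[i]
    return result

def query(wanted):
    # Client: pick a uniformly random subset S, send S to server 1 and
    # S with the wanted index toggled to server 2.
    s1 = {i for i in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {wanted}
    a1 = server_answer(s1)
    a2 = server_answer(s2)
    return a1 ^ a2              # everything cancels except database[wanted]

assert query(3) == database[3]
```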
Source: Report
Gentry, C. (2009). Fully homomorphic encryption using ideal lattices. Stanford
University.
In this paper, Gentry proposes a fully homomorphic encryption scheme, a
cryptographic construction that can evaluate circuits on encrypted data without
decrypting it. He describes a bootstrappable scheme, one that can homomorphically
evaluate its own decryption circuit without the data ever being decrypted. This
bootstrapping forms the basis of his solution, which is combined with a cryptosystem
based on ideal lattices, whose decryption circuit has low complexity. His final
construction squashes the decryption circuit to reduce its depth, yielding a scheme that is
bootstrappable and able to evaluate circuits of greater depth than its own decryption
circuit; repeating this refresh step allows deeper circuits to be handled. The scheme has
both additive and multiplicative homomorphism, providing extensive use, and this
method of building a homomorphic scheme can be applied to any algorithm of relatively
low complexity. This source, while old, serves as a basis for knowledge of
homomorphism that can be used to understand later sources.
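As a small aside on what "homomorphism" buys you, the sketch below uses plain textbook RSA, which is only multiplicatively homomorphic, to show an operation on ciphertexts carrying over to the plaintexts; this is not Gentry's lattice scheme, just an illustration of the property his construction extends to both addition and multiplication.

```python
# Illustration of the homomorphic property itself (plain textbook RSA,
# which is only multiplicatively homomorphic -- not Gentry's lattice scheme).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 6, 7
# Multiplying ciphertexts multiplies the underlying plaintexts:
assert dec(enc(a) * enc(b) % n) == (a * b) % n
# A fully homomorphic scheme extends this so that both addition and
# multiplication -- and hence any circuit -- can be evaluated under encryption.
```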
Source: Report
Jang, K., Han, S., Han, S., Moon, S., & Park, K. (2011, March). SSLShader: Cheap SSL
acceleration with commodity processors. Boston, MA.
The Secure Sockets Layer (SSL) protocol is one of the most widely accepted forms of
secure connection, preventing eavesdroppers from gathering data, but much of the
Internet has been slow to adopt it because of its high computational cost. In this report,
the authors seek a way to harness the power of graphics processing units (GPUs) to
accelerate the processing and allow more sites to offer SSL transactions at lower cost.
Through this research they built SSLShader, a GPU-based SSL proxy meant to handle
large SSL workloads efficiently. The paper contributes several results. First, the authors
implement the RSA, AES, and SHA-1 cryptographic algorithms on GPUs, with RSA
showing the largest gains in throughput from GPU offloading. Second, they create a
method for offloading specific functions from the CPU to the GPU based on the current
workload, keeping each operation on the processor where it runs most efficiently.
Finally, they evaluate SSLShader and its offloading algorithm. This source is helpful
because it describes a thorough method of optimizing code structure for
GPU acceleration.
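The workload-based offloading idea can be pictured with the short Python sketch below; the queue threshold and worker functions are hypothetical, and this is not SSLShader's actual dispatch logic.

```python
# Minimal sketch of load-based offloading (not SSLShader's actual algorithm;
# the threshold and worker functions are hypothetical). Small batches stay
# on the CPU to avoid transfer overhead; large backlogs are shipped to the
# GPU, where batching amortizes the copy cost.
GPU_BATCH_THRESHOLD = 64   # hypothetical tuning parameter

def process_crypto_requests(queue, cpu_worker, gpu_worker):
    """Dispatch pending requests to the CPU or GPU based on queue length."""
    if len(queue) >= GPU_BATCH_THRESHOLD:
        # Large backlog: one bulk transfer amortizes the PCIe cost.
        return gpu_worker(queue)
    # Light load: per-request CPU latency beats a GPU round trip.
    return [cpu_worker(req) for req in queue]

# Example: 10 pending requests stay on the CPU path.
fake_queue = list(range(10))
print(process_crypto_requests(fake_queue, cpu_worker=lambda r: r, gpu_worker=lambda q: q))
```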
Source: Report
Kalia, A., Zhou, D., Kaminsky, M., & Andersen, D. G. (2015). Raising the bar for using
GPUs in software packet processing.
In this paper, the authors attempt to resolve issues in GPU acceleration that cause
memory latency and slow overall processing. Their analysis of GPU operation also
highlights the communication cost of moving data from the network to the GPU, as well
as the limited random access memory (RAM) that network processing often requires.
They argue that CPUs can be made to hide this memory latency and perform such
operations more efficiently than GPU implementations. They created and tested G-Opt, a
code-optimization tool that accelerates CPU code by hiding the memory latencies that
GPU frameworks like OpenCL and CUDA are typically used to mask. In testing G-Opt
against modern GPU-accelerated code, they found that their CPU-based code, with
memory latency hidden semi-automatically, is competitive with the GPU versions. This
source is helpful because it shows that GPU acceleration can cost more in networking
situations.
Source: Report
Lee, V., & Kim, C. (2010, June). Debunking the 100x GPU vs. CPU myth: An evaluation
of throughput computing on CPU and GPU.
This paper seeks to test the throughput of many different application kernels on
CPUs and GPUs in order to reach several conclusions. First, the authors test the myth
that GPUs offer 100x or even 1000x the throughput of CPUs of comparable quality; after
optimizing both platforms, the real gap comes out to roughly 2.5x on average, far closer
than the myth suggests. They also seek the proper optimization techniques for different
kernels on each platform, as well as the key architectural differences between the two
that limit throughput. Fourteen different kernels are tested, and the authors find that the
large GPU speedups claimed by some can only be achieved with algorithms well suited
to GPU acceleration, and that CPUs remain viable options for high-throughput
computing. This report is useful because it presents an analysis of the gains and losses of
GPU acceleration, especially with cryptographic operations.
Source: Report
Melchor, C., & Crespin, B. (2008, September). High-speed private information retrieval
computation on GPU. Université de Limoges.

In this paper, authors Carlos Aguilar Melchor and Benoit Crespin discuss the use
of graphics processing units (GPUs) in the computations needed for private information
retrieval (PIR). PIR schemes can require large computations with numbers over 1,000
bits long, so more efficient processing is required to compute these algorithms in a timely
manner. GPUs have thousands of cores, and each core can be instructed to perform a
single chunk of a large computation. Using GPU acceleration, Melchor and Crespin
implemented a fast single-database PIR scheme that efficiently uses the structure of a
GPU and tested their algorithm against other common schemes, comparing time elapsed,
cost, and throughput. This source is useful because it outlines an implementation of a PIR
scheme and the process of optimizing GPU acceleration.
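The reason this workload parallelizes so well can be seen in the NumPy sketch below, which splits a toy server-side response computation into independent chunks, the kind of unit a GPU core would handle; the small integers stand in for the 1,000-plus-bit values a real PIR scheme uses, and the code is not the authors' actual implementation.

```python
# Toy decomposition of a PIR-style server computation into independent
# chunks (illustrative only -- not Aguilar Melchor and Crespin's scheme;
# small integers stand in for the >1,000-bit values used in practice).
import numpy as np

num_records, record_words = 1024, 64
database = np.random.randint(0, 2**15, size=(num_records, record_words), dtype=np.int64)
query = np.random.randint(0, 2**15, size=num_records, dtype=np.int64)

def chunk_response(rows, weights):
    """Partial response over one slice of the database: one core's work."""
    return (rows * weights[:, None]).sum(axis=0)

chunk = 128
partials = [chunk_response(database[i:i + chunk], query[i:i + chunk])
            for i in range(0, num_records, chunk)]
response = np.sum(partials, axis=0)   # combine the independent partial results

# Same answer as doing everything in one pass -- which is why the work
# splits cleanly across thousands of GPU cores.
assert np.array_equal(response, (database * query[:, None]).sum(axis=0))
```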
Source: Report
Prechelt, L. (2010, March). An empirical comparison of C, C++, Java, Perl, Python,
Rexx, and Tcl for a search/string-processing program. Karlsruhe, Germany: Universität
Karlsruhe.
This report compiles 80 implementations of the same program specification, written by
74 programmers in several languages, in order to compare the languages across
independent implementations. It describes the origin of each of the languages and their
usual uses. The author concedes that the experimental setup is not precise enough to
validate small differences between languages, but significant differences could be
validated from the data gathered. From this empirical comparison, the author found that
scripting languages like Python and Rexx were much more efficient for completing the
task, requiring considerably less development time. These languages tend to be easier to
write for beginners and experienced programmers alike and are only slightly slower than
more traditional languages like C or Java. This source is helpful because it compares a
wide variety of languages with a large and diverse data set.
Source: Report
Wei, L. (2013). Privacy-preserving regular expression evaluation on encrypted data.
Chapel Hill, NC: University of North Carolina.
This paper introduces a family of search protocols that allow a client, once
authorized by the server, to evaluate regular expressions over an encrypted file. This lets
data stored online remain secure and private while still permitting users to search it
privately. The protocol is then extended in two directions: one allows the client to detect
misdirection by the server, and the other allows resource-constrained mobile users to run
a modified client that hands the intensive computation and communication of the regular
client to a partially trusted proxy server. This mobile adaptation is useful, but it costs
some security, since the proxy server must be trusted to maintain confidentiality while
handling the request and its results. This source is useful because it shows the creation
and optimization of client-side searching of encrypted data with regular expressions.

Source: Report
Zhao, L., Iyer, R., Makineni, S., & Bhuyan, L. (2005). Anatomy and performance of SSL
processing. University of California.
The Secure Sockets Layer (SSL) protocol allows information to be transferred securely
over the Internet; it is used extensively for business transactions and banking, and
sometimes in personal browsing as well. Though it is secure, SSL comes at a
performance cost that depends on the complexity of the algorithms used. This paper
gives a detailed description of a secure session using SSL, analyzing the time and
performance costs at each step. It also analyzes many common algorithms and their
respective time costs. The authors found that roughly 70% of the time in an HTTPS
transaction is spent in SSL processing, a far larger share than any other part of the
exchange. They also found that most of the time lost in SSL comes from the exchange of
small amounts of data, whose comparatively large encrypted size slows communication.
This source is old, but SSL is still widely used, as are most of the algorithms examined,
so this report is useful for understanding the time costs of secure communication.
