
Bryce Plunkett

October 29, 2016


Period 6
Annotated Source List
Accelerating web applications with OpenCL. (2013). Retrieved October 5, 2016, from IBM
Developer Works website: http://www.ibm.com/developerworks/library/wa-opencl/
The introduction of the article explores the history of WebCL (Web Computing Language), the browser binding of OpenCL (Open Computing Language). The language's draft, created
in 2012 by the Khronos Group, aimed to accelerate browser computing power by utilizing the
graphics processing unit (GPU) and multiple processing cores. The rest of the article provides
examples of use and implementation and a pseudo-tutorial. The article's principal section,
"Writing WebCL applications," first provides a basic graphic that describes the operation of
OpenCL: The host creates kernels; sends them to the device (the computer running the browser),
which executes the kernels using its hardware; then the device sends data to the host system.
Soon after this graphic is another graphic, a table of classes and functions with pertinent
information regarding each. The article then goes into more detail with each paragraph.
For instance, it describes kernels, then types of kernels, then operators and
functions that can be used on said kernels, and ultimately examples of implementation of kernels
(including code). Surprisingly, the conclusion of the article discusses issues with WebCL,
including persistent freezes.
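To make this host-to-device flow concrete for myself, here is a minimal sketch (my own code, not the article's), assuming a browser that exposes the draft webcl global described in the specification; the kernel, buffer, and variable names are placeholders.

    // Host side: compile a kernel, send it and the data to the device,
    // let the device execute it, then read the results back.
    var src =
      "__kernel void square(__global float* data, uint n) {\n" +
      "  uint i = get_global_id(0);\n" +
      "  if (i < n) data[i] = data[i] * data[i];\n" +
      "}";
    var n = 1024;
    var data = new Float32Array(n);
    for (var i = 0; i < n; i++) data[i] = i;

    var ctx = webcl.createContext();                      // host-side context
    var device = ctx.getInfo(WebCL.CONTEXT_DEVICES)[0];   // the compute device
    var program = ctx.createProgram(src);
    program.build([device]);                              // compile for the device
    var kernel = program.createKernel("square");
    var buf = ctx.createBuffer(WebCL.MEM_READ_WRITE, data.byteLength);
    var queue = ctx.createCommandQueue(device);

    queue.enqueueWriteBuffer(buf, false, 0, data.byteLength, data); // host -> device
    kernel.setArg(0, buf);
    kernel.setArg(1, new Uint32Array([n]));
    queue.enqueueNDRangeKernel(kernel, 1, null, [n], null);         // device executes
    queue.enqueueReadBuffer(buf, true, 0, data.byteLength, data);   // device -> host
    queue.finish();
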
The article, which starts as an overview and evolves into a tutorial, will be an important
resource for my research regarding OpenCL and coding at my internship. Understanding how the
language operates, using kernels and harnessing the power of a computer's hardware, allows me
to focus my research. In addition, the discussion of various functions and classes enables me to
find information pertinent to my research. For example, the IWebCLContext class is critical to
managing command queues and objects. The conclusion of the article also contributes to my
research because it provides examples of when OpenCL can go wrong. For example, the
execution of kernels can create infinite loops, creating GPU errors (forcing the user to restart his
or her computer).
Adamchik, V. (2005). Graph Theory.
The report, written by a professor at Carnegie Mellon University, opens by analyzing
the uses of graphs. For instance, he writes that graphs can be used to schedule finals while
considering a number of factors: courses, students, and rooms. Then, he discusses the impetus for
the creation of graph theory, the Konigsberg bridge problem, which asks whether one can walk
through the town crossing each of its bridges exactly once. He then includes and defines a number of
important graph-theory terms: the web, networks, programs, edges, nodes (vertices), simple graphs,
multigraphs, and pseudographs. The latter three terms describe types of graphs, and the author
provides graphics and a text description for each one. He continues to do this for the next three
pages with increasingly complex terminology and equations, such as the Handshaking
theorem and spanning tree. The report ends with a series of famous problems on graphs.
The report is an excellent resource in a number of ways. The included graph terminology
has thorough definitions as well as graphics. Thus, it is easy to understand and will make future
research (when I encounter such terminology) easier. Additionally, I will know which graph
model to use for my research project. And because it covers equations and theorems, I will know
how to properly analyze graphs and their vertices. For instance, he lists methods of calculating the
number of vertices, degrees of vertices, and number of edges. He also lists methods of
representing and analyzing graphs in code and text (graphs in table form), a useful skill if a
research paper or report doesn't actually graph the data. The graph-theory problems provided at
the end of the report aren't critical to my research, but they are fun to ponder. Ultimately, the
report will prove critical to my research because it extensively covers graph theory.
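For my own notes, here is a small sketch of the table-form (adjacency-list) representation the report describes, along with a check of the Handshaking theorem, which says the sum of the vertex degrees equals twice the number of edges; the graph itself is made up.

    // A small undirected graph stored as an adjacency list ("table form"):
    // each vertex maps to the list of its neighbors.
    var graph = {
      a: ["b", "c"],
      b: ["a", "c"],
      c: ["a", "b", "d"],
      d: ["c"]
    };

    var degreeSum = 0;
    var edgeCount = 0;
    for (var v in graph) {
      degreeSum += graph[v].length;            // degree of v = number of neighbors
      graph[v].forEach(function (u) {
        if (v < u) edgeCount += 1;             // count each edge {v, u} only once
      });
    }
    console.log(degreeSum === 2 * edgeCount);  // Handshaking theorem: always true
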
Agrawal, D., Bernstein, P., Bertino, E., Davidson, S., Dayal, U., Franklin, M., . . . Widom, J.
(2012). Challenges and Opportunities with Big Data: A white paper prepared for the
Computing Community Consortium committee of the Computing Research Association.
http://cra.org/ccc/resources/ccc-led-whitepapers/
The paper, as the name implies, examines the challenges and opportunities with Big Data.
The paper starts by introducing issues with Big Data: scale, complexity, timeliness, and privacy.
For example, it writes that data analysis constrains the use of Big Data because of the data's
complexity. Nevertheless, it states, Big Data has created a multi-billion dollar industry because it
has the potential to spur the growth of many industries, ranging from business to science. It goes
on to provide more-detailed examples of areas Big Data can innovate. It could be used to
measure energy-use patterns, and with that information, a company or house could optimize
energy-savings. The paper also references numerous studies that predict how the big data
industry will expand. One of these studies states that 140,000 people will be needed in the US for
Big Data analytics. Soon thereafter, the paper examines the phases of Big Data analysis
(acquisition, extraction, integration, modeling, and interpretation), covering the potential and
issues of each phase. Ultimately, the paper concludes by stating that to reach Big Data's full
potential, the industry needs to overcome many complex, technical issues.
The paper's discussion of Big Data's potential motivates me to do further research. It also
makes me seriously consider this as a career. I enjoy what I am doing, and knowing the industry
is booming furthers my interest in the field. The technical issues of Big Data that the paper
describes make me reflect on what I have read and done with Big Data analysis. For example,
the paper says the scale (the size) of the data sets presents challenges with data analysis. This is a
major stimulus for my research of WebGL because the API accelerates the graphing process.
Therefore, the paper is a critical and thought-provoking resource for both my research project
and research for a potential career.
Barney. (2016). Introduction to Parallel Computing.
This report on parallel computing presents ideas in an easy-to-understand, lesson-like
manner. The report first covers serial computing, where instructions are executed sequentially
and on a single processor. Then, it introduces the basics of parallel computing, stating that a
problem is broken down into discrete parts that can be solved concurrently and that instructions
from each part execute simultaneously on different processors. The report also provides the three
requirements for parallel computing. It also provides examples of where parallel computing can
often be seen today. For example, it states that most hardware, such as CPUs and
supercomputers, contains multiple processing components that enable it to run many computations
at once. It then gives numerous reasons why parallel computing should be used. For instance, it
saves time and money. After providing the reasons why to use parallel computing, the article
discusses complex terminology regarding parallel computing. Most of the terminology can only
be applied to hardware (CPUs, supercomputers), but some can be applied to software as well.
Soon thereafter, the report switches its focus from physical (hardware) parallel computing to
software parallel computing. Some of the concepts it covers include challenges of parallel
computing in software, parallel programming models, partitioning, and debugging, and it closes
with a references section. It also mentions a variety of parallel computing APIs, but not OpenCL.
The report is extremely useful. It explains difficult concepts in an easy-to-understand
way. Additionally, it provides terminology that is relevant to my project, such as "speed-up,"
which is a major component of evaluating the effectiveness of WebCL. It also provides a number
of different ways to implement parallel computing and partition processes, both critical tasks
when implementing WebCL, and the references section gives me potential sources. On the other
hand, irrelevant information, such as CPU and supercomputer architecture, fills a portion of the
report.
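Since speed-up is the measure I will lean on most, here is the simple way I plan to compute it; renderSerial and renderWebCL are hypothetical stand-ins for the two code paths being compared.

    // Speed-up = serial run time / parallel run time (> 1 means the parallel path wins).
    function timeIt(fn) {
      var start = Date.now();
      fn();
      return Date.now() - start;            // elapsed milliseconds
    }

    var serialMs = timeIt(renderSerial);    // hypothetical serial graphing routine
    var parallelMs = timeIt(renderWebCL);   // hypothetical WebCL-backed routine
    console.log("speed-up: " + (serialMs / parallelMs).toFixed(2) + "x");
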
Bryant R. E., Katz R. H., & Lazowska E. D. (2008). Big-Data Computing: Creating
revolutionary breakthroughs in commerce, science, and society: A white paper prepared
for the Computing Community Consortium committee of the Computing Research
Association. http://cra.org/ccc/resources/ccc-led-whitepapers/
The paper, published by the Computing Research Association, begins by describing
areas where one can see big data emerging. For example, it references a proposed telescope in
Chile that would generate thirty trillion bytes of data every day. It also references how medical
scanners, such as MRIs, generate huge amounts of data. Then, it enumerates three primary
reasons why big data is flourishing: advances in sensors, the creation of computer networks
such as the internet, and increases in the amount of data storage. The growing importance of
big data has in turn driven innovation in data storage (both an impetus and an effect),
data analysis techniques, and security and privacy, because there now are enormous amounts of
data all in one place. It then states that large, internet-based companies, including Google and
Amazon, are the ones leading the big-data industry. In contrast, it says that university researchers
and government agencies are not, because of constraining budgets. The paper concludes by
recommending specific investments and actions in big data to further stimulate innovation.
For example, it recommends investing in high-performance computing sites and renewing the
role of the Defense Advanced Research Projects Agency (DARPA) in the industry.
The paper is interesting, insightful, and useful. Big data inherently plays a major role in
my research project, which focuses on big data analytics and visualizations. It's important to
know why I am researching what I am, so when the paper lists the reasons for the emergence of
big data, I find the information extremely useful. I am also more motivated to research big data
analytics because the paper discusses how big data spurs innovation and plays a major role in
countless unique industries, ranging from space exploration to the internet. The paper's
recommendations, both intriguing and important to the industry, give me insight into the
industry's future.

A Comparative Study of Native and Web Technologies. (2014). McGill University.


The research study, conducted by a research group at McGill University, had two
principal questions: Does WebCL provide a performance improvement versus sequential
JavaScript? And does WebCL provide performance improvements for JavaScript which are
congruent with the performance improvements of OpenCL versus C? The paper opens with a
summary for each programming language and benchmark utility being used. For instance, it
states that OpenCL, developed by the Khronos group, allows software (downloaded on a
computer) to easily take advantage of multiple cores and the graphics card. In addition, it states
that WebCL is OpenCL that has been modified to make it secure on the web. The introduction,
which is extremely thorough, ends after ten pages. Two of the benchmarks related to graph theory
and graphing. After discussing methodology for many pages, the study reveals and
evaluates the results. Interestingly, WebCL performed worst on the two graphing-related
benchmarks and was slower than JavaScript for one of the two. Further, WebCL's speedups over
JavaScript, compared to OpenCL's speedups over C, were significantly lower; the paper then
analyzes why.
This paper may prove critical to my research. Not only is its methodology and
documentation thorough, but it also answers two questions paramount to my research: Is
WebCL faster than JavaScript? And is WebCL comparable (in terms of performance) to
OpenCL? And because it uses a variety of benchmarks, I know that WebCL performs worst
when graphing networks, which is admittedly a little saddening considering that my project revolves
around using WebCL to accelerate graphing. Ultimately, I can now focus my research on why
WebCL performs worst when graphing because the paper addresses the specific functions
causing the slow-downs.
[Computing Research Association]. (2015). Retrieved October 2, 2016, from http://cra.org/
The Computing Research Association's (CRA) mission is to enhance innovation by
joining with industry, government, and academia to strengthen research and advance education
in computing. Thus, it directs a significant amount of its resources to lobbying policy makers to
get the government to fund Research and Development projects; its website has a section
dedicated to the plethora of policies it supports. It also advertises and hosts a variety of
computing related events, from research lectures to conventions for diversifying the industry.
Sadly, most of these events require one to be at least a graduate student to attend. Another
section of the associations website, the bulletin (essentially a blog), shares news, timely
information about CRA initiatives, and items of interest to the general community. The posts
on the bulletin include biographies and discussions of computer science policies and research.
Other interesting functions of the CRA include publishing studies about the computer science
industry (not studies about computing research) and listing jobs.
The CRA certainly provides a diverse range of tools and functions for professional
computer scientists. The association can help them find jobs, enhance their knowledge, get
funding (indirectly), and learn about their industry. Sadly, many of these services are directed
towards computer scientists with at least a graduate degree. I don't need funding or a job, and I
can't attend a multitude of the lectures because I'm in high school. Nevertheless, the bulletin (the
blog) provides summaries of these lectures that can help me learn the basics of a concept and
find a possible field of interest, such as cyber security. Ultimately, the CRA provides a variety of
resources, but because I am a high schooler, it only provides me with basic background
information.
Congote, Segura, Kabongo, & Ruiz. (2011). Interactive visualization of volumetric data with
WebGL in real-time.
Many scientific fields need to render large amounts of irregular, volumetric data into
three dimensional models. For instance, MRIs produce massive amounts of three dimensional
data. Traditionally, the rendering has been done on expensive, proprietary workstations, and few
solutions exist for rendering on desktop computers. The authors of the paper aim to create an
online, interactive, volumetric data renderer for standard computers through the
implementation of WebGL, an API that uses the graphics card to accelerate
graphics. Then, the authors discuss current methods for rendering online, three-dimensional,
interactive graphics, such as VRML and X3D; the paper says that most of the current methods
lack portability, requiring browser plug-ins or specific operating systems, and do not utilize
systems' graphics cards. The authors hypothesize that WebGL might be the solution because it is highly
portable and utilizes the graphics card. Subsequently, the authors examine (in surprisingly great
detail) how they implemented WebGL and the volumetric data rendering algorithms they created
and modified for it. Soon thereafter, the authors test their online WebGL renderer by
rendering a large, three-dimensional medical scan on different browsers; however, they did not
compare it to traditional rendering methods. Ultimately, the authors found their WebGL renderer
to be relatively portable.
The research paper has limited use, though there certainly was no dearth of information.
My research project revolves around accelerating two dimensional graphics rendering, while the
research paper revolves around accelerating (and making more portable) three dimensional
graphics rendering. Thus, a majority of the three dimensional rendering algorithms discussed are
not pertinent. Further, the paper never compares the performance of its WebGL renderer to the
performance of other renderers (that do not use WebGL), so it never answers one of my critical
research questions: How much does WebGL accelerate computations? On the other hand, the
paper references a plethora of papers on online data rendering; a number of these papers are
wonderful sources. For example, a paper written by Behr (cited by the authors) reviews and
compares the performances of a multitude of online graphics renderers. Additionally, the paper
does answer my research question, Is WebGL portable?, a question critical to my
analysis of WebGL.
Cross-Platform OpenCL Code and Performance Portability for CPU and GPU Architectures
Investigated with a Climate and Weather Physics Model. (2012).
The goal of the research paper is to assess OpenCL portability, specifically performance
portability, using a climate and weather physics model. In other words, the paper attempts to
assess how the same code performs on different hardware configurations: How great an effect
does the hardware configuration have on the performance of code? Whether one uses the CPU,
the GPU, or both, as well as the brands of the components, determines the hardware configuration
and affects portability. For example, one critical question regarding portability is whether one's
code will perform best using the GPU or the CPU. The authors first introduce their claim, then address other
works which researched the question. For instance, one paper they reference found that CPUs
and GPUs have nearly equivalent speed-ups (OpenCL performance compared to serial
performance on the same piece of hardware). Soon thereafter, the authors introduce their
experiment, which uses a climate and weather physics model, known as SOLAR, to benchmark
hardware such as Intel CPUs and Nvidia GPUs. For each test, which varied in the amount of input
data, they benchmarked each piece of hardware twice, once using OpenCL and once using serial
computations, and they calculated speed-up from this information. Ultimately, they found
that speed-up on CPUs was significantly higher than on GPUs, and that among CPUs, there
were relatively large discrepancies among test cases. Thus, they concluded OpenCL has poor
performance portability.
The paper includes numerous future sources I can research. For example, it mentions a
paper that discusses performance gains and portability with parallel computing. Additionally, the
paper addresses one of my project's major questions: What are the major flaws with
OpenCL/WebCL? Since the authors answer the question, I can now further research a major
flaw with OpenCL (which mostly also affects WebCL). Similarly, I now know what might
affect the performance of graphing with OpenCL. But because the paper is relatively old, it uses
an old implementation of OpenCL. Therefore, some of its results might be outdated because
OpenCL has significantly changed since then. Overall, the paper is useful, mostly because it
has many potential sources.
D3 Data-Driven Documents. (2015). Retrieved October 23, 2016, from https://d3js.org/
Like SigmaJS, Data-Driven Documents (D3) is a JavaScript library for data visualization
on web pages. Not only can D3 graph networks, but it can also render tables, plot functions, and
produce other common data visualizations. Further, many of D3's settings can be changed. For example,
one can change the size of nodes and add animations to objects. However, D3 does not
implement APIs to optimize graphing performance; it does not use WebGL or any other
graphics API. Additionally, the website for D3 has a plethora of tutorials and examples of
use. For instance, a New York Times writer implemented it on the newspaper's website to
display movie information.
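As a reminder to myself of how D3 drawing works, here is a tiny sketch (data and sizes invented) that binds an array of nodes to SVG circles; the node size is one of the settings that can be changed.

    // Bind an array of node data to SVG circles: one circle per node.
    var nodes = [
      { x: 40,  y: 40, r: 8  },
      { x: 120, y: 80, r: 12 },
      { x: 200, y: 50, r: 6  }
    ];

    var svg = d3.select("body").append("svg")
      .attr("width", 300)
      .attr("height", 150);

    svg.selectAll("circle")
      .data(nodes)
      .enter().append("circle")
      .attr("cx", function (d) { return d.x; })
      .attr("cy", function (d) { return d.y; })
      .attr("r",  function (d) { return d.r; });  // node size, adjustable per datum
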
D3 is a brilliant graphing resource. It is more versatile than SigmaJS, so it can model data
in many different ways. And like SigmaJS, one can easily modify it to suit one's needs. On the
other hand, D3 does not utilize WebGL, unlike SigmaJS. Therefore, SigmaJS most likely has
higher performance. But this is perfect for my research project; I can compare the performance
of SigmaJS to D3 in order to evaluate WebGL's supposed acceleration.
Dinneen, Khosravani, & Probert. (2013). Using OpenCL for Implementing Simple Parallel
Graph Algorithms.
The research paper first explores the emerging market for GPUs and their potential for
massive calculations. It then introduces OpenCL, described as a generic overlay with the
purpose of providing a common interface for heterogeneous and parallel processing for both
CPU and GPU systems. In short, it is saying that OpenCL enables the programmer to use both
the CPU and GPU and write parallel code without having to code for specific devices and hardware.
Then, the report lists various properties of graphs that make graphing algorithms difficult to
parallelize, such as graph irregularities. In its superficial explanation of these properties, the
report also lists various academic papers that better explain them. For instance, it references a
paper on parallel GPU algorithms for graphs written by Harish and Narayanan, which also
resolves these issues. Then, it explains different methods, using OpenCL, to synchronize parallel
processes on the GPU, such as using atomic operations. In the final section of the research paper,
the authors benchmark three parallel graphing algorithms, two of which used OpenCL and the
syncing methods they suggested. The benchmark involved graphing, and each algorithm was
benchmarked fifty times (each benchmark test was different). They found that the OpenCL
algorithms usually performed better.
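To keep track of the atomic-operation idea, here is an illustrative kernel (my own sketch, not the paper's code) in which each work-item handles one edge and uses atomic_inc so that simultaneous updates to the same node's counter do not collide.

    // Kernel source kept as a JavaScript string, as WebCL/OpenCL host code would hold it.
    var degreeKernelSrc =
      "__kernel void countDegrees(__global const uint* edgeSrc,            \n" +
      "                           __global const uint* edgeDst,            \n" +
      "                           __global uint* degree,                   \n" +
      "                           uint numEdges) {                         \n" +
      "  uint i = get_global_id(0);                                        \n" +
      "  if (i >= numEdges) return;                                        \n" +
      "  atomic_inc(&degree[edgeSrc[i]]); /* safe concurrent increment */  \n" +
      "  atomic_inc(&degree[edgeDst[i]]);                                  \n" +
      "}";
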
The research paper contains a plethora of information that will be useful to my project.
The introduction to OpenCL provides no new information, but the discussion of various methods
to synchronize tasks can be applied to my project. Also, the research report's references to other
papers give me possible future sources. The graphing benchmark of OpenCL directly answers
my question: Does OpenCL accelerate graphing? But it is important to note that all three
algorithms were parallel and heterogeneous, so one conclusion might be that OpenCL was
not the determining factor as much as the methods used to synchronize the processes (which
varied between algorithms).
11 MYTHS About OpenCL. (2016). Electronic Design, 64(8), 29.
The article's somewhat click-bait title is not very representative of its advanced topic
and scientific publisher. The myths it covers are more common questions than myths. The
article states that OpenCL only uses C and C++. The article then discusses how OpenCL can use
either the CPU, the GPU, or both. Similarly, the article states that certain functions run best on the
GPU while others run best on the CPU, but it does not list examples of these functions. The article
then analyzes how one can use OpenCL libraries without actually coding in OpenCL, which can
be seen in SigmaJS, except using WebGL. The article also states that OpenCL, though written in
C, has different coding conventions and techniques. But yet again, it fails to provide examples of
these different conventions. The article concludes by writing that any GPU can take advantage
of OpenCL, one can have any hardware configuration, and OpenCL can be used in embedded
applications.
The article is not very useful. It does not provide examples, and it covers topics
superficially. In addition, the information in the article adds nothing new to my research. For
instance, the Khronos Group website answered the question "Can any GPU take advantage of
OpenCL?" Further, the information that is new to me is completely irrelevant.
For instance, its explanation of OpenCL use in an embedded application does not pertain to my
research, which has no relation to embedded applications.
benefit from this article because of the shallow coverage of topics and the lack of examples and
new information.
Foster, I. (1995). Designing and Building Parallel Programs. Addison-Wesley.
The book, Designing and Building Parallel Programs, has numerous similarities to
Introduction to Parallel Computing (book). Like Introduction to Parallel Computing, the book
first covers trends in parallel computing and reasons for using it. Then, it goes over how one can
break traditional algorithms into tasks that can run concurrently (how one can parallelize
algorithms). For instance, it explains the domain decomposition technique for partitioning tasks.
Unlike Introduction to Parallel Computing, the book rarely lists and analyzes parallelized forms
of common algorithms, such as graphing and sorting algorithms. One of the exceptions to this
rule, the book's case study of Dijkstra's algorithm, a graphing algorithm used to find the
shortest path, dissects and explains how one can parallelize the algorithm. But because of the
book's age, no modern APIs are mentioned. It also discusses how one can evaluate the
effectiveness of parallelization.
Overall, the book Designing and Building Parallel Programs adds little new information
to my research project. It covers parallelization in a more broad-sense, and as a result, most of
the information it presents can be found in the book Introduction to Parallel Computing.
Additionally, Introduction to Parallel Computing covers the few common algorithms the book
(Designing and Building Parallel Programs) actually dissects and analyzes. Moreover, the book
contains large amounts of obsolete information and does not cover implementation of modern,
functional parallelization APIs because of its age. The one section that does add information to
my project, the chapter covering evaluation of parallelization, examines evaluating the
effectiveness of an algorithm via speedup calculation and finding issues that might lower
speedup. I can use the methods mentioned to calculate speedup for various graphing libraries
and deduce what might limit speedup.
Gaster, Howes, Kaeli, Mistry, & Schaa. (2013). Heterogeneous Computing with OpenCL.
Morgan Kaufmann.
The book, Heterogeneous Computing with OpenCL, goes in depth into the
implementation of OpenCL. It assumes that one understands most of the concepts and techniques
of parallelization, and thus, it focuses on implementing OpenCL after one has determined how
one's algorithm can be run in multiple concurrent parts. It covers topics including
"Understanding OpenCL's Concurrency and Execution Model," "Dissecting a CPU/GPU
OpenCL Implementation," "Data Management," multiple case studies, and "OpenCL Profiling
and Debugging." For example, the section "Dissecting a CPU/GPU OpenCL Implementation"
explains how one can use both the cores in a graphics processing unit (GPU) and the CPU to run
computations concurrently. Throughout these chapters, the book also explains various OpenCL
functions and their implementations, as well as OpenCL terminology, such as "kernel," which is one of
the concurrent processes. It also provides graphics and outlines to explain these complex topics.
Because the book provides a comprehensive tutorial of OpenCL implementation, it is a
magnificent resource. The sections that discuss OpenCL vernacular and functions will enable me
to understand other complex OpenCL books and papers. Moreover, the sections that discuss the
implementation of functions and debugging will enable me to understand how OpenCL code
projects work. Furthermore, the book contains up-to-date, reliable information, being published
only three years ago. Ultimately, the book's only major issue is its assumption that one already
knows how to parallelize algorithms and processes; this presumption makes certain chapters and
sections difficult to understand. Another, minor issue is that it only discusses OpenCL, while my
project focuses on WebCL. Since WebCL's foundation is OpenCL, however, the differences are
relatively small.
Introduction to Parallel Computing (2nd ed.). (2003). Addison-Wesley.
The book, Introduction to Parallel Computing, as the name suggests, is in the form of a
textbook. At the start, it lists reasons why one should use parallel computing, such as cost
savings, and the issues associated with using it. Then, many of the chapters discuss how certain
algorithms can be made parallel. For instance, there is an entire chapter dedicated to
parallelizing sorting algorithms, and another chapter dedicated to parallelizing graphing
algorithms. The latter chapter, on parallelizing graphing algorithms, has the most relevance to me.
It covers traditional, serial methods of graphing; then, it covers the parallelized forms of these
graphing algorithms. For example, the book gives a graphic and description for Dijkstra's serial,
single-source shortest-path algorithm, and a few pages later it gives a graphic and description of the
parallelized form of Dijkstra's shortest-path algorithm. Because few parallel computing APIs
existed at the time the book was written, many of the chapters are dedicated to solving issues that
modern APIs, such as OpenCL, resolve.
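The part of Dijkstra's algorithm that the book parallelizes is easiest for me to see in the relaxation step, sketched below (my own simplification, not the book's pseudocode): every unvisited vertex can be relaxed independently once the newly settled vertex u is known, which is what the parallel formulation distributes across processes.

    // One relaxation step of Dijkstra's single-source shortest-path algorithm.
    // dist[v] = best known distance to v; w[u][v] = edge weight (Infinity if no edge).
    function relax(u, dist, w, visited) {
      for (var v = 0; v < dist.length; v++) {
        // Each v is independent of the others, so in the parallel form every
        // process relaxes only the block of vertices it owns.
        if (!visited[v] && dist[u] + w[u][v] < dist[v]) {
          dist[v] = dist[u] + w[u][v];
        }
      }
    }
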
The Introduction to Parallel Computing covers many important algorithms related to
coding for parallel computing. Many of these algorithms, such as the sorting and graphing
algorithms, can be applied to my research project, which involves using WebCL and
parallelization to accelerate the graphing process. Further, the graphics, descriptions, and
pseudocode provided make some of the complex parallelization topics easier to understand.
But because the book is about thirteen years old, it does not cover use of APIs that make
parallelization significantly easier, such as OpenCL and WebCL. In addition, many of the book's
pages analyze issues, such as coding for certain system architectures, that modern APIs
resolve. Ultimately, the book is a valuable resource because of its discussion of graphing
algorithms, part of the foundation of my project.
Jääskeläinen, de La Lama, Schnetter, Raiskila, Takala, & Berg. (2015). pocl: A Performance-Portable OpenCL Implementation. International Journal of Parallel Programming.
The report begins by discussing the basics of OpenCL, such as the behaviors of its
kernels and the parallelism. Then it discusses OpenCL's primary issue, portability. Most
OpenCL implementations, such as Samsung's, only work on specific platforms or perform best
on certain platforms; some OpenCL implementations may require, or are best optimized for, a
certain variety of phone, operating system, or graphics card. Similarly, because most OpenCL
implementations are low-level, the programmer will need to optimize his code for a variety of
systems. So to get an OpenCL-based program to function best on a majority of platforms and
systems, the programmer will have to change and optimize his or her code for each OpenCL
implementation and system. These changes to the code and optimizations are both time-consuming
and require the programmer to know the intimacies of the implementations and
systems. The authors of the paper then go on to propose their own implementation of OpenCL,
called pocl, which addresses the issues of portability.
higher-level functions to OpenCL to augment the ease of use. For instance, one function they
added, Bufalloc, aims to optimize the allocation of large continuous buffers. Additionally,
for each new function, they examine the basics of how it works. Similarly, they analyze the
kernel compiler they created, using significant amounts of complex, technical language. This
discussion of the inner-workings of their implementation, which includes algorithms and proofs,
continues for the next twenty pages of the report. Ultimately, the authors' report ends with a
comparison of pocl to other OpenCL implementations; they found pocl's performance to be
equivalent to, and sometimes better than, the performance of popular implementations.
The report is somewhat useful. The examination of issues with traditional OpenCL
implementations, both thorough and comprehensible, can easily be applied to my research
project (specifically, what is wrong with OpenCL and WebCL). Furthermore, the report will
allow me to expand my research. For example, I can pursue pocl and find other research
papers that discuss it. But the complex, technical explanations of pocl, admittedly beneficial
to some, made the paper difficult to understand. Moreover, the authors were inherently biased
when writing about pocl because they created the implementation.
JavaScript. (2015). Retrieved October 16, 2016, from Mozilla Developer Network website:
https://developer.mozilla.org/
The webpage discusses many JavaScript concepts and ideas. The website has many pages
dedicated to a complex, comprehensive JavaScript tutorial. The tutorial also has multiple
projects associated with it, ranging from beginner to advanced difficulties. Furthermore, the
Reference section lists hundreds of objects, expressions, operators, statements, declarations,
and functions. Each item listed has a basic definition associated with it. There is also a section
known as Tools and Resources which lists websites for answering JavaScript related
questions; tools for testing JavaScript code, such as Node.js, which I currently use; and tools for
compiling JavaScript, such as JSFiddle. Other useful tools include ones used for debugging and
code sharing. For example, JS Bin can be used to collaborate with other coders, such as my
mentor.
The webpage, with its diversity of resources, will greatly help me with my research
project. The variety of tutorials can help me with coding JavaScript for the WebCL API. The
tutorial's projects aren't directly helpful to my research, but I can complete them to improve my
JavaScript skills. Similarly, I can reference the lists of JavaScript items (objects, expressions,
operators, statements, declarations, and functions) when coding JavaScript or interpreting
others' code. The Tools and Resources section also is helpful. For example, I already utilize Node.js
when coding for WebCL, and I'll start using the JSFiddle tool for debugging. Ultimately, the
JavaScript webpage is extremely useful because the WebCL API and related libraries are all
written in JavaScript.
Johns Hopkins University Applied Physics Laboratory. (2016). Retrieved October 2, 2016, from
http://www.jhuapl.edu/
Divided into five subsections, About, Mission Areas, Careers, News &
Publications, and Education, the Johns Hopkins Applied Physics Laboratory (APL) website
contains a variety of diverse information. The About section lists background information
about the laboratory, such as that it's a non-profit and that it aims to provide innovation in the
national security sector. The Mission Areas section gives overviews of most APL programs
and facilities. For instance, one page discusses APL's sea control program and its mission goals,
which primarily involve submarine warfare and unconventional littoral warfare (littoral: near the
coast, in shallow waters). The News & Publications section, as the name implies, contains
articles and publications regarding APL innovations. The articles are concise, while the
publications are extremely detailed and lengthy. The Education section has multiple pages
dedicated to ASPIRE and other STEM programs, and one page includes a thorough list of
outside resources I can use if I need help learning a concept.
The APL website, with its diversity of information, is extremely useful. The mission
goals, found in the About and Mission Areas sections, inform me of the intentions of my
internship provider and research. Furthermore, I can look at the Mission Areas to find other
projects I possibly want to intern in. The articles and publications on the website possess little
information relevant to my research, but I can use them to find other fields of interest and
potential human sources. Similarly, I can use the outside resources listed under Education to
find sources and help with a difficult concept. In short, the APL website contains potential
sources (including human sources), descriptions of possible fields of interest, intentions of my
internship provider, and resources if I need assistance.
[The Khronos Group]. (2016). Retrieved October 12, 2016, from https://www.khronos.org/
The Khronos Group, a non-profit organization that creates royalty-free Application
Program Interfaces (APIs), authored both OpenCL and WebCL among other APIs. The website
allows one to sign up for a multitude of international OpenCL and WebCL workshops. One
extremely useful page on the website provides a multi-page list of tutorials and examples of
WebCL and OpenCL development. Another page on the website lists various libraries (which make
use of the APIs easier) that implement the OpenCL and WebCL APIs. Similarly, the website hosts a
forum where one can talk with other WebCL and OpenCL developers. One extensive document
provided by the website, the WebCL specification document, covers the API's syntax, coding
conventions, and background processes. There is also a mailing list one can join, a page
dedicated to documenting bugs and updates in the APIs, and a form to report bugs in the APIs.
Since a major aspect of my project involves OpenCL and WebCL, it is crucial that I am
well informed on the APIs. Thus, the Khronos Group's website is crucial because it documents
every minute aspect of the APIs. For instance, if I wanted to know the method for retrieving a
kernel's argument information (getArgInfo()), I simply need to look at the documentation provided on the
website. If I am confused or have a question regarding the APIs, I can go on the forum or look
on the Resources page for help. And if a major update to the APIs occurs (which might alter
efficiency and previous studies results), I will now know because I am subscribed to the mailing
list for updates. Sadly, I can't make use of the informative workshops provided by the Khronos
Group. Like the workshops hosted by the Computing Research Association, the workshops hosted
by the Khronos Group are for industry professionals alone; I am not allowed to attend them.
Kohek, Š., & Strnad, D. (2015). Interactive synthesis of self-organizing tree models on the GPU.
Computing, 97(2), 145-169. doi:10.1007/s00607-014-0424-7
The lengthy research paper focuses on accelerating the rendering process for realistic
tree models and forests. The author believes parallelization (the simultaneous running of
computations) of tree synthesis computations will achieve this; he also postulates that the
OpenCL API, which takes advantage of both multiple-cores and the GPU, will provide the best
framework for the parallelization. After going into significant depth on tree creation algorithms
(extreme depth), the author discusses the procedure to test his hypothesis: how he implements
OpenCL, the system he uses to benchmark his tests, and how he analyzes the data. He provides
images, from trees to graphs, to help the reader better understand the procedure. The results from
his experiment affirmed his hypothesis; OpenCL provided significant acceleration of tree
computations compared to baseline CPU implementations.
Though the paper does not directly relate to graph theory, its underlying principle,
acceleration of mathematical computations, does. And I can apply the concepts he used to
implement OpenCL on his project to implementing WebCL, a similar API to OpenCL, on my
project. For instance, he mentions a method he uses to make the tree computations parallel.
Additionally, the research paper, published in 2015, contains up-to-date, relevant information.
Most importantly, the article partially answers one of the critical questions of my project: Does
OpenCL accelerate computing processes? His answer (yes, for tree synthesis) suggests that it
might accelerate most mathematical computations, though this is never directly stated. On a less relevant
note, it amazes me that a person managed to write a sixty-page research paper on creating virtual
trees.
Lemon, J., Kockara, S., Halic, T., & Mete, M. (2015). Density-based parallel skin lesion border
detection with webCL.
Dermatologists typically use hand-drawn images to assess skin lesions, which are defined
by their borders. The authors of the research paper aim to optimize a pre-existing automatic border
detection system using parallelization via WebCL. The article first covers a pre-existing
computerized, automatic border detection system for lesions called DBSCAN, which uses serial
computing, one process at a time. The authors then analyze how they implemented WebCL and
parallelized the processes. For instance, they mention dividing the lesion image into multiple
parts so the device can analyze different parts of the lesion simultaneously; the device allocates
parts of its CPU or GPU, using WebCL, for analyzing each section of the image at the same
time. They also address algorithms they created to partition the image for processing, but they
never actually cover the code they use. And finally, they conclude that the WebCL
implementation of DBSCAN significantly exceeds the performance of the serial implementation
of DBSCAN.
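The partitioning idea, as I understand it, looks roughly like the sketch below (my own code, not the authors'): the image is cut into tiles, and each tile becomes an independent unit of work that a group of work-items can analyze at the same time.

    // Split a width x height image into square tiles so each tile can be
    // analyzed by a separate group of work-items concurrently.
    function makeTiles(width, height, tileSize) {
      var tiles = [];
      for (var y = 0; y < height; y += tileSize) {
        for (var x = 0; x < width; x += tileSize) {
          tiles.push({
            x: x,
            y: y,
            w: Math.min(tileSize, width - x),   // clamp tiles at the right edge
            h: Math.min(tileSize, height - y)   // clamp tiles at the bottom edge
          });
        }
      }
      return tiles;                             // each tile = one unit of parallel work
    }

    var tiles = makeTiles(1024, 768, 128);      // 8 x 6 = 48 independent regions
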
The article is somewhat useful. The topic of skin lesions is irrelevant, but the
implementation of WebCL on DBSCAN is not. The algorithms they used to partition the
image analysis are extremely useful because they provide a layout I can use to partition
parts of the graph rendering process. Additionally, the article's conclusion that WebCL
accelerates the DBSCAN system partially answers one of my principal research questions: Does
WebCL accelerate most computational processes? However, implementing WebCL to analyze an
image does not strongly relate to implementing WebCL to graph, which is the focus of my
project, so many of the concepts the authors used to implement WebCL on DBSCAN can't be
applied to my project.
Mertens, S. (2003). Node by node. (Networks). American Scientist, 91(2), 187+.
The article first describes a network, a set of nodes (dots) connected by links (lines). It
then discusses uses for network graphs, such as representing power lines, social interactions, or
chemical reactions. The next few paragraphs go over the history of graph (network) theory. The
impetus was Leonhard Euler's solution to the puzzle of the Konigsberg bridges in the 18th
century. And for the next two centuries, these graphs (networks) were extremely ordered, unlike
the irregular networks found in the real world. In the 1960s, scientists started to create
randomized, more real-world graphs. The author notes the major flaw with these randomized
graphs: They do not represent real-world networks because the nodes are scaled, meaning most
nodes have a similar number of links. It then provides real-world examples where this concept of
scaled nodes does not hold true, such as the internet: Google has significantly more links than a
small organization's website. Thus, the author goes on to argue that the best networks are both
unordered and scale-free and discusses the theory behind them.
The article is extremely useful. The discussion of network (graph) theory, though not
thorough, provides plenty of background information for my research. For instance, it mentions
the Konigsberg bridges puzzle as well as the transitions from ordered graphs to randomized,
scaled graphs to somewhat randomized, unscaled graphs. Additionally, I better understand
certain concepts behind networks, including order and scale. Therefore, the coding and research
for my project will be easier now, knowing why certain network properties exist. And most
importantly, the real-world examples given in the article (the internet, social interactions, power
lines, chemical reactions, telephone lines) give me ideas for applications of my project.
Nevertheless, the article, published in 2003, does not contain new information or ground-breaking
research.
Miller, E. (2016). [Personal interview].
Elishiah Miller, with a Master of Science in software engineering from the University
of Texas at El Paso and a Bachelor of Science in software and internet applications from St.
Mary's University in Texas, first worked for APL as a college intern in 2012. He developed data
assessment software for secure line of sight communications. Within a year, APL offered him his
current, full-time position as a software engineer in its secure communications lab, where he
continues to innovate in his field. Some of his research, such as "[gathering] and [reporting]
statistics impacting the usage and voice quality of secure mobile devices," has already been
applied to enhance phones used by top government officials. And though his career is still at its
beginnings, he has already won a STAR Award from the Society of Hispanic Professional
Engineers and an internal grant.
An OpenCL micro-benchmark suite for GPUs and CPUs. (2014). Journal of Supercomputing.
The paper revolves around creating a micro-benchmark suite using OpenCL for CPUs
and GPUs. A micro-benchmark suite is a collection of micro-benchmarks, which test the
performance of hardware for specific calculations or sections of code. In short, the micro-benchmark
suite records how well hardware performs for various functions that use OpenCL.
Then, the paper describes each micro-benchmark and the reasoning behind it. For instance,
one micro-benchmark assesses the performance of the system's memory bandwidth by
measuring the amount of time it takes to run a series of read/write operations on different
memory regions. Another micro-benchmark, designed to test scalability, measures the speed-up
as the number of work-items (the individual parallel instances of a kernel, each handling part of
the input data) increases; it essentially measures how the number of work-items affects the speed of
the algorithm and the hardware. After introducing the various micro-benchmarks, the paper runs
the micro-benchmark suite for three different hardware configurations. They then displayed the
results from each micro-benchmark, and using that data, concluded which hardware
configurations work best for different OpenCL processes and computations.
The paper and its benchmark are somewhat helpful. The usefulness stems from the
paper's benchmarks; they tell me what hardware works best for specific OpenCL processes. For
instance, if I'm doing many mathematical calculations (like taking roots), the paper concludes that I
should use an Intel CPU because it has better scalability compared to an AMD CPU. Similarly,
the paper states I should also use an NVIDIA graphics processing unit (GPU) instead of an AMD
GPU when doing many mathematical calculations because NVIDIA GPUs have better
scalability. So when I'm graphing large and small data sets for my project, I should use an Intel
CPU and an NVIDIA GPU because scalability greatly affects performance with these processes.
But since they used old hardware for the benchmarks (surprising, considering the paper's
publishing year), their results might be outdated.

OpenGL. (2016). Retrieved October 22, 2016, from https://www.opengl.org/


The website, created by the Khronos Group (the creators of OpenGL), includes a variety
of OpenGL-related information, from specification sheets to events. The website defines
OpenGL as an API designed for interactive 2-D and 3-D graphics. In brief, it enables the
programmer to easily utilize a system's graphics processing unit (GPU). Many pages of the
website provide descriptions of OpenGL's characteristics, capabilities, and uses. For example, it
mentions how OpenGL is a pervasive standard for games and professional applications. The
website also has an OpenGL reference sheet, which lists hundreds of OpenGL functions, and a
system specifications sheet. Furthermore, the website includes an extensive collection of
OpenGL programmer resources. For example, there is a page that recommends OpenGL
programming books and another page that recommends OpenGL online tutorials. Other
resources on the website include a forum (for asking OpenGL-related questions) and an OpenGL
toolkit library.
The website contains many helpful, brilliant resources for OpenGL programmers and
researchers. The website gives me background information on how the API was created and can
be used. In addition, the reference sheet, which lists OpenGL functions, allows me to easily
translate code that implements OpenGL. Similarly, the reference sheet, the book
recommendation page, and the tutorial page can all help me learn OpenGL. The pages also
provide future potential sources, especially the book recommendations page. Moreover, I can
utilize the OpenGL toolkit to develop, modify, or run an OpenGL application, and I can utilize
the forum if a section of OpenGL code (either in an application or research paper) confuses me.
Thus, the OpenGL website is a great resource for my project.
Performance Evaluation of an OpenCL Implementation of the LBM. (2015).
The authors first introduce the problem: Calculating fluid dynamics requires significant
amounts of resources. The traditional method to solve these problems involves the Lattice
Boltzmann Method, a set of equations that is inherently parallel. Therefore, the paper concludes,
the Lattice Boltzmann Method can be easily parallelized for efficient computing.
The authors parallelized it with their own portable OpenCL implementation, which they call the
OPAL solver. After discussing the design process of the OPAL solver, they introduce an
experimental evaluation of it. The experiment measured the performance of the OPAL solver on
three different pieces of hardware, an Intel Xeon CPU, an Intel Xeon PHI CPU, and an Nvidia
GPU; it found that the Nvidia GPU performed significantly better than both Intel CPUs, which
had similar performance. Nevertheless, the authors conclude that their OPAL solver and
implementation of OpenCL are relatively portable. In short, they conclude that the hardware
configuration, aside from differences in raw processing power, does not significantly affect the
performance of their solver.
The research paper adds little new or pertinent information to my project. Because the
experiment's data derived from the authors' own implementation of OpenCL, the conclusions
cannot be applied to common implementations of OpenCL. For example, the authors state that
the data suggests their implementation of OpenCL is portable; this conclusion cannot be applied
to other, more common implementations of OpenCL because the authors' implementation is
radically different. Moreover, much of the paper focuses on the algorithm behind calculating
fluid dynamics, not OpenCL, an interesting but irrelevant topic.

Rossant, & Harris. (2013). Hardware-accelerated interactive data visualization for neuroscience
in Python.
The research paper begins by noting the increasing amount of data generated from
innovative neuro-science experiments. Then, it lists some of these experiments, such as the
Human Connectome Project and states that to analyze these large amounts of data effectively,
data visualization is necessary. It also lists common data visualization tools, including matplotlib
and Bokeh, but it says they do not scale well to very large datasets. Because most of these
visualization tools only use the CPU, they do not have the processing power to graph these
large data sets. It then proposes that to most efficiently graph large data sets, a data visualizer
should utilize the computer's graphics processing unit (GPU), which is better designed for such
tasks. Additionally, it proposes that OpenGL, an API for hardware-accelerated graphics, would
enable a visualizer to take advantage of the GPU and thus efficiently graph large data sets.
The paper then introduces its experiment, which uses Galry, an
OpenGL data visualization library, and Python to graph neurophysiological data. They found
that Galry, using OpenGL, graphed the large neurophysiological data set significantly faster
than matplotlib. In the discussion of the experiment, the paper analyzes the results and mentions
an extension of the experiment, which would use WebGL instead of OpenGL.
This article contains a trove of useful information. It cites over 45 research papers that I
can use as potential sources. For instance, it references a paper discussing the specifications of
WebGL. In addition, it discusses an effective method to accelerate data visualization, OpenGL.
Similarly, it answers an important question regarding this method: How much faster is it
compared to traditional methods? And to answer this critical research question, it uses a
credible, thought-out experiment. Moreover, it examines the implementation of WebGL for
accelerating data visualization, another interesting API I can and will research. Ultimately, the
research paper includes a plethora of useful, new information.
Savkli, C. (2016). [Personal interview].
Dr. Savkli currently holds the position of Chief Scientist of the Big Data Analytics Group at
the Applied Physics Laboratory. He earned his Bachelor of Science in Physics from Bogazici
University, his Master of Science in Physics from the University of Pittsburgh, his Master of
Science in Computer Science from Johns Hopkins University, and his Doctorate in Theoretical
Physics from the University of Pittsburgh. Additionally, he has held the positions of Senior
Engineer at Lockheed Martin and Senior Analyst at Metron.
SigmaJS. (2016). Retrieved October 22, 2016, from http://sigmajs.org/
SigmaJS is a JavaScript graphing library. By default, it renders graphs with WebGL,
which is significantly faster than traditional graphing methods. It makes graphing relatively easy
by including most of the graphing code in modules. In other words, SigmaJS deals with the
details while the programmer only needs to deal with the basic aspects of the graph. Another
useful feature of SigmaJS is that one can easily change graph settings. For example, one can
disable node labels with a single line of code. It also makes it easy to add a number of features to
the graph, such as the ability to move nodes with the mouse. The SigmaJS website not only hosts
the SigmaJS library but also provides tutorials and resources.
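For my own reference, a minimal SigmaJS setup might look like the sketch below (assuming the sigma v1 constructor and a <div id="graph-container"> on the page): it selects the WebGL renderer and turns labels off with one setting.

    // Two nodes and one edge, rendered with WebGL and with labels disabled.
    var g = {
      nodes: [
        { id: "n0", label: "Node 0", x: 0, y: 0, size: 1 },
        { id: "n1", label: "Node 1", x: 1, y: 1, size: 1 }
      ],
      edges: [ { id: "e0", source: "n0", target: "n1" } ]
    };

    var s = new sigma({
      graph: g,
      renderer: {
        container: document.getElementById("graph-container"),
        type: "webgl"                  // switch to "canvas" to compare against WebGL
      },
      settings: { drawLabels: false }  // the single-line label toggle mentioned above
    });
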

SigmaJS is a fundamental part of my project. The library makes it extremely easy to
graph data. And because it uses WebGL, the graphing process is accelerated and one can graph
large data sets. For instance, it can graph over 1,000,000 nodes on my computer. Similarly, I can
test the efficiency of WebGL (when it comes to graphing) with SigmaJS because the library
allows me to enable and disable WebGL rendering. The settings can be changed easily too,
which also contributes to its ease of use. For example, I disabled all graph labels and changed the
color of the graph in a matter of seconds in the code. If I need to find a SigmaJS function, I can
quickly refer to its well-organized tutorials. Therefore, SigmaJS is a fundamental part of my
project.
WebCL for Hardware-Accelerated Web Applications. (2013). Advanced Technology Lab of
Samsung Information Systems.
The report first introduces WebCL, a proposed JavaScript binding to OpenCL designed
to enable high performance through heterogeneous, parallel processing. Currently, WebCL,
created by the Khronos Group, has multiple prototypes from different companies. Each
prototype has the same foundation, but various aspects, from error handling to naming, vary
between prototypes. The report, created by Samsung (which has its own prototype), often makes
recommendations for the standardization of prototypes. In other terms, the paper provides a
variety of suggestions that it believes the WebCL prototypes should implement to create a
standardized WebCL API. For instance, it states, "We support a JavaScript-like exception
handling mechanism to improve compatibility with JavaScript." The report, created by a
Samsung research laboratory, then evaluates the Samsung WebCL prototype's performance
compared to JavaScript. The benchmarks they used to evaluate the prototype ranged from
calculating the effects of forces on particles to applying a filter on an image. The Samsung
WebCL prototype performed at least ten times faster than JavaScript for each benchmark.
Since they have yet to be implemented, the various recommendations the report makes to
WebCL prototypes are not useful. Nevertheless, the reasoning behind the recommendations
interests me and sheds some insight into Samsung's WebCL prototype. Further, the report does
not discuss anything new or ground-breaking, and a majority of the information it provides can
be found on the Khronos website. The benchmarks, possibly the most useful element of the
report, affirm my project's thesis that WebCL accelerates most processes. The fact that the report
only benchmarked the Samsung prototype reduces the validity of the results only somewhat,
because all of the prototypes have the same foundation, the same core ideas.
