
Abstract

HTML4, the latest revision of the markup language HTML (Hypertext Markup Language), was published more than a decade ago. HTML5, which is currently under development, aims to fulfill the needs of modern web applications and thereby to reduce the dependency on external browser plug-ins.

This bachelor thesis evaluates the newly introduced elements and interfaces for their use in the JavaScript library OpenLayers, which allows embedding interactive maps into a website. The thesis focuses on the canvas element, an interface for drawing graphics, and on the Web Worker API, which is used to run JavaScript files in the background.

It was shown that canvas will not replace SVG (Scalable Vector Graphics), the technology currently used to render vector geometries, but it creates new ways of visualizing geographic data that were previously not possible with JavaScript inside the browser. Web workers used in combination with canvas are a promising option for executing pixel-based graphic operations, but data exchange with web workers is a serious weakness, as this thesis will describe.

Zusammenfassung

The publication of the standard HTML4 of the markup language HTML (Hypertext Markup Language) now lies more than a decade in the past. The version HTML5, which is currently under development, aims to meet the requirements of modern web applications and thereby to reduce the dependency on external browser plug-ins.

This bachelor thesis examines how the newly introduced elements and interfaces can be used within the JavaScript library OpenLayers, which serves to embed interactive map applications into a website. The focus was placed on the canvas element, a component for drawing graphics, and on the Web Worker API, which can be used to execute JavaScript files in the background.

It turned out that canvas will not replace the existing technology SVG (Scalable Vector Graphics) for rendering vector geometries, but it nevertheless opens up new possibilities for visualizing geographic data that were previously not realizable with JavaScript inside the browser. In combination with canvas, web workers are a promising option for executing pixel-based graphic operations, but they also have weaknesses in inter-process communication, as this thesis will show.

Table of Contents

1 Introduction
  1.1 HTML5
  1.2 OpenLayers
2 The Canvas Element
  2.1 Introduction to the Canvas Element
  2.2 Representation of Geographic Data
    2.2.1 Vector and Raster Data
    2.2.2 The OpenLayers Layer System
  2.3 Rendering Vector Data
    2.3.1 The OpenLayers Vector Rendering System
    2.3.2 Improving the Canvas Vector Renderer
    2.3.3 Performance Evaluation
    2.3.4 Discussion
    2.3.5 Outlook
  2.4 Rendering Raster Data
    2.4.1 The OpenLayers Raster Rendering System
    2.4.2 Analysis and Implementation
    2.4.3 Performance Evaluation
    2.4.4 Beyond Displaying Tiles
    2.4.5 Discussion
3 Web Workers
  3.1 Introduction to Web Workers
  3.2 Using Web Workers in OpenLayers
    3.2.1 Canvas Pixel Operations
    3.2.2 File Parsing
    3.2.3 Geometry Functions
  3.3 Discussion
4 Conclusions
5 References
6 Appendix
  6.1 Vector Renderer Performance Tests
  6.2 Raster Renderer Performance Tests
  6.3 Executing Geometry Functions in a Web Worker

List of Figures

Illustration 1: Website using OpenLayers [SWI10]
Illustration 2: Simplified class diagram for HTMLCanvasElement
Illustration 3: Different geometry types
Illustration 4: Pyramid tiling scheme on OpenStreetMap tiles
Illustration 5: Layer class hierarchy in OpenLayers 2.9.1
Illustration 6: On the left: Web application with a navigation toolbar and panning panels around the map. On the right: Web application with a user-friendly interaction
Illustration 7: OpenLayers element hierarchy in the DOM
Illustration 8: Renderer implementations
Illustration 9: Basic interactive SVG shapes
Illustration 10: Element structure for a vector layer with SVG renderer
Illustration 11: Activity diagram for the method Layer.Vector.moveTo()
Illustration 12: Dragging the map for the SVG renderer: (a) Initial view (b) During dragging (c) When the dragging is finished
Illustration 13: Adjustment of the layer position after dragging
Illustration 14: SVG coordinate translation when the map is panned
Illustration 15: Example of an R-Tree [WIK10c]
Illustration 16: Performance test script for vector data
Illustration 17: OpenLayers HeatMap layer
Illustration 18: On the left: Gray-scale intensity mask. On the right: After coloring
Illustration 19: Example dot density map [DIB10]
Illustration 20: Example pie chart map [DIB10]
Illustration 21: Pre-loading of tiles (buffer=1) [SCH07]
Illustration 22: Chart for test case 01 "Show"
Illustration 23: Chart for test case 02 "Show and pan 10 times"
Illustration 24: Creating elevation profiles from canvas
Illustration 25: Adjusting the brightness and contrast of tiles
Illustration 26: Class diagram Grid - CanvasFilter
Illustration 27: Export map as image
Illustration 28: Raster reprojection in OpenLayers
Illustration 29: Execution times for pixel-based operations
Illustration 30: Execution times for pixel-based operations (blocking and as web worker)
Illustration 31: Detailed times for HTTP.Async (10000 points)
Illustration 32: Chart for test case 01 "Show (countries)"
Illustration 33: Chart for test case 02 "Pan 10 times after the map is shown"
Illustration 34: Chart for test case 03 "Pan 10 times in a smaller map extent"
Illustration 35: Chart for test case 04 "Zoom 10 times after the map is shown"
Illustration 36: Chart for test case 05 "Select 10 features (countries)"
Illustration 37: Chart for test case 06 "Select 10 features (rivers)"
Illustration 38: Chart for test case 07 "Show (points)"
Illustration 39: Chart for test case 08 "Select 10 features"
Illustration 40: Chart for test case 09 "Add features"
Illustration 41: Chart for test case 10 "Show without bounds calculation (countries)"
Illustration 42: Chart for test case 11 "Show with labels (countries)"
Illustration 43: Chart for test case 12 "Show in different browsers (countries)"

List of Tables

Table 1: Browser support for the canvas element [PIL10]
Table 2: Results for the first test comparing the existing renderer implementations
Table 3: Comparison of the original implementation with the modifications
Table 4: Times to calculate the bounds of geometries
Table 5: Times for two different panning test cases (Test data: countries-simplified-0.005)
Table 6: Results for selecting ten features with different data sets
Table 7: Performance test for protocol HTTP.Async
Table 8: Results for test case 01 "Show (countries)"
Table 9: Results for test case 02 "Pan 10 times after the map is shown"
Table 10: Results for test case 03 "Pan 10 times in a smaller map extent after the map is shown"
Table 11: Results for test case 04 "Zoom 10 times after the map is shown"
Table 12: Results for test case 05 "Select 10 features (countries)"
Table 13: Results for test case 06 "Select 10 features (rivers)"
Table 14: Results for test case 07 "Show (points)"
Table 15: Results for test case 08 "Select 10 features"
Table 16: Results for test case 09 "Add features"
Table 17: Results for test case 10 "Show without bounds calculation" (calculated)
Table 18: Results for test case 11 "Show with labels (countries)"
Table 19: Results for test case 12 "Show in different browsers (countries)"
Table 20: Results for test case 01 "Show layer"
Table 21: Results for test case 02 "Show layer and pan 10 times"
Table 22: Calculating the area for an array of geometries
Table 23: Performing a coordinate system transformation for an array of geometries

Listing Index

Listing 1: Canvas element in a website
Listing 2: Drawing a red rectangle on a canvas element
Listing 3: World file for raster image [DAV07]
Listing 4: Interactive SVG graphic in an HTML document
Listing 5: Basic principle of drawing the tile's image on a canvas in class CanvasImage
Listing 6: Basic raster reprojection algorithm
Listing 7: Basic use of web workers (main script)
Listing 8: Basic use of web workers (worker script)
Listing 9: Sending OpenLayers objects (worker script)
Listing 10: Sending OpenLayers objects (main script)

Glossary

GIS
  Geographic Information System: computer-based system to collect, maintain, store, analyze and display geographic data
Feature
  Geographic feature: consists of a vector geometry and associated attributes
Geographic Coordinate System
  System to describe the location of objects on the earth's surface using longitude/latitude coordinates
Map Projection
  System to project locations in a geographic coordinate system onto a 2D plane
OGC
  Open Geospatial Consortium: organization that standardizes geospatial data models and services
JSON
  JavaScript Object Notation: text-based standard to serialize JavaScript objects into human-readable strings
GeoJSON
  Standard that extends JSON to encode geographic data structures
Zoom-Levels
  Fixed set of resolutions in which a map can be viewed in web mapping applications
Panning
  Changing the shown map extent in a mapping application while keeping the same zoom-level, usually achieved by dragging the map with the mouse pointer
SVG
  Scalable Vector Graphics: markup language to describe 2D vector graphics (static and dynamic)
R-Tree
  Tree-based data structure to realize spatial indexes
WMS
  Web Map Service: protocol standard for the generation of map images
WFS
  Web Feature Service: standardized interface to access geographic features
W3C
  World Wide Web Consortium: standards organization for the World Wide Web
WHATWG
  Web Hypertext Application Technology Working Group: organization with a primary focus on the development of HTML and the APIs needed for web applications
DOM
  Document Object Model: interface to access the content of HTML documents
EPSG Code
  European Petroleum Survey Group: system to identify geographic coordinate systems and map projections
Bounds
  Minimum bounding rectangle (MBR) of a geometry

1 Introduction

In 1997, the World Wide Web Consortium (W3C) published HTML 4.0, still the latest revision of HTML (Hypertext Markup Language), the major markup language for web pages. One year later, in 1998, the W3C released CSS 2, which is still the current specification for CSS (Cascading Style Sheets), a language to describe the presentation of markup languages. Since then, the web and the way the web is used have changed considerably.

Back then, websites were mostly static [HON08]. Nowadays, websites are fully-fledged applications with a high level of interaction, which can keep up with conventional desktop applications. These Rich Internet Applications (RIA) created the need for third-party browser plug-ins like Adobe Flash, Microsoft Silverlight and Google Gears. In 2004, the Web Hypertext Application Technology Working Group (WHATWG) started work on HTML5, which aims to reduce the dependency on additional plug-ins by introducing new elements and APIs that reflect typical usage on modern websites [WIK10]. While the HTML5 specification is still under development [WHA10c], many parts are already implemented and can be used at the time of writing.

One kind of web application that is still gaining popularity is the web mapping website. In 1996, MapQuest offered the first widely used online address matching and routing service. Later, in 2005, Google provided a JavaScript API for its mapping service Google Maps, which made it easy to integrate interactive maps into a website. Inspired by this innovation, and in order to create a non-proprietary library, the company MetaCarta started developing the open-source web mapping client OpenLayers, whose first version was released in 2006 [SCH07].


This thesis evaluates which new features of HTML5 could provide a benefit for OpenLayers. The limited time frame of ten weeks restricts the focus to the two most promising features: the new canvas element, an interface for drawing graphics from JavaScript, and the Web Worker API, which allows JavaScript files to be run in the background.

Performance tests showed that canvas is faster than SVG (Scalable Vector Graphics), an interactive vector graphic format currently used in OpenLayers to display vector geometries, when rendering a large number of objects [SMU09]. This thesis will analyze these two technologies, canvas and SVG, for their ability to render vector data in different use cases. Additionally, canvas will be evaluated for rendering raster images, as canvas allows manipulating the pixel values of images, which makes it possible to perform further graphic operations.

Maps created with OpenLayers are highly interactive user interfaces which aim to provide a good user experience. Part of good usability is a responsive user interface. While AJAX (Asynchronous JavaScript and XML) already allows making asynchronous server requests, there are still long-running operations, like parsing a text-based vector data file, that could be run in a web worker. This thesis will examine how web workers can be used for parsing files and further tasks.

The thesis is structured in two main chapters dedicated to canvas and web workers. First of all, the following two subsections will briefly introduce OpenLayers and the new features of HTML5.


1.1 HTML5

The term HTML5 has several meanings. First, it refers to the specification edited by the WHATWG. The WHATWG was formed out of the W3C when the W3C decided not to extend HTML and CSS in 2004. Later, in 2006, the W3C reversed that decision and started collaborating with the WHATWG [PIL10]. But the HTML5 specification of the WHATWG still contains elements that are not part of the W3C specification, for example the canvas 2D drawing context [KRÖ10]. Besides, the term HTML5 is also used for new technologies that are actually not part of the HTML5 specification but are standardized in their own specifications, like the Web Worker API. In this case, HTML5 is used to describe a new kind of browser-based application that does not depend on additional browser plug-ins. The latter interpretation of HTML5 is the one used in this thesis.

The following section will introduce a selection of new HTML5 features besides canvas and web workers.

HTML5 adds a number of new elements, for example the header/nav/section/footer tags, which represent the common structure of a web page, new form elements, like date picker, number range or progress bar components, and form fields that automatically validate their input, for example email addresses, URLs or phone numbers. Enhancements for CSS include support for custom fonts, new visual styles like color gradients, shadow and reflection effects, and animations. The video element allows playing videos without having to use Adobe Flash.

HTML5 also addresses mobile devices by providing support for offline web applications. By specifying resources in a manifest file, a website can be cached for offline use. Additionally, data can be stored in Web SQL Databases, a database inside the browser [GOO10].

Further features, like the Notification API, the File API or drag-and-drop support, are another step toward closing the gap between desktop applications and web applications.


1.2 OpenLayers

OpenLayers is an object-oriented JavaScript API to embed dynamic maps into a website [OPE10]. Different kinds of map data from many sources can be integrated as layers into OpenLayers maps. Maps created with OpenLayers are interactive: users can zoom and drag the map to adjust the displayed map extent, enable or disable the visibility of certain layers, click on markers to show information for the selected location, or sketch and edit vector geometries. One of the strengths of OpenLayers is that it can be fully customized to match the design of a website, as shown in Illustration 1. OpenLayers also provides a number of extension points to build custom tools and controls and to support new data sources.


Illustration 1: Website using OpenLayers [SWI10]

OpenLayers is an open-source library released under a modified BSD (Berkeley Software Distribution) license and is developed by a community of individuals and companies.
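To illustrate the basic use of the API, the following minimal sketch creates a map with a tiled OpenStreetMap base layer. It is an illustrative example and not taken from the thesis's test setup; the element id "map" is an assumption, and the OSM layer type is assumed to be available in the used OpenLayers version.

// Minimal example of an OpenLayers (2.x) map; assumes a <div id="map"> exists
// and the OpenLayers script has been loaded.
var map = new OpenLayers.Map('map');
var osm = new OpenLayers.Layer.OSM('OpenStreetMap');  // tiled base layer
map.addLayer(osm);
map.zoomToMaxExtent();  // show the whole world at the lowest zoom-level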


2 The Canvas Element

This chapter will introduce the HTML canvas element and the different types of geographic data: vector and raster data. Then canvas will be evaluated for its ability to render both types of data within OpenLayers.

2.1 Introduction to the Canvas Element

Canvas is an HTML element that JavaScript can draw images on. It was introduced by Apple in 2004 in their HTML layout engine WebKit, and it is currently being standardized by the WHATWG in the HTML5 specification. As of 2010, canvas is already supported by most browsers, see Table 1.

IE*     Firefox   Safari   Chrome   Opera   iPhone   Android
9.0+    3.0+      3.0+     3.0+     10.0+   1.0+     1.0+

* Previous versions require the extension ExplorerCanvas [EXP10]

Table 1: Browser support for the canvas element [PIL10]

APIs in other programming languages provide similar concepts for drawing directly on the user interface. For example, the Java library SWT (Standard Widget Toolkit) has the widget Canvas, and in Delphi many interface elements have the property Canvas for drawing on the surface of the element.

In a website, canvas can be defined as an XML tag like any other HTML element. Its size can be set using the attributes width and height, but all other global element attributes (see [WHA10]) like id, style or onclick are also supported. For example, the style attribute can be used to set the position of the canvas element or to change the look of its borders using CSS, see Listing 1.

<html>
  <body>
    <canvas id='example' width='512' height='256'
            style='border: 1px solid black;'>
      Shown if the browser does not support HTML5 Canvas.
    </canvas>
  </body>
</html>

Listing 1: Canvas element in a website

The actual drawing on the canvas is done in JavaScript. Listing 2 demonstrates how to draw a rectangle on a canvas. When the HTML document finishes loading, the JavaScript function drawOnCanvas() is called, which does the drawing.

<html>
  <head>
    <script type='text/javascript'>
      function drawOnCanvas() {
        var canvas = document.getElementById('example');
        var context = canvas.getContext('2d');
        context.fillStyle = 'red';
        context.fillRect(10, 10, 50, 50);
      }
    </script>
  </head>
  <body onload='drawOnCanvas()'>
    <canvas id='example' width='512' height='256'>
    </canvas>
  </body>
</html>

Listing 2: Drawing a red rectangle on a canvas element

The canvas' rendering context provides an interface for creating or manipulating content on the drawing surface of a canvas. The rendering context can be accessed using the method canvas.getContext(contextId), which accepts one argument, the type of the context. The HTML5 specification only defines a context for two dimensions, but canvas can also render other contexts. For example, the standard WebGL [KHR10] defines an API for 3D graphics.

The two-dimensional context is an object implementing the interface CanvasRenderingContext2D, which offers the drawing functions and properties to control the drawing style. For example, in Listing 2 the attribute fillStyle is set to change the fill color and then the rectangle is drawn by invoking the method fillRect() with the position and size of the rectangle as arguments.

Besides rectangles, the following geometries can be drawn: arcs, circles, Bézier and quadratic curves, lines, and arbitrarily complex shapes as paths. The appearance of the shapes can be modified with the two attributes fillStyle and strokeStyle. The attribute strokeStyle changes the color or style of the lines around the shape, and fillStyle is responsible for the inside of the shape. The two attributes accept CSS colors [CSS10], CanvasGradient objects for linear and radial gradients, and CanvasPattern objects for creating repeating patterns of images.
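As a short illustrative example (not taken from OpenLayers), the following sketch draws an arbitrary path and a gradient-filled rectangle on the canvas of Listing 1:

var canvas = document.getElementById('example');
var context = canvas.getContext('2d');

// arbitrary shape as a path: a filled triangle with a black outline
context.beginPath();
context.moveTo(100, 20);
context.lineTo(150, 120);
context.lineTo(50, 120);
context.closePath();
context.fillStyle = 'orange';
context.fill();
context.strokeStyle = 'black';
context.lineWidth = 2;
context.stroke();

// rectangle filled with a linear gradient from red to yellow
var gradient = context.createLinearGradient(200, 20, 200, 120);
gradient.addColorStop(0, 'red');
gradient.addColorStop(1, 'yellow');
context.fillStyle = gradient;
context.fillRect(200, 20, 100, 100);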

External images can be drawn on the canvas using the method drawImage(). drawImage normally takes HTML image elements as the argument, but will also accept other canvas elements and even video elements for rendering single video frames.

Once a shape is drawn on a canvas, the information used to draw the shape is lost. Canvas does not maintain a scene graph, so the objects rendered onto a canvas cannot be accessed afterwards. All drawing operations are executed directly on the canvas' bitmap. Fortunately, canvas provides an interface for accessing the individual pixels of this bitmap. The method getImageData() returns an ImageData object that contains the data of the canvas. Every ImageData object has an attribute data which is an object of type CanvasPixelArray, see Illustration 2. CanvasPixelArray is a one-dimensional array of all Red-Green-Blue-Alpha (RGBA) values for the pixels of the canvas, ordered from left to right and from top to bottom. The length of the array is the product of width, height and the number 4 (because of the four color channels). The first array element is the value of the red component of the pixel at position (0,0) in the upper left corner. Changes to an ImageData object can be written back to the canvas using the method putImageData().


Illustration 2: Simplified class diagram for HTMLCanvasElement
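The following sketch demonstrates this pixel interface by inverting the colors of the canvas from Listing 1; it is an illustrative example and not part of the OpenLayers code discussed later.

var canvas = document.getElementById('example');
var context = canvas.getContext('2d');

// read the RGBA values of all pixels
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
var pixels = imageData.data;  // [r, g, b, a, r, g, b, a, ...]

for (var i = 0; i < pixels.length; i += 4) {
    pixels[i]     = 255 - pixels[i];      // red
    pixels[i + 1] = 255 - pixels[i + 1];  // green
    pixels[i + 2] = 255 - pixels[i + 2];  // blue
    // pixels[i + 3] is the alpha value and is left unchanged
}

// write the modified pixels back to the canvas
context.putImageData(imageData, 0, 0);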


2.2 Representation of Geographic Data

[BOL08] defines a GIS as a “computer-based system to aid in the collection, maintenance, storage, analysis, output and distribution of spatial data and information”. According to this definition, the central element of a GIS is spatial data. The following subsections will give an introduction to the two primary types of spatial data and to how this data is represented in OpenLayers.

2.2.1 Vector and Raster Data

In general, geographic data is a digitized model of real phenomena. Like all models, geographic data is a simplified abstraction of the complex reality, see [WIK10a]. So depending on what information should be represented for a particular purpose, a specific type of model or geographic data is more appropriate. Traditionally, there are two fundamental types of geographic information: raster data and vector data.

Both types have in common that they reference objects on the earth's surface using a geographic coordinate system. A geographic coordinate system describes the location of objects on the sphere-like shape of the earth in longitude and latitude values [CHA10]. Printed maps or maps displayed on a screen use a map projection to project locations in a geographic coordinate system onto a flat plane [MIT05]. A common system to identify coordinate systems and map projections is the EPSG code, formerly specified by the European Petroleum Survey Group (now: OGP Surveying and Positioning Committee). For example, the EPSG code of a widely used coordinate system based on the datum WGS 84 is EPSG:4326, and EPSG:3857 is the code for Spherical Mercator, a popular projection in web mapping applications (also known as EPSG:900913).

2.2.1.1 Vector Data Model

Vector data uses geometrical shapes to represent objects on the earth's surface. The commonly used geometry types are points, lines and polygons [CHA10]. Points and their coordinates are the basic element that the other types are built of. Lines consist of at least two points, the end points, and optionally points in between. Polygons are made up of closed lines which are called rings. A polygon has outer and inner rings: outer rings define the boundaries of the polygon, and inner rings represent “holes” that are considered not part of the polygon. Complex shapes can be expressed as composites of points, lines or polygons. These geometry types are called MultiPoint, MultiLine and MultiPolygon, and a GeometryCollection can contain geometries of all those types, see Illustration 3.


Illustration 3: Different geometry types

A geographic feature consists of a geometry and associated attributes. A point feature, for example, can represent a city with the attributes name and population. Lines are typically used to store roads, railways or rivers, and land parcels or state boundaries are stored as polygons.

Which geometry type is used to represent a feature also depends on the map scale and on what information should be visualized. [DAV07] gives an example to illustrate this: at a less detailed scale it makes sense to show a city on a map as a point, but at a more detailed scale the outline of the city might be a better representation.

In a GIS, vector data is typically stored in a file, for example in the format ESRI Shapefile, or in a spatial database like PostgreSQL/PostGIS, ESRI ArcSDE or Oracle Spatial/Locator. A spatial database is a conventional relational database which additionally has the ability to store, manipulate and query spatial data.


Many GIS use a client-server architecture where the server holds the geographic data in a spatial database and desktop or web clients can access the data. In an internal network, a client can connect directly to the database, but especially for web applications, data is often published through web services. Web services implementing the Open Geospatial Consortium (OGC) specification Web Feature Service (WFS) allow retrieving features encoded in the Geography Markup Language (GML) format. Other widely used vector data exchange formats are: Well-Known Text (WKT), Well-Known Binary (WKB), GPS Exchange Format (GPX), Keyhole Markup Language (KML) and GeoJSON.
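As an illustration of one of these formats, the following sketch parses a small GeoJSON string into OpenLayers features; the feature and its attribute values are arbitrary example data.

// A point feature with attributes, encoded as GeoJSON
var geojson = JSON.stringify({
    "type": "Feature",
    "geometry": { "type": "Point", "coordinates": [8.54, 47.37] },
    "properties": { "name": "Zurich", "population": 380000 }
});

// Parse it into OpenLayers.Feature.Vector objects
var format = new OpenLayers.Format.GeoJSON();
var features = format.read(geojson);  // returns an array of features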

2.2.1.2 Raster Data Model

While vector data uses geometric shapes to characterize discrete features, the raster data model has its strength in representing continuous data that has no distinct boundaries, like elevation or imagery [CHA10], and it typically requires less disk space to store such information.

The raster data model stores data in a grid. Each cell or pixel of the grid corresponds to an area on the earth's surface. The value of a cell represents a characteristic of the specific area. This value can be the color information of satellite or aerial images, but also any other information, like categorical data capturing land use or wildlife habitats.

Like vector data, rasters can be stored in files or in spatial databases. A widely used file format for raster data is the Tagged Image File Format (TIFF). The binary headers of TIFF images can contain additional information like geographic content, see [DAV07]. The GeoTIFF specification defines how the boundaries of the area the image covers, the coordinate system and the used projection are stored in the header. A different method of georeferencing images is the world file. Listing 3 is an example world file for a 2048x1024 pixel satellite image that shows the whole world.


0.176     # pixel size in x-direction in degrees/pixel
0         # rotation about y-axis
0         # rotation about x-axis
-0.176    # pixel size in y-direction in degrees/pixel
-180      # x-coordinate of the upper-left pixel
90        # y-coordinate of the upper-left pixel

Listing 3: World file for raster image [DAV07]
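Ignoring the (here zero) rotation terms, a world file georeferences an image by a simple linear mapping from pixel positions to coordinates. The following sketch, based on the parameters of Listing 3, is an illustrative helper and not part of OpenLayers:

// World file parameters from Listing 3 (rotation terms assumed to be 0)
var pixelSizeX = 0.176;   // degrees per pixel in x-direction
var pixelSizeY = -0.176;  // degrees per pixel in y-direction (negative: rows go south)
var upperLeftX = -180;    // x-coordinate (longitude) of the upper-left pixel
var upperLeftY = 90;      // y-coordinate (latitude) of the upper-left pixel

// Geographic coordinate of the pixel at the given column/row (0-based)
function pixelToLonLat(col, row) {
    return {
        lon: upperLeftX + col * pixelSizeX,
        lat: upperLeftY + row * pixelSizeY
    };
}

// e.g. pixelToLonLat(1024, 512) is approximately (0.2, -0.1), near the image center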

Even though raster data is stored in common image file formats, ordinary image viewers might not be able to display the raster information correctly. A pixel of a conventional image contains three values, the red, green and blue color information (some formats also store an alpha value for the opacity). Rasters may also contain additional values for data outside the visual spectrum captured by multispectral sensors, for example to visualize environmental pollution. For categorical data, a pixel may also contain only one value. To interpret such data correctly, these rasters must be viewed in special software.

In web mapping applications, raster data is mostly shown as Portable Network Graphics (PNG) or JPEG files. These files only contain the visible color information, because web browsers do not support the interpretation of raster data. Raster data in web applications often is satellite and aerial imagery or vector data rendered onto an image.

Commonly, raster data is published through web services. While a WFS can only be used to access vector features, a Web Map Service (WMS) renders vector and raster data as images. A WMS allows customizing the requested map image. It accepts parameters to specify the image size, the geographic extent, the map projection and the data sources that should be used to render the image. This high flexibility is also the biggest disadvantage when a WMS is used by a large number of users. Because every WMS request generates a new image, many simultaneous requests lead to a performance bottleneck.

WMS made it easy to show maps in a web application, but modern web mapping applications take a different approach in order to be able to serve many users. In most cases, especially for web applications that address non-GIS users, the flexibility of WMS to configure the map is not required. The first step is to define fixed zoom-levels in which the map can be viewed. Then, for every zoom-level, the map is pre-rendered with a predefined configuration and cut into tiles. A client can now directly access these tiles, which improves the performance.

A widely used tiling scheme is the pyramid scheme introduced by Google Maps, which is also utilized by OpenStreetMap and Microsoft Bing Maps. Google Maps uses 256x256 pixel images which are rendered using the Spherical Mercator projection. At zoom-level 0, one tile shows the whole world (without parts of the poles). The next zoom-level has 4 tiles which together have a size of 512x512 pixels. Zoom-level 2 has 16 tiles and a total size of 1024x1024 pixels, and so on (see Illustration 4).


Illustration 4: Pyramid tiling scheme on OpenStreetMap tiles
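For this tiling scheme, the tile that contains a given longitude/latitude position can be computed with the commonly documented formula sketched below (an illustrative helper, not part of OpenLayers):

// Tile column/row for a longitude/latitude (WGS 84) at a given zoom-level,
// following the Google Maps / OpenStreetMap pyramid scheme.
function lonLatToTile(lon, lat, zoom) {
    var n = Math.pow(2, zoom);  // number of tiles per axis at this zoom-level
    var latRad = lat * Math.PI / 180;
    var x = Math.floor((lon + 180) / 360 * n);
    var y = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
    return { x: x, y: y };
}

// e.g. lonLatToTile(0, 0, 2) returns {x: 2, y: 2} in the 4x4 grid of zoom-level 2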

The specification Web Map Tile Service (WMTS), which evolved out of the recommendation WMS-C (WMS Tile Caching) [OSG10] and was published by the OGC in 2010, standardizes how map tiles are requested by clients. A tile server that implements this specification must provide description metadata about the available maps, the tile size, the map projection, the extent the tiles cover and the map scales or resolutions in which tiles can be requested.

2.2.2 The OpenLayers Layer System

One of the key features of GIS is to display data from different sources with varying coordinate systems and projections in one map. A map consists of one or more layers, which are thematic representations of geographic information [ESR10]. A layer can have one or more data sources and describes how the data is symbolized. For example, a layer can visualize a street network in which each street is rendered individually using a specific style according to the street's classification (highway, main road, secondary road, …). Every map uses exactly one coordinate system, but a map can contain layers with different coordinate systems. In most desktop GIS software, those layers are reprojected on the fly to the map's coordinate system.

2.2.2.1 OpenLayers Layer Types

OpenLayers distinguishes between two kinds of layers: base layers and non-base layers (or overlays). A map has exactly one enabled base layer, which specifies the coordinate system and zoom-level steps of the map. Every layer can be used as a base layer; the property isBaseLayer determines whether a layer is a base layer or not. A base layer can be overlaid with multiple non-base layers, which must share the same coordinate system as the base layer. Currently, only vector layers can be reprojected to the base layer's coordinate system. Unlike in desktop GIS software, the reprojection is not executed on the fly but only once at load time. The base/non-base layer concept is unusual in GIS, and there are plans to give up this distinction in the next major release of OpenLayers [OPE10a].

OpenLayers supports a wide range of different layer types. The three most relevant categories, represented by their corresponding classes (see Illustration 5), are: EventPane, HTTPRequest/Grid and Vector.


The class EventPane provides a way to integrate map data of other mapping APIs into OpenLayers, for example the Google Maps API or the Microsoft Bing Maps API. Subclasses of EventPane act as proxies that translate between OpenLayers and the particular API.


Illustration 5: Layer class hierarchy in OpenLayers 2.9.1


The main characteristic of a Grid layer is that it displays imagery data split into tiles. These tiles, organized in a grid, can be served by a number of different services. Classes derived from Grid implement support for the specifications WMS and WMTS, but also for tile servers like TileCache, MapServer and ESRI ArcIMS/ArcGIS Server. A detailed analysis of the internal processing can be found in chapter 2.4.1 “The OpenLayers Raster Rendering System”.

A Vector layer is used to render geometries of vector features. By specifying styles, the appearance of the features in a map can be modified. A vector layer also provides an interface for further interactions with the features. For example by defining hover or onclick events, a specific action can be performed when a user selects a feature, like displaying detailed information for the particular feature. A number of different vector formats are supported in OpenLayers, for example GML, GeoJSON, KML, WKT or GPX. Chapter 2.3.1 “The OpenLayers Vector Rendering System” gives a detailed explanation about how features are rendered.
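A minimal sketch of such a vector layer is shown below; the layer name, style values and the displayed attribute are arbitrary example assumptions, and an existing map object and an array of parsed features are assumed.

// Vector layer with a simple style
var style = new OpenLayers.Style({
    fillColor: '#ee9900',
    fillOpacity: 0.4,
    strokeColor: '#cc6600',
    strokeWidth: 2
});
var vectorLayer = new OpenLayers.Layer.Vector('Countries', {
    styleMap: new OpenLayers.StyleMap(style)
});
map.addLayer(vectorLayer);
vectorLayer.addFeatures(features);  // e.g. features read by OpenLayers.Format.GeoJSON

// Show an attribute of the feature the user clicks on
var select = new OpenLayers.Control.SelectFeature(vectorLayer, {
    onSelect: function(feature) {
        alert(feature.attributes.name);
    }
});
map.addControl(select);
select.activate();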

2.2.2.2 The Slippy Map

Web mapping applications are interactive. The user can freely change the displayed map extent by zooming in and out or by panning (panning moves the map extent while the zoom-level stays the same). In early web mapping applications, the user had to click a button or other navigation elements to zoom, and panning was often achieved by panels around the map which moved the map by a fixed delta, see Illustration 6.

Nowadays the usability has improved considerably, mainly influenced by the release of Google Maps in 2005, see [SCH07]. Most web applications still provide the possibility to navigate using buttons in a toolbar, but now zooming can also be done using the mouse wheel or by double-clicking on the map, and panning by simply dragging the map.


Illustration 6: On the left: Web application with a navigation toolbar and panning panels around the map. On the right: Web application with a user-friendly interaction.

OpenLayers also offers this kind of interactivity. To be able to pan a map seamlessly (such maps are also called slippy maps), OpenLayers bundles all layers in a layer container, which is represented as a div element in the Document Object Model (DOM), an interface to access the content of HTML documents. The layer container itself is enclosed by the viewport container, which also contains controls like the navigation toolbar and the layer overview, see Illustration 7. When the map is dragged, the layer container and thus all layers are moved accordingly.


Illustration 7: OpenLayers element hierarchy in the DOM

2.3 Rendering Vector Data

2.3.1 The OpenLayers Vector Rendering System

In OpenLayers, vector data is represented by the layer Vector. Each vector layer has an associated renderer which handles the actual drawing of the features, see Illustration 8.

Currently, OpenLayers has three renderers, each of which uses a different technology: the two vector graphic formats Scalable Vector Graphics (SVG) and Vector Markup Language (VML), and HTML canvas. Which renderer is used in an OpenLayers application depends on which technology is supported by the browser. If available, SVG is used; otherwise support for VML is checked, and finally canvas. The VML renderer was created for Internet Explorer, because SVG was not supported prior to Internet Explorer 9. As Internet Explorer prior to version 9 also does not support the canvas element (see [WIK10b]), a comparison between canvas and VML is not reasonable. Hence VML will not be considered any further in this thesis.


Illustration 8: Renderer implementations
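The renderer that a layer should use can also be set explicitly through the layer's renderers option, as in the following sketch (the layer name is an arbitrary example):

// Prefer the canvas renderer instead of the default order SVG, VML, Canvas
var canvasLayer = new OpenLayers.Layer.Vector('Canvas layer', {
    renderers: ['Canvas', 'SVG', 'VML']
});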

The following two chapters will explain how the SVG and the canvas renderer work and then chapter 2.3.1.3 “Comparison” will give a first evaluation of the two implementations.


2.3.1.1 SVG Renderer

In 2001, the World Wide Web Consortium (W3C) released the first specification for the XML-based vector graphic format SVG [SVG10]. SVG drawings can contain vector graphic shapes, like points or lines, raster images and text. Due to its ability to scale without loss of image quality, SVG is well suited for charts, diagrams, icons and logos, also for printing. Many desktop applications such as Adobe Illustrator, CorelDraw, Inkscape or Microsoft Visio support SVG, but one of its strengths is that it can be embedded directly into HTML documents.

Illustration 9 shows an interactive SVG graphic that displays three of the basic shape elements. Besides the elements circle, line and polygon, SVG also has the elements rect, ellipse and polyline. Arbitrary shapes can be defined using the element path.

Listing 4 contains the code that describes the graphic. An onclick handler is assigned to the element circle, which displays a JavaScript message window when the point is clicked. Further events like mouseover, mousemove or mouseout can also be set.


Illustration 9: Basic interactive SVG shapes


<html xmlns="http://www.w3.org/1999/xhtml"

xmlns:svg="http://www.w3.org/2000/svg"

xmlns:xlink="http://www.w3.org/1999/xlink">

<head> <title>SVG Example</title> <script type="text/javascript"> function onClick() { alert("point clicked");

}

</script> </head> <body> <svg:svg version="1.1" baseProfile="full" width="300px" height="200px"> <svg:circle cx="150px" cy="100px" r="10px" fill="orange" stroke="black" stroke­width="2px" onclick="onClick()" /> <svg:polyline points="10,20 50,50 80,50 70,100 120,150" fill="none" stroke="orange" stroke­width="4px" /> <svg:polygon

points="130,10 130,50 155,70 180,50

180,10"

fill="orange" stroke="black" stroke­width="2px" />

</svg:svg>

</body>

</html>

Listing 4: Interactive SVG graphic in a HTML document


In OpenLayers, when a vector layer is initialized, the SVG renderer inserts an SVG element into the div container of the layer. The renderer also creates two SVG g(roup) elements: one will contain the vector elements, and text elements for labels can be inserted into the other group, see Illustration 10.


Illustration 10: Element structure for a vector layer with SVG renderer

Whenever the map is zoomed or panned, or the map extent is changed programmatically, the method moveTo(lonlat, zoom, options) is called on the OpenLayers Map object. The Map object then calls the moveTo() method on every layer, so that each layer adjusts its own presentation, according to the Composite pattern [GAM04].

The following section will explain the moveTo() method of the layer Vector when used with an SVG renderer. Illustration 11 shows the program flow of the method as an activity diagram in a mix of pseudo-code and natural language. The individual steps and activities, marked in the diagram as (1) to (5), will be referenced in the following.


Illustration 11: Activity diagram for the method Layer.Vector.moveTo()


The method Layer.Vector.moveTo() receives three arguments: the new map extent (bounds), a flag indicating whether the zoom-level has changed since the last call (zoomChanged) and a second flag that indicates whether the map is currently being dragged (dragging). When the map is being dragged, the position of the layer container is updated on every mouse move, see chapter 2.2.2.2 “The Slippy Map”. For performance reasons, the SVG renderer does not make any changes to the SVG element, like drawing new features, during the dragging. Only when the mouse is released is the view updated, see Illustration 12.


Illustration 12: Dragging the map for the SVG renderer: (a) Initial view (b) During dragging (c) When the dragging is finished

So step (1) tests if the map is being dragged at the moment. If not, the position of the layer div element has to be adjusted. The position of the layer container has changed during the dragging, but as the SVG element has a fixed size, the same as the viewport, the layer div must be moved back to its original position, step (2), so that it is exactly overlaid with the viewport, see Illustration 13.

But by moving the layer div, including the SVG element, back to its original position, the layer no longer shows the actual map extent of the viewport; the features are still displayed at their old positions. Therefore, in step (3) a transformation is set on the SVG element, see Illustration 14. A transformation is an x/y delta that is added to every coordinate of the vectors.


Illustration 13: Adjustment of the layer position after dragging


Illustration 14: SVG coordinate translation when the map is panned
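The idea behind step (3) can be sketched as follows; this is an illustrative snippet, not the exact OpenLayers code, and svgRoot stands for the renderer's SVG (group) element:

// Shift all rendered coordinates by a pixel delta (arbitrary example values)
var delta = { x: 120, y: -45 };
svgRoot.setAttributeNS(null, 'transform',
    'translate(' + delta.x + ',' + delta.y + ')');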


The variable coordSysUnchanged in step (3) is set to false if the zoom-level has changed or if the transformation values exceed a limit, which prevents Firefox from locking up, see [OPE10b]. In step (4), if the zoom-level has not changed and coordSysUnchanged was set to true, the unrendered features are now drawn. Unrendered features have not been rendered yet because they were added recently or because they were not inside the previous extent.

Otherwise, in step (5), if the layer is drawn for the first time, the zoom-level has changed or coordSysUnchanged is false, all features in the current extent are (re-)drawn. It might be surprising that the features are redrawn when the zoom-level changes, as SVG has a viewBox attribute which defines the extent of the SVG that is displayed. Setting the viewBox scales the coordinates correctly, but the styles of the features would not be applied as expected. For example, if a point is rendered as a circle with a radius of 10 pixels, then after zooming in by setting the viewBox, the point could have a radius of 20 pixels, which is not the expected style. The same effect occurs for the line width, text and images that are used as symbols.

2.3.1.2 Basic Canvas Renderer

The canvas renderer for vector data has been part of OpenLayers since August 2008, but at the time this thesis is written there is no known OpenLayers application that actually uses it. The well-tested SVG renderer is always the first choice for a vector layer.

When the canvas renderer is initialized, an HTMLCanvasElement is created and inserted into the vector layer's div element. The central unit of the canvas renderer is the method redraw(), which loops through all features and draws them using the canvas drawing functions like lineTo(), fill() or stroke(). The concept of the canvas renderer is fairly simple: whenever the map extent changes or features are added or removed, redraw() is called and the drawing surface is updated.

Like the SVG renderer, the canvas renderer does not refresh the view during the dragging of the map, for performance reasons.
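The redraw principle can be sketched as follows; this is a simplified illustration and not the actual OpenLayers implementation, and geometryToPixels() stands for the renderer's map-to-screen coordinate conversion:

// Simplified sketch of a full redraw for polygon features
function redraw(features, context, width, height) {
    context.clearRect(0, 0, width, height);  // wipe the whole drawing surface
    for (var i = 0; i < features.length; i++) {
        var pixels = geometryToPixels(features[i].geometry);  // hypothetical helper
        context.beginPath();
        context.moveTo(pixels[0].x, pixels[0].y);
        for (var j = 1; j < pixels.length; j++) {
            context.lineTo(pixels[j].x, pixels[j].y);
        }
        context.closePath();
        context.fill();    // uses the current fillStyle
        context.stroke();  // uses the current strokeStyle
    }
}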


2.3.1.3 Comparison

This chapter will compare the two renderer implementations by running a performance test that simulates the typical use of a vector layer.

A basic test suite has been developed to run the tests, which is also used in chapter 2.3.3 “Performance Evaluation”. The test script contains an OpenLayers map which has a single vector layer. For this first test, a GeoJSON file containing the world countries' boundaries as MultiPolygon geometries (169 features with a total of 4506 vertices) is used as vector data for the layer. The following four test cases are executed for this test:

Show: Calls the method zoomToMaxExtent() on the OpenLayers Map object, which triggers the initial drawing of the vector layer.

Show and pan 10 times: Like test case Show, but after displaying the layer the map is additionally centered at a random position ten times.

Show and zoom 10 times: Like Show and pan 10 times, but instead of panning, the zoom-level is changed ten times.

Select feature: Features of a vector layer can be selected by clicking on the map with the mouse pointer. This test case simulates that operation for a random feature.

These four test cases are run for both renderers in the browser Chromium 5. The renderer files are those of revision 10554 of the OpenLayers SVN repository [OPE10c]. The test results, based on 50 test runs using an 800x800 pixel map, are shown in Table 2.


Test Case                  Canvas     SVG
Show                        44 ms    136 ms
Show and pan 10 times      887 ms    263 ms
Show and zoom 10 times     903 ms    778 ms
Select feature              77 ms    0.3 ms

Table 2: Results for the first test comparing the existing renderer implementations

The first thing to notice is that the canvas renderer is about three times faster than the SVG renderer in showing the vector layer. When a geometry is drawn with canvas, canvas just paints the particular pixels on the drawing surface. But SVG also creates an SVG element for the geometry in the DOM, which contains the coordinates and the styling information. The additional work to manage this internal data structure explains the slower execution time.

But the overhead is justified when the map is panned. With SVG, panning simply means specifying a transformation, see 2.3.1.1 “SVG Renderer”. With canvas, all features have to be redrawn individually, just as when the layer is shown for the first time. So the time to run test case Show and pan 10 times should be approximately the time of test case Show multiplied by 11 (a bit less, as the canvas element is created only once). But instead of ~484 ms, the test case takes almost twice as long. There is a simple reason for this: because of a bug in the canvas renderer implementation, the method redraw() is called twice whenever the map is panned. Exactly the same happens for test case Show and zoom 10 times: every time the zoom-level changes, the method redraw() is executed two times.

Zooming is slower than panning with SVG. This is because the elements' coordinates inside the SVG have to be updated when zooming, see 2.3.1.1 “SVG Renderer”.

For the last test case, Select feature, there is a big difference between the execution times for SVG and canvas. In SVG, event handlers can be defined for a single geometry, so finding the feature that the mouse is pointing to is handled by SVG. But for canvas, events can only be defined for the whole canvas element. So when the user clicks on the canvas, the canvas renderer has to find the geometry that was rendered at the position the user clicked at. This is what the method Renderer.Canvas.getFeatureIdFromEvent(event) does: a 10x10 pixel rectangle is built around the mouse position. Then, using this rectangle, intersection tests with the features' geometries are run until an intersecting feature is found. And as the intersection test has to loop through all coordinates of the geometries, the execution time is much slower than with SVG.


2.3.2 Improving the Canvas Vector Renderer

In the first test of the previous chapter, the canvas renderer implementation had the better result in only one test case (Show). If the extra time caused by the bug that renders all features twice is not taken into account, the renderer is also faster for test case Show and zoom 10 times. So this chapter will focus on optimizing the two other test cases, Show and pan 10 times and Select feature.

Both test cases have in common that one part of the operation is finding the features that intersect or lie within a search rectangle. For Select feature this rectangle is the tolerance area around the mouse pointer. For Show and pan 10 times the rectangle is the map extent; the following explains why: the canvas renderer always draws all features of a layer, no matter whether they are actually within the current map extent or not. So by only processing the visible features, the rendering time could improve. But to do so, those features have to be found first.

A common method to search geometries within a certain area is using a spatial index, for example an R-Tree (rectangle tree). R-Trees are hierarchical data structures in which every node only contains child nodes that are within the same minimum bounding rectangle (MBR; bounding box and bounds will be used interchangeably for this term in the following), see Illustration 15. Leaf nodes store the actual features and the MBR of the feature's geometry. New features are inserted into the node that requires the least enlargement of its bounds, so that “nearby” features are grouped together, see [WIK10c]. The complexity of finding features inside a given rectangle is O(log_m N), where N is the number of all entries and m is the number of entries per node [PEI10]. The worst-case complexity is O(N), the same as for a linear search.

An alternative approach for spatial indexes is segmenting the two-dimensional space into a grid, for example with quadtrees [WIK10d]. But as there was an existing R-Tree implementation in JavaScript (R-Tree Library for Javascript [RIV10]), this library was integrated into the canvas renderer.


Illustration 15: Example of an R-Tree [WIK10c]

Now, when a feature is added to the renderer, it is inserted into the R-Tree. The R-Tree search is utilized in two places: inside the method redraw(), the R-Tree is used to find the features within the current map extent, and getFeatureIdFromEvent() selects the features near the mouse pointer using an R-Tree query.

To retain the possibility of using the canvas renderer without an additional dependency on the external R-Tree library, three different run modes were created: rtree, interactive and static. Mode rtree uses the R-Tree as described above. Mode interactive is a slight improvement of the original implementation: the method getFeatureIdFromEvent() now first executes a bounding box intersection test between the rectangle around the mouse pointer and the MBRs of the geometries; only if this test succeeds is the intersection tested with the original geometry. The method redraw() also uses a bounding box intersection test to find the features within the current map extent.

Mode static is basically the original implementation, but without the bug of calling the method redraw() twice.
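The bounding box pre-test used by the modes interactive and rtree boils down to a simple rectangle intersection check, sketched here as an illustrative helper (bounds objects with left, bottom, right and top values are assumed):

// True if two axis-aligned rectangles overlap or touch
function boundsIntersect(a, b) {
    return a.left <= b.right && b.left <= a.right &&
           a.bottom <= b.top && b.bottom <= a.top;
}

// e.g. a feature can be skipped during redraw() if
// boundsIntersect(featureBounds, mapExtent) returns false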


2.3.3 Performance Evaluation

This chapter will evaluate the changes made to the canvas vector renderer. First the test environment will be described, and then the results for the individual test cases will be presented.

2.3.3.1 Test Environment

The tests are run using the test script introduced in chapter 2.3.1.3 “Comparison”. The script allows running various test cases on different data sets using several renderer implementations, see Illustration 16.


Illustration 16: Performance test script for vector data

The test data is stored in GeoJSON files. Three different data sets were used to run the tests. One data set consists of files with a varying number of randomly generated points, the second data set contains the world countries' boundaries as (Multi-)Polygons [SAN10], and the last one contains main rivers as lines [NAT10]. The world countries and rivers were originally stored in Shapefiles. These Shapefiles were imported into the spatial database PostgreSQL/PostGIS, and then simplified geometries were written into GeoJSON files with a varying simplification tolerance. Simplification reduces the number of points in a line or polygon.

All test cases were run with an 800x800 pixel map in Chromium 5.0.375 on Ubuntu 10.04 with a quad-core CPU (4 x 2.5 GHz). The browser Chrome/Chromium was chosen because its JavaScript engine was seen as the fastest in performance tests [CEL10].

Originally the plan was to execute every test case of the performance evaluation 10 times and then to use the average result. But it quickly turned out that 10 test runs do not produce reliable results, so that every test case that takes less than 500 milliseconds is run at least 30 times, similar to the recommendations given by [RES08].

2.3.3.2 Test Results

Table 3 shows the results for the original test cases run in chapter 2.3.1.3 “Comparison” with the modified canvas renderer. Further test cases can be found in chapter 6.1 “Vector renderer performance tests” in the appendix.

Test Case                  Original    Static   Interactive   R-Tree      SVG
Show                          44 ms     48 ms        126 ms   132 ms   136 ms
Show and pan 10 times        887 ms    367 ms        433 ms   438 ms   263 ms
Show and zoom 10 times       903 ms    471 ms        487 ms   489 ms   778 ms
Select feature                77 ms     75 ms          9 ms     8 ms   0.3 ms

Table 3: Comparison of the original implementation with the modifications

One of the first things to note is that showing and zooming for all three new canvas renderer modes takes only half of the time of the original renderer, which demonstrates the impact of the bug that called the method redraw() twice.


It might be surprising that showing a layer for the first time with the two canvas modes Interactive and R-Tree is as slow as with SVG, while canvas mode Static is about as fast as the original canvas renderer. The SVG renderer and the canvas modes Interactive and R-Tree have in common that they require the bounds of the features' geometries. When the features are drawn for the first time, the bounds are cached. But calculating the bounds takes a significant amount of time, for SVG 40 to 60 % of the rendering time, see Table 4. The rendering times without the bounds calculation can be found in Table 17 (appendix).

Data                          # of vertices   Bounds calculation   % of the SVG render time
countries-simplified-1                 2150                32 ms                         45
countries-simplified-0.5               4506                55 ms                         37
countries-simplified-0.05             45031               789 ms                         62
countries-simplified-0.005           159771              2204 ms                         58
countries-non-simplified             403150              5540 ms                         61

Table 4: Times to calculate the bounds of geometries

One of the goals in chapter 2.3.2 “Improving the Canvas Vector Renderer” was to improve panning with canvas by only rendering the visible features. The results for test case Show and pan 10 times in Table 3 do not show an acceleration, because the map was panned at its initial zoom-level, where all or most features are visible. But if the map is panned at a smaller extent, where only a smaller number of features is visible, the rendering time can even be less than with SVG, see Table 5.

Test case                        Static     Interactive   R-Tree     SVG
Pan 10 times                     8210 ms    10481 ms      10793 ms   5024 ms
Pan 10 times in smaller extent   11236 ms   1774 ms       2024 ms    3372 ms

Table 5: Times for two different panning test cases (Test data: countries-simplified-0.005)

Selecting features could also be improved by the use of R-Tree queries and bounding box intersection tests. The times of SVG could not be reached, but the canvas modes Interactive and R-Tree are fast enough to set click events for large data and even hover events for smaller data. The bounding box intersection tests of mode Interactive scale surprisingly well compared to mode R-Tree. For smaller data, the overhead to maintain the R-Tree does not pay off and the bounding box intersection tests over all features are faster. Only for larger data is mode R-Tree faster than mode Interactive, see Table 6. It also depends on the geometry type: while the R-Tree speeds up the selection of complex geometries like polygons, selecting lines works just as well with plain bounding box intersection tests.

Data                          # of vertices   Static     Interactive   R-Tree    SVG
countries-simplified-0.05     45031           2335 ms    104 ms        289 ms    1.3 ms
countries-simplified-0.005    159771          26372 ms   2451 ms       1620 ms   1.5 ms
rivers-simplified-0.005       121281          3203 ms    4.8 ms        4.3 ms    1.4 ms
rivers-non-simplified         253004          4872 ms    4.9 ms        4.8 ms    1.4 ms

Table 6: Results for selecting ten features with different data sets


2.3.4 Discussion

The test results in the previous chapter did not reveal a clear winner. Both the canvas renderer and the SVG renderer have their advantages and disadvantages. The canvas renderer is strong in displaying a layer for the first time and in zooming. The SVG renderer has its strength in feature selection and panning (even though not for smaller map extents). So which implementation to choose depends on the use case.

For editing geometries, like moving a point, modifying polygons or sketching new geometries, SVG is still the first choice. When editing, the map has to be redrawn very often, for example on every mousemove event when dragging a point or vertex. So even for a small number of shown features, the canvas renderer is not fast enough in redrawing the map view.

For static maps that require no further interaction like panning, zooming or selecting features, the canvas renderer is the best solution, especially for large data. But when it comes to setting hover events, it is still better to use SVG, even for large data.

But in general, one has to keep in mind that canvas and SVG are two different techniques. All improvements made to the canvas renderer are attempts to implement a scene graph in JavaScript, while for SVG the browser natively handles this task quite efficiently [KAI09]. So instead of reinventing SVG with canvas in JavaScript, it may be more appropriate to focus on the particular strengths of the two techniques. The next chapter describes two ideas that take advantage of the abilities of canvas and also combine the strengths of SVG and canvas.

But it may also be worth optimizing the SVG renderer, especially since the HTML5 specification now officially allows SVG to be used as an inline element in HTML documents [W3C10] and Microsoft is adding SVG support to Internet Explorer 9 [MIC10]. One optimization may be improving the zooming. Instead of re-calculating the coordinates of all shapes, see 2.3.1.1 "SVG Renderer", only re-calculating the style values (line width and symbol size) and the circle radius for points might lead to a performance benefit. Another possibility to enhance the SVG rendering times is to calculate the geometries' bounds on the server side. As shown in the previous chapter, calculating the bounds takes 40 to 60% of the time to initially show a layer. Heavy operations like these are better performed on a server, especially since the additional amount of data that has to be downloaded by the client is low (at minimum four floating point numbers per feature).


2.3.5 Outlook

While the improvements made in chapter 2.3.2 "Improving the Canvas Vector Renderer" only try to implement the same functionality of SVG with canvas, the following subsections describe two examples that actually take advantage of the abilities of canvas.

2.3.5.1 Heat Map Generation

In GIS there are various ways to symbolize vector data. One approach to visualize the density of points is to create so-called heat maps. Areas with a high concentration of points are colored with a "warm" color and a "cold" color is used for areas with a low concentration, see Illustration 17.


Illustration 17: OpenLayers HeatMap layer

To demonstrate the strength of canvas, a new OpenLayers layer type HeatMap has been developed, which can be used to render such heat maps. The class HeatMap inherits from Layer.Vector, but it has the limitation that it can only be used with points. Each HeatMap object has an associated CanvasHeatMap renderer, which is a subclass of Renderer.Canvas. CanvasHeatMap takes care of the actual rendering of the heat map.


Rendering a heat map consists of two steps: creating a gray-scale intensity mask and then coloring the mask [VES08]. The intensity mask is created by drawing a radial gradient for every point. The color at the center point of the gradient circles is rgba(10, 10, 10, 255) and the transition ends with the transparent color rgba(10, 10, 10, 0), so that the alpha value varies. If two points are close together, the gradient of one point is drawn on top of the other point and the alpha values add up. Once all points are drawn, the coloring starts. The coloring uses a color map with 256 elements, which is a linear gradient with multiple color steps. All pixels of the intensity mask are looped over and, depending on the alpha value of that particular pixel, the associated color from the color map is assigned, see Illustration 18.


Illustration 18: On the left: Gray­scale intensity mask. On the right: After coloring
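The coloring step can be sketched as follows. It is a simplification of the actual CanvasHeatMap implementation and assumes that the 256-element color map is available as an array colorMap of {r, g, b} objects:

// Color the gray-scale intensity mask: the accumulated alpha value of
// each pixel selects an entry from a 256-element color map.
function colorizeIntensityMask(canvas, colorMap) {
    var ctx = canvas.getContext("2d");
    var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var pixels = imageData.data;

    for (var i = 0; i < pixels.length; i += 4) {
        var alpha = pixels[i + 3];       // accumulated intensity (0-255)
        var color = colorMap[alpha];     // look up the heat map color
        pixels[i]     = color.r;
        pixels[i + 1] = color.g;
        pixels[i + 2] = color.b;
        // the alpha value is kept, so sparse areas stay transparent
    }
    ctx.putImageData(imageData, 0, 0);
}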

While creating the gray-scale intensity mask could also be done with SVG, the pixel-based coloring can only be performed with canvas. This example demonstrates how canvas opens new opportunities in visualizing vector data on the client side, which would not be possible with SVG. Another example would be representing quantities with a dot density map, see Illustration 19. Dot density maps are used to visualize the amount of an attribute within an area, for example the population [ESR10].

The coloring of the heat map has to process each pixel individually, which tends to become slow on larger maps. But used in conjunction with web workers, see 3.2.1 “Canvas Pixel Operations”, the user interface can be kept responsive.


Illustration 19: Example dot density map [DIB10]


Illustration 20: Example pie chart map [DIB10]


2.3.5.2 Combining SVG and Canvas

Values associated with a feature are typically represented with a color or by setting a corresponding point size or line width. But if more than one attribute should be visualized for a feature, bar/column or pie charts can be used as symbols, see Illustration 20.

As canvas is fast in drawing graphics and SVG has its strength in interactive vector graphics, those two technologies could be combined. The charts would be drawn in canvas and then, for each feature, the charts would be embedded into SVG using the foreignObject element.
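A rough sketch of this combination; the chart drawing routine drawPieChart() is assumed to exist and the sizes are illustrative:

var SVG_NS = "http://www.w3.org/2000/svg";
var XHTML_NS = "http://www.w3.org/1999/xhtml";

// Draw a chart on a canvas and embed it into an SVG document
// at the given position using a foreignObject element.
function embedChart(svgRoot, x, y, size) {
    var foreign = document.createElementNS(SVG_NS, "foreignObject");
    foreign.setAttribute("x", x);
    foreign.setAttribute("y", y);
    foreign.setAttribute("width", size);
    foreign.setAttribute("height", size);

    var canvas = document.createElementNS(XHTML_NS, "canvas");
    canvas.width = size;
    canvas.height = size;
    drawPieChart(canvas.getContext("2d"), size);  // assumed chart routine

    foreign.appendChild(canvas);
    svgRoot.appendChild(foreign);
}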

This chart symbology could also be achieved with canvas alone or with SVG alone. But with canvas all charts would have to be redrawn on every zooming and panning, and with SVG there would be an additional overhead, as a new SVG element would be created for every component of the chart. If the chart itself should also be interactive, however, the plain SVG implementation would be the better solution [ADC04].


2.4 Rendering Raster Data

This chapter will explain how raster data is rendered in OpenLayers and then canvas will be evaluated for displaying tiles and further uses.

2.4.1 The OpenLayers Raster Rendering System

The OpenLayers layer type Grid is the base class for all raster layers. Raster data displayed with this layer type is organized in tiles. The Grid layer maintains a grid of tiles that are required to display the current map extent. A tile is represented by the OpenLayers class Tile and its subclass Tile.Image, which is responsible for displaying the tile's image as an HTML image element in the DOM.

A Grid layer can be used in two different modes: singleTile and tiled. If the property singleTile is set to true, the grid consists of only one tile. This tile has the size of the map, but by setting the property ratio a larger tile can be created. A larger tile has the advantage that, when panning, a new tile image only has to be requested when the map extent is no longer covered by the tile. Otherwise a new image must be requested whenever the map is moved. Mode singleTile can only be used for services that support the customization of the map image, for example WMS.

Commonly, the Grid layer is used in tiled mode, with multiple tiles that all have the same size (usually 256x256 pixels). Similar to mode singleTile, a larger extent than actually visible can be loaded to improve panning, which is demonstrated in Illustration 21. Part (a) shows how the map is split into tiles. In (b), the visible map extent is highlighted with a red rectangle. All tiles covered by this extent must be loaded (c). A number of additional tiles, specified by the property buffer, can be pre-loaded (d). When the map is panned, these tiles can be displayed without delay.
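To illustrate the two modes, a WMS layer could be configured roughly as follows; the option names singleTile, ratio and buffer are the layer properties described above, while the URL and layer name are placeholders:

// singleTile mode: one map-sized image, enlarged by the ratio factor
var singleTileLayer = new OpenLayers.Layer.WMS("Elevation",
    "http://example.com/wms", {layers: "elevation"},
    {singleTile: true, ratio: 1.5});

// tiled mode: 256x256 pixel tiles, one extra ring of tiles pre-loaded
var tiledLayer = new OpenLayers.Layer.WMS("Elevation",
    "http://example.com/wms", {layers: "elevation"},
    {singleTile: false, buffer: 1});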

As for layer Vector, whenever the map extent changes, the method moveTo() is called. If the map is panned, the tile grid is moved accordingly by removing rows and columns that are no longer required and by adding new tiles that become visible or are inside the pre-loading buffer area. If the zoom level has changed, the whole tile grid has to be recalculated and all new tiles are loaded. Tiles in the center of the grid are loaded first, then the algorithm continues in a spiral to the outside.

Illustration 21: Pre-loading of tiles (buffer=1) [SCH07]


2.4.2 Analysis and Implementation

This chapter will explain the modifications made to layer Grid to support rendering tiles on canvas elements.

There have already been experiments that displayed tiles using canvas [OPE10d]. These experiments use a single canvas element per layer, on which all tiles are drawn once all images of the tiles have finished loading. If the loading of one tile is delayed because of network errors, the whole layer will not be drawn until the request of that one tile returns, successfully or not.

The implementation developed for this thesis uses a slightly different approach. Instead of waiting for all tiles to finish loading, the tiles are drawn as soon as they are ready. The first tile of a render request that finishes loading resets the canvas drawing surface, so that the previous view is cleared. Additionally, a second canvas mode for layer Grid has been implemented which draws every tile on its own canvas element instead of all tiles on a single element. Its advantage is that only the new tiles have to be drawn when panning. Otherwise all tiles must be drawn again, which produces a flickering effect when the map is panned several times in short intervals. Layer Grid can now be used in three modes: NoCanvas, OneCanvasPerLayer and OneCanvasPerTile.

A new class CanvasImage, which inherits from OpenLayers.Tile, has been created for mode OneCanvasPerTile. Mode OneCanvasPerLayer uses the class VirtualCanvasImage, which is a subclass of CanvasImage. When the map extent changes, layer Grid updates the tile grid and then method draw() is called on every tile. If a tile is drawn for the first time, a new canvas element is created and added to the layer's div element in the DOM. After that the position of the canvas element is set and the actual loading of the tile's image can start. To draw an image on a canvas, the image file first must be downloaded by the browser. To ensure that the image is available at the moment it is drawn on the canvas, an onload event handler is used.


Listing 5 shows the basic principle of how the image is drawn. An Image object is created and the reference to that object is stored on the tile. Then the onload handler is set on the image object, where the event handler function is bound to the variable context, so that the keyword this references the variable context inside the function. The handler function simply redirects the event to the method onLoad(). Because Tile objects are reused when the tile grid is recalculated, the tile might already have requested a new image at the moment the event handler is called. So the image is only drawn on the canvas if it is the last image that the tile requested.

draw: function() {
    [...]
    var image = new Image();
    this.lastImage = image;

    // bind the tile and its image to the onload handler
    var context = {
        image: image,
        tile: this
    };
    var onLoadProxy = function() {
        this.tile.onLoad(this);
    };
    image.onload = OpenLayers.Function.bind(onLoadProxy, context);

    image.src = this.url;
    [...]
},

onLoad: function(context) {
    // ignore the event if the tile has already requested a newer image
    if (context.image !== this.lastImage) {
        return;
    }
    this.canvasContext.drawImage(context.image, 0, 0);
    this.events.triggerEvent("loadend");
},

Listing 5: Basic principle of drawing the tile's image on a canvas in class CanvasImage


2.4.3 Performance Evaluation

In this section two test cases will compare the performance of the original implementation, which uses HTML image elements, to the two newly introduced canvas modes.

The first test case measures the time to display a layer that uses OpenStreetMap tiles [OSM10]. The tiles were cached on a local hard disk, so that there is no interference from network jitter while downloading tiles. The test case was run for varying map sizes (from 512x512 to 4096x4096 pixels) and thus for a varying number of tiles (from 9 to 289 tiles). The buffer property of the layer is set to 0, so that only visible tiles are loaded. The results for the first test case are shown in Illustration 22.

[Chart: rendering time t [ms] versus map size in pixels for the modes NoCanvas, OneCanvasPerLayer and OneCanvasPerTile]

Illustration 22: Chart for test case 01 “Show”

Both canvas modes are slower than the original implementation. This is not a surprise as displaying images using the HTML image element is one of the operations a browser is optimized for.

The second test case first shows the layer at point (-160, 60) in zoom-level 5 and then pans ten times by a fixed offset (x=+15 and y=-10). The results are shown in Illustration 23.


[Chart: rendering time t [ms] versus map size in pixels for the modes NoCanvas, OneCanvasPerLayer and OneCanvasPerTile]

Illustration 23: Chart for test case 02 “Show and pan 10 times”

One of the first things to notice is that mode OneCanvasPerLayer is slower than mode OneCanvasPerTile, especially for larger map sizes. This is because the whole canvas has to be redrawn every time the map is panned. For mode OneCanvasPerTile only new tiles are drawn.

The only purpose of these two tests was to show that the performance of the new canvas modes is still reasonable with regard to the user experience. Drawing on a canvas is slower than directly displaying the image in an HTML image element, but it is not significantly slower. This is an important precondition for further uses beyond displaying tiles, as the next chapter will describe.


2.4.4 Beyond Displaying Tiles

It does not make sense to use canvas just to display tiles; the original implementation that uses HTML image elements is more suitable for this task. But canvas can do more than just showing tiles. Taking advantage of the features of canvas enables new use cases that were not possible without it. The following subsections will describe four examples that demonstrate the capabilities.

2.4.4.1 Elevation Diagram Generation

Once a tile is drawn with canvas, the individual pixels of the image can be accessed using the canvas function getImageData(). The values read from the pixels can be displayed or processed further. The left side of Illustration 24 shows a map with an elevation layer served by a NASA WMS server [NAS10]. The elevation values of this layer are scaled to 8 bit. The highest value is represented by the color white (rgb(255, 255, 255)) and the lowest value by the color black (rgb(0, 0, 0)). Values in between are gray tones where the color values for red, green and blue are the same. When the map is hovered with the mouse pointer, the pixel information at the mouse position is read and plotted in a graph.


Illustration 24: Creating elevation profiles from canvas
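A minimal sketch of reading the gray value under the mouse pointer; canvas is assumed to be the element the elevation tiles were drawn on, the event offset handling is simplified and the plotting routine plotElevation() is assumed to exist:

canvas.addEventListener("mousemove", function(event) {
    var ctx = canvas.getContext("2d");
    // read the single pixel under the mouse pointer
    // (requires the tile image to be loaded from the same origin)
    var pixel = ctx.getImageData(event.offsetX, event.offsetY, 1, 1).data;
    // red, green and blue are identical for the gray-scale elevation layer
    var grayValue = pixel[0];
    plotElevation(grayValue);
}, false);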

The values shown in the graph are not real heights. But if the maximum and minimum height were known, an approximate height could be calculated from the scaled values. The NASA WMS also serves layers that encode the heights as short integers or floating point numbers (real). But when the layer image is drawn onto the canvas, the browser does not know how to interpret the encoding and the real values are lost.

But if the data were encoded like a conventional image using the four slots for the color information (8 bits each for the red, green, blue and alpha value), then these values could be read from the canvas. The resulting image will probably not have a meaningful visual representation, but the layer could be used as a transparent overlay from which just the data is read.
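As an illustration of this idea, a 32-bit value packed into the four channels could be recovered as sketched below. The encoding itself is an assumption, not something the NASA WMS offers; note also that browsers may premultiply the alpha channel, so in practice it is safer to keep alpha constant and use only the red, green and blue channels:

// Recombine a 32-bit value that was packed into the r, g, b and a channels
// of one pixel (most significant byte first).
function decodeValue(pixels, offset) {
    var r = pixels[offset];
    var g = pixels[offset + 1];
    var b = pixels[offset + 2];
    var a = pixels[offset + 3];
    // multiplication avoids the sign problems of 32-bit shifts in JavaScript
    return r * 16777216 + g * 65536 + b * 256 + a;
}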

2.4.4.2 Tile Graphic Filters

Pixels can not only be read from a canvas, they can also be written back to the canvas using the function putImageData(). This can be used to apply graphic filters to tile images, for example to adjust the brightness and contrast of satellite images, see Illustration 25.


Illustration 25: Adjusting the brightness and contrast of tiles

For the implementation, a new abstract class Tile.CanvasFilter has been introduced, see Illustration 26. Subclasses must override the method process(), which accepts an Image object and is supposed to return a Canvas object. Instances of these subclasses can be assigned to the property canvasFilter of Layer.Grid objects. When a tile's image is about to be drawn, the method process() of the filter is called, according to the Strategy pattern [GAM04]. Then the returned canvas is displayed instead of the original image.


Illustration 26: Class diagram Grid ­ CanvasFilter

In the above example, the JavaScript library Pixastic [SEI10] is used to execute the graphic filter. But in general, all kinds of pixel operations can be applied. For example, a filter could be used to change the color saturation, hue and lightness or to adjust color channels. Besides, color values could be reclassified, for example to colorize the gray-scale elevation images of the previous chapter, see also [FÖR09].
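A filter based only on plain canvas pixel operations might look roughly like the following sketch; the subclassing uses the OpenLayers.Class mechanism, while the exact interface of Tile.CanvasFilter is simplified here:

// A simple brightness filter: adds a constant offset to every color channel.
var BrightnessFilter = OpenLayers.Class(OpenLayers.Tile.CanvasFilter, {
    brightness: 40,

    process: function(image) {
        var canvas = document.createElement("canvas");
        canvas.width = image.width;
        canvas.height = image.height;
        var ctx = canvas.getContext("2d");
        ctx.drawImage(image, 0, 0);

        var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
        var pixels = imageData.data;
        for (var i = 0; i < pixels.length; i += 4) {
            pixels[i]     = Math.min(255, pixels[i]     + this.brightness);
            pixels[i + 1] = Math.min(255, pixels[i + 1] + this.brightness);
            pixels[i + 2] = Math.min(255, pixels[i + 2] + this.brightness);
        }
        ctx.putImageData(imageData, 0, 0);
        return canvas;
    }
});

An instance could then be assigned to a layer, for example with wmsLayer.canvasFilter = new BrightnessFilter();.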

2.4.4.3 Map Export

Currently, exporting a map involves making a request to a server, which generates an image or PDF file from a given set of parameters. Using canvas, the creation of an image file can be done on the client side.

The map in Illustration 27 consists of three layers: two different WMS layers and one vector layer. All three layers are rendered using canvas. Now, to export the map, the individual canvas elements are simply drawn onto a new canvas in the right order by using drawImage(). In Firefox the exported image can be saved as a file by performing a right-click on the element and choosing "Save Image As". For other browsers a download URL can be generated by calling the canvas function toDataURL(), which serializes the canvas as a URL string. Additionally, the image type (PNG or JPEG) can be specified by an optional parameter.
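A minimal sketch of this export, assuming the layer canvases are available in drawing order (the array name layerCanvases is illustrative):

function exportMap(layerCanvases, width, height) {
    var target = document.createElement("canvas");
    target.width = width;
    target.height = height;
    var ctx = target.getContext("2d");

    // draw the layers bottom-up onto the export canvas
    for (var i = 0; i < layerCanvases.length; i++) {
        ctx.drawImage(layerCanvases[i], 0, 0);
    }

    // serialize the composed map; the MIME type selects PNG or JPEG
    return target.toDataURL("image/png");
}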


Illustration 27: Export map as image

2.4.4.4 Raster Reprojection

During the last annual conference Free and Open Source Software for Geospatial (FOSS4G) in 2009, Klokan Petr Přidal developed a first prototype of reprojecting raster images in JavaScript using canvas [PRI09]. Based on this experiment, a small generic library for reprojecting images has been created and integrated into OpenLayers to reproject WMS layers.

The algorithm used takes a simple approach, see Listing 6. The individual pixels of the target drawing surface, on which the reprojected image should be drawn, are looped over and the following operations are executed for each pixel: First, the real-world coordinates in the target map projection are calculated for the pixel. Then the coordinates are reprojected to the source map projection by using the JavaScript library Proj4JS [PRO10]. Using these new coordinates, the pixel position on the source image is computed and the color information at this position on the source image is copied to the pixel on the target image.

A raster image may be seen as a grid of sample locations, where a pixel represents the value for one of these locations. But a grid with the same bounds in a different map projection may not use the same sample locations [GRA10]. So practically speaking, an image reprojected with the above algorithm may have been generated without using all pixel information of the source image. That is why professional raster reprojection libraries like GDAL [GDA10] use resampling methods (nearest-neighbor, bilinear and cubic) to interpolate the values for points in between the sample locations.

for (var y = 0; y < targetImageData.height; y++) {
    for (var x = 0; x < targetImageData.width; x++) {
        var targetPixel = {x: x, y: y};

        // real-world coordinates of the pixel in the target projection
        var targetLonLat = GDALWarp.getLonLatFromPixel(
            targetPixel, targetBounds, targetSize);

        // reproject to the source projection using Proj4JS
        var sourceLonLat = GDALWarp.transform(
            targetLonLat, targetCRS, sourceCRS);

        // corresponding pixel position on the source image
        var sourcePixel = GDALWarp.getPixelFromLonLat(
            sourceLonLat, sourceBounds, sourceSize);

        // copy the color information to the target pixel
        GDALWarp.copyPixelData(sourcePixel, sourceImageData,
            targetPixel, targetImageData);
    }
}

Listing 6: Basic raster reprojection algorithm

But even without the resampling, the quality of the reprojected images is still good enough, which creates the possibility to use map images with different projections in OpenLayers, without having to reproject the data on the server side. Illustration 28 shows OpenStreetMap tiles in Spherical Mercator overlaid with a WMS layer that uses the projection EPSG:4326.


Illustration 28: Raster reprojection in OpenLayers


2.4.5 Discussion

As demonstrated, rendering tiles on canvas creates new opportunities which would have been impossible without canvas. Reading pixel values, as shown in example 2.4.4.1 “Elevation Diagram Generation”, creates the possibility to perform raster analysis operations on the client side. Typical examples for raster analysis are: statistical analysis like computing the frequency distribution for land use grids, cell­based (cost) distance calculations or surface analysis like finding south­facing fields [ESR10]. Besides, the results of raster analysis can be visualized by writing the result grid back to a canvas, like in example 2.4.4.2 “Tile Graphic Filters”. This could be used to generate hillshade and contour images from elevation data, to reclassify cell values or for water­based analysis to simulate a flood.

[Chart: execution time t [ms] for a pixel-based canvas operation (invert) versus canvas size in pixels for Chromium 5.0.375, Firefox 3.6.8 and Opera 10.60]

Illustration 29: Execution times for pixel­based operations

These raster operations, just like the image reprojection, are pixel-based, so every pixel is processed individually. The performance of this approach is reasonable for small maps, but the execution time gets worse with an increasing map size. Still, the processing time for a 1000x1000 pixel map is less than 1 second in the browsers Chromium/Chrome, Firefox and Opera, see Illustration 29. Additionally, by using web workers, the user interface can be kept responsive during the operation and the times can even be reduced, see chapter 3.2.1 "Canvas Pixel Operations".

The first choice for integrating maps with different projections is still to do the reprojection on the server side, especially since WMS natively allows a projection code to be specified. But for exporting map images, as in example 2.4.4.3 "Map Export", raster reprojection would be an interesting possibility.


3 Web Workers

3.1 Introduction to Web Workers

The Web Workers specification [WHA10a] defines an API for running scripts in the background to avoid blocking the user interface with long tasks. A web worker is executed in its own thread, but the concurrency concept is a bit different from that of other programming languages. As JavaScript has no language constructs for synchronized access to variables, the data exchange between main script and web worker is realized through messages.

var worker = new Worker('worker.js');

worker.onmessage = function(event) {
    var result = event.data;
    console.log('Sum: ' + result);
};

var task = {
    a: 2,
    b: 3
};
worker.postMessage(task);

Listing 7: Basic use of web workers (main script)

Listing 7 and Listing 8 give a trivial example of how to use web workers. Listing 7 contains the code of the main script which creates the web worker. A web worker is started by calling the Worker constructor and passing the path to the web worker script. The initialized Worker object acts as the interface for the communication between main script and web worker. Messages can be sent to the web worker by calling the method postMessage(), and messages received from a web worker can be handled by a function assigned to the property onmessage. In addition, a handler that is called when an error occurs inside the web worker can be set on the property onerror, and a web worker can be canceled using the method terminate().

onmessage = function(event) {
    var task = event.data;
    var result = task.a + task.b;

    postMessage(result);
    close();
};

Listing 8: Basic use of web workers (worker script)

Listing 8 shows the code for the web worker file. Like in the main script, messages can be sent and received using postMessage() and onmessage. As a web worker runs in a new separate environment, JavaScript files imported in the main script are not available in the web worker. These files and/or additional scripts can be included using the function importScripts(). A web worker can terminate itself by calling close().

An important limitation of the current web worker specification is that web workers do not have access to the DOM and also cannot create Node objects. The global attribute document is not available, because the DOM is not thread-safe. Apart from that, all kinds of operations can be performed inside a web worker, including creating new web workers and making XMLHttpRequests.

Messages sent between main script and web worker can be simple data types, like integers or booleans, but also arbitrary objects. Before a message is sent, a copy is made using the internal structured cloning algorithm [WHA10]. This algorithm traverses the object structure and creates a copy of each element depending on its type, but cyclic references between objects are not supported. Functions and the prototype of an object, a way to emulate classes in JavaScript, cannot be cloned either.


3.2 Using Web Workers in OpenLayers

The following subsections will evaluate how web workers could be used in OpenLayers. Many parts of OpenLayers require access to the DOM, but there are some long running operations that could be executed in a web worker, as the following will show.

3.2.1 Canvas Pixel Operations

The heat map generation in chapter 2.3 "Rendering Vector Data" and the tile filters and raster reprojection in chapter 2.4 "Rendering Raster Data" all perform pixel-based operations on a canvas, so the execution time will increase accordingly for larger maps. To keep the user interface responsive during the processing, web workers can be utilized.

The pixel values are stored as an array in ImageData objects. These objects can be sent to a web worker, manipulated in the web worker and then sent back to the main script, where the ImageData object is written to a canvas element again. The canvas element itself, as a DOM element, cannot be passed to a web worker, so drawing functions like fillRect() or lineTo() cannot be executed inside a web worker. Therefore the first step of the heat map generation, drawing the gray-scale intensity mask, is still run in the main script; only the pixel-based coloring can be parallelized.

ImageData objects are created using the canvas method getImageData(), which also allows a specific region of the canvas to be exported. Only this particular region is then updated when writing the modified pixel data back to the canvas using putImageData(). This creates the possibility to distribute the processing of a canvas over multiple web workers, where each web worker independently operates on a part of the canvas.

A class CanvasBarrier has been introduced, which takes care of splitting the canvas into a given number of rows and starting web workers for each row. Once all web workers have finished their work, the canvas is reassembled and a specified handler function is called.
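The underlying idea, reduced to a single slice and not the actual CanvasBarrier implementation, can be sketched as follows; the worker file name invert-worker.js is hypothetical:

// main script: process one row of the canvas in a worker
var ctx = canvas.getContext("2d");
var rowHeight = 100;
var slice = ctx.getImageData(0, 0, canvas.width, rowHeight);

var worker = new Worker("invert-worker.js");
worker.onmessage = function(event) {
    // write the processed pixels back to the same region
    ctx.putImageData(event.data, 0, 0);
};
worker.postMessage(slice);

// invert-worker.js: invert the received pixels and return them
onmessage = function(event) {
    var pixels = event.data.data;
    for (var i = 0; i < pixels.length; i += 4) {
        pixels[i]     = 255 - pixels[i];
        pixels[i + 1] = 255 - pixels[i + 1];
        pixels[i + 2] = 255 - pixels[i + 2];
    }
    postMessage(event.data);
};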


Illustration 30 shows the results of a performance test comparing the execution times for inverting a canvas, run in the main script and split across a varying number of web workers. The tests were executed on a system with two dual-core CPUs.

[Chart: execution time t [ms] for a pixel-based canvas operation (invert) versus map size in pixels for the blocking version and for 1, 2, 4 and 8 web workers]

Illustration 30: Execution times for pixel­based operations (blocking and as web worker)

The execution in a single web worker is slower than the blocking version because of the overhead to start the web worker and to pass the pixel data. But even a single web worker has the advantage that the user interface is not blocked during the long-lasting operation, which can also be canceled.

When using multiple web workers, the execution time can actually be reduced. Processing a 1600x1600 pixel canvas in four web workers takes only half the time of the blocking execution. Currently, there is no possibility to retrieve the number of available CPU cores in JavaScript, so that the number of web workers could be scaled accordingly. But the WHATWG is considering providing this information in a future version of the web workers specification [WHA10b].


3.2.2 File Parsing

In OpenLayers, reading the features for a vector layer consists of two steps: loading a file or URL and then parsing the text-based content. While the loading is done in the background using an asynchronous XMLHttpRequest [W3C10a], analyzing the response and creating OpenLayers Feature and Geometry objects is a blocking operation. As requests can also be made inside a web worker, the whole process can be run asynchronously, which might reduce the blocking time spent in the main script. To be able to send objects between web worker and main script, a number of issues must be considered, which will be discussed in the following.

When objects are sent between web worker and main script, the objects lose the reference to their prototype (__proto__), similar to a link to their class. So objects will just consist of their attributes; the methods cannot be called any more. But to be able to add the features created in a web worker to a layer, the prototypes or classes have to be reassigned. Thus, before sending objects, the class name of each OpenLayers object is stored as an attribute directly on the object, see Listing 9.

onmessage = function(event) {
    fakeEnvironment();
    importOpenLayersScripts();

    var point = new OpenLayers.Geometry.Point(0, 0);
    // store the class name directly on the object, as the prototype is lost
    point.CLASS_NAME = point.__proto__.CLASS_NAME;

    postMessage(point);
    close();
};

Listing 9: Sending OpenLayers objects (worker script)

Then inside the main script, a reference to the class can be obtained using the class name, similar to Class.forName() in Java [ORA10]. To do so, the JavaScript function eval() is called, which executes a string of JavaScript code and therefore must be used with care. Using the reference to the class, the default prototype of the received objects can then be overridden. The reassigned prototype restores the object, so that methods can be called again, see Listing 10.

var worker = new Worker('worker.js');

worker.onmessage = function(event) {
    var point = event.data;

    // look up the class by its name and reassign the prototype
    var clazz = eval(point.CLASS_NAME);
    point.__proto__ = clazz.prototype;

    console.log(point.getBounds());
};

worker.postMessage('make me a point, please');

Listing 10: Sending OpenLayers objects (main script)

In OpenLayers, all Geometry and Feature objects have a unique identifier, which is assigned during initialization. But when the identifier is generated inside a web worker, there is no guarantee that the identifier is unique inside the main script. So all identifiers have to be regenerated for objects created in a web worker.

Geometry objects in a geometry collection keep a back­reference to the parent geometry. As the internal structured cloning algorithm does not support cycle­references, this reference has to be resolved before sending. Then in the main script, the reference can be reassigned.
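A simplified sketch of how such a back-reference could be removed before posting and reassigned afterwards (the real WorkerTools filters handle arbitrarily nested collections):

// worker: remove the cyclic parent references before posting
for (var i = 0; i < collection.components.length; i++) {
    collection.components[i].parent = null;
}
postMessage(collection);

// main script: restore the references after receiving the collection
for (var i = 0; i < collection.components.length; i++) {
    collection.components[i].parent = collection;
}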

In Listing 9, before creating the Point object, the two functions fakeEnvironment() and importOpenLayersScripts() are called. OpenLayers, in some parts, tries to access the global attributes document and window, which are not available inside a web worker. So the function fakeEnvironment() creates dummy objects for document and window, which allows the OpenLayers library to be used in a web worker. The required scripts of OpenLayers are included in importOpenLayersScripts(). Currently the individual OpenLayers source files are imported, but for production all required files may be compiled into a single, minified source file (minification removes all unnecessary characters like white spaces and comments [WIK10e]).

The functionality for preparing objects and for restoring them again when sent as a message has been bundled into the methods WorkerTools.exportData() and WorkerTools.importData(). Additionally, filters to regenerate the identifiers or to resolve and reassign cyclic references can be used.

The newly introduced OpenLayers protocol HTTP.Async takes care of starting a web worker that executes the loading and parsing of a file. Table 7 shows the results of a performance test comparing the new protocol with the original implementation. The times are in milliseconds; column "Total" is the time from triggering the operation until the geometries are ready to use and column "Blocking" is the time spent in the main script.

 

                 Original               Web Worker
# of points      Total      Blocking    Total      Blocking
10               5.2        1.1         97         0.7
100              9.4        4.3         110        2.3
500              23         17          163        12
1000             40         34          219        16.2
5000             189        176         770        126
10000            402        377         1462       229
20000            823        783         2931       515

Table 7: Performance test for protocol HTTP.Async

The new implementation reduces the blocking time, but the total time is significantly higher. To understand why it is so much slower, the times to execute the individual operations of protocol HTTP.Async have been measured, see Illustration 31. Only the operations "Load file" and "Parse file" are also executed in the original implementation; all other operations are overhead to run the process in a web worker. Just starting the web worker and importing the OpenLayers scripts takes about 100 milliseconds. But the majority of the time is spent on preparing, passing and restoring the parsed features ("Restore/Post/Prepare output parameters"). All three operations have to traverse the whole object tree, which slows the process down.

[Chart: stacked execution times t [ms] for the HTTP.Async operations Prepare/Post/Restore input parameters, Start Web Worker, Import scripts, Load file, Parse file and Prepare/Post/Restore output parameters]

Illustration 31: Detailed times for HTTP.Async (10000 points)

3.2.3 Geometry Functions

Geometry functions like calculating the area of a polygon or performing a coordinate system transformation seem to be slow operations, because each coordinate of the geometry has to be processed individually. But as it turned out, these operations are not that slow. And because of the overhead of sending large object structures, as shown in the previous chapter, geometry functions are better executed directly in the main script. For example, calculating the areas of 246 polygons with 403150 vertices takes 20 milliseconds in the main script, but more than 25 seconds when executed in a web worker. Detailed results can be found in chapter 6.3 "Executing geometry functions in a web worker" (appendix).


3.3 Discussion

The examples in the previous chapter showed that there are use cases that can benefit from the use of web workers, but also that there are cases that do not. Certainly, not every operation is suitable for being parallelized. But one important aspect is the data exchange between main script and web worker, as all data is cloned.

While the cloning works efficiently for simple data structures like pixel arrays, posting complex object trees, which additionally have to be treated manually, takes a significant amount of the execution time. This time might be reduced by using the Flyweight pattern [GAM04], so that objects are also stored in simple data structures. This may work for points, but for hierarchical objects like polygons, this approach would already be difficult to realize, especially because it involves deep changes to OpenLayers.

Ideally the amount of data exchanged between main script and web worker is low. A web worker could retrieve its data by making XMLHttpRequests, perform long­lasting operations on this data and then pass the aggregated result to the main script.


4 Conclusions

This thesis demonstrated that HTML5 has a huge potential in web mapping. While canvas will not replace SVG for rendering vector data, it creates new ways of visualizing geographic data using JavaScript, for example by generating heat or density maps as shown in chapter 2.3.5 “Outlook”. Also for raster data, canvas opens new opportunities to perform grid­based spatial analysis that are common in desktop GIS but were not possible inside the browser so far.

In this context it is worth mentioning the framework Cartagen [CAR10], which uses canvas to render interactive, vector-based maps that are styled with so-called Geographic Style Sheets (GSS). This project aims to provide a flexible solution for rendering dynamic maps on the client side instead of using pre-rendered static map tiles [CAR10a]. Currently only Internet Explorer 9 hardware-accelerates drawing on a canvas [MIC10]. But it is to be expected that the other browser vendors will improve their canvas implementations, so that Cartagen and the OpenLayers canvas renderer could gain performance speed-ups in the near future.

Web workers and canvas are a good combination to execute pixel-based graphic operations. The WHATWG is considering introducing an "offscreen canvas" interface to perform canvas drawing functions inside a web worker, as right now only single pixels of a canvas can be modified in the background [WHA09]. This would allow web workers to be used as a graphics buffer that renders the vector geometries and passes the drawn canvas to the main script. Apart from that, the use of web workers in OpenLayers is limited. In many parts OpenLayers requires access to the DOM, and sending complex object structures between main script and web worker is a serious bottleneck.

This thesis only focused on canvas and web workers, but the other features of HTML5 also have good potential in web mapping applications. For example, the Geolocation API allows retrieving the user's location and additional information like speed, heading and altitude (of course only when the user gives permission). The Geolocation API uses GPS devices, the IP address and nearby WIFI access points to determine the position. This API provides a serious benefit for so-called location-aware websites, which take the user's location into account, for example by simply centering the map on the user's position or by searching for the closest restaurant.
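A minimal sketch of centering an OpenLayers map on the user's position; map is assumed to be an existing OpenLayers.Map in a geographic projection:

if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(function(position) {
        var center = new OpenLayers.LonLat(
            position.coords.longitude, position.coords.latitude);
        map.setCenter(center, 14);   // zoom in on the user's position
    }, function(error) {
        console.log("Could not determine the position: " + error.message);
    });
}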

The newly introduced File API can be used to read files (that the user selected) directly from the local file system in JavaScript, without having to upload the files to a server. In web mapping this can be used to overlay a map with data from the user, for example text-based formats like KML or GPX, but also binary formats like Shapefiles, as [SHP10] demonstrates.
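A minimal sketch of reading a user-selected KML file and adding its features to a vector layer; fileInput and vectorLayer are assumed to be an existing file input element and an OpenLayers.Layer.Vector:

fileInput.onchange = function() {
    var file = fileInput.files[0];
    var reader = new FileReader();
    reader.onload = function(event) {
        // parse the text content and add the features to the layer
        var features = new OpenLayers.Format.KML().read(event.target.result);
        vectorLayer.addFeatures(features);
    };
    reader.readAsText(file);
};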

Offline web applications gain in importance, especially for mobile devices. There are experiments using the browser-based Web SQL Database to cache vector data, so that the features can be displayed while offline, see [VER10]. This technique could also be used in GIS to capture data in the field and then to synchronize the local cache with a central database once the device is connected again. Background map tiles could also be cached using an application cache manifest file. Disk space is limited on mobile devices, so the user would have to select a map extent which should be accessible when offline, and only the tiles in this extent would be cached. Alternatively, the tiles along a route could be cached for offline navigation.
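A rough sketch of caching feature data in a Web SQL Database along the lines of the cited experiment; the database and table layout are illustrative, and feature and vectorLayer are assumed to exist:

var db = openDatabase("offline_map", "1.0", "Cached vector data", 5 * 1024 * 1024);

// store a feature as GeoJSON while online
db.transaction(function(tx) {
    tx.executeSql("CREATE TABLE IF NOT EXISTS features (id TEXT, geojson TEXT)");
    tx.executeSql("INSERT INTO features (id, geojson) VALUES (?, ?)",
        [feature.id, new OpenLayers.Format.GeoJSON().write(feature)]);
});

// later, while offline, read the cached features back
db.transaction(function(tx) {
    tx.executeSql("SELECT geojson FROM features", [], function(tx, result) {
        var format = new OpenLayers.Format.GeoJSON();
        for (var i = 0; i < result.rows.length; i++) {
            vectorLayer.addFeatures(format.read(result.rows.item(i).geojson));
        }
    });
});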

Ian Hickson, the editor of the HTML5 specification, expects that HTML5 will reach W3C recommendation status (two 100% complete implementations) in 2022 or later [WHA10c]. But many features are already implemented, so the time to start using HTML5 is now.


5 References

[ADC04] Vincent T. Adcock, Implementing an integrated SVG application for real time dynamically generated Internet mapping, last visited in August 2010, http://www.svgopen.org/2004/papers/SVG_Open_Abstract/

[BOL08] Paul Bolstad, GIS Fundamentals, 2008

[CAR10] Jeffrey Warren, Cartagen - a framework for dynamic mapping, last visited in August 2010, http://cartagen.org/

[CAR10a] Maged N Kamel Boulos, Jeffrey Warren, Jianya Gong, Peng Yue, Web GIS in practice VIII: HTML5 and the canvas element for interactive online mapping, last visited in August 2010, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2838837/

[CEL10] CelticKane, JSBenchmark Results, last visited in August 2010, http://jsbenchmark.celtickane.com/Results.aspx

[CHA10] Kang-tsung Chang, Introduction to GIS, 2010

[CSS10] World Wide Web Consortium, CSS Color Module Level 3, last visited in August 2010, http://dev.w3.org/csswg/css3-color/

[DAV07] Scott Davis, GIS for Web Developers, 2007

[DIB10] David DiBiase, The Pennsylvania State University, Nature of Geographic Information, last visited in August 2010, https://www.e-education.psu.edu/natureofgeoinfo/c3_p1.html

[ESR10] ESRI, Inc., ArcGIS Help Library, 2010

[EXP10] ExplorerCanvas, ExplorerCanvas Project Page, last visited in August 2010, http://code.google.com/p/explorercanvas/

[FÖR09] Klaus Förster, Using Canvas in SVG, last visited in August 2010, http://tirolatlas.uibk.ac.at/papers/svgopen2009/paper.html

[GAM04] Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, 2004

[GDA10] GDAL, Geospatial Data Abstraction Library, last visited in August 2010, http://www.gdal.org

[GOO10] Google, HTML5 - Web Development to the next level, last visited in August 2010, http://slides.html5rocks.com/

[GRA10] GRASS Development Team, GRASS GIS manual: r.proj, last visited in August 2010, http://grass.osgeo.org/grass64/manuals/html64_user/r.proj.html

[HON08] Hongkiat, Websites We Visit: How They Look Like 10 Years Ago, last visited in August 2010, http://www.hongkiat.com/blog/websites-we-visit-how-they-look-like-10-years-ago/

[KAI09] Samuli Kaipiainen, Matti Paksula (University of Helsinki), SVG vs. Canvas on Trivial Drawing Application, last visited in August 2010, http://svgopen.org/2009/papers/54-SVG_vs_Canvas_on_Trivial_Drawing_Application/

[KHR10] Khronos Group, WebGL - OpenGL ES 2.0 for the Web, last visited in August 2010, http://www.khronos.org/webgl/

[KRÖ10] Peter Kröner, Was ist HTML5 und was nicht? Und was hätte der Kaiser dazu gesagt?, last visited in August 2010, http://www.peterkroener.de/was-ist-html5-und-was-nicht-und-was-haette-der-kaiser-dazu-gesagt/

[MIC10] Microsoft, Microsoft Announces Hardware-Accelerated HTML5, last visited in August 2010, http://www.microsoft.com/presspass/press/2010/mar10/03-16mix10day2pr.mspx

[MIT05] Tyler Mitchell, Web Mapping Illustrated, 2005

[NAS10] NASA, OnEarth JPL WMS Server, last visited in August 2010, http://onearth.jpl.nasa.gov/

[NAT10] Natural Earth, Rivers + lake centerlines, last visited in August 2010, http://www.naturalearthdata.com/downloads/10m-physical-vectors/10m-rivers-lake-centerlines/

[OPE10] OpenLayers, Free Maps for the Web, last visited in August 2010, http://www.openlayers.org/

[OPE10a] OpenLayers, OpenLayers 3: Remove (or Reduce) Overlay / Base Layer Dichotomy, last visited in August 2010, http://trac.openlayers.org/wiki/three/RemoveOverlayBaseLayerDichotomy

[OPE10b] OpenLayers, OpenLayers Ticket #669: Firefox SVG does not support full range of values, last visited in August 2010, http://trac.openlayers.org/ticket/669

[OPE10c] OpenLayers, OpenLayers SVN Revision 10554, last visited in August 2010, http://trac.openlayers.org/browser/trunk?rev=10554

[OPE10d] Kris Geusebroek, First try with html5 canvas for layers, last visited in August 2010, http://openlayers.org/pipermail/users/2009-November/014984.html

[ORA10] Oracle, Java SE 6 API Documentation: java.lang.Class, last visited in August 2010, http://download-llnw.oracle.com/javase/6/docs/api/java/lang/Class.html#forName(java.lang.String)

[OSG10] OSGeo, WMS Tile Caching, last visited in August 2010, http://wiki.osgeo.org/wiki/WMS_Tile_Caching

[OSM10] OpenStreetMap, The Free Wiki World Map, last visited in August 2010, http://www.openstreetmap.org/

[PEI10] Jian Pei, Database Systems: R-Tree, last visited in August 2010, http://www.cs.sfu.ca/CC/454/jpei/slides/R-Tree.pdf

[PIL10] Mark Pilgrim, Dive Into HTML 5, last visited in August 2010, http://www.diveintohtml5.org/

[PRI09] Klokan Petr Přidal, Raster map reprojection (warping) with JavaScript and HTML5 Canvas, last visited in August 2010, http://blog.klokan.cz/2009/10/raster-map-reprojection-warping-with.html

[PRO10] Proj4JS, Cartographic Projections Library, last visited in August 2010, http://www.proj4js.org/

[RES08] John Resig, JavaScript Benchmark Quality, last visited in August 2010, http://ejohn.org/blog/javascript-benchmark-quality/

[RIV10] Jon-Carlos Rivera, R-Tree Library for Javascript, last visited in August 2010, http://github.com/imbcmdth/RTree

[SAN10] Bjorn Sandvik, World Borders Dataset, last visited in August 2010, http://thematicmapping.org/downloads/world_borders.php

[SCH07] Emanuel Schütze, Current state of technology and potential of Smart Map Browsing in web browsers, 2007

[SEI10] Jacob Seidelin, Pixastic: JavaScript Image Processing, last visited in August 2010, http://www.pixastic.com/

[SHP10] Tom Carden, Shapefile JavaScript, last visited in August 2010, http://github.com/RandomEtc/shapefile-js

[SMU09] Boris Smus, Performance of Canvas versus SVG, last visited in August 2010, http://www.borismus.com/canvas-vs-svg-performance/

[SVG10] World Wide Web Consortium, Scalable Vector Graphics (SVG) 1.1 (Second Edition), last visited in August 2010, http://www.w3.org/TR/SVG/index.html

[SWI10] SwitzerlandMobility, Map, last visited in August 2010, http://map.veloland.ch/

[VER10] Joe Vernon, HTML5's local SQL database & OpenLayers, last visited in August 2010, http://mobilegeo.wordpress.com/2010/03/03/html5s-local-sql-database-openlayers/

[VES08] Dylan Vester, Creating Heat Maps with .NET 2.0 (C#), last visited in August 2010, http://dylanvester.com/post/Creating-Heat-Maps-with-NET-20-(C-Sharp).aspx

[W3C10] World Wide Web Consortium, HTML5 differences from HTML4, last visited in August 2010, http://www.w3.org/TR/html5-diff/

[W3C10a] World Wide Web Consortium, XMLHttpRequest, last visited in August 2010, http://www.w3.org/TR/XMLHttpRequest/

[WHA09] Web Hypertext Application Technology Working Group, [whatwg] "offscreen canvas" / Access to canvas functionality from a worker, last visited in August 2010, http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-December/024451.html

[WHA10] Web Hypertext Application Technology Working Group, HTML 5 Specification, last visited in August 2010, http://whatwg.org/html5

[WHA10a] Web Hypertext Application Technology Working Group, Web Workers, last visited in August 2010, http://www.whatwg.org/specs/web-workers/current-work/

[WHA10b] Web Hypertext Application Technology Working Group, About the delegation example, last visited in August 2010, http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-November/023993.html

[WHA10c] Web Hypertext Application Technology Working Group, FAQ - When will HTML5 be finished?, last visited in August 2010, http://wiki.whatwg.org/wiki/FAQ#When_will_HTML5_be_finished.3F

[WIK10] Wikipedia, HTML5, last visited in August 2010, http://en.wikipedia.org/wiki/HTML_5

[WIK10a] Wikipedia, Scientific modelling, last visited in August 2010, http://en.wikipedia.org/wiki/Scientific_modelling

[WIK10b] Wikipedia, Comparison of layout engines (HTML5), last visited in August 2010, http://en.wikipedia.org/wiki/Comparison_of_layout_engines_(HTML5)

[WIK10c] Wikipedia, R-Tree, last visited in August 2010, http://en.wikipedia.org/wiki/R-tree

[WIK10d] Wikipedia, Quadtree, last visited in August 2010, http://en.wikipedia.org/wiki/Quadtree

[WIK10e] Wikipedia, Minification (programming), last visited in August 2010, http://en.wikipedia.org/wiki/Minification_(programming)

6 Appendix

6.1 Vector renderer performance tests
6.2 Raster renderer performance tests
6.3 Executing geometry functions in a web worker

6.1 Vector renderer performance tests

The times for all following tests are in milliseconds.

Test case 01: Show (countries)

Data                          # of vertices   Static   Interactive   R-Tree   SVG
countries-simplified-1        2150            35       84            74       71
countries-simplified-0.5      4506            55       120           131      146
countries-simplified-0.05     45031           342      1119          1126     1263
countries-simplified-0.005    159771          834      3149          3683     3782
countries-non-simplified      403150