
JuliaCon 2016

Workshops

Track 1 - Room 32-123 | Track 2 - Room 32-141

9:00   Track 1: Invitation to Intermediate-Level Julia
       Track 2: Introduction to Writing High Performance Julia
12:00  LUNCH
13:30  Track 1: Plots with Plots
       Track 2: Creating, Distributing, and Testing Julia Packages with Binary Dependencies
15:00  BREAK
15:15  Track 1: Parallel computing with Julia
17:15  END

Invitation to Intermediate-Level Julia


David P. Sanders ( Department of Physics, Faculty of Sciences, National Autonomous
University of Mexico [UNAM] & Julia group, MIT)
This is a tutorial workshop on intermediate-level Julia, suitable for anybody who has some
programming experience and knows basic Julia syntax.
It follows on from the basic Invitation to Julia tutorial from JuliaCon 2015. It is
recommended that you browse through that material, available at the links below, before
attending this tutorial:
https://www.youtube.com/watch?v=gQ1y5NUD_RI (video)
https://github.com/dpsanders/invitation_to_julia (IJulia notebooks)
We will cover material from the following topics:
Composite types
Composite types are the data part of "objects" in other languages. Julia is not object-oriented
in the traditional sense: methods live outside objects, which enables one of Julia's key
features, multiple dispatch. (A minimal sketch follows the topic list below.)
Using built-in composite types
Build your own composite type
Parametrisation: a powerful tool
Conversion and promotion
Multiple dispatch and composability: examples
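
To make the first two bullets concrete, here is a minimal sketch (my own illustration, not
workshop material, using the 0.4-era syntax of the time); the name Point and everything in it
are assumed:

    # A parametric composite type: Point works for any Real coordinate type.
    immutable Point{T<:Real}
        x::T
        y::T
    end

    # Methods live outside the type; dispatch picks one from the argument types.
    norm2(p::Point) = p.x^2 + p.y^2
    norm2(x::Real) = x^2

    println(norm2(Point(3, 4)))      # dispatches on Point{Int}
    println(norm2(Point(3.0, 4.0)))  # dispatches on Point{Float64}
    println(norm2(5))                # dispatches on Real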
Metaprogramming
Metaprogramming is a powerful technique, and hence one to be treated with respect: it
allows you to write Julia code that creates other Julia code. We will build up some examples
to make this as approachable as possible. (A small illustrative macro follows the topic list
below.)
What is metaprogramming?
Simple example: generating a complicated function
What do macros do?
A full example
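
As a taste of what the macro material covers, here is a minimal sketch (my own, not the
workshop's); the macro name @show_value is invented for illustration:

    # A macro receives expressions and returns a new expression,
    # i.e. Julia code that writes Julia code.
    macro show_value(ex)
        return :(println($(string(ex)), " = ", $(esc(ex))))
    end

    x = 42
    @show_value x + 1        # prints "x + 1 = 43"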
Wrapping your code into a package
Open source is about sharing your code. Julia provides a simple way to make a package for
your own use that you can easily share with other people to get feedback and contributors.
You can convert it into a registered package once it's really ready for distribution.
Make your own package
Registering your package (once it's really ready)
Bio: David P. Sanders is an associate professor of computational physics in the
Department of Physics, Faculty of Sciences, National Autonomous University of Mexico
(UNAM), and is on sabbatical in the Julia group at MIT during 2016. David discovered Julia
at the start of 2014 and now uses it exclusively in both teaching and research. He is an
author of the ValidatedNumerics.jl package for rigorous numerics, and has given tutorials
on Julia at SciPy 2014 and JuliaCon 2015, with collectively nearly 50,000 views on
YouTube.

Introduction to Writing High Performance Julia


Arch D. Robison, Intel Corporation
This workshop is an introduction to writing high performance code in Julia. We'll start
with a high-level view of the hardware and Julia, and how Julia semantics differ from
languages such as C/C++/Fortran. Next we'll cover how Julia compiles your program,
from your source text down to the machine instructions. The key is to make Julia's type
inference work for you instead of against you, and to cater to the hardware. We'll also look
at "deals with the devil" annotations (@inbounds, @fastmath, @simd) so that you understand
the trade you make with those annotations. Finally, we'll look at the art of writing
vectorizable code, which brings many of the topics together. Overall, the goal is to
understand what you need to do, and what to leave to the compiler, to get high-performing
code. Attendees are encouraged to bring a computer with Julia to try exercises that involve
speeding up slow examples.
Bio: Arch D. Robison was the lead developer for KAI C++, the original architect of Intel
Threading Building Blocks, and one of the authors of *Structured Parallel Programming:
Patterns for Efficient Computation*. He contributed type-based alias analysis and
vectorization support to Julia. Arch took 2nd place in Al Zimmermann's "Delacorte
Numbers" programming contest using Julia exclusively. His Erdős number is 3.

Plots with Plots


Tom Breloff
A hands-on workshop on how to hack visualizations with Plots.jl and the various
backends it supports.
Bio: Tom Breloff has spent a decade in finance building and running algorithmic trading
operations. A self-proclaimed mad scientist studying subjects related to AGI (neuroscience,
deep learning, etc.), he has a heavy background in high-throughput systems and data
visualization in finance. Tom has a B.A. in Mathematics and a B.S. in Economics from the
University of Rochester, and an M.S. from the NYU Courant Institute.

Creating, Distributing, and Testing Julia Packages with Binary Dependencies

Tony Kelman, Julia Computing
I will cover the process and tools for creating Julia packages that wrap C (or Fortran)
libraries. Working through a small example, I will cover how to initially get basic
functionality working interactively from the REPL, and then structure the code as a Julia
package. We will begin working from a single development platform, then proceed to show
how to build, distribute, and leverage automated testing tools to get the C library and
wrapper Julia package working across common Linux distributions, Mac OS X, and
Windows. Time permitting, I might work through a Python-wrapper example, and even
Java if we want to get really ambitious.
Bio: Tony Kelman recently completed a Ph.D. in Mechanical Engineering at Berkeley,
doing research in optimization-based control. He began contributing to open source in
2012 with build system improvements to the COIN-OR set of optimization solver libraries.
He started using and contributing to Julia in early 2014, and joined Julia Computing in
late 2015.

Parallel Computing with Julia

Viral Shah, Shashi Gowda, Andreas Noack, Ranjan Anantharaman, Amit Murthy
This workshop will give an overview of tools in Julia for dealing with large amounts of
data.
Building blocks for parallel computing in Julia (a minimal sketch of these primitives
follows below):
  RemoteChannels
  Futures
  @parallel and pmap
Multi-threading in Julia:
  What kinds of programs can benefit from Julia's multi-threading
GPUs:
  Capabilities of Julia on GPUs
MPI and Elemental:
  MPI.jl overview
  Elemental.jl overview
  Linear algebra with Elemental.jl
  Parallel SVD
ComputeFramework - out-of-core parallel computations:
  Playing with random data
  Saving data to disk
  API overview
  Some matrix and/or vector operations
  Example application (logistic regression)
  What works and what doesn't with limited RAM
  Working at the lower level with Blobs
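
The building blocks in the first group above can be exercised in a few lines; this is a
minimal sketch (not workshop material) assuming a machine with a couple of spare cores:

    addprocs(2)                           # add two local worker processes

    # @parallel splits the loop range across workers and reduces with (+).
    total = @parallel (+) for i in 1:10^6
        i % 7 == 0 ? 1 : 0
    end
    println(total)

    # pmap maps a function over a collection, dispatching work to the workers.
    println(pmap(x -> x^2, 1:10))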
Bio: Prolific contributors to the Julia ecosystem.

JuliaCon 2016 Home Page

Notes: Introduction to Writing High Performance Julia

Goal: write code that is generic, precise, and high performance. This lecture focuses on
single-thread execution; parallel execution is covered in another session.

Topic 1: Profiling
The Julia toolchain and the hardware both matter. Profilers can tell the user what the
code is actually doing and give feedback for type optimizations and vectorization.
Timing mistakes to avoid:
1) Not warming up: Julia compiles code the first time it runs, so compile first and run the
   code twice before timing.
2) Clock frequency: the processor clock can vary by a factor of two or more; stabilize the
   system before comparing timings, and be careful comparing across computers.
3) Timing something that is too short to profile.
4) Dead code: the optimizer removes results that are never used, so print (println) or hash
   the values to ensure the computation isn't optimized away.

Hardware resources
Memory hierarchy: instruction queue, instruction cache, data cache, outer-level cache,
main memory; heap allocation.
Ideal use of the hardware: use the SIMD units. Float32 has half the cache footprint of
Float64 and the SIMD instructions can process twice as many values per instruction, so
use Float32 when half the precision is enough.
Data that fits in the L1 cache is often much faster to work on; cache misses and memory
accesses cause stalls. Make the most of the hardware by keeping it going at full speed.
The documentation lets you reason about correctness; to reason about performance you
also need to know the implementation (semantics vs. implementation).
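
A minimal timing sketch in the spirit of these notes (the function, array size, and names
are my own, not from the slides):

    function sumsq(a)
        s = zero(eltype(a))
        for x in a
            s += x * x
        end
        return s
    end

    a = rand(10^7)
    sumsq(a)          # first call includes JIT compilation: discard this timing
    @time sumsq(a)    # second call measures the compiled code
    println(sumsq(a)) # use the result so it cannot be optimized away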

Variables and boxing
In C, `int x` declares a memory location whose type is known at compile time. In Julia, a
variable is a name bound to a value; when the compiler does not know the type of the
value (for example the literal 3.1 seen through an abstractly typed binding), the value is
boxed as a heap object.
Penalties from boxing: indirection, heap allocation, and dynamic dispatch. Julia's type
inference works to avoid boxing: when the type of a value is known, it can be stored
unboxed in its location.
Julia's compilation is open: anyone, not just the compiler developer, can look at the
intermediate stages, from lowered code through type-annotated code (code_warntype tells
you what the compiler is doing) down to native machine code.
Concrete vs. non-concrete types: a concrete type (Int64, Float64) has a known bit layout;
a non-concrete type (Real, a Union, or an unknown type) requires boxing, e.g.
Vector{Real} vs. Vector{Int64}.
If we made everything concrete we would lose genericity; the compromise is parametric
types, e.g. Circle{T<:Real}, which keep fields concrete for each instantiation while the
code stays generic.
When the compiler can't figure out the return type of a function, the inefficiency
propagates: non-concrete results poison the entire call chain, spreading into the other
functions that use those results.
Immutable vs. mutable composite types: `type MTwo ... end` vs. `immutable ITwo{T} ... end`;
both are constructed the same way, e.g. MTwo(1, 2) and ITwo(3, 4).
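
A minimal sketch of the abstract-field problem and the parametric compromise mentioned
above; the name Circle comes from the notes, everything else (CircleAbstract, area) is my
own illustration in 0.4-era syntax:

    # Abstract field: the compiler only knows `r` is some Real, so it is boxed.
    type CircleAbstract
        r::Real
    end

    # Parametric compromise: `r` is concrete per instantiation, code stays generic.
    immutable Circle{T<:Real}
        r::T
    end

    area(c) = pi * c.r^2

    c1 = CircleAbstract(1.0)
    c2 = Circle(1.0)           # Circle{Float64}

    # Inspect what the compiler inferred; the abstract version shows non-concrete types.
    @code_warntype area(c1)
    @code_warntype area(c2)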

Gotcha: the compiler's knowledge
Julia's numbers are polymorphic but the hardware is monomorphic. If a variable is
sometimes an Int and sometimes a Float (e.g. y = 0 in one branch and y = 3.1 in another),
the compiler can't predict the type of y and we have boxing.
Mixing types is fine; the issue is that the compiler can't predict the type when different
statements assign different types, and that is what causes slow code.
Fixes for type stability: use convert, which lets you predict the output type from the
input type, and use promotions (the promote function) when combining types.
Example: two ways to write a tally/sum function. The slow way initialises the accumulator
with the integer 0 (`s = 0`), so summing a Float array changes the type of s inside the
loop. The fast way initialises with `s = zero(eltype(a))`, so the accumulator has the
element type of the input from the start.
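
A hedged reconstruction of the slow/fast tally example sketched in the notes (the exact
code from the slides is not recoverable):

    # Type-unstable: `s` starts as the Int 0, then becomes Float64 inside the loop,
    # so the compiler has to box it.
    function tally_slow(x)
        s = 0
        for v in x
            s += v
        end
        return s
    end

    # Type-stable: start the accumulator with the element type of the input.
    function tally_fast(x)
        s = zero(eltype(x))
        for v in x
            s += v
        end
        return s
    end

    x = rand(10^6)
    @time tally_slow(x)
    @time tally_fast(x)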

Global variables
For global variables there is no guarantee of type stability: the compiler cannot do type
inference on them because they can be reassigned at any time.
Use const. Const means the identifier is never rebound (the variable is assigned once and
the binding is invariant); it does not mean the object itself cannot change.
Hack: for a "mutable const global", bind a const name to a container, e.g.
`const a = [0.0, 0.0]` or `const f = Ref(0.4)`; setting f[] = 0.5 is okay even though f
is const.
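
A small sketch of the const-global pattern described above (the names N, state, buf, and
scaled are my own):

    const N = 100            # binding never changes; type is known to the compiler

    # A const binding to a mutable container: the binding is fixed,
    # but the contents can still be updated.
    const state = Ref(0.4)
    state[] = 0.5            # okay: we mutate the Ref, we don't rebind `state`

    const buf = [0.0, 0.0]   # likewise, a const array whose elements may change
    buf[1] = 1.0

    function scaled(x)
        return N * state[] * x   # the globals used here are type-stable thanks to const
    end

    println(scaled(2.0))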

Type assertions (e.g. `x::Float64`) can be used to check a type at a given point.
Reuse buffers instead of reallocating new ones: reallocation adds to the impact of the
garbage collector, and reuse speeds up code (note: the garbage collector in Julia 0.4 is
faster). For example, allocate `s = zeros(eltype(a), length(a))` once and reuse it.
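
A small sketch of reuse-versus-reallocate (my own example, not from the notes):

    # Allocates a fresh result vector on every call: more work for the garbage collector.
    function double_alloc(a)
        return 2.0 .* a
    end

    # Caller supplies a buffer that is reused across calls.
    function double!(out, a)
        for i in 1:length(a)
            out[i] = 2.0 * a[i]
        end
        return out
    end

    a = rand(10^6)
    out = zeros(eltype(a), length(a))   # allocate once ...
    for k in 1:100
        double!(out, a)                 # ... reuse many times
    end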

Putting code inside functions makes it faster.
Gotcha: loops vs. "vectorized" array style. The gap between an explicit loop and a
vectorized call is decreasing, but for linear algebra the BLAS calls are still faster; see
Dahua Lin's "fast numeric computation" post on the julialang.org blog.

Compiler optimizations
For any transformation the compiler asks two questions: is it always legal, and is it likely
profitable? By default Julia prefers to be accurate over being fast; you must explicitly ask
for the fast-but-unsafe choices.
Gotcha with floating point: algebraic identities that hold for integers DON'T work for
floating-point arithmetic, so the compiler can optimize integer arithmetic more freely
than floating point. The @fastmath macro grants the compiler permission to apply
"unsafe" algebra, for example reordering the order of operations in a summation, treating
floating point as if it obeyed the rules of real arithmetic.
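
A minimal illustration of the @fastmath trade described above (my own example):

    function sum_plain(a)
        s = zero(eltype(a))
        for x in a
            s += x
        end
        return s
    end

    function sum_fast(a)
        s = zero(eltype(a))
        @fastmath for x in a
            s += x   # the compiler may now reassociate the additions (e.g. to vectorize),
        end          # so the rounding of the result can differ slightly
        return s
    end

    a = rand(Float32, 10^6)
    println(sum_plain(a))
    println(sum_fast(a))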

Inlining
Inlining is always legal and sometimes profitable: it saves call overhead and enables
further specialization of the code, but can increase instruction-cache misses. The @inline
and @noinline annotations nudge the automatic inliner; sometimes that is where the speed
comes from.
Bounds checking
Array accesses are bounds-checked by default. Sometimes you would turn this off: the
@inbounds annotation tells the compiler it need not emit bounds checks for a block of
code, and the --check-bounds command-line option forces checks on or off globally.
Refrain from sprinkling @inbounds unless you know the indices are in bounds.
Manual optimizations
Hoisting loop invariants, scalar replacement, and manual unrolling are best left to the
JIT compiler, which does this itself some of the time. Do it by hand only when the speedup
outweighs the reduced readability; you can sometimes beat the compiler, but it makes the
code harder to read.
Gotcha when translating code from other languages: Julia arrays are column-major, so
make sure loops travel arrays by columns, not rows.

Look-up tables and buffers; "cache-oblivious" algorithms.
Vectorization
"Vectorization" has two different definitions today: array-at-a-time style and SIMD
execution; know the difference.
Order of evaluation: don't assume serial, left-to-right evaluation; the compiler picks the
order. Implicit vs. explicit vectorization: the compiler sometimes vectorizes code
implicitly (fast integer arithmetic, simple code). With @simd the programmer vouches
that things are safe and gives the compiler permission to reorder the loop and run it
across vector lanes, e.g. for + and * reductions. Sometimes you have to restructure your
math to make a loop vectorizable.
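
A minimal sketch of explicit vectorization with @inbounds and @simd (my own example,
not from the slides):

    # @inbounds drops the bounds checks; @simd tells the compiler the programmer
    # vouches that reordering this reduction is safe.
    function dot_simd(x, y)
        s = zero(eltype(x))
        @inbounds @simd for i in 1:length(x)
            s += x[i] * y[i]
        end
        return s
    end

    x = rand(Float32, 10^6)
    y = rand(Float32, 10^6)
    println(dot_simd(x, y))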

Notes: Creating, Distributing, and Testing Julia Packages with Binary Dependencies

The workshop directory in the JuliaCon 2016 repository on GitHub includes workshop.ipynb
and the Rmath-julia example.
Put all the dependencies where you control them, and make sure to build them so that they
support the oldest systems you want to target.
Julia packages always include a GitHub repository; you can host binaries on GitHub
Releases, and CI is driven by a .travis.yml file in the repository (Travis CI).
Watch out for different library versions between OSes and between Linux distributions.
Docker containers are lightweight virtual machines: build on the oldest Ubuntu you want
to support (oldest libstdc++) while still using a new compiler.
BinDeps: the package's build.jl is a thin script that declares which libraries the Julia
package depends on. Think carefully about which versions you depend on, of Julia and of
the library (e.g. julia 0.4, MyLibrary 0.2, etc.).
Declare a library_dependency; BinDeps looks for the library on your system and, if it
isn't found, knows how to get it. Most people host binaries on a service such as Bintray.
Look at good existing packages for examples (e.g. Cairo).
Don't refer to a git repository's "latest master" for dependencies: depend on specific
versions, because the master that works today might not work tomorrow.
BinDeps writes a deps.jl file automatically; the package refers to the library through it
rather than through an explicitly written absolute path, and the generated file goes in
.gitignore.
Another way packages get this wrong is by hosting their own binaries incorrectly on some
site.

Example: deps/build.jl from the tkelman/Rmath.jl repository (forked from
dmbates/Rmath.jl); the latest commit adds a best-practices comment about not
downloading master.

using BinDeps

@BinDeps.setup

libRmath = library_dependency("libRmath", aliases=["libRmath-julia2"])

version = "0.1"
# Best practice to use a fixed version here, either a version number tag or a git sha
# Please don't download "latest master" because the version that works today might not work tomorrow
# TODO replace tkelman with JuliaLang later
provides(Sources, URI("https://github.com/tkelman/Rmath-julia/archive/v$version.tar.gz"),
    [libRmath], unpacked_dir="Rmath-julia-$version")

prefix = joinpath(BinDeps.depsdir(libRmath), "usr")
srcdir = joinpath(BinDeps.srcdir(libRmath), "Rmath-julia-$version")

# If your library uses configure or cmake, good idea to do an
# out-of-tree build - see examples in JuliaOpt and JuliaWeb
provides(SimpleBuild,
    (@build_steps begin
        GetSources(libRmath)
        CreateDirectory(joinpath(prefix, "lib"))
        @build_steps begin
            ChangeDirectory(srcdir)
            `make`
            `mv src/libRmath-julia.$(Libdl.dlext) $prefix/lib/libRmath-julia2.$(Libdl.dlext)`
        end
    end), [libRmath], os = :Unix)

@BinDeps.install Dict(:libRmath => :libRmath)

There are lots of good code examples of this in existing packages. For checksums there is
the SHA.jl package.
Travis CI: builds are driven by .travis.yml. Travis limits the rate of updates, so if you are
doing lots of builds, keep iterating locally and push to Travis in chunks; make the build
fail fast so that if it is going to fail it fails quickly.
AppVeyor is a CI service that is basically Travis for Windows.
Homebrew.jl: Julia's Homebrew installation is isolated from the system-wide version of
Homebrew.

Notes: Parallel Computing with Julia

Asynchronous code is sometimes still serial: tasks overlap in time but run on a single
thread.
Data parallelism in Julia: the code expresses what to compute; the framework decides how
it runs.
GPUs (Ranjan): try gpu.juliabox.org. Why use GPU computing? Julia can be compiled
directly for the GPU.
ComputeFramework: the Graphviz package can show you how the parallelisation will occur
(the computation graph).
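
A minimal sketch (my own, not from the notes) of the point that asynchronous tasks are
concurrent but still serial on one thread, while pmap distributes work across worker
processes; the function busy is invented for illustration:

    addprocs(2)                        # start two worker processes

    @everywhere function busy(n)       # define the work function on every process
        s = 0.0
        for i in 1:n
            s += sin(i)
        end
        return s
    end

    # Concurrent but serial: both tasks share one thread on the master process.
    @time @sync begin
        @async busy(10^7)
        @async busy(10^7)
    end

    # Actually parallel: each call runs on a different worker.
    @time pmap(busy, [10^7, 10^7])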
