
Machine Learning Algorithms
for Real Data Sources
with Applications to Climate Science

Claire Monteleoni
Center for Computational Learning Systems
Columbia University
Challenges of real data sources
We face an explosion in data!
Internet transactions
DNA sequencing
Satellite imagery
Environmental sensors
…
Real-world data can be:
Vast
High-dimensional
Noisy, raw
Sparse
Streaming, time-varying
Sensitive/private
Machine Learning
Given labeled data points, find a good classification rule.
Describes the data
Generalizes well
E.g. linear classifiers:
Machine Learning algorithms
for real data sources
Goal: design algorithms to detect patterns in real data sources.
Want efficient algorithms, with performance guarantees.
Data streams
Learning algorithms for streaming, or time-varying data.
Raw (unlabeled or partially-labeled) data
Active learning: Algorithms for settings in which unlabeled data is abundant, and labels are difficult to obtain.
Clustering: Summarize data by automatically detecting "clusters" of similar points.
Sensitive/private data
Privacy-preserving machine learning: Algorithms to detect cumulative patterns in real databases, while maintaining the privacy of individuals.
New applications of Machine Learning
Climate Informatics: Accelerating discovery in Climate Science with machine learning.
Outline
ML algorithms for real data sources
Learning from data streams
Learning from raw data
Active learning
Clustering
Learning from private data
Climate Informatics
ML for Climate Science
Learning from data streams
Forecasting, real-time decision making, streaming data applications, online classification, resource-constrained learning.

Learning from data streams
Data arrives in a stream over time.
E.g. linear classifiers:

Learning from data streams
1. Access to the data observations is one-at-a-time.
Once a data point has been observed, it might never be seen again.
Optional: Learner makes a prediction on each observation.
→ Models forecasting, real-time decision making, high-dimensional, streaming data applications.
2. Time and memory usage must not grow with data.
Algorithms may not store all previously seen data and perform batch learning.
→ Models resource-constrained learning.
Contributions to
Learning from data streams
Online Learning: Supervised learning from infinite data streams
[M & Jaakkola, NIPS 2003]: Online learning from time-varying data, with expert predictors.
[M, Balakrishnan, Feamster & Jaakkola, Analytics 2007]: Application to computer networks: real-time, adaptive energy management, for 802.11 wireless nodes.
[M, Schmidt, Saroha & Asplund, SAM 2011 (CIDU 2010)]: Tracking climate models: application to Climate Informatics.
Online Active Learning: Active learning from infinite data streams
[Dasgupta, Kalai & M, JMLR 2009 (COLT 2005)]: Fast online active learning.
[M & Kääriäinen, CVPR workshop 2007]: Application to computer vision: optical character recognition.
Streaming Clustering: Unsupervised learning from finite data streams
[Ailon, Jaiswal & M, NIPS 2009]: Clustering data streams, with approximation guarantees w.r.t. the k-means clustering objective.
Outline
ML algorithms for real data sources
Learning from data streams
Learning from raw data
Active learning
Clustering
Learning from private data
Climate Informatics
ML for Climate Science
Active Learning
Many data-rich applications:
Image/document classification
Object detection/classification in video
Speech recognition
Analysis of sensor data
Unlabeled data is abundant, but labels are expensive.
Active Learning model: learner can pay for labels.
Allows for intelligent choices of which examples to label.
Goal: given a stream (or pool) of unlabeled data, use fewer labels to learn (to a fixed accuracy) than via supervised learning.
Active Learning
Given unlabeled data, choose which labels to buy, to attain a good classifier, at a low cost (in labels).
[Cohn, Atlas & Ladner '94, Dasgupta '04]:
Threshold functions on the real line: h_w(x) = sign(x − w), H = {h_w : w ∈ ℝ}
Supervised learning: need 1/ε examples to reach error rate ≤ ε.
Active learning: given 1/ε unlabeled points,
binary search → need just log(1/ε) labels, from which the rest can be inferred! Exponential improvement in sample complexity.
[Figure: number line with threshold w separating the − region from the + region.]
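The binary-search idea above can be sketched in code. This is a hypothetical illustration (the function name is mine, not from the talk): labels come from sign(x − w) for some unknown threshold w, and querying only the midpoint of the remaining interval recovers a consistent threshold with O(log n) label queries instead of n.

```python
def active_threshold(points, label):
    """Learn a 1-d threshold from a pool of unlabeled points.

    `label(x)` returns -1 or +1 (sign(x - w) for some unknown w);
    binary search queries only O(log n) of the n pool points.
    """
    pts = sorted(points)
    lo, hi = 0, len(pts)          # the first +1 point lies in pts[lo:hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if label(pts[mid]) < 0:   # negative: threshold is to the right
            lo = mid + 1
        else:                     # positive: threshold is here or left
            hi = mid
    return (pts[lo] if lo < len(pts) else None), queries
```

On a pool of 100 points this makes at most 7 label queries; the labels of the remaining points are then inferred from the learned threshold.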
Can active learning really help?
→ However, many negative results, e.g. [Dasgupta '04], [Kääriäinen '06].
Contributions to Active Learning
In high dimension, is a generalized binary search possible, allowing exponential label savings?
YES!
[Dasgupta, Kalai & M, JMLR 2009 (COLT 2005)]: Online active learning with exponential error convergence.
Theorem. Our online active learning algorithm converges to generalization error ε after O(d log 1/ε) labels.
Corollary. The total errors (labeled and unlabeled) will be at most O(d log 1/ε).
Contributions to Active Learning
[M, Open Problem, COLT 2006]: Goal: general, efficient active learning.
In general, is it possible to reduce active learning to supervised learning?
YES!
[Dasgupta, Hsu & M, NIPS 2007]: General active learning via reduction to supervised learning.
Theorem. Upper bounds on label complexity:
never more than the (asymptotic) sample complexity;
significant label savings for classes of distributions/problems.
Theorem. Efficiency: running time is at most (up to polynomial factors) that of a supervised learning algorithm for the problem.
Theorem. Consistency: the algorithm's error converges to optimal.
General active learning via reduction
First reduction from active learning to supervised learning.
Any data distribution (including arbitrary noise)
Any hypothesis class
[Diagram: the active learner wraps a supervised learner and, for each unlabeled point, either asks the teacher for a label or doesn't ask.]
Outline
ML algorithms for real data sources
Learning from data streams
Learning from raw data
Active learning
Clustering
Learning from private data
Climate Informatics
ML for Climate Science
Clustering
What can be done without any labels?
Unsupervised learning, Clustering.
How to evaluate a clustering algorithm?

k-means clustering objective
Clustering algorithms can be hard to evaluate without prior information or assumptions on the data.
With no assumptions on the data, one evaluation technique is w.r.t. some objective function.
A widely-cited and studied objective is the k-means clustering objective: Given a set X ⊂ ℝ^d, choose C ⊂ ℝ^d, |C| = k, to minimize:
Φ = Σ_{x∈X} min_{c∈C} ‖x − c‖²
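The objective above can be written directly as a few lines of Python (a minimal sketch, with points and centers as coordinate tuples; the function name is mine):

```python
def kmeans_cost(X, C):
    """k-means objective: sum over x in X of the squared Euclidean
    distance from x to its nearest center in C."""
    return sum(
        min(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in C)
        for x in X
    )
```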
k-means approximation
Optimizing k-means is NP-hard, even for k=2.
[Dasgupta '08, Deshpande & Popat '08].
Very few algorithms approximate the k-means objective.
Definition: b-approximation: Φ ≤ b · OPT.
Definition: Bi-criteria (a,b)-approximation guarantee: a·k centers, b-approximation.
Widely-used "k-means clustering algorithm" [Lloyd '57].
Often converges quickly, but lacks an approximation guarantee.
Can suffer from bad initialization.
[Arthur & Vassilvitskii, SODA '07]: k-means++ clustering algorithm with O(log k)-approximation to k-means.
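For illustration, the k-means++ seeding step can be sketched as follows (a minimal version of the D²-weighting from [Arthur & Vassilvitskii, SODA '07]; the helper names are mine): the first center is chosen uniformly at random, and each subsequent center is drawn with probability proportional to its squared distance from the nearest center chosen so far.

```python
import random

def _d2(x, c):
    """Squared Euclidean distance between two coordinate tuples."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def kmeanspp_seed(X, k, rng=None):
    """Choose k initial centers from X by D^2-weighted sampling."""
    rng = rng or random.Random(0)
    centers = [rng.choice(X)]
    while len(centers) < k:
        # weight each point by squared distance to its nearest center
        weights = [min(_d2(x, c) for c in centers) for x in X]
        centers.append(rng.choices(X, weights=weights, k=1)[0])
    return centers
```

Running Lloyd's algorithm from these seeds inherits the expected O(log k) approximation guarantee of the seeding.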
Contributions to Clustering
[Ailon, Jaiswal, & M, NIPS '09]: Approximate the k-means objective in the streaming setting.
Streaming clustering: clustering algorithms that are light-weight (time, memory), and make only one pass over a (finite) data set.
Idea 1: k-means++ returns k centers, with O(log k)-approximation.
→ Design a variant, k-means#, that returns O(k·log k) centers, but has a constant approximation.
Idea 2: [Guha, Meyerson, Mishra, Motwani, & O'Callaghan, TKDE '03 (FOCS '00)]: divide-and-conquer streaming (a,b)-approximate k-medoid clustering.
→ Extend to the k-means objective, and use k-means# and k-means++.
Contributions to Clustering
Theorem. With probability at least 1 − 1/n, k-means# yields an O(1)-approximation, on O(k log k) centers.
Theorem. Given (a,b)- and (a',b')-approximation algorithms to the k-means objective, the Guha et al. streaming clustering algorithm is an (a', O(b·b'))-approximation to k-means.
Corollary. Using the Guha et al. streaming clustering framework, where:
(a,b)-approximate algorithm: k-means#: a = O(log k), b = O(1)
(a',b')-approximate algorithm: k-means++: a' = 1, b' = O(log k)
yields a one-pass, streaming (1, O(log k))-approximation to k-means.
→ Matches the k-means++ result, in the streaming setting!
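The divide-and-conquer scheme can be sketched generically (hypothetical code: `reduce_chunk` stands in for k-means#, producing a small set of centers per chunk, and `final_cluster` stands in for k-means++ run once on the union of chunk centers):

```python
def stream_cluster(stream, k, chunk_size, reduce_chunk, final_cluster):
    """One pass over the stream; memory holds one chunk plus the
    per-chunk summaries, never the whole data set."""
    summaries, chunk = [], []
    for x in stream:
        chunk.append(x)
        if len(chunk) == chunk_size:
            summaries.extend(reduce_chunk(chunk, k))  # e.g. O(k log k) centers
            chunk = []
    if chunk:                       # final partial chunk
        summaries.extend(reduce_chunk(chunk, k))
    return final_cluster(summaries, k)  # k final centers
```

In the paper's composition, the (a,b) guarantee of the chunk summarizer and the (a',b') guarantee of the final clusterer combine into an (a', O(b·b')) guarantee for the whole pass.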
Outline
ML algorithms for real data sources
Learning from data streams
Learning from raw data
Active learning
Clustering
Learning from private data
Climate Informatics
ML for Climate Science
Privacy-Preserving Machine Learning
Problem: How to maintain the privacy of individuals, when detecting cumulative patterns in real-world data?
E.g., disease studies, insurance risk, economics research, credit risk.
Privacy-Preserving Machine Learning:
ML algorithms adhering to strong privacy protocols, with learning performance guarantees.
[Chaudhuri & M, NIPS 2008]: Privacy-preserving logistic regression.
[Chaudhuri, M & Sarwate, JMLR 2011]: Privacy-preserving Empirical Risk Minimization (ERM), including SVM, and parameter tuning.
Outline
ML algorithms for real data sources
Learning from data streams
Learning from raw data
Active learning
Clustering
Learning from private data
Climate Informatics
ML for Climate Science
Climate Informatics
Climate science faces many pressing questions, with climate change poised to impact society.
Machine learning has made profound impacts on the natural sciences to which it has been applied.
Biology: Bioinformatics
Chemistry: Computational chemistry
Climate Informatics: collaborations between machine learning and climate science to accelerate discovery.
Questions in climate science also reveal new ML problems.
Climate Informatics
ML and data mining collaborations with climate science
Atmospheric chemistry, e.g. Musicant et al. '07 ('03)
Meteorology, e.g. Fox-Rabinovitz et al. '06
Seismology, e.g. Kohler et al. '08
Oceanography, e.g. Lima et al. '09
Mining/modeling climate data, e.g. Steinbach et al. '03, Steinhaeuser et al. '10, Kumar '10
ML and climate modeling
Data-driven climate models, Lozano et al. '09
Machine learning techniques inside a climate model, or for calibration, e.g. Braverman et al. '06, Krasnopolsky et al. '10
ML techniques with ensembles of climate models:
Regional models: Sain et al. '10
Global Climate Models (GCM): Tracking Climate Models
What is a climate model?
A complex system of interacting mathematical models
→ Not data-driven
→ Based on scientific first principles
Meteorology
Oceanography
Geophysics
…
Climate model differences
→ Assumptions
→ Discretizations
→ Scale interactions
Micro: rain drop
Macro: ocean
Climate models
IPCC: Intergovernmental Panel on Climate Change
Nobel Peace Prize 2007 (shared with Al Gore).
Interdisciplinary scientific body, formed by the UN in 1988.
Fourth Assessment Report 2007, on global climate change
450 lead authors from 130 countries, 800 contributing authors, over 2,500 reviewers.
Next Assessment Report is due in 2013.
Climate models contributing to IPCC reports include:
Bjerknes Centre for Climate Research (Norway), Canadian Centre for Climate Modelling and Analysis, Centre National de Recherches Météorologiques (France), Commonwealth Scientific and Industrial Research Organisation (Australia), Geophysical Fluid Dynamics Laboratory (Princeton University), Goddard Institute for Space Studies (NASA), Hadley Centre for Climate Change (United Kingdom Meteorology Office), Institute of Atmospheric Physics (Chinese Academy of Sciences), Institute of Numerical Mathematics Climate Model (Russian Academy of Sciences), Istituto Nazionale di Geofisica e Vulcanologia (Italy), Max Planck Institute (Germany), Meteorological Institute at the University of Bonn (Germany), Meteorological Research Institute (Japan), Model for Interdisciplinary Research on Climate (Japan), National Center for Atmospheric Research (Colorado), among others.
Climate model predictions
Global mean temperature anomalies. Temperature anomaly: difference w.r.t. the temperature at a benchmark time. Magnitude of temperature change.
Averaged over many geographical locations, per year.
[Figure: global mean temperature anomalies vs. time in years (1900–2008). Thick blue: observed; thick red: average over 20 climate model predictions; other curves: individual climate model predictions.]
Climate model predictions
[Figure: global mean temperature anomalies vs. time in years (1900–2098). Thick blue: observed; thick red: average over 20 climate model predictions; black vertical line separates past from future; other curves: individual climate model predictions.]
Future fan-out.
Tracking climate models
No one model predicts best all the time.
Average prediction over all models is the best predictor over time.
[Reichler & Kim, Bull. AMS '08], [Reifen & Toumi, GRL '09]
IPCC held a 2010 Expert Meeting on how to better combine model predictions.
Can we do better? How should we predict future climates, while taking into account the 20 climate models' predictions?
[M, Schmidt, Saroha & Asplund, SAM 2011 (CIDU 2010)]: Best Paper!
Application of the Learn-α algorithm [M & Jaakkola, NIPS '03]: Track a set of "expert" predictors under changing observations.
Tracking climate models, on temperature predictions, at global and regional scales, annual and monthly time-scales.
Online Learning
Learning proceeds in stages.
Algorithm first predicts a label for the current data point.
Prediction loss is then computed: a function of the predicted and true label.
Learner can update its hypothesis (usually taking into account the loss).
Framework models supervised learning.
Regression, or classification (many hypothesis classes)
Many prediction loss functions
Problem need not be separable
Non-stochastic setting: no statistical assumptions.
No assumptions on the observation sequence.
Observations can even be generated online by an adaptive adversary.
Analyze regret: difference in cumulative prediction loss from that of the optimal (in hindsight) comparator algorithm for the observed sequence.
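Concretely, regret against a comparator class can be written as a difference of cumulative losses. A minimal sketch (assuming, as in the expert setting below, that the comparators are the individual experts; the function name is mine):

```python
def regret(learner_losses, expert_losses):
    """Cumulative loss of the learner minus that of the best
    single expert in hindsight for this observed sequence."""
    best_in_hindsight = min(sum(seq) for seq in expert_losses)
    return sum(learner_losses) - best_in_hindsight
```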
Learning with expert predictors
Learner maintains a distribution over n "experts."
Experts are black boxes: need not be good predictors, can vary with time, and depend on one another.
Learner predicts based on a probability distribution p_t(i) over experts, i, representing how well each expert has predicted recently.
L(i, t) is the prediction loss of expert i at time t. Defined per problem.
Update p_t(i) using Bayesian updates:
Multiplicative updates algorithms (cf. "Hedge," "Weighted Majority"), descended from "Winnow" [Littlestone 1988].
p_{t+1}(i) ∝ p_t(i) · e^{−L(i,t)}
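The update above is a one-liner in code (a minimal sketch; `losses` are the per-expert losses L(i, t) at the current step, and the function name is mine):

```python
import math

def multiplicative_update(p, losses):
    """Reweight each expert by e^{-loss} and renormalize."""
    w = [p_i * math.exp(-loss) for p_i, loss in zip(p, losses)]
    z = sum(w)
    return [w_i / z for w_i in w]
```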
Learning with experts: time-varying data
To handle changing observations, maintain p_t(i) via an HMM.
Hidden state: identity of the current best expert.
Performing Bayesian updates on this HMM yields a family of online learning algorithms:
p_{t+1}(i) ∝ Σ_j p_t(j) e^{−L(j,t)} p(i|j)
Transition dynamics:
Static update, p(i | j) = δ(i,j), gives the [Littlestone & Warmuth '89] algorithm: Weighted Majority, a.k.a. Static-Expert.
[Herbster & Warmuth '98] model shifting concepts via Fixed-Share:
p_{t+1}(i) ∝ Σ_j p_t(j) e^{−L(j,t)} p(i|j; α)
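A sketch of one Fixed-Share(α) step, assuming the standard transition dynamics p(i|j; α) = 1 − α if i = j and α/(n − 1) otherwise (the closed form below simply evaluates the sum over j; the function name is mine):

```python
import math

def fixed_share_update(p, losses, alpha):
    """One Bayesian update of the expert HMM with Fixed-Share dynamics."""
    n = len(p)
    # loss update (same multiplicative step as Static-Expert), normalized
    w = [p_j * math.exp(-loss) for p_j, loss in zip(p, losses)]
    z = sum(w)
    w = [w_j / z for w_j in w]
    # share step: keep 1-alpha of own mass, receive alpha/(n-1) from others
    return [(1 - alpha) * w[i] + (alpha / (n - 1)) * (1.0 - w[i])
            for i in range(n)]
```

Setting alpha = 0 recovers the Static-Expert update; larger alpha lets probability mass switch between experts faster.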
Learning with experts: time-varying data
Algorithm Learn-α
[M & Jaakkola, NIPS 2003]: Track the best α-expert: sub-algorithms, each using a different α value.
p_{t+1}(α) ∝ p_t(α) e^{−L(α,t)}
p_{t+1;α}(i) ∝ Σ_j p_{t;α}(j) e^{−L(j,t)} p(i|j; α)
Performance guarantees
[M & Jaakkola, NIPS 2003]: Bounds on "regret" for using the wrong value of α for the observed sequence of length T:
Theorem. O(T) upper bound for Fixed-Share(α) algorithms.
Theorem. Ω(T) sequence-dependent lower bound for Fixed-Share(α) algorithms.
Theorem. O(log T) upper bound for the Learn-α algorithm.
Regret-optimal discretization of α for fixed sequence length, T.
Using previous algorithms with the wrong α can also lead to poor empirical performance.
Tracking climate models: experiments
Model predictions from 20 climate models
Mean temperature anomaly predictions (1900–2098)
From CMIP3 archive
Historical experiments with NASA temperature data: GISTEMP.
Future simulations with "perfect model" assumption.
Ran 10 such global simulations to observe general trends
Collected detailed statistics on 4 representative ones: best and worst model on historical data, and 2 in between.
Regional experiments: data from KNMI Climate Explorer
Africa (−13 – 33E, −40 – 40N)
Europe (0 – 30E, 40 – 70N)
North America (−60 – −180E, 13 – 70N)
Annual and monthly time-scales, historical & 2 future simulations/region.
[Figure: squared loss vs. time in years (1900–2098) for the worst expert, best expert, average prediction over 20 models, and the Learn-α algorithm.]
[Figure: squared loss vs. time in years (1900–2098), zoomed to lower losses: best expert, average prediction over 20 models, and the Learn-α algorithm.]
Learning curves
On 10 future simulations (including 1–4 above), Learn-α suffers less loss than the mean prediction (over remaining models) on 73–90% of the years.
Global results
Regional results: historical
Annual / Monthly
Regional results: future simulations
Future work in Climate Informatics
Macro-level: Combining predictions of the multi-model ensemble
Extensions to Tracking Climate Models
Different experts per location, spatial (in addition to temporal) transition dynamics
Tracking other climate benchmarks, e.g. carbon dioxide concentrations
{Semi, un}-supervised learning with experts. Largely open in ML.
Other ML approaches, e.g. batch, transductive regression
Micro-level: Improving the predictions of a climate model
Climate model parameterization: resolving scale interactions
Hybrid models: harness both physics and data!
Calibrating and comparing climate models in a principled manner
Building theoretical foundations for Climate Informatics
Coordinating on reasonable assumptions in practice, that allow for the design of theoretically justified learning algorithms
The First International Workshop on Climate Informatics!
Future work in Machine Learning
Clustering Data Streams
Online clustering: clustering infinite data streams.
Evaluation frameworks: analogs to regret for supervised online learning.
Algorithms with performance guarantees with respect to these frameworks.
Unsupervised, and semi-supervised learning with experts.
For regression, could be applicable to Climate Informatics.
[Choromanska & M, 2011]: online clustering with experts.
Adaptive clustering
Privacy-Preserving Machine Learning
Privacy-preserving constrained optimization, LP and combinatorial optimization.
Privacy-preserving approximate k-nearest neighbor
Privacy-preserving learning from data streams
Active Learning
Active regression
Feature-efficient active learning
Active learning for structured outputs
New applications of ML: collaborative research is key!
Thank You!
And thanks to my coauthors:
Nir Ailon, Technion
Hari Balakrishnan, MIT
Kamalika Chaudhuri, UC San Diego
Sanjoy Dasgupta, UC San Diego
Nick Feamster, Georgia Tech
Daniel Hsu, Rutgers & U Penn
Tommi Jaakkola, MIT
Ragesh Jaiswal, IIT Delhi
Matti Kääriäinen, Nokia Research & U Helsinki
Adam Kalai, Microsoft Research
Anand Sarwate, UC San Diego
Gavin Schmidt, NASA & Columbia
my students and postdocs:
Eva Asplund, Columbia
Anna Choromanska, Columbia
Geetha Jagannathan, Columbia
Shailesh Saroha, Columbia
and my colleagues at CCLS, Columbia.
For more information:
www1.ccls.columbia.edu/cmontel
www1.ccls.columbia.edu/cmontel/ci.html