Bayesian Reasoning and Deep Learning

Shakir Mohamed

DeepMind

shakirm.com

@shakir_za

9 October 2015

Abstract

Deep learning and Bayesian machine learning are currently two of the most active areas of machine learning research. Deep learning provides a powerful class of models and an easy framework for learning that now provides state-of-the-art methods for applications ranging from image classification to speech recognition. Bayesian reasoning provides a powerful approach for information integration, inference and decision making that has established it as the key tool for data-efficient learning, uncertainty quantification and robust model composition, widely used in applications ranging from information retrieval to large-scale ranking. Each of these research areas has shortcomings that can be effectively addressed by the other, pointing towards a needed convergence of these two areas of machine learning; the complementary aspects of these two research areas are the focus of this talk. Using the tools of auto-encoders and latent variable models, we shall discuss some of the ways in which our machine learning practice is enhanced by combining deep learning with Bayesian reasoning. This is an essential, and ongoing, convergence that will only continue to accelerate, and it provides some of the most exciting prospects, some of which we shall discuss, for contemporary machine learning research.

Bayesian Reasoning and Deep Learning

Deep Learning
+ Non-linear models for classification and sequence prediction.
+ Scalable learning using stochastic approximation, and conceptually simple.
+ Easily composable with other gradient-based methods.
- Only point estimates.
- Hard to score models, do model selection and complexity penalisation.

Bayesian Reasoning
+ Unified framework for model building, inference, prediction and decision making.
+ Explicit accounting for uncertainty and variability of outcomes.
+ Robust to overfitting; tools for model selection and composition.
- Potentially intractable inference, leading to expensive computation or long simulation times.

Deep Learning + Bayesian Reasoning = Better ML

Outline

How Bayesian reasoning and deep learning can expect to be successfully combined:

1. Review of deep learning: limitations of maximum likelihood and MAP estimation.
2. Case study using auto-encoders and latent variable models: approximate Bayesian inference.
3. Semi-supervised learning, classification, better inference and more.

Generalised linear regression:

    η = wᵀx + b
    p(y|x) = p(y | g(η); θ)

η = wᵀx + b is a linear function, e.g., affine, convolution. g(·) is an inverse link function that we'll refer to as an activation function.

Table 1: Correspondence between link and activation functions in generalised regression.

    Target       Regression   Link                           Inverse link              Activation
    Real         Linear       Identity                       Identity
    Binary       Logistic     Logit  log(μ/(1−μ))            Sigmoid  1/(1+exp(−η))    Sigmoid
    Binary       Probit       Inv. Gauss CDF  Φ⁻¹(μ)         Gauss CDF  Φ(η)           Probit
    Binary       Gumbel       Compl. log-log  log(−log(μ))   Gumbel CDF  e^(−e^(−η))
    Binary       Logistic     Hyperbolic tangent             tanh(η)                   Tanh
    Categorical  Multinomial  Multin. logit                  exp(η_i)/Σ_j exp(η_j)     Softmax
    Counts       Poisson      log(μ)                         exp(η)
    Counts       Poisson      √μ                             η²
    Non-neg.     Gamma        Reciprocal  1/μ                1/η
    Sparse       Tobit        max                            max(0, η)                 ReLU
    Ordered      Ordinal      Cum. logit                     σ(θ_k − η)

There are many link functions that allow us to make other distributional assumptions for the target (response) y. In deep learning, the inverse link function is referred to as the activation function, and the table lists the names used for these functions in the two fields. From this table we can see that many of the popular approaches for specifying neural networks have counterparts in statistics and related literatures, under (sometimes) very different names, such as multinomial regression in statistics and softmax classification in deep learning, or the rectifier in deep learning and tobit models in statistics.

Optimise the negative log-likelihood:

    L = −log p(y | g(η); θ)

Constructing a recursive GLM, or deep feed-forward neural network, uses the linear predictor as the basic building block: recursively composing the basic linear functions gives a deep neural network:

    E[y] = h_L ∘ … ∘ h_1 ∘ h_0(x)

This is a general framework for building non-linear, parametric models. Problem: overfitting of the MLE, leading to limited generalisation.
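As an illustrative sketch (the layer widths and choice of activations here are assumptions, not from the talk), the recursive composition can be coded directly, each layer being a linear predictor followed by an inverse-link (activation) function:

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def layer(W, b, g):
    """One generalised linear building block: eta = W x + b, then inverse link g."""
    return lambda x: g(W @ x + b)

rng = np.random.default_rng(0)
sizes = [4, 3, 2, 1]                  # illustrative layer widths
links = [np.tanh, np.tanh, sigmoid]   # inverse links / activations per layer
layers = [layer(rng.standard_normal((m, n)), np.zeros(m), g)
          for n, m, g in zip(sizes[:-1], sizes[1:], links)]

def forward(x):
    """E[y] = h_L(... h_1(h_0(x))): recursively composed linear predictors."""
    for h in layers:
        x = h(x)
    return x

y = forward(rng.standard_normal(4))   # a binary-target prediction in (0, 1)
```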


Regularisation Strategies for Deep Networks

Regularisation is essential to overcome the limitations of maximum likelihood estimation; it also appears under the names penalised regression and shrinkage. A wide range of regularisation techniques is available:

- Large data sets.
- Input noise/jittering and data augmentation/expansion.
- L2/L1 regularisation (weight decay; Gaussian prior).
- Binary or Gaussian dropout.
- Batch normalisation.
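L2 regularisation corresponds to MAP estimation under a Gaussian prior on the weights. As a minimal sketch (linear regression with an assumed penalty strength lam; the data are synthetic), the shrinkage effect can be seen directly:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

lam = 0.1  # assumed regularisation strength (precision of the Gaussian prior)

def penalised_loss(w):
    # Negative log-likelihood (squared error) plus the L2 penalty.
    return 0.5 * np.sum((y - X @ w) ** 2) + 0.5 * lam * np.sum(w ** 2)

# Closed-form solutions: ridge (MAP) vs ordinary least squares (MLE).
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
w_mle = np.linalg.solve(X.T @ X, X.T @ y)
```

The MAP weights are shrunk towards zero relative to the MLE, which is the regularising effect.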

MAP Estimators and Limitations

The power of MAP estimators is that they provide some robustness to overfitting, but this creates sensitivities to parameterisation.

1. The sensitivities affect gradients and can make learning hard. Remedies include invariant MAP estimators, and exploiting natural gradients, trust-region methods and other improved optimisation.
2. There is still no way to measure the confidence of our model. We can generate frequentist confidence intervals and bootstrap estimates.

These proposed solutions have not fully dealt with the underlying issues, which arise as a consequence of:

- reasoning only about the most likely solution, and
- not maintaining knowledge of the underlying variability (and averaging over this).

Rather than further regularisation and optimisation, let us develop a Pragmatic Bayesian Approach for Probabilistic Reasoning in Deep Networks: Bayesian reasoning over some, but not all, parts of our models (yet).

Outline

1. Review of deep learning: limitations of maximum likelihood and MAP estimation.
2. Case study using auto-encoders and latent variable models: approximate Bayesian inference.
3. Semi-supervised learning, classification, better inference and more.

Unsupervised Learning and Auto-encoders

A generic tool for dimensionality reduction and feature extraction: minimise the reconstruction error using an encoder and a decoder.

    Encoder f(·):  z  = f(y)
    Decoder g(·):  y* = g(z)

+ Non-linear encoders and decoders using deep networks.
+ Expressed as a computational graph and trained using SGD.
- No representation of the variability of the representation space.

    L = −log p(y | g(z)),   e.g.  L = ‖y − g(f(y))‖₂²
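As a minimal sketch of this objective, with linear maps standing in for the encoder f and decoder g (a deep network would replace the matrices; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
D, K = 8, 2                          # observed and code dimensions (illustrative)
y = rng.standard_normal(D)           # a data point

F = rng.standard_normal((K, D))      # encoder weights (illustrative)
G = rng.standard_normal((D, K))      # decoder weights (illustrative)

z = F @ y                            # encode:  z  = f(y)
y_star = G @ z                       # decode:  y* = g(z)
loss = np.sum((y - y_star) ** 2)     # L = ||y - g(f(y))||^2
```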

Three questions we should ask of the auto-encoder:

- What is the model we are interested in?
- Why use an encoder?
- How do we regularise?

To answer them we must distinguish the probabilistic model of interest from the mechanism we use for inference.

Latent Variable Models

A generic and flexible model class for density estimation; specifies a generative process that gives rise to the data. Examples: probabilistic PCA, factor analysis (FA), Bayesian exponential family PCA (BXPCA).

    Latent variable:     z ~ N(z | μ, Σ)
    Observation model:   η = Wz + b
                         y ~ Expon(y | η)

for each data point n = 1, …, N.
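The generative process can be simulated directly. A minimal sketch, assuming a standard-normal latent and a Bernoulli observation model as the exponential-family likelihood (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, D = 100, 2, 5        # data points, latent dim, observed dim (illustrative)
W = rng.standard_normal((D, K))
b = np.zeros(D)

# Latent variable: z ~ N(0, I)
Z = rng.standard_normal((N, K))

# Observation model: eta = W z + b, then y ~ Bernoulli(sigmoid(eta))
eta = Z @ W.T + b
p = 1.0 / (1.0 + np.exp(-eta))
Y = (rng.random((N, D)) < p).astype(int)
```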

Deep Latent Gaussian Models (DLGMs)

Non-linear variants include non-linear factor analysis, non-linear Gaussian belief networks, and deep latent Gaussian models (DLGMs). Latent layers z_l are connected through deterministic layers:

    f_l(z) = σ(W h(z) + b)
    h_i(x) = σ(A x + c)

    Observation model:   η = W h₁ + b
                         y ~ Expon(y | η)

for each data point n = 1, …, N.

Our inferential tasks are:

1. Infer the latent variables: compute the posterior p(z | y).
2. Make predictions:  p(y* | y) = ∫ p(y* | z) p(z | y) dz.
3. Choose the best model:  p(y | W) = ∫ p(y | z, W) p(z) dz.

Variational Inference

Use tools from approximate inference to handle intractable integrals: choose the member q_φ(z) of an approximation class that is closest to the true posterior, by minimising KL[q(z|y) ‖ p(z|y)]. The resulting objective has two parts:

- Reconstruction cost: the expected log-likelihood measures how well samples from q(z) are able to explain the data y.
- Penalty KL[q(z) ‖ p(z)]: ensures that the explanation of the data, q(z), doesn't deviate too far from your beliefs p(z) (Occam's razor).

The penalty is derived from your model and does not need to be designed.

Amortised Posterior Inference

The approximate posterior q(z|y) is fit to the true posterior p(z|y), one of the unknown inferential quantities of interest to us.

Inference network: q is an encoder or inverse model. The parameters of q are now a set of global parameters used for inference of all data points, test and train: we amortise (spread) the cost of inference over all the data.

    F(y, q) = E_q(z)[log p(y|z)] − KL[q(z) ‖ p(z)]

- Model (decoder): likelihood p(y|z), with y ~ p(y|z).
- Inference (encoder): variational distribution q(z|y), with z ~ q(z|y).

Together these implement variational inference in deep latent variable models using inference networks: the variational auto-encoder. But don't forget what your model is, and what inference you use.
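As a minimal sketch of computing F(y, q) for a single data point, assuming a diagonal-Gaussian q(z|y), a standard-normal prior p(z), a Bernoulli likelihood, and an illustrative linear-plus-sigmoid decoder (none of these specific choices come from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

def elbo(y, mu, log_var, decode, n_samples=100):
    """F(y, q) = E_q[log p(y|z)] - KL[q(z) || p(z)], Monte Carlo + analytic KL."""
    std = np.exp(0.5 * log_var)
    recon = 0.0
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)   # z ~ q(z|y)
        p = decode(z)                                  # Bernoulli means
        recon += np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    recon /= n_samples
    # KL[N(mu, sigma^2) || N(0, 1)], summed over latent dimensions.
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon - kl

W = rng.standard_normal((4, 2))
decode = lambda z: 1.0 / (1.0 + np.exp(-(W @ z)))      # illustrative decoder
y = np.array([1, 0, 1, 0])
value = elbo(y, mu=np.zeros(2), log_var=np.zeros(2), decode=decode)
```

The penalty term falls out of the model: it is the analytic KL to the prior, not a hand-designed regulariser.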

What do we gain?

+ Interesting deep generative models.
+ Bayesian reasoning with non-linear models.
+ Loss functions that automatically include appropriate penalty functions.
+ Clarity about what our models are, and why this is a good idea.
+ The ability to answer questions with our latent variables.

    F(y, q) = E_q(z)[log p(y|z)] − KL[q(z) ‖ p(z)]

+ Model selection using the free energy.
+ Handling missing data under a missingness assumption.
+ Stochastic gradients and improved optimisation tools.
+ Training as a computational graph with simple Monte Carlo gradient estimators.
+ Integration with any large-scale deep learning system.

DLGM: Data Visualisation

(Figure: DLGM architectures for MNIST handwritten digits, 28×28, and for 96×96 images, with hidden layers of between 100 and 500 units.)

DLGM: Visualising MNIST in 3D. (Figure.)

DLGM: Data Simulation. (Figure: training data alongside model samples.)

DLGM: Missing-data imputation. (Figure: original data with unobserved pixels and the inferred images, with 10% and 50% of pixels observed.)

Outline

1. Review of deep learning: limitations of maximum likelihood and MAP estimation.
2. Auto-encoders and latent variable models: approximate and variational inference.
3. Semi-supervised learning, recurrent networks, classification, better inference and more.

Semi-supervised Learning

We can extend the marriage of Bayesian reasoning and deep learning to the problem of semi-supervised classification, using a semi-supervised DLGM. (Figures: the graphical model with latent variables z, weights W and data x, for n = 1, …, N; analogical reasoning results.)

We can also combine other tools from deep learning to design even more powerful generative models: recurrent networks and attention.

DRAW: A Recurrent Neural Network for Image Generation. The motivation for an attention-based generative model is that large images can be built up iteratively, by adding to a small part of the image at a time. Without attention, the network first generates a very blurry image that is subsequently refined; with attention, it constructs a digit by tracing the lines, much like a person with a pen. To test this capability in a controlled fashion, DRAW was trained to generate images with two 28×28 MNIST digits, chosen at random and placed at random locations on a 60×60 black background; where the two digits overlap, the pixel intensities were added at each point and clipped to be no greater than one. The network typically generates one digit and then the other, suggesting an ability to recreate composite scenes from simple pieces. On the multi-digit Street View House Numbers dataset (Netzer et al., 2011), with the same preprocessing as Goodfellow et al. (2013), the generated house numbers are highly realistic, and are visually similar to, but generally different from, their closest training images in L2 distance.

(Figures from Gregor et al., 2015: MNIST generation sequences for DRAW with and without attention; generated two-digit MNIST images; generated SVHN images beside their nearest training images; and the conventional variational auto-encoder alongside the recurrent DRAW architecture, with encoder RNN, latent samples z_t ~ Q(z_t | x, z_{1:t−1}) and decoder RNN defining P(x | z_{1:T}).)

Weight uncertainty in neural networks: Bayes by Backprop. (Figure 1: left, each weight has a fixed value, as provided by classical backpropagation; right, each weight is assigned a distribution, as provided by Bayes by Backprop.)

In Review

- Deep learning is a framework for building highly flexible non-linear parametric models, but regularisation and accounting for uncertainty and lack of knowledge are still needed.
- Bayesian reasoning provides inference that allows us to account for uncertainty, and a principled approach to regularisation and model scoring.
- We combined Bayesian reasoning with auto-encoders and showed just how much can be gained by a marriage of these two streams of machine learning research.

Danilo Rezende, Ivo Danihelka, Karol Gregor, Charles Blundell,

Theophane Weber, Andriy Mnih, Daan Wierstra (Google DeepMind),

Durk Kingma, Max Welling (U. Amsterdam)

Thank You.


Some References

Probabilistic Deep Learning

- Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. ICML.
- Kingma, D. P., and Welling, M. (2014). Auto-encoding variational Bayes. ICLR.
- Mnih, A., and Gregor, K. (2014). Neural variational inference and learning in belief networks. ICML.
- Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. (2014). Semi-supervised learning with deep generative models. NIPS, pp. 3581-3589.
- Gregor, K., Danihelka, I., Graves, A., and Wierstra, D. (2015). DRAW: A recurrent neural network for image generation. arXiv:1502.04623.
- Rezende, D. J., and Mohamed, S. (2015). Variational inference with normalizing flows. arXiv:1505.05770.
- Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv:1505.05424.
- Hernández-Lobato, J. M., and Adams, R. P. (2015). Probabilistic backpropagation for scalable learning of Bayesian neural networks. arXiv:1502.05336.
- Gal, Y., and Ghahramani, Z. (2015). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv:1506.02142.

Variational Principle

A general family of methods for approximating complicated densities by a simpler class of densities q_φ(z): minimise KL[q(z|y) ‖ p(z|y)] between the approximation class and the true posterior. These are deterministic approximation procedures with bounds on the probabilities of interest, obtained by fitting the variational parameters.

Starting from the integral problem, introducing a proposal distribution and importance weight, Jensen's inequality gives the bound:

    log p(y) = log ∫ p(y|z) p(z) dz
             = log ∫ q(z) p(y|z) p(z)/q(z) dz
             ≥ ∫ q(z) [ log p(y|z) + log( p(z)/q(z) ) ] dz
             = E_q(z)[log p(y|z)] − KL[q(z) ‖ p(z)]

    F(y, q) = E_q(z)[log p(y|z)] − KL[q(z) ‖ p(z)]

Minimum description length (MDL): inference seen as a problem of compression. The expected log-likelihood is the data code-length under a stochastic encoder, and the KL penalty is the hypothesis code. A model with latent variables implies that the data is compressible, and we must find the ideal shortest message for our data y: the marginal likelihood. Since this is intractable, we must introduce an approximation to the ideal message: the encoder is the variational distribution q(z|y), and the decoder is the likelihood p(y|z).

    F(y, q) = E_q(z)[log p(y|z)] − KL[q(z) ‖ p(z)]

The stochastic encoder extracts features of the data (i.e., latent variable explanations), and the reconstruction term is traded off against the penalty. The variational approach requires you to be explicit about your assumptions: the penalty is derived from your model and does not need to be designed.

Variational EM alternates between updating the variational and model parameters:

    Repeat:
      E-step: for n = 1, …, N:
        Δφ_n ∝ ∇_φ E_q[log p(y_n | z_n)] − ∇_φ KL[q(z_n) ‖ p(z_n)]
      M-step:
        Δθ ∝ (1/N) Σ_n ∇_θ log p_θ(y_n | z_n)

Instead of computing variational parameters φ_n for every data point n, we can instead use a model. The parameters of q then become a set of global parameters used for inference of all data points, test and train: we share the cost of inference (amortise it) over all the data. This combines easily with mini-batches and Monte Carlo expectations, and we can jointly optimise the variational and model parameters: no need for alternating optimisation.

Variational inference turns integration into optimisation, and avoids deriving pages of gradient updates by hand. Automated tools:

- Differentiation: Theano, Torch7, Stan.
- Message passing: Infer.NET.

The forward pass evaluates the prior p(z), the inference network q(z|x), the model p(x|z) and the entropy H[q(z)]; the backward pass propagates the gradients of log p(x|z) and log p(z). The same code can run on GPUs or on distributed clusters, and works with stochastic gradients and other preconditioned optimisation. Probabilistic models are modular and can easily be combined using variational inference.

Stochastic Backpropagation

A Monte Carlo method that works with continuous latent variables.

    Original problem:    ∇ E_q(z)[f(z)],  with  z ~ N(μ, σ²)
    Reparameterisation:  z = μ + σε,  ε ~ N(0, 1)
                         ∇ E_N(0,1)[f(μ + σε)]
    Backpropagation with Monte Carlo:
                         E_N(0,1)[∇_{μ,σ} f(μ + σε)]

- Can use any likelihood function, and avoids the need for additional lower bounds.
- A low-variance, unbiased estimator of the gradient; can use just one sample from the base distribution.
- Possible for many distributions with location-scale or other known transformations, such as the CDF.
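The reparameterisation is easy to verify numerically. A minimal sketch with an arbitrary test function f(z) = z², comparing the Monte Carlo gradients with the closed forms ∇_μ E[z²] = 2μ and ∇_σ E[z²] = 2σ (since E[z²] = μ² + σ²):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.5, 0.5
grad_f = lambda z: 2 * z                # derivative of the test f(z) = z**2

# z = mu + sigma * eps, eps ~ N(0, 1): the gradient moves inside the expectation.
eps = rng.standard_normal(200_000)
z = mu + sigma * eps
grad_mu = np.mean(grad_f(z) * 1.0)      # dz/dmu = 1
grad_sigma = np.mean(grad_f(z) * eps)   # dz/dsigma = eps
```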

Score Function Estimators

A more general Monte Carlo approach that can be used with both discrete and continuous latent variables. It relies on a property of the score function:

    ∇_φ log q_φ(z|x) = ∇_φ q_φ(z|x) / q_φ(z|x)

    Original problem:  ∇_φ E_q[f(z)]
    Score ratio:       E_q[ f(z) ∇_φ log q(z|y) ]
    MC with control variates (MCCV): subtract a baseline from f(z)
    to control the variance of the estimator.
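A numerical sketch of the score-function estimator, with the same illustrative f(z) = z² and a Gaussian q, and a constant baseline as the control variate (the baseline choice is an assumption; any constant leaves the estimator unbiased because the score has zero mean):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = 1.5, 0.5
f = lambda z: z ** 2

z = rng.normal(mu, sigma, 500_000)
score_mu = (z - mu) / sigma ** 2            # d/dmu log N(z | mu, sigma^2)

# Plain score-function estimator of d/dmu E_q[f(z)]  (true value: 2 * mu).
grad_plain = np.mean(f(z) * score_mu)

# Control variate: subtract a baseline b; E[b * score] = 0, so still unbiased,
# but the variance of the estimator drops.
b = np.mean(f(z))
grad_cv = np.mean((f(z) - b) * score_mu)
var_plain = np.var(f(z) * score_mu)
var_cv = np.var((f(z) - b) * score_mu)
```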
