
A Step by Step Backpropagation Example

Matt Mazur

Background

Backpropagation is a common method for training a neural network. There is no shortage of papers online that attempt to explain how backpropagation works, but few that include an example with actual numbers. This post is my attempt to explain how it works with a concrete example that folks can compare their own calculations to in order to ensure they understand backpropagation correctly.

If this kind of thing interests you, you should sign up for my newsletter where I post about AI-related projects that I’m working on.

Backpropagation in Python

You can play around with a Python script that I wrote that implements the backpropagation algorithm in this Github repo.

Backpropagation Visualization

For an interactive visualization showing a neural network as it learns, check out my Neural Network visualization.

Additional Resources

If you find this tutorial useful and want to continue learning about neural networks, machine learning, and deep learning, I highly recommend checking out Adrian Rosebrock’s new book, Deep Learning for Computer Vision with Python. I really enjoyed the book and will have a full review up soon.

Overview

For this tutorial, we’re going to use a neural network with two inputs, two hidden neurons, and two output neurons. Additionally, the hidden and output neurons will each include a bias.

In order to have some numbers to work with, here are the initial weights, the biases, and training inputs/outputs:

[Figure: the example network, with inputs i1 = 0.05, i2 = 0.10, hidden-layer weights w1 = 0.15, w2 = 0.20, w3 = 0.25, w4 = 0.30, output-layer weights w5 = 0.40, w6 = 0.45, w7 = 0.50, w8 = 0.55, and biases b1 = 0.35, b2 = 0.60.]

The goal of backpropagation is to optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs.

For the rest of this tutorial we’re going to work with a single training set: given inputs 0.05 and 0.10, we want the neural network to output 0.01 and 0.99.
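If you’d like to follow along in code, here is a minimal sketch of that setup in Python; the variable names mirror the figure above and are my own shorthand, not taken from the post’s script:

```python
# Initial weights, biases, and the single training pair used throughout
# this walkthrough (names mirror the figure above).
i1, i2 = 0.05, 0.10                        # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30    # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55    # hidden -> output weights
b1, b2 = 0.35, 0.60                        # hidden- and output-layer biases
target_o1, target_o2 = 0.01, 0.99          # desired outputs
```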

The Forward Pass

To begin, let’s see what the neural network currently predicts given the weights and biases above and inputs of 0.05 and 0.10. To do this we’ll feed those inputs forward through the network.

We figure out the total net input to each hidden layer neuron, squash the total net input using an activation function (here we use the logistic function), then repeat the process with the output layer neurons.

Total net input is also referred to as just net input by some sources.

Here’s how we calculate the total net input for h1:

net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1 = 0.15 \cdot 0.05 + 0.2 \cdot 0.1 + 0.35 \cdot 1 = 0.3775

We then squash it using the logistic function to get the output of h1:

out_{h1} = \frac{1}{1 + e^{-net_{h1}}} = \frac{1}{1 + e^{-0.3775}} = 0.593269992

Carrying out the same process for h2 we get:

out_{h2} = 0.596884378

We repeat this process for the output layer neurons, using the output from the hidden layer neurons as inputs.

Here’s the output for o1:

net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1 = 0.4 \cdot 0.593269992 + 0.45 \cdot 0.596884378 + 0.6 \cdot 1 = 1.105905967

out_{o1} = \frac{1}{1 + e^{-net_{o1}}} = \frac{1}{1 + e^{-1.105905967}} = 0.75136507

And carrying out the same process for o2 we get:

out_{o2} = 0.772928465
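As a cross-check for the numbers above, here is a minimal sketch of the forward pass in Python; it repeats the parameter definitions so it runs on its own, and the sigmoid helper is my own naming:

```python
import math

# Parameters from the walkthrough above.
i1, i2, b1, b2 = 0.05, 0.10, 0.35, 0.60
w1, w2, w3, w4, w5, w6, w7, w8 = 0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.50, 0.55

def sigmoid(x):
    # Logistic activation: squashes the total net input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: total net input, then squash.
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)           # 0.593269992
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)           # 0.596884378

# Output layer: the hidden outputs act as its inputs.
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)   # 0.75136507
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)   # 0.772928465
```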

We can now calculate the error for each output neuron using the squared error function and sum them to get the total error:

E_{total} = \sum \frac{1}{2}(target - output)^2

Some sources refer to the target as the ideal and the output as the actual.

The \frac{1}{2} is included so that the exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway so it doesn’t matter that we introduce a constant here [1].

For example, the target output for o1 is 0.01 but the neural network output 0.75136507, therefore its error is:

E_{o1} = \frac{1}{2}(target_{o1} - out_{o1})^2 = \frac{1}{2}(0.01 - 0.75136507)^2 = 0.274811083

Repeating this process for o2 (remembering that the target is 0.99) we get:

E_{o2} = 0.023560026

The total error for the neural network is the sum of these errors:

E_{total} = E_{o1} + E_{o2} = 0.274811083 + 0.023560026 = 0.298371109
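Continuing the sketch above, the same error calculation in Python:

```python
target_o1, target_o2 = 0.01, 0.99

# Squared error per output neuron; the 1/2 cancels the exponent
# when we differentiate later.
E_o1 = 0.5 * (target_o1 - out_o1) ** 2   # 0.274811083
E_o2 = 0.5 * (target_o2 - out_o2) ** 2   # 0.023560026
E_total = E_o1 + E_o2                    # 0.298371109
```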

The Backwards Pass

Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.

Output Layer

Consider w_5. We want to know how much a change in w_5 affects the total error, aka \frac{\partial E_{total}}{\partial w_5}.

\frac{\partial E_{total}}{\partial w_5} is read as “the partial derivative of E_{total} with respect to w_5”. You can also say “the gradient with respect to w_5”.

Applying the chain rule we know that:

\frac{\partial E_{total}}{\partial w_5} = \frac{\partial E_{total}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial w_5}

First, how much does the total error change with respect to the output?


E_{total} = \frac{1}{2}(target_{o1} - out_{o1})^2 + \frac{1}{2}(target_{o2} - out_{o2})^2

\frac{\partial E_{total}}{\partial out_{o1}} = 2 \cdot \frac{1}{2}(target_{o1} - out_{o1})^{2 - 1} \cdot (-1) + 0 = -(target_{o1} - out_{o1}) = -(0.01 - 0.75136507) = 0.74136507

-(target - out) is sometimes expressed as (out - target).

When we take the partial derivative of the total error with respect to out_{o1}, the quantity \frac{1}{2}(target_{o2} - out_{o2})^2 becomes zero because out_{o1} does not affect it, which means we’re taking the derivative of a constant, which is zero.

Next, how much does the output of o1 change with respect to its total net input?

The partial derivative of the logistic function is the output multiplied by 1 minus the output:

\frac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1}) = 0.75136507(1 - 0.75136507) = 0.186815602

Finally, how much does the total net input of o1 change with respect to w_5?

net_{o1} = w_5 \cdot out_{h1} + w_6 \cdot out_{h2} + b_2 \cdot 1

\frac{\partial net_{o1}}{\partial w_5} = 1 \cdot out_{h1} + 0 + 0 = out_{h1} = 0.593269992

Putting it all together:

\frac{\partial E_{total}}{\partial w_5} = \frac{\partial E_{total}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial w_5} = 0.74136507 \cdot 0.186815602 \cdot 0.593269992 = 0.082167041

You’ll often see this calculation combined in the form of the delta rule:

\frac{\partial E_{total}}{\partial w_5} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1}) \cdot out_{h1}

Alternatively, we have \frac{\partial E_{total}}{\partial out_{o1}} and \frac{\partial out_{o1}}{\partial net_{o1}}, which can be combined as \frac{\partial E_{total}}{\partial net_{o1}}, aka \delta_{o1} (the Greek letter delta), aka the node delta. We can use this to rewrite the calculation above:

\delta_{o1} = \frac{\partial E_{total}}{\partial net_{o1}} = -(target_{o1} - out_{o1}) \cdot out_{o1}(1 - out_{o1})

Therefore:

\frac{\partial E_{total}}{\partial w_5} = \delta_{o1} \cdot out_{h1}

Some sources extract the negative sign from \delta so it would be written as:

\frac{\partial E_{total}}{\partial w_5} = -\delta_{o1} \cdot out_{h1}
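The same chain of three partials, continuing the Python sketch above (the intermediate names are mine):

```python
# dE_total/dw5 as the product of the three partial derivatives above.
dE_dout_o1   = -(target_o1 - out_o1)    # 0.74136507
dout_dnet_o1 = out_o1 * (1 - out_o1)    # 0.186815602, logistic derivative
dnet_o1_dw5  = out_h1                   # 0.593269992
dE_dw5 = dE_dout_o1 * dout_dnet_o1 * dnet_o1_dw5   # 0.082167041

# Or, using the node delta form:
delta_o1 = dE_dout_o1 * dout_dnet_o1    # 0.138498562
assert abs(dE_dw5 - delta_o1 * out_h1) < 1e-12
```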


To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we’ll set to 0.5):

w_5^+ = w_5 - \eta \cdot \frac{\partial E_{total}}{\partial w_5} = 0.4 - 0.5 \cdot 0.082167041 = 0.35891648

Some sources use \alpha (alpha) to represent the learning rate, others use \eta (eta), and others even use \epsilon (epsilon).

We can repeat this process to get the new weights w_6, w_7, and w_8:

w_6^+ = 0.408666186

w_7^+ = 0.511301270

w_8^+ = 0.561370121

We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
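In code, the update for one weight, continuing the sketch above; note the new value is only applied after the hidden-layer gradients have been computed with the original weights:

```python
eta = 0.5                     # learning rate
w5_plus = w5 - eta * dE_dw5   # 0.4 - 0.5 * 0.082167041 = 0.35891648
```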

Hidden Layer

Next, we’ll continue the backwards pass by calculating new values for w_1, w_2, w_3, and w_4.

Big picture, here’s what we need to figure out:

\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} \cdot \frac{\partial out_{h1}}{\partial net_{h1}} \cdot \frac{\partial net_{h1}}{\partial w_1}

Visually:

[Figure: the backwards pass through the hidden layer, showing the error from both output neurons flowing back to out_h1.]


We’re going to use a similar process as we did for the output layer, but slightly different to account for the fact that the output of each hidden layer neuron contributes to the output (and therefore error) of multiple output neurons. We know that out_{h1} affects both out_{o1} and out_{o2}, therefore \frac{\partial E_{total}}{\partial out_{h1}} needs to take into consideration its effect on both output neurons:

\frac{\partial E_{total}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}}

Starting with \frac{\partial E_{o1}}{\partial out_{h1}}:

\frac{\partial E_{o1}}{\partial out_{h1}} = \frac{\partial E_{o1}}{\partial net_{o1}} \cdot \frac{\partial net_{o1}}{\partial out_{h1}}

We can calculate \frac{\partial E_{o1}}{\partial net_{o1}} using values we calculated earlier:

\frac{\partial E_{o1}}{\partial net_{o1}} = \frac{\partial E_{o1}}{\partial out_{o1}} \cdot \frac{\partial out_{o1}}{\partial net_{o1}} = 0.74136507 \cdot 0.186815602 = 0.138498562

And \frac{\partial net_{o1}}{\partial out_{h1}} is equal to w_5:

\frac{\partial net_{o1}}{\partial out_{h1}} = w_5 = 0.40

Plugging them in:

\frac{\partial E_{o1}}{\partial out_{h1}} = 0.138498562 \cdot 0.40 = 0.055399425

Following the same process for \frac{\partial E_{o2}}{\partial out_{h1}} we get:

\frac{\partial E_{o2}}{\partial out_{h1}} = -0.019049119

Therefore:

\frac{\partial E_{total}}{\partial out_{h1}} = 0.055399425 + (-0.019049119) = 0.036350306


Now that we have \frac{\partial E_{total}}{\partial out_{h1}}, we need to figure out \frac{\partial out_{h1}}{\partial net_{h1}} and then \frac{\partial net_{h1}}{\partial w} for each weight:

\frac{\partial out_{h1}}{\partial net_{h1}} = out_{h1}(1 - out_{h1}) = 0.593269992(1 - 0.593269992) = 0.241300709

We calculate the partial derivative of the total net input to h1 with respect to w_1 the same as we did for the output neuron:

net_{h1} = w_1 \cdot i_1 + w_2 \cdot i_2 + b_1 \cdot 1

\frac{\partial net_{h1}}{\partial w_1} = i_1 = 0.05

Putting it all together:

\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} \cdot \frac{\partial out_{h1}}{\partial net_{h1}} \cdot \frac{\partial net_{h1}}{\partial w_1} = 0.036350306 \cdot 0.241300709 \cdot 0.05 = 0.000438568

We can now update w_1:

w_1^+ = w_1 - \eta \cdot \frac{\partial E_{total}}{\partial w_1} = 0.15 - 0.5 \cdot 0.000438568 = 0.149780716

Repeating this for w_2, w_3, and w_4:

w_2^+ = 0.19956143

w_3^+ = 0.24975114

w_4^+ = 0.29950229
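Continuing the sketch, the hidden-layer gradient for w1; the key difference from the output layer is the sum over both output neurons’ error signals:

```python
# Node deltas for both output neurons.
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)   #  0.138498562
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)   # ~ -0.038098

# out_h1 feeds both o1 and o2, so its error signal sums both paths.
dE_dout_h1   = delta_o1 * w5 + delta_o2 * w7    # 0.036350306
dout_dnet_h1 = out_h1 * (1 - out_h1)            # 0.241300709
dE_dw1 = dE_dout_h1 * dout_dnet_h1 * i1         # 0.000438568
w1_plus = w1 - 0.5 * dE_dw1                     # 0.149780716
```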

Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085. At this point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).

If you’ve made it this far and found any errors in any of the above or can think of any ways to make it clearer for future readers, don’t hesitate to drop me a note. Thanks!
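To reproduce those numbers end to end, here is a compact, self-contained sketch of the whole procedure. It is my condensation of the walkthrough above, not the post’s linked script (the Github repo has the real implementation), and like the walkthrough it leaves the biases fixed:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initial parameters and the single training pair from the walkthrough.
w = [0.15, 0.20, 0.25, 0.30, 0.40, 0.45, 0.50, 0.55]  # w1..w8
b1, b2 = 0.35, 0.60
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
eta = 0.5

for step in range(10000):
    # Forward pass.
    out_h1 = sigmoid(w[0] * i1 + w[1] * i2 + b1)
    out_h2 = sigmoid(w[2] * i1 + w[3] * i2 + b1)
    out_o1 = sigmoid(w[4] * out_h1 + w[5] * out_h2 + b2)
    out_o2 = sigmoid(w[6] * out_h1 + w[7] * out_h2 + b2)

    # Output-layer node deltas.
    d_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
    d_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)

    # Hidden-layer node deltas: each hidden output feeds both outputs.
    d_h1 = (d_o1 * w[4] + d_o2 * w[6]) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w[5] + d_o2 * w[7]) * out_h2 * (1 - out_h2)

    # Gradients use the original weights; all updates applied together.
    w = [w[0] - eta * d_h1 * i1,
         w[1] - eta * d_h1 * i2,
         w[2] - eta * d_h2 * i1,
         w[3] - eta * d_h2 * i2,
         w[4] - eta * d_o1 * out_h1,
         w[5] - eta * d_o1 * out_h2,
         w[6] - eta * d_o2 * out_h1,
         w[7] - eta * d_o2 * out_h2]

E_total = 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2
print(E_total, out_o1, out_o2)  # error ~0.0000351, outputs near 0.01 / 0.99
```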

Posted on March 17, 2015 by Mazur. This entry was posted in Machine Learning and tagged ai, backpropagation, machine learning, neural networks. Bookmark the permalink.

Pavel Koryakin

— December 4, 2017 at 10:25 pm

Great tutorial!

But how can I update biases using back propagation?

Matus Moravcik

— January 28, 2018 at 9:52 am


Jack

— December 7, 2017 at 4:03 am

Same way. Just take the partial with respect to the bias instead of the weight.


WangLu

— December 20, 2017 at 9:51 pm

It’s easy to generalize the method according to the weight update. You just need to use the same approach to calculate dEtotal/dbias to get the step for the bias and update it.
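As a concrete version of these answers, here is a hedged sketch: the bias enters each net input with a coefficient of 1, so its partial is just the node delta, and because b1 and b2 are each shared by both neurons in their layer here, the deltas sum. The delta names follow the sketches in the article above; delta_h2 is a hypothetical name for the analogous h2 term, which was not computed above.

```python
# dnet/dbias = 1, so each bias gradient is just the node delta; a bias
# shared by a whole layer sums the deltas of the neurons it feeds.
eta = 0.5
b2_plus = b2 - eta * (delta_o1 + delta_o2)       # output-layer bias
delta_h1 = dE_dout_h1 * dout_dnet_h1             # hidden node delta for h1
# b1_plus = b1 - eta * (delta_h1 + delta_h2)     # delta_h2 computed analogously
```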

shikha purwar

— December 8, 2017 at 4:08 am

very good


HansMueller

— December 10, 2017 at 1:32 pm

The best explanation of backpropagation I’ve read! Really helped me, thank you!



Mohamad Zeina

— December 14, 2017 at 6:17 am

Just about to give up trying to understand back prop, before I saw this.



Ilya Nonename

— December 14, 2017 at 12:15 pm

Very nice.

I did not succeed at first, because I was using 0 and 1 as one of the inputs. I figured out that a 0 input does not adjust its weights. So I shifted the inputs by 0.5 (-0.5 and 0.5 instead of 0 and 1) and it worked.

neuralnetwork963574777

— December 15, 2017 at 1:04 am

Hi Matt,

One question. In this article you calculate the Error function at the end of the forward propagation process as the squared error.

I understand that this way of calculating the Error function was mostly used in the past and now we should use cross entropy. However, getting back to the squared error function – because the difference between the target and output is raised to the power 2, the result is always positive (regardless of whether target > output or vice versa). That means that regardless of whether the actual network error (target – output) is positive or negative, we always back propagate the positive E function and eventually use fractions of it at each neuron to adjust its weight and bias.

So, the adjustment always goes in one direction. Since it was successfully used in the past, how did that work? Or getting back to your example, we can use different input numbers and come up with a negative (target – output), but the Error function will still be positive, and so will the weight and bias adjustments for each neuron.

Regards

Igor

Henry Henri

— January 2, 2018 at 12:43 pm

The squared error is always positive. But for backpropagation you use the (partial) derivative of the error function, which is linear and hence can be positive or negative.
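In symbols, for a single output:

\frac{\partial}{\partial out}\,\frac{1}{2}(target - out)^2 = -(target - out)

which is positive when out > target and negative when out < target, so the updates can move a weight in either direction.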

Wowza

— December 17, 2017 at 11:28 pm

Man, I’m studying for a final and this explained the algorithm better than the textbook. You’re actually the best.

Shashank


— December 18, 2017 at 1:00 pm

Amazing explanation!! Can you please explain in a similar fashion how to update the bias? I am confused: the bias is only for a layer, so how can we update it for every neuron?

Vinicius Silva

— December 18, 2017 at 6:23 pm

So what happens next? What do you mean by “after repeating this process 10,000 times, for example, the error plummets to 0.0000351085”? Should we keep using the same input record in all these 10,000 iterations? I think I understood what has been explained by this text, but I wish you could elaborate a bit more on the whole neural network learning process.

Also, can you provide a general idea of what is happening in your neural network visualization example? What were you feeding the network during all those many iterations?

Sorry for asking so many questions, it’s just that I’m trying to get a deep understanding of this topic, but failing to find quality material that isn’t too difficult for a beginner like me.

Thank you.


Daniel

— December 23, 2017 at 10:18 pm

This network doesn’t work well: my outputs are exactly like the example above, but training 16,000 inputs for the XOR problem, the error is still very big. With another net I got a very small error with 2,000 inputs and I didn’t even touch eta.


Henry Henri

— January 2, 2018 at 12:54 pm

If I get your question correctly, you’re asking whether to keep using the same input for all training cycles; the answer is no. The network learns by example. The more examples you show, the more it will learn. If you only show one example, that’s the only case it will be able to work with. Depending on the complexity of the task you might need to show tens of thousands of different samples throughout training. In some cases it is appropriate to show the same sample multiple times. E.g. if you want to train XOR (you need at least one hidden layer for this) you have the possible samples (0,0 => 0), (1,0 => 1), (0,1 => 1) and (1,1 => 0). You run them in some random order until you have reached an error rate you’re comfortable with.
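A minimal sketch of that schedule; the train_on body is a placeholder for one forward/backward pass like the loop in the article above, not a real implementation:

```python
import random

# The four XOR samples from the comment above.
samples = [((0, 0), 0), ((1, 0), 1), ((0, 1), 1), ((1, 1), 0)]

def train_on(inputs, target):
    # Placeholder: one forward pass + one backpropagation update,
    # as in the training-loop sketch earlier in the article.
    pass

for epoch in range(10000):
    random.shuffle(samples)       # present samples in random order
    for inputs, target in samples:
        train_on(inputs, target)
```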

Daniel

— December 19, 2017 at 2:34 pm

Why should it be called backpropagation if you don’t update the weights after you calculated them? You can easily perform this operation from the input layer to the output layer and get the same result… are you sure about “we use the original weights, not the updated weights, when we continue the backpropagation algorithm below”?

Henry Henri

— January 2, 2018 at 12:47 pm

Backpropagation pushes the error back through the network to find out how much responsibility to assign to each weight. This responsibility is then used to update the weights. You can interleave backpropagation and update for each layer, but first you have to calculate the error for the next layer before you can do the update.

Adam Girycki

— December 20, 2017 at 1:24 pm


WangLu

— December 20, 2017 at 9:44 pm

Great simplification of a NN to a two-layer single data structure. It’s really easy for a beginner to learn from. Thanks man.


Daniel Zheng

— December 23, 2017 at 10:10 pm

Great tutorial, I just finished going through the math and managed to reproduce the calculation. Only a matter of time until I master it.

rspurge2

— December 25, 2017 at 3:17 pm


vinith kumar

— December 28, 2017 at 12:22 am


Saurabh

— December 28, 2017 at 4:29 am

Simple and intuitive explanation!! This is what I was looking for. Thank you.

Wallen Tan

— December 30, 2017 at 7:38 pm

Really good tutorial! One of the most helpful ones I’ve come across.

Does the bias unit in each layer have only one weight or would there be a separate weight per connection with the nodes?

Willy Wonka

— January 2, 2018 at 9:43 am

Man, you explained it all. I have finally succeeded in implementing hidden layer backpropagation now. THANKS!

Henry Henri

— January 2, 2018 at 12:49 pm


Maybe I missed it, but I think you left out the update of the biases. It’s simple but still might not be obvious to everyone new to the topic.

James Liu

— January 4, 2018 at 1:49 am

1) How do you ensure differentiating gives the minimum value instead of the maximum value?

2) Won’t differentiating it once give the lowest maximum value, which is the smallest error? Why do we have to differentiate it 10,000 times?

Priyank

— January 4, 2018 at 11:38 am

Dear sir,

Error should be:

Error=(1/2)(out-target)^2

isn’t it?


Priyank

— January 4, 2018 at 11:54 am


Rashmi G

— January 8, 2018 at 5:26 am

Hi.. A very useful article.. I have a doubt with calculating the error for o1. Here I am getting a negative value (-0.2747). Kindly help me here. Thanks in advance.

Karim Fouad

— January 8, 2018 at 10:17 am


Karim Fouad

— January 8, 2018 at 10:20 am

For those who ask about bias updating: you may assume the bias weights as W'1, W'2, W'3, W'4 and then apply the same process to them.


Deept

— January 13, 2018 at 4:43 am


Manas

— January 17, 2018 at 9:49 pm

This is the best explanation of backprop I have read so far. Thank you so much for this!

felipe

— January 18, 2018 at 2:39 pm

“We can calculate \frac{\partial E_{o1}}{\partial net_{o1}} using values we calculated earlier:” what’s going on here? How do I get the result?

Kashyap Mahanta

— January 21, 2018 at 1:21 am


Ankit Bhaukajee

— January 25, 2018 at 9:54 am

This blog is the best explanation I have found in decoding backprop. Everybody was giving their own formula and I was not able to grasp the intuition, but this post really helped me see what is happening inside. Thank you for helping people.

Thomas

— January 28, 2018 at 1:41 pm


Amazing explanation!

Though I used some random inputs and set the target values to double the input values (so the first output of the network is double the value of the first input, and the second output is double the value of the second input). It worked perfectly for specific input values. For example [0.03,0.09] would output very close to [0.06,0.18].

Though when I ran the algorithm in a loop many times, then tested the network (i.e. without using backprop and target values), it just outputted the same values that were outputted in the last iteration of the loop, rather than doubling the new values I inputted into the network.

So basically it only worked when I ran the backprop with the target values – though I want it to work without the target values!

Can anyone suggest anything? Sorry if I’m not being very clear, I’d be happy to explain myself if anyone is confused about what I mean.

Thomas

— January 30, 2018 at 1:28 pm

Don’t worry, I worked it out! I was backpropagating too much for each pair of inputs, and not putting enough test inputs in. I should’ve been backpropagating a lot less and using a lot more test inputs!

Christophe Schnitzler

— January 30, 2018 at 6:02 am

Hi,

I really don’t get how you calculate this line:

\frac{\partial E_{total}}{\partial out_{o1}} = 2 * \frac{1}{2}(target_{o1} - out_{o1})^{2 - 1} * -1 + 0

I’m sure it is really simple, but I cannot figure it out. Thanks for your help!

