
Numerical Methods Exam 2 Notes

Chapter 4 Roots: Bracketing Methods

1. Root-finding: finding the value of a variable that makes a function equal to zero.
2. Graphical methods: make a plot of the function and check where it crosses the x-axis.
   a. Can be used to find rough estimates but are not good for precision.
3. Bracketing methods and initial guesses
   a. Bracketing methods
      i. Based on two initial guesses that bracket the root, that is, lie on either side of the root.
   b. Open methods
      i. Involve one or more initial guesses, but there is no need to bracket the root.
4. Incremental Search
   a. If f(x) is continuous from xl to xu and f(xl)*f(xu) < 0, then there is at least one real root between xl and xu.
   b. The incremental search method locates an interval where the function changes sign and then searches for the root there.
   c. See incsearch.m
5. Bisection Method
   a. The bisection method is a variation of the incremental search method in which the interval is always divided in half. If a function changes sign over an interval, the function value at the midpoint is evaluated, and the location of the root is taken to lie within the subinterval where the sign change occurs. That subinterval becomes the interval for the next iteration, and the process repeats until the desired precision is achieved.
   b. Steps to solve the bisection method by hand:
      i. Guess values for the upper and lower bounds whose function values have different signs.
      ii. Calculate an initial estimate of the root xr:
         1. xr = (xl + xu)/2
      iii. Compute the product f(xl)*f(xr). If the product is greater than zero, there is no sign change between those two values, and the root must be located between the midpoint and the upper bound.
      iv. Wash, rinse, repeat.
      v. If the root of the function is known, error can be calculated using true percent relative error; if it is not, it can be calculated using approximate percent relative error:
         1. ea = |(xr_new - xr_old)/xr_new| x 100%
   c. See bisection_WRH.m (a minimal stand-in sketch follows this chapter's outline).
6. False Position Method
   a. Locates the root of the function by drawing a straight line between (xl, f(xl)) and (xu, f(xu)), such that the intersection of this line with the x-axis represents an improved estimate of the root:
      i. xr = xu - f(xu)(xl - xu)/(f(xl) - f(xu))
   b. Follow the same logic as in the bisection method to iterate until the error drops below a specified level.
   c. See false_position.m
Chapter 6 Roots: Open Methods

1. Simple Fixed-Point Iteration
   a. Simple fixed-point iteration is accomplished by rearranging the function f(x) = 0 into the form x = g(x), which in iterative form is: x(i+1) = g(x(i)).
   b. The approximate error of this method can be calculated as follows:
      i. ea = |(x(i+1) - x(i))/x(i+1)| x 100%
   c. Algorithm for solving using this method:
      i. Transform f(x) = 0 into x = g(x).
      ii. Begin with an initial guess and iterate until the desired accuracy is achieved.
2. Newton-Raphson Method
   a. Given an initial guess at the root x(i), a tangent is extended from the point (x(i), f(x(i))). The point where this tangent crosses the x-axis usually represents an improved estimate of the root, given by the following formula:
      i. x(i+1) = x(i) - f(x(i))/f'(x(i))
   b. Iterate the above formula until the desired accuracy is achieved.
   c. See newtraph.m (a minimal stand-in sketch follows this chapter's outline).
3. Secant Methods
   a. Although the Newton-Raphson method works well with most polynomials, for cases with difficult-to-evaluate derivatives the derivative can be approximated using a backward finite divided difference:
      i. f'(x(i)) ≈ (f(x(i-1)) - f(x(i)))/(x(i-1) - x(i))
   b. The same idea, using a small fractional perturbation of x in place of a prior iterate, gives an iterative root-finding equation called the modified secant method:
      i. x(i+1) = x(i) - δ*x(i)*f(x(i))/(f(x(i) + δ*x(i)) - f(x(i)))
      ii. In the above, δ = a small perturbation fraction.
4. MATLAB function fzero
   a. The fzero function is a combination of both the Newton-Raphson and secant methods and has the following syntax:
      i. fzero(function,x0), where x0 is the initial guess. Two guesses that bracket the root over a sign change can be passed as a vector as follows:
         1. fzero(function,[x0,x1],options), where x0 and x1 are guesses that bracket a sign change.
         2. The two endpoints must differ in sign or an error will occur.
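The notes cite newtraph.m, which likewise is not reproduced here. Below is a minimal MATLAB sketch of the Newton-Raphson iteration from item 2; the function name and parameters are illustrative assumptions.

   % Minimal Newton-Raphson sketch (illustrative stand-in, not newtraph.m).
   % f and df are function handles for the function and its derivative.
   function xr = newtraph_sketch(f, df, x0, es, maxit)
   xr = x0;  ea = 100;
   for i = 1:maxit
       xrold = xr;
       xr = xrold - f(xrold)/df(xrold);    % tangent-line (Newton) update
       if xr ~= 0
           ea = abs((xr - xrold)/xr)*100;  % approximate percent relative error
       end
       if ea <= es, break; end
   end
   end

For example, newtraph_sketch(@(x) x.^2 - 9, @(x) 2*x, 5, 1e-6, 50) converges to x = 3. When the derivative df is hard to obtain, the modified secant update from item 3 replaces the Newton step with xr = xrold - δ*xrold*f(xrold)/(f(xrold + δ*xrold) - f(xrold)) for a small perturbation fraction δ (e.g., 1e-6), so no analytical derivative is needed.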


Chapter 7 Optimization

1. Concerning maxima and minima:
   a. First derivative f'(x)
      i. Where the first derivative is equal to zero, either a maximum or a minimum occurs.
   b. Second derivative f''(x)
      i. If f''(x) < 0, the point is a maximum.
      ii. If f''(x) > 0, the point is a minimum (a short worked example of these tests follows this item).
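As a quick, hypothetical illustration of these derivative tests (a made-up example, not from the notes), consider f(x) = x^3 - 3x:

   % Hypothetical worked example of the first- and second-derivative tests
   % for f(x) = x^3 - 3x. Critical points solve f'(x) = 3x^2 - 3 = 0.
   f   = @(x) x.^3 - 3*x;    % example function
   ddf = @(x) 6*x;           % second derivative f''(x)
   [f(-1), ddf(-1)]          % f''(-1) = -6 < 0: x = -1 is a local maximum
   [f(1),  ddf(1)]           % f''(1)  =  6 > 0: x =  1 is a local minimum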

2. One-Dimensional Optimization
   a. Concerns finding the min or max of a function of a single variable.
   b. When optimizing functions, there are both global (the very best solution) and local (though not the best, better than its immediate neighbors) optimums. Cases with local maxima and minima are called multimodal, and most often we are concerned with finding the global optimum.
3. Golden-Section Search
   a. After choosing upper and lower bounds xl and xu containing only a single minimum, create two interior points in the following way:
      i. d = (phi - 1)(xu - xl), where phi = (1 + sqrt(5))/2 ≈ 1.6180 is the golden ratio
      ii. x1 = xl + d
      iii. x2 = xu - d
   b. Next evaluate the function at the two interior points.
   c. If f(x1) < f(x2), we know that the value of f at x1 is closer to the actual minimum, so we redefine the interval as follows (recomputing d for the new bounds):
      i. xl = x2
      ii. x2 = x1
      iii. x1 = xl + d
   d. Wash, rinse, repeat until the error is sufficiently small (a stand-in sketch follows item 4 below). Percent error for golden-section search can be calculated as follows:
      i. ea = (2 - phi)|(xu - xl)/xopt| x 100%
4. Parabolic Interpolation
   a. Takes advantage of the fact that, just as there is only one straight line connecting two points, there is only one parabola connecting three points.
   b. By the above reasoning, if three points x1 < x2 < x3 jointly bracket an optimum, a parabola can be fit to the points. The equation of that parabola can then be differentiated, set to zero, and solved for an estimate x4 of the optimal x:
      i. x4 = x2 - (1/2)*[(x2 - x1)^2*(f(x2) - f(x3)) - (x2 - x3)^2*(f(x2) - f(x1))] / [(x2 - x1)*(f(x2) - f(x3)) - (x2 - x3)*(f(x2) - f(x1))]
      ii. Once x4 is calculated, plug it into the original function and use a strategy similar to golden-section search to determine which point should be discarded. For instance, if the function value at the new point is lower than at the intermediate point, and the new x value is to the right of the intermediate point, the lower bound x1 will be discarded.
      iii. Wash, rinse, repeat.
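Tying the golden-section steps in item 3 together, here is a minimal MATLAB sketch for a single minimum; the function name goldmin_sketch and the stopping parameters are illustrative assumptions, not a course file.

   % Minimal golden-section search sketch (illustrative) for a single
   % minimum of f on [xl, xu], following the update rules in item 3.
   function xopt = goldmin_sketch(f, xl, xu, es, maxit)
   phi = (1 + sqrt(5))/2;                  % golden ratio
   d  = (phi - 1)*(xu - xl);
   x1 = xl + d;  x2 = xu - d;              % two interior points, x2 < x1
   f1 = f(x1);   f2 = f(x2);
   for i = 1:maxit
       if f1 < f2                          % minimum lies in [x2, xu]
           xl = x2;  x2 = x1;  f2 = f1;    % reuse the old interior point
           x1 = xl + (phi - 1)*(xu - xl);  f1 = f(x1);
           xopt = x1;
       else                                % minimum lies in [xl, x1]
           xu = x1;  x1 = x2;  f1 = f2;
           x2 = xu - (phi - 1)*(xu - xl);  f2 = f(x2);
           xopt = x2;
       end
       ea = (2 - phi)*abs((xu - xl)/xopt)*100;  % percent error estimate
       if ea <= es, break; end
   end
   end

For example, goldmin_sketch(@(x) (x - 2).^2 + 1, 0, 5, 1e-4, 100) converges to the minimum at x = 2.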


5. MATLAB function fminbnd
   a. fminbnd combines golden-section search with parabolic interpolation.
   b. Syntax:
      i. fminbnd(function,x1,x2,options), where x1 and x2 are the bounds of the search interval.
   c. Parameters can be set with optimset just like in fzero.
6. MATLAB function fminsearch
   a. Used to determine the minimum of a multidimensional function.
   b. Syntax:
      i. [xmin fval] = fminsearch(function,[x1,x2])

Optimization Problem Step-by-Step Solution Process

1. Identify the design variables.
2. Identify the cost function and how the design variables appear in it.
3. Identify the constraints and how the design variables appear in them.
4. Categorize the type of optimization problem:
   a. Linear, quadratic, nonlinear with constraints, nonlinear without constraints.
5. Select the appropriate MATLAB optimization function and implement it.

MATLAB Optimization Functions and Syntax

1. Linear Programming
   a. linprog
      i. x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)
2. Quadratic Programming
   a. quadprog
3. Nonlinear Programming (Unconstrained)
   a. fminbnd: scalar bounded nonlinear function minimization
      i. x = fminbnd(@sin,0,2*pi)
      ii. x = fminbnd(@(x)(x-3)^2-1,0,5)
   b. fminunc: multidimensional unconstrained nonlinear minimization
   c. fminsearch: multidimensional unconstrained nonlinear minimization by the Nelder-Mead direct search method
   d. lsqnonlin: nonlinear least-squares problems
4. Nonlinear Programming (Constrained)
   a. fmincon: multidimensional constrained nonlinear minimization
   b. fseminf: multidimensional constrained minimization with semi-infinite constraints
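To tie the built-in functions above together, here are short illustrative calls; the objective functions are made-up examples, not from the course.

   % Illustrative calls to the built-in root-finding and optimization
   % functions named in these notes.

   % fzero: root of x^2 - 9 = 0, bracketed by a sign change over [0, 5].
   r = fzero(@(x) x.^2 - 9, [0, 5]);             % r is approximately 3

   % fminbnd: bounded 1-D minimization of (x - 3)^2 - 1 on [0, 5].
   [xmin, fval] = fminbnd(@(x) (x - 3).^2 - 1, 0, 5);   % xmin near 3

   % fminsearch: Nelder-Mead on a 2-D example, started from [-1, 2].
   fun = @(v) (1 - v(1))^2 + 100*(v(2) - v(1)^2)^2;     % Rosenbrock function
   [vmin, fval2] = fminsearch(fun, [-1, 2]);            % vmin near [1, 1]

   % optimset: set display and tolerance options, as noted for fminbnd/fzero.
   opts  = optimset('Display', 'iter', 'TolX', 1e-8);
   xmin2 = fminbnd(@(x) (x - 3).^2 - 1, 0, 5, opts);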
