
Worksheet 6

Joy Krishna Mondal


jm12752
1. (a) x(x<=0)
(b) x(isprime(abs(x)) == 0 & x>-1 & mod(x,2) == 1)
(c) x = x(:, sum(x < 0) == 0)
(d) [x(x(:,1) > 10,1);x(size(x,1)+1:end)']
(e) [unique(x) arrayfun(@(elem) size(find(elem == x),1),unique(x))]
(f) reshape(sort(reshape(x,[size(x,2) size(x,1)])),[size(x,1) size(x,2)])
(g) x(std(x(:)) < abs(x)) = mean(x) +std(x(:)).*sign(x(std(x(:)) < abs(x)))
(h) y = reshape(sort(x(:),'descend'),[size(x,2) size(x,1)])'
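As a quick sanity check of (e), a small illustrative example of my own (assuming x is a column vector, so that unique and find both return columns):

    x = [3; 1; 2; 3; 3; 1];
    [unique(x) arrayfun(@(elem) size(find(elem == x),1), unique(x))]
    % returns [1 2; 2 1; 3 3], i.e. each row is [element count]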
2. (a) The Lagrange polynomial p(x) can be expressed in the following general form for a dataset x_0, \dots, x_n:

p(x) = \frac{(x - x_1)(x - x_2)\dots(x - x_n)}{(x_0 - x_1)(x_0 - x_2)\dots(x_0 - x_n)}f(x_0) + \frac{(x - x_0)(x - x_2)\dots(x - x_n)}{(x_1 - x_0)(x_1 - x_2)\dots(x_1 - x_n)}f(x_1) + \dots + \frac{(x - x_0)(x - x_1)\dots(x - x_{n-1})}{(x_n - x_0)(x_n - x_1)\dots(x_n - x_{n-1})}f(x_n)    (1)
We can deduce that the nodes appearing in the numerator and in the denominator are the same for each term.
Collecting, for each term, the nodes that appear in its products (every node except the term's own) gives the matrix of coefficients:

\begin{bmatrix} x_1 & x_2 & x_3 & \dots & x_n \\ x_0 & x_2 & x_3 & \dots & x_n \\ \vdots & & & & \vdots \\ x_0 & x_1 & x_2 & \dots & x_{n-1} \end{bmatrix} = \begin{bmatrix} \text{cycle}(vector, 1) \\ \text{cycle}(vector, 2) \\ \vdots \\ \text{cycle}(vector, n) \end{bmatrix} = \text{matco} \quad \text{(matrix of coefficients)}

where vector = [x_1\ x_2\ x_3\ \dots\ x_n].
function final = cycle(vector, i) % i is the index of the value to remove
    before = [];                  % the function is similar to quicksort in that you have a pivot
    after = [];                   % defined using i, and the values are shifted accordingly
    for k = 1:(i-1)
        before(k) = vector(k);
    end
    for k = (i+1):length(vector)
        after(k - i) = vector(k);
    end
    final = [after before];       % output value after slicing and shifting the values
end
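A minimal check of cycle on an illustrative input of my own:

    cycle([10 20 30 40], 2)   % removes the 2nd element and rotates: returns [30 40 10]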
col_arrayfun(f, v, a) is a higher-order utility function which does the following operation (n and m are dimensions):

(f, vector, matrix) \to f(vector, matrix) \to matrix
(f, (n \times 1), (n \times m)) \to f((n \times 1), (n \times m)) \to (n \times m)

I needed to write it since arrayfun returns a matrix of matrices rather than a reduced matrix.
function out = col_arrayfun(f, v, a)
    out = [];
    for i = 1:size(a, 1)
        out(i,:) = f(v(i), a(i,:)); % since MATLAB can vectorize along the row
    end
end
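A small illustrative call (my own example values), subtracting v(i) from row a(i,:) for each row:

    col_arrayfun(@(elem, row) elem - row, [1; 2], [1 2 3; 4 5 6])
    % returns [0 -1 -2; -2 -3 -4]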
We can separate the formula to make computation easier for our final piece of code:
p(x) = \sum_{k=0}^{n} \underbrace{\frac{f(x_k)}{\prod_{j \neq k}(x_k - x_j)}}_{\text{mf}_k} \prod_{j \neq k}(x - x_j)

The multiplication factors \text{mf}_k depend only on the dataset, so they are computed once; only the numerator products depend on the plotting point x.
function lagrange(x, y) % <x,y> is the dataset and
    xaxis = linspace(min(x), max(x), 1000); % <min(x),max(x)> is the interval we are plotting for.
    for k = 1:length(x)
        matco(k,:) = cycle(x, k);
    end
    denominator = col_arrayfun(@(elem, row) elem - row, x, matco);
    denominator = prod(denominator, 2);
    mf = (1./denominator).*(y'); % mf means multiplication factor
    disp(matco);
    for i = 1:length(xaxis)
        numerator = prod(xaxis(i) - matco, 2);
        yaxis(i) = sum(mf.*numerator);
    end
    myplot(xaxis, yaxis, x, y);
end
lagrange( ...
    [0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25 ...
     2.50 2.75 3.00 3.25 3.50 3.75 4.00 4.25 4.50 4.75 ...
     5.00], ...
    [0.000000 0.210017 0.324372 0.358154 0.346574 0.316461 ...
     0.281936 0.249009 0.219722 0.194417 0.172795 0.154366 ...
     0.138629 0.125139 0.113515 0.103445 0.094673 0.086989 ...
     0.080223 0.074237 0.068914]);
(b) (c) The Vandermonde matrix in our case has a determinant close to zero, which means our polynomial approximation may not be accurate. For this particular interpolation we get RCOND = 1.218858e-23.
For the Lagrange method we do not need to solve a matrix equation Ax = b, which, depending on the algorithm used (i.e. Gaussian elimination, Cramer's rule), might be computationally expensive; the Lagrange method is just a lot of elementary operations.
Any set of n linear equations has either no solution, infinitely many solutions, or a single unique solution. Assuming that the mapping of the dataset is a bijective function, we can state that both approaches will result in the same unique polynomial, since we get a unique solution to the set of linear equations.
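For comparison, a minimal sketch of how the Vandermonde approach could be set up (this is my assumption of the setup that produces the quoted RCOND warning; x and y are the dataset passed to lagrange above):

    V = vander(x);              % 21x21 Vandermonde matrix of the nodes
    coeffs = V \ y';            % this solve is what warns that RCOND = 1.218858e-23
    yfit = polyval(coeffs, linspace(min(x), max(x), 1000)); % evaluate the fitted polynomial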
3. (a) The Newton-Cotes method we will be using is Boole's rule:

\int_{x_1}^{x_5} f(x)\,dx \approx \frac{2k}{45}\left(7f_0 + 32f_1 + 12f_2 + 32f_3 + 7f_4\right)    (2)

where f_n = f(x_n) and |x_n - x_{n-1}| = k, with k a constant value.
For our MATLAB code, f is |dr/dt| passed as an anonymous function; begin is a, the start of the interval; last is b, the end of the interval; stepsize is the step size; out is the arc length L.
function out = three(x, y, z, begin, last, stepsize) % x, y, z are function handles

    if mod(stepsize, 4) ~= 0
        disp('stepsize needs to be a multiple of 4 for the algorithm to work');
        return
    end

    divisions = stepsize + 3;
    step = (last - begin)/divisions;
    phis = begin:step:last;

    XValue = arrayfun(x, phis);
    YValue = arrayfun(y, phis);
    ZValue = arrayfun(z, phis);

    XDiff = arrayfun(@(a,b) mydiff(a,b,step), XValue(1:end), [XValue(2:1:end) XValue(end)]);
    YDiff = arrayfun(@(a,b) mydiff(a,b,step), YValue(1:end), [YValue(2:1:end) YValue(end)]);
    ZDiff = arrayfun(@(a,b) mydiff(a,b,step), ZValue(1:end), [ZValue(2:1:end) ZValue(end)]);

    value = arrayfun(@(x,y,z) sqrt(x^2 + y^2 + z^2), XDiff, YDiff, ZDiff);

    weights = ones(size(value));
    weights(1) = 7*weights(1);
    weights(end) = 7*weights(end);
    weights(2:4:end) = 32.*weights(2:4:end);
    weights(3:4:end) = 12.*weights(3:4:end);
    weights(4:4:end) = 32.*weights(4:4:end);
    weights(5:4:end-1) = (2*7).*weights(5:4:end-1);

    out = (2/45).*step.*(sum(value.*weights)); % the arc length L

    function out = mydiff(a, b, h) % forward difference, as defined in the part (b) listing
        out = (b - a)./h;
    end
end
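A hypothetical call of my own (the parametrization is passed as anonymous function handles, since the signature takes handles rather than @-prefixed parameters): the arc length of one turn of a unit-radius helix with 40 panels:

    L = three(@(p) cos(p), @(p) sin(p), @(p) p/(2*pi), 0, 2*pi, 40);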
(b) From the question:

x = a\sqrt{1 - \left(\frac{\phi}{\pi N} - 1\right)^2}\cos(\phi) \qquad y = a\sqrt{1 - \left(\frac{\phi}{\pi N} - 1\right)^2}\sin(\phi) \qquad z = a\left(\frac{\phi}{\pi N} - 1\right)

L = \int_a^b \sqrt{\left(\frac{dx}{d\phi}\right)^2 + \left(\frac{dy}{d\phi}\right)^2 + \left(\frac{dz}{d\phi}\right)^2}\,d\phi    (3)

L is the arc length.
I modified the above code so that it works with this part of the question.
function out = three(N)
    a = 1;
    begin = 0;
    last = 2*N*pi;

    divisions = 4000 + 3;
    step = (last - begin)/divisions;
    phis = begin:step:last;

    function out = x(phi)
        out = a.*sqrt(-1.*((-1 + phi./(pi*N)).^2) + 1).*cos(phi);
    end
    function out = y(phi)
        out = a.*sqrt(-1.*((-1 + phi./(pi*N)).^2) + 1).*sin(phi);
    end
    function out = z(phi)
        out = a.*(-1 + phi./(pi*N));
    end
    function out = mydiff(a, b, h)
        out = (b - a)./h;
    end

    XValue = arrayfun(@x, phis);
    YValue = arrayfun(@y, phis);
    ZValue = arrayfun(@z, phis);

    XDiff = arrayfun(@(a,b) mydiff(a,b,step), XValue(1:end), [XValue(2:1:end) XValue(end)]);
    YDiff = arrayfun(@(a,b) mydiff(a,b,step), YValue(1:end), [YValue(2:1:end) YValue(end)]);
    ZDiff = arrayfun(@(a,b) mydiff(a,b,step), ZValue(1:end), [ZValue(2:1:end) ZValue(end)]);

    value = arrayfun(@(x,y,z) sqrt(x^2 + y^2 + z^2), XDiff, YDiff, ZDiff);

    weights = ones(size(value));
    weights(1) = 7*weights(1);
    weights(end) = 7*weights(end);
    weights(2:4:end) = 32.*weights(2:4:end);
    weights(3:4:end) = 12.*weights(3:4:end);
    weights(4:4:end) = 32.*weights(4:4:end);
    weights(5:4:end-1) = (2*7).*weights(5:4:end-1);
    out = (2/45).*step.*(sum(value.*weights));
end
N   Arc length
1   6.255840564527091
2   10.826358426103548
3   15.590747921782716
4   20.422002512409524
5   25.285441107730147
Table 1: Arc length
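Table 1 can be reproduced with a one-liner (assuming the three(N) listing above is on the path):

    arclengths = arrayfun(@three, 1:5)   % yields the five values in Table 1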
We can see from the graph that the gradient is 5 and the y-intercept is 0, hence:

Arclength = 5N    (4)
4.

f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\log(x) - \mu)^2}{2\sigma^2}}

Substituting \sigma = 0.5, \mu = 0.1 and x = 1, we get -2.37753957937 for f''(1).
function out = four()

    function out = sample(x)
        mu = 0.1;
        sigma = 0.5;
        left = 1./(x.*sigma*sqrt(2*pi));
        right = (-1).*((log(x) - mu).^2)./(2*(sigma^2));
        out = left*exp(right);
    end
    function out = secdef(f, x, h) % forward second difference
        out = (f(x + 2*h) - 2*f(x + h) + f(x))/(h^2);
    end
    function out = calculate(variable)
        out = arrayfun(@(h) secdef(@(x) sample(x), 1, h), variable);
    end
    function out = divi(first, last, space)
        step = (last - first)/space;
        out = first:step:last;
    end
    x = divi(exp(-15), exp(-11), 1000);

    myplot(log(x), log(calculate(x)), 'log(h)', 'log(f''''(1))');

end
Figure 1: Finding the value of h which results in the least error
Figure 2: Truncation error decreasing on the right while rounding error is increasing on the left
Figure 3: Log truncation error
Rounding error is caused by the imprecision of floating point arithmetic (binary expansion) on the computer. Truncation error is caused by the choice of h: if we could take h to zero there would be no truncation error.
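One standard way to see this trade-off (my own sketch, not part of the worksheet): with machine precision \varepsilon, the total error of the forward second-difference formula behaves roughly as

E(h) \approx C_1 h + \frac{C_2\,\varepsilon}{h^2}

whose minimum sits near h^* \sim \varepsilon^{1/3} \approx 6 \times 10^{-6} for double precision, the same order of magnitude as the observed optimum e^{-11} \approx 1.7 \times 10^{-5}.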
We can establish from these graphs that a step size of e^{-11} is the point where the error is lowest.
Figure 4: Truncation error increases at a near exponential rate, O(h^2), as shown in the graph
Figure 5: Rounding error is of O(h) but diverges quickly and cannot be predicted
Taylor Expansion
Doing the Taylor series gives me a different answer (O(h)); I am not sure why, but my intuition is that the function does not approximate well with a Taylor series.

f''(x) \approx \frac{f(x + 2h) - 2f(x + h) + f(x)}{h^2}    (5)

Using the Taylor series:

f(x + 2h) = \dots + \frac{4f'''(x)}{3}h^3 + 2f''(x)h^2 + 2f'(x)h + f(x)    (6)

f(x + h) = \dots + \frac{f'''(x)h^3}{6} + \frac{f''(x)h^2}{2} + f'(x)h + f(x)    (7)

Substituting the expansions into equation (5) and simplifying:

f''(x) = f''(x) + \left[hf'''(x) + \dots\right]    (8)
5. (a)

function out = five()

    function out = algorithm(f, x_0, h, final)

        function out = createM(f, t, x)
            m1 = f(x, t);
            m2 = f(x + (h/3)*m1, t + h/3);
            m3 = f(x - (h/3)*m1 + h*m2, t + 2*h/3);
            m4 = f(x + h*m1 - h*m2 + h*m3, t + h);
            out = [m1 m2 m3 m4];
        end

        % in = 1;
        for it = 0:h:final
            % out(in) = x_0;
            out = x_0;
            % in = in + 1;
            once = createM(f, it, x_0);
            once(2:3) = 3*once(2:3);
            x_0 = x_0 + (h/8)*sum(once);
        end
    end

    function out = Original(t) % from the analytic solution in part (b): xe^(3t) = e^t(t - 1) + C
        out = exp(t)*(t - 1);
    end
    function out = DiffEqu(x, t)
        out = t*exp(-2*t) - 3*x;
    end

    function out = divi(first, last, space)
        step = (last - first)/space;
        out = first:step:last;
    end

    array = divi(1, 10^(-4), 10^2 + 1);

    y = arrayfun(@(x) algorithm(@DiffEqu, 1/10, x, 1), array);
    err = abs(11/10 - y);

    myplot(log(array), log(err), 'log h', 'log Error');

    % myplot(([1:length(err)]), err, 'h', 'Error', log10([1:length(err)]), (err));
    % out = y;
end
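For reference, reading the weights straight out of createM and the update loop above, the scheme implemented is the classical Runge-Kutta 3/8 rule:

x_{n+1} = x_n + \frac{h}{8}\left(m_1 + 3m_2 + 3m_3 + m_4\right)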
(b) We can write the differential equation as:

\frac{dx}{dt} = te^{-2t} - 3x    (9)

\frac{dx}{dt} + 3x = te^{-2t}    (10)

Integrating factor:

I.F. = e^{\int 3\,dt} = e^{3t + C_0}    (11)

Multiplying equation (10) by (11):

\frac{dx}{dt}e^{3t} + 3xe^{3t} = te^{t} = \frac{d}{dt}\left[xe^{3t}\right]    (12)

xe^{3t} = \int te^t\,dt = te^t - \int e^t\,dt = e^t(t - 1) + C    (13)

C = \frac{11}{10}, and at t = 1 we get xe^{3t} = \frac{11}{10}.
Figure 6: Error is decreasing at a near exponential rate
Figure 7: Log-log plot of error vs h shows that the error is decreasing at an exponential rate
6. (a) The number of iterations needed to find the roots of f(x) depends on how much information we have about f(x), importantly:
Is f(x) easily differentiable, i.e. do we know f'(x)?
If we can find f'(x), can we also find f''(x), i.e. is f'(x) easily differentiable?
Ideally we should know f'(x) and f''(x) and pass this information to our program; as we will see, this is the most efficient way to find roots. However, it may not be feasible to pass this information to our program, in which case we will need to use some numerical technique to approximate f'(x) and f''(x), as is the case for this question.
Modified Newton-Raphson Method

x_{n+1} = x_n - h = x_n - \frac{f(x_n)}{f'(x_n)}    (14)

Newton-Raphson (14), being a second-order scheme, is an ideal way of finding the root; however, Newton-Raphson can be unreliable in situations where f'(x) is a small value, which may happen if we are near a point of inflection. Even if we are not near a point of inflection, having f'(x) < 1 will cause us to diverge from the root. To improve the reliability of Newton-Raphson we can use the second derivative, which is unlikely to be a small value.
Assume x_0 to be our current approximation, and h to be the distance between x_0 and the solution. We know f(x_0 - h) equals zero, since it is the exact root. Using a Taylor expansion and ignoring terms of O(h^3) and higher, since h \to 0:

f(x_0 - h) \approx f(x_0) - hf'(x_0) + \frac{h^2}{2}f''(x_0) \approx 0    (15)
Assuming f'(x_0) \approx 0, since x_n is near a point of inflection, equation (15) becomes:

f(x_0 - h) \approx f(x_0) + \frac{h^2}{2}f''(x_0) \approx 0    (16)
Using (16) we can find h:

h \approx \sqrt{\frac{-2f(x_0)}{f''(x_0)}}    (17)
This gives us a better x_{n+1} and will help make Newton-Raphson more reliable (note: the sign will be reversed if f''(x) is negative).

x_{n+1} = x_n - h = x_n - \sqrt{\frac{-2f(x_0)}{f''(x_0)}}    (18)
We need to reflect on the fact that this modified version will only be used when:
f'(x) < 1.
x_{n+1} is close to x_n.
Once we escape the region where f'(x) is a small value, we can go back to using the normal Newton-Raphson.
Inverse Quadratic Interpolation
The bisection method employs a linear interpolation to find x_{n+1} given an interval; however, as we do iterations we are finding more points which are part of the curve, so we can use Lagrange interpolation to improve our interpolation estimate. However, we need to be careful: interpolating from more than 3 points might give rise to Runge's phenomenon. The Lagrange interpolation for 3 data points is:
f_{n+1} = \frac{(x_{n+1} - x_{n-1})(x_{n+1} - x_{n-2})}{(x_n - x_{n-1})(x_n - x_{n-2})} f_n + \frac{(x_{n+1} - x_n)(x_{n+1} - x_{n-2})}{(x_{n-1} - x_n)(x_{n-1} - x_{n-2})} f_{n-1} + \frac{(x_{n+1} - x_n)(x_{n+1} - x_{n-1})}{(x_{n-2} - x_n)(x_{n-2} - x_{n-1})} f_{n-2}    (19)
Inverting the f and x data-points in equation (19):

x_{n+1} = \frac{(f_{n+1} - f_{n-1})(f_{n+1} - f_{n-2})}{(f_n - f_{n-1})(f_n - f_{n-2})} x_n + \frac{(f_{n+1} - f_n)(f_{n+1} - f_{n-2})}{(f_{n-1} - f_n)(f_{n-1} - f_{n-2})} x_{n-1} + \frac{(f_{n+1} - f_n)(f_{n+1} - f_{n-1})}{(f_{n-2} - f_n)(f_{n-2} - f_{n-1})} x_{n-2}    (20)
Since we know that f_{n+1} = 0, we can rewrite equation (20):

x_{n+1} = \frac{f_{n-1}f_{n-2}}{(f_n - f_{n-1})(f_n - f_{n-2})} x_n + \frac{f_n f_{n-2}}{(f_{n-1} - f_n)(f_{n-1} - f_{n-2})} x_{n-1} + \frac{f_n f_{n-1}}{(f_{n-2} - f_n)(f_{n-2} - f_{n-1})} x_{n-2}    (21)
We can use equation (21) to get a better idea of where x_{n+1} lies; this method is used:
as a substitute for the bisection method after the first iteration, since we then have 3 data points.
when the interval [a, b] is large (i.e. b - a > 1).
when f'(x) \approx 0.
Finite Differences
For this question we are not allowed to use symbolic values for f'(x) and f''(x), in which case we have to employ a numerical technique to find approximate values for them. We can do this using finite differences.

f'(x_n) = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}    (22)

Equation (22) is the conventional finite difference formula for the first derivative.
f''(x_n) = \frac{f_n - 2f_{n-1} + f_{n-2}}{h^2}    (23)

The secant method uses equation (23) for calculating f''(x_n); however, I think the assumptions made may not apply to Newton-Raphson/modified Newton-Raphson, mainly because the step size h is not consistent between iterative steps for Newton-Raphson.
f''(x_n) = \frac{2(f_n - f_{n-1})(x_{n-1} - x_{n-2}) + 2(f_{n-1} - f_{n-2})(x_{n-1} - x_n)}{(x_{n-1} - x_{n-2})(x_{n-1} - x_n)(x_{n-2} - x_n)}    (24)

Equation (24) was derived assuming a variable step size h between successive iterations; however, we need to be careful about when to use this, noting that equation (22) and equation (24) are only valid given:
h \to 0
the presence of one prior datapoint for f'(x_n)
the presence of two prior datapoints for f''(x_n).
False Position
The way I think about false position is as a method by which we can reduce the solution space of the values we can obtain. False position is basically linear interpolation of the curve; however, it has major drawbacks if used on its own: it can slow down the convergence rate at really steep curves. Nevertheless, with correct estimation we can use false position to reduce the time it takes to converge to the solution.
Final Implementation

x_{\text{estimate}} \approx \frac{bf_a - af_b}{f_a - f_b}    (25)
Combining equation (14) and equation (22), we get the formula we are going to use to approximate Newton-Raphson (the secant method). It is worth noting that this equation is the same as for linear interpolation/false position; the difference lies in the fact that the secant method uses the last two datapoints, while false position uses the bracketed solution space. We will use this fact to our advantage and write one function to do both tasks.
function out = LinearIntrep(x, y)
    out = ((x(2)*y(1)) - (x(1)*y(2)))/(y(1) - y(2));
end
Even though I thought the different theoretical models were really good at approximating the solution, during unit testing of my root finder I had to change some of the models and rethink how I am going to implement them. The biggest change I have made is to the modified Newton-Raphson formula.
function out = ModifiedNewtonRaphson(ACB)

    diffACB = difference(ACB);

    if diffACB(2) > diffACB(1)
        fact = diffACB(2)/diffACB(1);
        out = ACB(2) + diffACB(2)/log(fact);
    elseif diffACB(1) > diffACB(2)
        fact = diffACB(1)/diffACB(2);
        out = ACB(2) - diffACB(1)/log(fact);
    end
end
Using equation (18) did not result in the convergence rate I was expecting based on unit-testing results. The code instead shows a modified formula which assumes that we have a linear curve. fact is a free variable; playing around with it gave vastly different iteration cycles for my root finder, and coincidentally those values were the same as the natural log of the difference. I have an intuitive sense as to why this is the case, but I failed to figure out why it was happening. I also tried higher-order interpolations (quartic); it resulted in increasing the iterations, so I stuck to inverse quadratic interpolation.
if (di(1) == 0 || di(2) == 0)
    FinalEstimate = bisection(AB); flag = 4;
else
    FinalEstimate = LinearIntrep(CB(1:2), CB(3:4)); flag = 1; % Secant Method
    if NotInRange(FinalEstimate, [ACB(1) ACB(3)])
        FinalEstimate = InverseQuadInterp(ACB(1:3), ACB(4:6)); flag = 2; % InverseQuad
        if NotInRange(FinalEstimate, [ACB(1) ACB(3)])
            FinalEstimate = ModifiedNewtonRaphson(ACB(1:3)); flag = 3;
            if (NotInRange(FinalEstimate, AB(1:2)))
                FinalEstimate = bisection(AB(1:2)); flag = 4;
            end
        end
    end
end
This is the main loop of the root finder; an estimate is rejected if it does not fall in the bracketed region, as checked by the NotInRange function.
Figure 8: Flowchart of the root-finding program. The program terminates when the error is below the error tolerance. The blocks in the main loop are: False Position, Secant Method, Inverse Quadratic Interpolation, Modified Secant/Newton-Raphson, and Bisection.
Final Code
function [FinalEstimate, it] = myrootfinder(varargin)

    format long

    f = varargin{1};
    AB = varargin{2};
    tol = varargin{3};
    if (sign(tol) == -1)
        fprintf('Caution: Tolerance has a negative value.');
        tol = abs(tol);
    end
    if (nargin > 3)
        iterationlimit = varargin{4};
    else
        iterationlimit = 100;
    end

    if (nargin > 4)
        helpflag = 0;
        errortypeflag = [1 1];
        if (length(varargin{5}) == 1)
            errortypeflag = errortypeflag.*varargin{5};
        elseif (length(varargin{5}) == 2)
            errortypeflag = varargin{5};
        end
    else
        errortypeflag = [0 0];
        helpflag = 1;
    end

    if (nargin > 5)
        Methodflag = varargin{6};
    else
        Methodflag = false;
        helpflag = true;
    end
    Method = {'Secant', 'InverseQuadInterp', 'Modified Newton Raphson', 'Bisection'};

    function [AB, CB] = ABfind(list, flist)
        if (sign(flist(1)) == sign(flist(2)))
            AB = [list(2:3) flist(2:3)];
            CB = [list(1:2) flist(1:2)];
        else
            AB = [list(1:2) flist(1:2)];
            CB = [list(2:3) flist(2:3)];
        end
    end

    function out = NotInRange(Value, Range)
        if (Value < min(Range) || Value > max(Range))
            out = true;
        else
            out = false;
        end
    end

    function [ACB, AB, CB] = ReAdjust(Estimate, AB)
        fFinal = f(Estimate);
        ACB = [AB(1) Estimate AB(2) AB(3) fFinal AB(4)];
        [AB, CB] = ABfind(ACB(1:3), ACB(4:6));
    end

    function out = bisection(interval)
        out = (interval(2) + interval(1))/2;
    end

    if sign(f(AB(1))) == sign(f(AB(2)))
        disp('Error! interval should have different signs.');
        return;
    end

    function out = difference(ACB)
        out = [(ACB(2) - ACB(1)), (ACB(3) - ACB(2))];
    end

    function out = ModifiedNewtonRaphson(ACB)
        diffACB = difference(ACB);
        if diffACB(2) > diffACB(1)
            fact = diffACB(2)/diffACB(1);
            out = ACB(2) + diffACB(2)/log(fact);
        elseif diffACB(1) > diffACB(2)
            fact = diffACB(1)/diffACB(2);
            out = ACB(2) - diffACB(1)/log(fact);
        end
    end

    function out = InverseQuadInterp(x, y)
        out = (y(1)*y(2)*x(3))/((y(3) - y(1))*(y(3) - y(2))) ...
            + (y(1)*y(3)*x(2))/((y(2) - y(1))*(y(2) - y(3))) ...
            + (y(2)*y(3)*x(1))/((y(1) - y(2))*(y(1) - y(3)));
    end

    function out = LinearIntrep(x, y)
        out = ((x(2)*y(1)) - (x(1)*y(2)))/(y(1) - y(2));
    end

    % function out = RiddersMethod(ACB)
    %     out = ACB(2) + (ACB(2) - ACB(1))*(sign(ACB(6) - ACB(4))*ACB(5))/sqrt(ACB(5)^2 - ACB(4)*ACB(6));
    % end

    flag = 0;
    AB = sort(AB, 'ascend');
    fAB = arrayfun(f, AB);

    FinalEstimate = LinearIntrep(AB, fAB);
    [ACB, AB, CB] = ReAdjust(FinalEstimate, [AB fAB]);
    errtrain = [abs(AB(1) - AB(2))/2];
    it = 1;
    value(1) = abs(AB(2) - AB(1));

    while (errtrain(end) > tol)

        PreviousEstimate = FinalEstimate;

        if it == iterationlimit
            fprintf('Cannot find a solution within %d iterations. ', iterationlimit);
            fprintf('You can override this limit by passing in the ');
            fprintf('optional fourth argument.');
            break
        end

        di = difference(ACB);

        if (di(1) == 0 || di(2) == 0)
            FinalEstimate = bisection(AB); flag = 4;
        else
            FinalEstimate = LinearIntrep(CB(1:2), CB(3:4)); flag = 1; % Secant Method
            if NotInRange(FinalEstimate, [ACB(1) ACB(3)])
                FinalEstimate = InverseQuadInterp(ACB(1:3), ACB(4:6)); flag = 2; % InverseQuad
                if NotInRange(FinalEstimate, [ACB(1) ACB(3)])
                    FinalEstimate = ModifiedNewtonRaphson(ACB(1:3)); flag = 3;
                    if (NotInRange(FinalEstimate, AB(1:2)))
                        FinalEstimate = bisection(AB(1:2)); flag = 4;
                    end
                end
            end
        end

        [ACB, AB, CB] = ReAdjust(FinalEstimate, AB);

        errtrain = [errtrain abs(FinalEstimate - PreviousEstimate)];

        value(it) = abs(AB(2) - AB(1));
        it = it + 1;

        if (Methodflag == true)
            fprintf('Used %s Method for iteration step %d. \n', Method{flag}, it);
        end

        if (errortypeflag(1))
            fprintf('Absolute Error: %0.15e \n', errtrain(end));
        end
        if (errortypeflag(2))
            fprintf('Relative Error: %0.15e.\n', abs(errtrain(end))/FinalEstimate);
        end
    end

    if (helpflag == true)
        fprintf('\n My Root Finder gives %0.12f as best root approximation to ', FinalEstimate);
        fprintf('tolerance of %4e.\n', tol);
        fprintf(' The number of iterations required is %d.\n', it);
        fprintf('\n \nYou can make the function show detailed information for each step.\n');
        if (errortypeflag ~= true)
            fprintf('For error information pass in ''true'' as the 5th argument.\n');
            fprintf('\nFor absolute and relative error pass in a matrix of ');
            fprintf('boolean values for the 5th argument \n');
            fprintf('to specify which you would like to see, in that order.\n');
        end
        if (Methodflag ~= true)
            fprintf('\nFor the type of root finder used pass in ''true'' as the 6th argument\n');
        end
    end
end
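A minimal illustrative call (my own test function, not from the worksheet), finding the root of x^2 - 2 on [1, 2] with the same argument pattern as the table captions below:

    [sol, it] = myrootfinder(@(x) x.^2 - 2, [1 2], 10^-12, 100, [0 0], 0);
    % sol is approximately 1.414213562373, it is the number of iterations used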
7. (a)

10000000000x^4 - 1111000000x^3 + 11211000x^2 - 11110x + e^{-|x|/1000} = 0    (26)

Substituting x = y/1000 in equation (26):

\frac{y^4}{100} - \frac{1111y^3}{1000} + \frac{11211y^2}{1000} - \frac{1111y}{100} + e^{-|y|/1000000} = 0    (27)
a        b         Iterations  Solution (y)      x
0.0000   0.2000    7           0.099999988765    9.9999988765e-05
0.8000   1.1000    7           1.000000124704    0.0010000001247
9.5000   10.5000   7           9.999999875297    0.0099999998753
98.0000  102.0000  7           100.000000011234  0.100000000011
Table 2: [sol ite] = arrayfun(@(y) myrootfinder(@sixb1,b1(y,:),(10^-12),100,[0 0],0),[1:size(b1,1)]);
(b)

10xe^{10x^2}\cot(x) - 11 = 0    (28)

Dividing equation (28) by e^{10x^2}, noting 0 < x < 25:

10x\cot(x) - 11e^{-10x^2} = 0    (29)
a        b        Iterations  Solution
1.5600   1.5800   5           1.570796326781
4.6000   5.0000   6           4.712388980385
7.8000   8.0000   5           7.853981633974
10.9900  11.0000  4           10.995574287564
13.8000  14.5000  6           14.137166941154
17.0000  17.5000  6           17.278759594744
20.4000  20.4400  5           20.420352248334
23.0000  24.0000  5           23.561944901923
Table 3: [sol ite] = arrayfun(@(y) myrootfinder(@sixb2,b2(y,:),(10^-12),100,[0 0],0),[1:size(b2,1)]);
(c)

(x - 1)^2\sqrt{x + 3}\,\sin(2x) + 2|x| - 3 = 0    (30)
a        b        Iterations  Solution
3.0000   4.0000   9           3.703852168886
0.1000   0.3000   9           0.228294191409
Table 4: [sol ite] = arrayfun(@(y) myrootfinder(@sixb3,b3(y,:),(10^-12),100,[0 0],0),[1:size(b3,1)]);
(d) Setting the derivative of g(\lambda) = \frac{1}{\lambda^5\left(e^{1/\lambda} - 1\right)} to zero:

\frac{d}{d\lambda}\left[\frac{1}{\lambda^5\left(e^{1/\lambda} - 1\right)}\right] = 0    (31)

-\frac{5}{\lambda^6\left(e^{1/\lambda} - 1\right)} + \frac{e^{1/\lambda}}{\lambda^7\left(e^{1/\lambda} - 1\right)^2} = 0    (32)

-5 + \frac{e^{1/\lambda}}{\lambda\left(e^{1/\lambda} - 1\right)} = 0    (33)

5\lambda + (1 - 5\lambda)e^{1/\lambda} = 0    (34)
a        b        Iterations  Solution        g(\lambda)_max
0.1800   0.2200   9           0.201405235273  21.2014356605499
Table 5: [sol ite] = arrayfun(@(y) myrootfinder(@sixb4,b4(y,:),(10^-12),100,[0 0],0),[1:size(b4,1)]);
8.

m\frac{d^2y}{dt^2} + c\frac{dy}{dt} + ky = F(t)    (35)

Assuming y = x_0 and x_1 = \frac{dy}{dt}, substituting into equation (35) and rearranging gives:

\frac{dx_1}{dt} = \frac{1}{m}\left[F(t) - cx_1 - kx_0\right]    (36)
function seven()

    m1 = 75*10^(-3);
    k = 23.6;
    c = 100;
    m2 = 45*10^(-3);
    g = 9.81;

    function out = value(t, x)

        if t < 1
            m = m1;
        else
            m = m1 + m2;
        end

        d1 = x(2);

        % x(1) is the displacement y and x(2) is the velocity dy/dt, so by
        % equation (36): y'' = (1/m)(F(t) - c*y' - k*y), with F(t) = m*g here.
        d2 = (1/m)*(m*g - c*x(2) - k*x(1));

        out = [d1; d2];
    end

    [x, y] = ode45(@value, [0 5], [0 0]);

    myplot(x, y(:,1), 'time', 'Displacement', x, y(:,2));

end
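Figures 9-11 below could be reproduced by sweeping the damping value; a sketch under the assumption that seven() is modified to accept c as a parameter (the listing above hard-codes c = 100):

    for c = [5.2 15.2 100.2]   % damping values used in Figures 9-11
        seven(c);              % hypothetical variant of seven() taking the damping coefficient
    end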
Figure 9: Green is velocity while pink is displacement, c = 5.2
Figure 10: Green is velocity while pink is displacement, c = 15.2
Figure 11: Green is velocity while pink is displacement, c = 100.2
Increasing the damping causes more rapid acceleration, while the displacement is reduced with more damping.