
ENSC 483 Modern Control Systems Final Project Inverted Pendulum

Gabor Bernat 301134451

Kotai Adam


System Description

The Model 505 inverted pendulum developed by ECP (Educational Control Products) has the following construction:

The system has two degrees of freedom: one is the rotation of the pendulum itself, and the other is the mass at the top of the pendulum. During modeling we can replace these masses with single point masses at their centers of gravity. We denote these points on the model by cg2 for the pendulum and cg1 for the mass at the top.

The Nonlinear Model of the System

For the modeling of the system we use the energetic approach and the Lagrange equations, which have the form

d/dt(∂T/∂q̇_i) − ∂T/∂q_i + ∂V/∂q_i = Q_i

In the above equation, T is the system's kinetic energy and V is its potential energy. Q_i is the generalized force associated with the coordinate q_i with respect to which we differentiate. We write the kinetic energy as the sum of the translational and rotational energies of the masses.
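The kinetic energy expression itself did not survive in this copy; a plausible reconstruction from the surrounding description (assuming both bodies rotate with the single pendulum angle θ) is:

```latex
T = \frac{1}{2} m_1 v_{cg1}^2 + \frac{1}{2} J_1 \dot{\theta}^2
  + \frac{1}{2} m_2 v_{cg2}^2 + \frac{1}{2} J_2 \dot{\theta}^2
```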

Vcg1 and Vcg2 are the velocities of the masses at their centers of gravity, and J1 and J2 are the polar moments of inertia about those same points. Using some simple geometric relations, we can decompose the velocities as:

Using the coordinate system spanned by the vectors u1 and u2, we project Vcg1, and by the generalized Pythagorean theorem (the law of cosines) we square the expression, obtaining:

Furthermore, we know that:

We substitute these into the expression for the kinetic energy:

where J0 denotes the system's moment of inertia about O.

To find the potential energy, we define the reference point at the desired equilibrium of the system (that is, the pendulum angle equals 90 degrees and x is zero). We know that:

and we can calculate the height of each center of gravity (depending on the relevant angles) using basic trigonometric relations:

We choose the length x and the pendulum angle as the coordinates in our Lagrange equations. Carrying out the differentiation, we arrive at the following two equations, which describe the system in nonlinear form:

The Linearized Model of the System

For the linearization of the system we first have to find an equilibrium point about which to linearize. For a motionless system it is true that:

Furthermore, if we apply no input to the system (F(t) = 0), the nonlinear equations give:

An additional solution exists where the angle equals 180 degrees; however, it is not discussed here, since at that point the pendulum is not inverted. The linearization of the system is obtained by expanding the equations around the equilibrium point using a Taylor series.
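The first-order Taylor expansion can be sketched numerically. The following is a minimal Python illustration, not the report's actual equations: the dynamics function f and the expansion point are illustrative assumptions. It computes the first-order term, i.e. the Jacobian, by finite differences.

```python
import numpy as np

def jacobian(f, x0, eps=1e-6):
    """Finite-difference Jacobian of f at x0 (the first-order Taylor term)."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(f(x0))
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x0 + dx)) - f0) / eps
    return J

# Illustrative pendulum-like dynamics f(theta, omega) = (omega, sin(theta)).
# Linearizing about theta = pi/2 (the 90-degree convention used in the
# report) makes the derivative of sin(theta) vanish at the equilibrium.
f = lambda x: np.array([x[1], np.sin(x[0])])
A_lin = jacobian(f, [np.pi / 2, 0.0])
```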


State Space Model

On the linearized mathematical model we can use Cramer's rule to solve the linear equations and thus isolate the second-order derivatives:


Furthermore we know that the state space model of a system has the form of:

We define the following state variables:

Using system (1), we obtain the following parameters for the state space model:


and Ci = 1 when xi is an output and zero otherwise. In their expanded forms:
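As a cross-check, the linearized model can be assembled numerically. The Python sketch below is an editorial illustration, not part of the original MATLAB workflow: the numeric entries are the A21, A23, A41, A43, B2, B4 values that the report's real-time code initializes, and the zero/one pattern follows from the state definitions x1 = x, x2 = x', x3 = angle, x4 = angle rate.

```python
import numpy as np

# Linearized model x' = A x + B u, y = C x, with numeric entries taken
# from the initialization section of the report's real-time code.
A = np.array([
    [0.0,      1.0, 0.0,      0.0],
    [-17.8032, 0.0, 49.7223,  0.0],
    [0.0,      0.0, 0.0,      1.0],
    [15.685,   0.0, -16.4084, 0.0],
])
B = np.array([[0.0], [-7.85266], [0.0], [4.69484]])
# Output the two measurable states, x1 and x3.
C = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

# Open-loop poles are the eigenvalues of A.
poles = np.linalg.eigvals(A)
print(np.sort_complex(poles))
```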

Stability, Controllability, Observability

To find these properties of the system we derive the transfer functions. We transform the second equation of system (1) into the Laplace domain and get:

Doing the same with the first equation, we can substitute and obtain:

Multiplying the two equations above, we obtain the position of the sliding rod as a function of the force applied to it. Since the input is the applied force and the output can be taken as the position of the rod, this is the transfer function of the system when C3 is one and the other Ci are zero.

We can calculate the poles by setting the denominator of the transfer function to zero, or simply by computing the eigenvalues of the matrix A. From this we get that the poles are:

Using the physical limits of the system, we can prove that there will always be two poles on the imaginary axis. Furthermore, we have one additional pole in the right half-plane and one in the left half-plane. Because one eigenvalue lies in the right half-plane, the system is unstable.

The observability matrix of the system is:

By trying out the possible values for the matrix C, we can conclude that the system is observable if we set as output at least one state from the (x, ẋ) pair and one from the (θ, θ̇) pair. Setting only one of the states as output means we can observe only that state and its derivative. Furthermore, if we set both states of either pair as outputs, we can observe three of the four states. The controllability matrix has the form:

We can see that, for these parameter values, (A^i)·B for any i greater than zero results in linearly dependent columns; therefore the rank of the controllability matrix is always two. This means we can control only two of the four states at any time, so the system is not controllable. For instance, let us examine the system with the following parameters taken from the manual: m1 = 0.213; m2 = 1.785; g = 9.81; lc = -0.02984;


for which we get the eigenvalues −3.078, +3.078, +2.96566j, −2.96566j. The controllability matrix has the form:

and its rank is two, meaning the system is not controllable. We set x and θ as outputs. The observability matrix has rank four, meaning the system is observable:
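The rank tests above can be reproduced with small helper routines. The Python sketch below builds the controllability and observability matrices generically; the double-integrator test system is an illustrative assumption, not the pendulum model.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Toy check on a double integrator: controllable from its single input,
# observable from a position-only measurement.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
rank_c = np.linalg.matrix_rank(ctrb(A, B))
rank_o = np.linalg.matrix_rank(obsv(A, C))
```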

The State Feedback System

We use the following system parameter values:

Parameter    Numerical Value
m1           0.213 kg
m2           1.785 kg
J*           0.042024 kg·m²
kx           50525 incr./m
ks           32 incr.
lc           -0.0427255 m
lo           0.33000 m
Joe          0.0652197 kg·m²
ka           2342.76 incr./rad
kf           0.013 N/DAC incr.

With these values, the system matrices (A, B) become:

We choose as the system's outputs the two measurable states, x1 and x3. The system's open-loop poles are located at +6.7113i, −6.7113i, −3.2908, and +3.2908. With these parameters the controllability matrix has full rank, so the system is controllable and we can create a state feedback controller for it. For calculating the feedback gains (the parameter K) we use MATLAB's built-in function place.

%State feedback poles
p1p = -10+3i; p2p = -10-3i; p3p = -10; p4p = -9;
%State feedback system
K = place(A,B,[p1p p2p p3p p4p])
We have chosen these values because they lie well to the left of the open-loop poles, so the closed loop stabilizes quickly enough. The obtained values were:

K = [388.0281  109.4368  765.0615  191.3526]
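These gains can be sanity-checked against the linearized model. The Python sketch below is an editorial cross-check, not part of the original design flow: it forms the closed-loop matrix A − BK using the numeric A and B values initialized in the report's real-time code, and verifies that its eigenvalues land near the chosen design poles.

```python
import numpy as np

# A and B as initialized in the report's real-time code.
A = np.array([[0, 1, 0, 0],
              [-17.8032, 0, 49.7223, 0],
              [0, 0, 0, 1],
              [15.685, 0, -16.4084, 0]], dtype=float)
B = np.array([[0.0], [-7.85266], [0.0], [4.69484]])
# Feedback gains reported by place().
K = np.array([[388.0281, 109.4368, 765.0615, 191.3526]])

# Under u = -K x the closed-loop dynamics are x' = (A - B K) x;
# the eigenvalues should sit near -10+3i, -10-3i, -10, -9.
cl_poles = np.linalg.eigvals(A - B @ K)
print(np.sort_complex(cl_poles))
```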

The Observer Design

The system has two state variables that we cannot measure, or for which the measuring device would be too expensive: the derivatives of the position and of the angle (x2 and x4). To estimate these, we construct an observer for our state feedback system. Because we proved during the course that the state feedback has no effect on the observer (the separation principle), we can continue to work with our initial system matrices.

Calculating the observer gain L is analogous to calculating the state feedback gain, with the difference that the observer system is the dual of the state feedback system, so we work with the transposes of the matrices A and C. The following MATLAB code snippet handles this:

%Observer Poles
p1 = -25; p2 = -22; p3 = -20; p4 = -18;
%Observer
L = place(A',C',[p1 p2 p3 p4])'
We use these poles because they are roughly twice as far into the left half-plane as the controller poles, so we expect the observer to converge about twice as fast as the controller. This way the state feedback always receives up-to-date state estimates. The calculation yielded the following result:

L =
   42.4011    2.5763
  428.2099  103.2047
    2.4228   42.5989
   66.1853  433.5804
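As with the controller gains, the observer gains can be cross-checked numerically. The Python sketch below is an editorial check; C is assumed to select the measured states x1 and x3. It verifies that the eigenvalues of A − LC sit near the chosen observer poles.

```python
import numpy as np

# A as initialized in the report's real-time code.
A = np.array([[0, 1, 0, 0],
              [-17.8032, 0, 49.7223, 0],
              [0, 0, 0, 1],
              [15.685, 0, -16.4084, 0]], dtype=float)
# Assumed output matrix: measure x1 (position) and x3 (angle).
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
# Observer gains reported by place() on the dual system.
L = np.array([[42.4011, 2.5763],
              [428.2099, 103.2047],
              [2.4228, 42.5989],
              [66.1853, 433.5804]])

# Observer error dynamics e' = (A - L C) e; eigenvalues should sit
# near the chosen observer poles -25, -22, -20, -18.
obs_poles = np.linalg.eigvals(A - L @ C)
print(np.sort_complex(obs_poles))
```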

Finally, we can put the two systems together and simulate them to verify that the design works. For the simulation, we use a step input of magnitude 0.2.

T = 0:0.01:1.5;
U = 0.2*ones(size(T));
[Y,X] = lsim(Ae,Be,Ce,De,U,T);
plot(T,Y)
legend('Controller','Observer')

In the figure above, we can see that the observer is (approximately) twice as fast as the controller, so the chosen poles should be adequate. At runtime, we use the Runge-Kutta method to solve the observer's differential equations. For controlling the system, we use the following algorithm written for the ECP 505.
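The classical fourth-order Runge-Kutta step has the generic form sketched below in Python, mirroring the k1…k4 structure of the real-time code; the test system x' = −x is an illustrative assumption, not the observer dynamics.

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical 4th-order Runge-Kutta step for x' = f(x)."""
    k1 = f(x)
    k2 = f(x + h / 2 * k1)
    k3 = f(x + h / 2 * k2)
    k4 = f(x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on x' = -x, whose exact solution is exp(-t).
x = np.array([1.0])
h = 0.001768            # the servo period used in the real-time code
for _ in range(1000):   # integrate up to t = 1.768 s
    x = rk4_step(lambda s: -s, x, h)
```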

;Gabor Bernat and Kotai Adam
;***********define user variables**************
;ECP505 Gains of the System
#define h  q23
#define a1 q24
#define kf q25
#define kx q26
#define ka q27
#define ks q28
#define y1 q29
#define y3 q30
;Matrix A
#define A21 q1
#define A23 q2
#define A41 q3
#define A43 q4
;Matrix B
#define B2 q5
#define B4 q6
;Observer Gains
#define L11 q7
#define L13 q8
#define L21 q9
#define L23 q10
#define L31 q11
#define L33 q12
#define L41 q13
#define L43 q14
;Feedback Gains
#define K1 q15
#define K2 q16
#define K3 q17
#define K4 q18
;Observer Variables
#define x1 q19
#define x2 q20
#define x3 q21
#define x4 q22
;Runge-Kutta Numerical Differentiation Variables
#define rk11 q31
#define rk12 q32
#define rk13 q33
#define rk14 q34
#define rk21 q35
#define rk22 q36
#define rk23 q37
#define rk24 q38
#define rk31 q39
#define rk32 q40
#define rk33 q41
#define rk34 q42
#define rk41 q43
#define rk42 q44
#define rk43 q45
#define rk44 q46

;************Initialize variables****************
A21 = -17.8032
A23 = 49.7223
A41 = 15.685
A43 = -16.4084
B2 = -7.85266
B4 = 4.69484
; Controller Gains with poles at -10+3i, -10-3i, -10, -9
K1 = 388.0281
K2 = 109.4368
K3 = 765.0615
K4 = 191.3526
; Observer Poles at -25, -22, -20, -18
; Observer Gains
L11 = 42.4011
L13 = 2.5763
L21 = 428.2099
L23 = 103.2047
L31 = 2.4228
L33 = 42.5989
L41 = 66.1853
L43 = 433.5804
; Initial state of the observer
x1 = 0
x2 = 0
x3 = 0
x4 = 0
h = 0.001768
a1 = 0.38
kf = 0.0013
kx = 50200
ka = 2546
ks = 32
y1 = 0
y3 = 0
;********* real time code which is run every servo period ***
begin
control_effort = (-K1*x1 - K2*x2 - K3*x3 - K4*x4)*(a1/kf)
y1 = 10*(enc1_pos)/(ks*ka)
y3 = 10*(enc2_pos)/(ks*kx)
rk11 = (-L11)*x1 + x2 + (-L13)*x3 + L11*y1 + L13*y3
rk12 = (-L11)*(x1+h/2*rk11) + (x2+h/2*rk11) + (-L13)*(x3+h/2*rk11) + L11*y1 + L13*y3
rk13 = (-L11)*(x1+h/2*rk12) + (x2+h/2*rk12) + (-L13)*(x3+h/2*rk12) + L11*y1 + L13*y3
rk14 = (-L11)*(x1+h*rk13) + (x2+h*rk13) + (-L13)*(x3+h*rk13) + L11*y1 + L13*y3

x1 = x1 + h/6*(rk11 + 2*rk12 + 2*rk13 + rk14)
rk21 = (A21-L21)*x1 + (A23-L23)*x3 + B2*control_effort*(kf/a1) + L21*y1 + L23*y3
rk22 = (A21-L21)*(x1+h/2*rk21) + (A23-L23)*(x3+h/2*rk21) + B2*control_effort*(kf/a1) + L21*y1 + L23*y3
rk23 = (A21-L21)*(x1+h/2*rk22) + (A23-L23)*(x3+h/2*rk22) + B2*control_effort*(kf/a1) + L21*y1 + L23*y3
rk24 = (A21-L21)*(x1+h*rk23) + (A23-L23)*(x3+h*rk23) + B2*control_effort*(kf/a1) + L21*y1 + L23*y3
x2 = x2 + h/6*(rk21 + 2*rk22 + 2*rk23 + rk24)
rk31 = (-L31)*x1 + (-L33)*x3 + x4 + L31*y1 + L33*y3
rk32 = (-L31)*(x1+h/2*rk31) + (-L33)*(x3+h/2*rk31) + (x4+h/2*rk31) + L31*y1 + L33*y3
rk33 = (-L31)*(x1+h/2*rk32) + (-L33)*(x3+h/2*rk32) + (x4+h/2*rk32) + L31*y1 + L33*y3
rk34 = (-L31)*(x1+h*rk33) + (-L33)*(x3+h*rk33) + (x4+h*rk33) + L31*y1 + L33*y3
x3 = x3 + h/6*(rk31 + 2*rk32 + 2*rk33 + rk34)
rk41 = (A41-L41)*x1 + (A43-L43)*x3 + B4*control_effort*(kf/a1) + L41*y1 + L43*y3
rk42 = (A41-L41)*(x1+h/2*rk41) + (A43-L43)*(x3+h/2*rk41) + B4*control_effort*(kf/a1) + L41*y1 + L43*y3
rk43 = (A41-L41)*(x1+h/2*rk42) + (A43-L43)*(x3+h/2*rk42) + B4*control_effort*(kf/a1) + L41*y1 + L43*y3
rk44 = (A41-L41)*(x1+h*rk43) + (A43-L43)*(x3+h*rk43) + B4*control_effort*(kf/a1) + L41*y1 + L43*y3
x4 = x4 + h/6*(rk41 + 2*rk42 + 2*rk43 + rk44)
end