
We have two objectives for this video: one is to show you how to draw a hand
analysis schedule for rate monotonic theory, and the other is to show you, by
example, how the rate monotonic policy works. In subsequent courses and video
segments we'll show you that it's also optimal, but right now we just want
to show you how it works. Remember that when
we have services, in this case we
have three, S1, S2, S3, we need to be given the T, C, and D for each one of those.
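As a sketch of this setup: the slide's numbers aren't read out in the transcript, but the figures quoted later (73.33 percent utilization, 26.67 percent slack, an LCM of 30) are consistent with assuming T = (2, 10, 15) and C = (1, 1, 2). A minimal Python sketch under that assumption:

```python
# Assumed example values (not stated explicitly in the transcript, but
# consistent with the 73.33 percent utilization and LCM of 30 quoted later);
# D = T for every service.
services = {"S1": {"T": 2, "C": 1},
            "S2": {"T": 10, "C": 1},
            "S3": {"T": 15, "C": 2}}

# Rate monotonic priority: highest frequency (shortest period) first.
by_priority = sorted(services, key=lambda s: services[s]["T"])
print("RM priority order:", by_priority)  # ['S1', 'S2', 'S3']

# Total utilization is the sum of C/T over all services.
U = sum(s["C"] / s["T"] for s in services.values())

# Rate monotonic least upper bound for n services: n * (2**(1/n) - 1).
n = len(services)
lub = n * (2 ** (1 / n) - 1)

print(f"U     = {U * 100:.2f} percent")        # 73.33 percent
print(f"LUB   = {lub * 100:.2f} percent")      # about 77.98 percent
print(f"slack = {(1 - U) * 100:.2f} percent")  # 26.67 percent
```

The computed utilization sits below the three-service least upper bound of roughly 77.98 percent, which is the margin the lecture goes on to describe.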
We can assume D equals
T. So we're given periods and we're given the
computational requirements, the worst-case execution
time, C1, C2, C3. From that, we can compute each frequency as 1 over the
period. The fundamental frequency corresponds to the longest period, and the
higher frequencies can be expressed as multiples of the fundamental. The
highest multiple of the fundamental gets the highest priority: here that is
S1, followed by S2, followed by S3, which is the fundamental. Notice also that
the higher frequencies are not whole-number multiples of the fundamental;
therefore the services are not harmonic. We can compute the
utility for each one, which is simply C over T, and we see that we use
73.33 percent here. That is less than the rate monotonic least upper bound for
three services, and also far less than 100 percent of the available CPU
resource. Therefore, we have 26.67 percent of the CPU as slack time; you can
consider slack time to be margin relative to 100 percent. You can see that we
even have some margin relative to the
least upper bound. Now let's show what happens when we apply the rate
monotonic policy, which simply says the highest frequency service gets the
highest priority. Services are dispatched according to priority out of the
ready queue, and they run until they complete or until they are interfered
with, preempted, by a higher-priority service becoming ready. So let's see how
that works. First of all, I should point out some
features of
the timing diagram. The red line is what we
call the critical instant. From a worst-case perspective, we assume that all three
services might be ready and wanting the CPU at
the beginning of time, at the critical instant. That would be a worst-case. If
that's the case, we know S1 will win because
it's the highest priority, and we let it run to completion. Well, it only needs
one unit of time, so it's done in
this first window. Then it yields the CPU
when it's done and it doesn't want it again
until time window 3. S2 can run and it only
needs one unit of time, so it therefore completes. S1 runs again. It doesn't want
the CPU
after it completes there, and S2 is done and hasn't
been re-requested. It doesn't get re-requested
until time window 11. S3 can run, but it can't run to completion, because S1
becomes ready again, and according to rate monotonic policy it should preempt
S3, and it does. S3 can then finally complete, split into two parts, finishing
during time window 6, and then S1 runs again. In fact, you see when
we're drawing these, we can immediately draw S1; that's the easy part, because
in every S1 period we just draw in what it uses right away, at the left-hand
side of its request period. That leaves holes of unused CPU, which we then
fill in with S2. Then whatever S2 doesn't use is available for S3, and
whatever S3 doesn't use becomes slack time, or margin, where nobody needs the
CPU. At that point in
time the scheduler would spin, so-called idle; it would just poll the ready
queue for work, and there wouldn't be any, so it would just continue to spin
and look for work, not finding any. Interestingly enough, over the longest
period, which is what [inaudible] said is necessary to test the exact
feasibility of a rate monotonic schedule, we see that we have three unused
time windows out of 15. That would indicate that 20 percent is unused. So why
is utility not 80 percent? The other thing that's
true
about exact analysis is that we actually need to look over the LCM to fully
explain the schedule; not to determine whether it's feasible, but to fully
describe it. So if we scroll to the right and continue to fill this in as we
did before, out to 30, which is the LCM of the three periods 2, 10, and 15,
then we see that we just
fill in S1 as before, right at the beginning of every request period that it has.
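The fill-in procedure just described can be checked with a small preemptive fixed-priority simulation. Again, T = (2, 10, 15) and C = (1, 1, 2) are an assumption, not read out in the transcript, but they reproduce every figure quoted:

```python
from math import gcd
from functools import reduce

# Assumed example set: T = (2, 10, 15), C = (1, 1, 2), D = T,
# indexed in rate monotonic priority order (shortest period first).
T = [2, 10, 15]
C = [1, 1, 2]

lcm = reduce(lambda a, b: a * b // gcd(a, b), T)  # 30

# Simulate one time window at a time, as if we were the scheduler.
remaining = [0, 0, 0]
timeline = []
for t in range(lcm):
    for i in range(3):
        if t % T[i] == 0:          # request arrives at the start of each period
            remaining[i] = C[i]
    for i in range(3):             # dispatch the highest-priority ready service
        if remaining[i] > 0:
            remaining[i] -= 1
            timeline.append(f"S{i+1}")
            break
    else:
        timeline.append(".")       # no service ready: a slack (idle) window

print(" ".join(timeline))
slack = timeline.count(".")
print(f"{slack} slack windows out of {lcm}")  # 8 slack windows out of 30
```

The simulation shows S3 preempted and split into two parts early on, S2 waiting behind S1 at window 21, three idle windows in the first 15, and eight idle windows over the full LCM of 30, which is the 26.67 percent slack.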
We fill in either S2 or S3 whenever S1 isn't using the CPU. In fact, here the
only reason we don't have S2 scheduled is because it already got done earlier,
back here. So we let S3 use the CPU, it gets preempted, we let S3 finish, and
we schedule S1. Nobody wants the CPU here, so we have more slack. We schedule
S1. We have S2 re-requested here at 21, but it has to wait for S1 to complete,
and then it executes. So we have one, two, three, four, five more units of
slack. In total we have eight time windows of slack over 30, which explains
the 26.67 percent slack overall. So we've seen a couple
of things here. We've seen how to draw a schedule by hand; we'll see how to
automate that in the future, so that if we had a lot of services, say 20 or 30,
we wouldn't have to do this by hand. It would be untenable for human analysis
if we had a large number of services. We have also seen how the rate monotonic
policy works: highest priority is given to the highest frequency service in
the set of services that share a CPU in the AMP architecture. We have
essentially done that by pretending that we're the scheduler, that we're the
dispatcher. We see the preemptions; we see interference by one service with
another, namely interference by a higher-priority service with a
lower-priority service. We also see run to completion. That concludes this
basic method of analysis and explanation of the rate monotonic
policy by example.
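One more sketch of why automation matters for those larger service counts: the rate monotonic least upper bound formula, n * (2**(1/n) - 1), is standard rate monotonic theory rather than something stated in this lecture, and it tightens toward ln(2), about 69.3 percent, as the number of services grows:

```python
from math import log

# Rate monotonic least upper bound for n services: n * (2**(1/n) - 1).
# For 20 or 30 services, the bound is already near its ln(2) limit.
for n in (1, 2, 3, 10, 20, 30):
    lub = n * (2 ** (1 / n) - 1)
    print(f"n = {n:2d}: LUB = {lub * 100:.2f} percent")

print(f"limit as n grows: ln(2) = {log(2) * 100:.2f} percent")  # 69.31 percent
```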
