
UNIT 5

REAL TIME OPERATING SYSTEM (RTOS) BASED DESIGN-1

10.1 BASICS OF OS
The OS acts as a bridge between user applications/tasks and the underlying
system resources
Primary functions of an OS:
Make the system convenient to use
Organize and manage the system resources efficiently and correctly
[Figure: Operating system architecture]

10.1.1 The Kernel


Core of the operating system
Responsible for managing system resources and communication
between hardware and other services
Kernel = System Libraries + Services
Process management:
Setting up the memory space for a process,
loading the process's code into the memory space,
allocating system resources,
scheduling and managing the execution of processes,
setting up and managing the PCB (Process Control Block),
IPC (Inter-Process Communication) and synchronization,
process termination and deletion

10.1.1 The Kernel


Primary Memory management:
RAM holds processes, variables and shared data
The memory management service of the kernel (working with the hardware
Memory Management Unit, MMU) is responsible for:
Keeping track of memory usage
Allocating and deallocating memory space

File system management:


File is a collection of related information
This service of kernel is responsible for :

Creation, deletion and alteration of files


Creation, deletion and alteration of directories
Saving of files in secondary storage memory
Providing automatic allocation of file space
Providing a flexible naming convention for the files

10.1.1 The Kernel


I/O System (Device) management:
Routing I/O requests coming from different user applications to the
appropriate I/O devices of the system
Access to I/O devices is provided via APIs
The kernel maintains the list of all I/O devices of the system (Device
Manager)
Device Drivers: software components through which the kernel interacts
with the I/O devices
The Device Manager is responsible for:
Loading and unloading device drivers
Exchanging information and system-specific control signals to and from
devices

10.1.1 The Kernel


Secondary Storage management:
Managing secondary storage memory devices
Backup medium for programs and data
Responsibilities:
Disk storage allocation
Disk scheduling
Free disk space management

Protection systems:
Handling multiple users with different levels of access permissions
Implementing security policies
Interrupt handler: kernel service for handling interrupts

10.1.1.1 The Kernel Space and User Space

Kernel space: the memory space where the kernel code is located
User space: the memory space where user applications are located
Virtual memory is implemented using the demand paging technique
Swapping: the act of loading code into and out of main memory
It happens between main memory and secondary storage memory

10.1.1.2 Monolithic Kernel and Microkernel


Monolithic kernel:
All kernel services run in kernel space
Effective utilization of low-level features of the underlying system
Any error or failure in any kernel module can crash the entire kernel
Examples: Linux, Solaris, MS-DOS

10.1.1.2 Monolithic Kernel and Microkernel


Microkernel:
Only the essential set of OS services is included in the kernel
The rest of the OS services are implemented in programs called Servers
Examples: Mach, QNX, Minix 3

10.1.1.2 Monolithic Kernel and Microkernel


Benefits of the Microkernel approach:
Robustness:
High availability
Chances of corruption of kernel services are ideally zero, since most
services run outside the kernel

Configurability:
Services running in Servers can be restarted without restarting the whole system

10.2 Types of Operating system


General Purpose Operating System (GPOS):

Kernel is more generalized and contains all kinds of services
Responsiveness of an application may be slow and non-deterministic
Used in personal computers/desktop systems
Ex: Windows XP, MS-DOS

Real Time Operating System (RTOS):

Deterministic timing behavior

10.2 Types of Operating system


Real Time Kernel

Task/Process Management
Task/Process scheduling
Task/Process synchronization
Error/Exception handling
Memory management
Interrupt handling
Time management

Task/Process Management
The TCB (Task Control Block) contains:
1. Task ID
2. Task State
3. Task Type
4. Task Priority
5. Task Context Pointer
6. Task Memory Pointers
7. Task System Resource Pointers
8. Task Pointers (to other TCBs)

The kernel's task management service:
Creates the TCB for a task
Deletes the TCB when a task ends
Reads the TCB to get the state of a task
Updates the TCB
Modifies the TCB, e.g., to change the priority of a task

Task/Process Scheduling
Task/Process Synchronization
Error/Exception Handling
Errors that occur during the execution of tasks:
Insufficient memory, timeouts, deadlocks, deadline misses, bus errors, divide by
zero, etc.
Kernel-level exception: deadlock
Task-level exception: timeout
Example API: GetLastError() in the Windows CE RTOS

Memory Management

BLOCK based allocation


Free Buffer queue

Interrupt Handling:

Interrupts are Synchronous or Asynchronous
Synchronous interrupts occur in sync with the current task (e.g., divide by zero)
Asynchronous interrupts occur out of sync with the currently executing task

Hard real time:

RTOS strictly adheres to the timing constraints
Missing a deadline may produce catastrophic results (e.g., data loss)
A late answer is a wrong answer
Ex: Air bag control system & Anti-lock Braking System (ABS)
A Human In The Loop (HITL) is normally avoided, since it introduces delays

Soft real time:

No strict guarantee of meeting deadlines
A late answer is an acceptable answer, though it would have been better delivered on time
Often has a Human In The Loop (HITL)
Ex: Automatic Teller Machine (ATM)

Tasks, Process and Threads


Task: a program in execution together with all related information maintained
by the OS for the program (also called a JOB)
Process: a program, or part of it, in execution; it requires various system
resources
Structure of a process; concurrent execution of tasks

Process states and state Transition


Process Life Cycle

Created state
Ready state
Running state
Blocked state
Completed state

State Transition

Process states and state Transition


VxWorks

READY
PEND (BLOCKED)
DELAY(process is sleeping)
SUSPEND

MicroC/OS-II

DORMANT (CREATED)
READY
RUNNING
WAITING
INTERRUPTED

THREADS
The primitive that can execute code
A single sequential flow of control within a process
Also called a lightweight process
Memory organization of a process and its threads

MULTITHREADING
Better memory utilization
Speeds up the execution of the process
Better CPU utilization

THREAD STANDARDS
POSIX Threads:
POSIX: Portable Operating System Interface
Library: Pthreads (thread creation and management functions in C)

int pthread_create(pthread_t *new_thread_ID,
                   const pthread_attr_t *attribute,
                   void *(*start_function)(void *),
                   void *arguments);
int pthread_join(pthread_t new_thread, void **thread_status);

A return value of 0 indicates success

Example 1
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>
void *new_thread(void *thread_args)
{
    int i, j;
    for (j = 0; j < 5; j++)
    {
        printf("Hello, I am the new thread\n");
        for (i = 0; i < 10000; i++);
    }
    return NULL;
}

//sleep(), delay()

Example 1 (contd)
int main(void)
{
    int i, j;
    pthread_t tcb;
    if (pthread_create(&tcb, NULL, new_thread, NULL))
    {
        printf("Error in creating new thread\n");
        return -1;
    }
    for (j = 0; j < 5; j++)
    {
        printf("Hello, I am in the main thread\n");
        for (i = 0; i < 10000; i++);
    }

Example 1 (contd)
    if (pthread_join(tcb, NULL))
    {
        printf("Error in thread join\n");
        return -1;
    }
    return 0;
}

Thread Termination:
1. Natural termination (return or pthread_exit())
2. Forced termination (pthread_cancel())

Win 32 Threads

Threads supported by the Windows operating system
Library: Win32 API

HANDLE CreateThread(LPSECURITY_ATTRIBUTES lpThreadAttributes,
                    DWORD dwStackSize,
                    LPTHREAD_START_ROUTINE lpStartAddress,
                    LPVOID lpParameter,
                    DWORD dwCreationFlags,
                    LPDWORD lpThreadId);

Win 32 Threads

HANDLE GetCurrentThread(void)
DWORD GetCurrentThreadId(void)
int GetThreadPriority(HANDLE hThread)
BOOL SetThreadPriority(HANDLE hThread, int nPriority)

Example
#include <windows.h>
#include <stdio.h>
//Child Thread
void ChildThread(void)
{
    char i;
    for (i = 0; i <= 10; ++i)
    {
        printf("Executing Child Thread: Counter = %d\n", i);
        Sleep(500);
    }
}

Example (contd)
//Primary Thread
int main(int argc, char *argv[])
{
    HANDLE hThread;
    DWORD dwThreadID;
    char i;
    hThread = CreateThread(NULL, 1000,
        (LPTHREAD_START_ROUTINE)ChildThread, NULL, 0, &dwThreadID);
    if (hThread == NULL)
    {
        printf("Thread creation failed\nError no: %d\n", GetLastError());
        return 1;
    }

Example (contd)
//Primary Thread
    for (i = 0; i <= 10; i++)
    {
        printf("Executing main thread: Counter = %d\n", i);
        Sleep(500);
    }
    return 0;
}

Java Threads

Threads supported by java programming language


Package java.lang

import java.lang.*;
public class MyThread extends Thread
{
    public void run()
    {
        System.out.println("Hello from mythread\n");
    }
    public static void main(String args[])
    {
        (new MyThread()).start(); //moves the thread to the Ready state
    }
}

Java Threads

MyThread.start()
Moves the thread to the Ready state
MyThread.yield()
Voluntarily gives up execution of the thread
Thread goes back to the Ready state
MyThread.sleep(100)
Forces the thread to sleep for the duration mentioned (in milliseconds)
Thread enters suspend mode and then comes back to the Ready state

Thread pre-emption

Preempting the currently running thread
Why preemption? To enable thread context switching

User Level Threads:
1. No kernel/OS support
2. The OS treats a process with multiple user-level threads as a single-thread
process; the process has to schedule each thread itself

Kernel Level Threads:
1. The OS treats them as separate threads
2. Scheduling is done by the OS

THREAD BINDING MODELS

Many-to-One Model:
1. Solaris Green Threads
2. GNU Portable Threads
One-to-One Model:
1. Windows XP/NT/2000
2. Linux Threads
Many-to-Many Model:
1. Windows NT/2000 (ThreadFiber package)

Multiprocessing and Multitasking

MULTIPROCESSING:
Executing multiple processes simultaneously
Multiprocessor systems have multiple CPUs

MULTIPROGRAMMING:
Multiple programs are held in memory
Uniprocessor systems switch the CPU among the processes

MULTITASKING: the ability of the CPU to hold multiple programs in memory and
switch the CPU among them
Gives the illusion of multiple tasks executing in parallel

CONTEXT SWITCHING:
The act of switching the CPU among the processes, i.e., changing the current
execution context
CONTEXT SAVING:
Saving the context details of the currently running process when the CPU
switches
CONTEXT RETRIEVAL:
The process of retrieving the saved context details of a process

Types of Multitasking
CO-OPERATIVE MULTITASKING:
The currently executing process has to release the CPU voluntarily
A process can hold the CPU as long as it wants

PREEMPTIVE MULTITASKING:
Every process gets a chance to execute
When and for how much time is decided by the scheduler

NON-PREEMPTIVE MULTITASKING:
A process can execute until it terminates, OR enters the Blocked/Waiting
state waiting for an I/O or system resource

Task Scheduling

It forms the basis of multitasking
Determining which task/process has to be executed at a given point of time
Scheduler: the kernel service/application that implements the scheduling
algorithm

State transitions and the kinds of scheduling in which they occur:
Running → Ready: pre-emptive (priority-based pre-emptive)
Blocked/Waiting → Ready: pre-emptive / non-pre-emptive
Running → Blocked/Waiting: pre-emptive / non-pre-emptive / co-operative
Running → Completed

Factors to be considered during selection of a scheduling algorithm / criteria:

CPU UTILIZATION:
Measure of what percentage of the time the CPU is being utilized
It should be high

THROUGHPUT:
Number of processes executed per unit time
It should be high

TURNAROUND TIME:
Amount of time taken by a process to complete its execution
= time waiting to get into memory + time spent in the ready queue
+ time spent on I/O + time spent in execution

Factors to be considered during selection of a scheduling algorithm / criteria:

WAITING TIME:
Amount of time spent in the Ready queue waiting for the CPU
It should be minimal

RESPONSE TIME:
Time between the submission of a process and its first response
It should be the least

Various QUEUES in the OS
JOB QUEUE:
Contains all processes in the system
READY QUEUE:
Contains all processes ready for execution and waiting for the CPU
DEVICE QUEUE:
Contains all processes waiting for an I/O device

Non-preemptive Scheduling

First Come First Served (FCFS) / FIFO scheduling
The CPU is allocated to processes in the order in which they enter the Ready queue
Similar to a ticketing reservation system
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds respectively enter the ready queue in the order P1, P2, P3.
Calculate the waiting time and turnaround time for each process, and also the
average waiting time and turnaround time (assume no I/O waiting for the processes).
Example 2
Calculate the waiting time and turnaround time for each process, and also the
average waiting time and turnaround time (assume no I/O waiting for the
processes), for the above example if the processes enter the Ready queue in the
order P2, P1, P3.

Non-preemptive Scheduling

Last Come First Served (LCFS) / LIFO scheduling
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds respectively enter the ready queue in the order P1, P2, P3.
(Assume only P1 is present in the Ready queue when the scheduler picks it up,
and P2, P3 entered the Ready queue after that.) Now a new process P4 with
estimated completion time 6 ms enters the Ready queue 5 ms after P1 is
scheduled. Calculate the waiting time and turnaround time.

Non-preemptive Scheduling

Shortest Job First (SJF) Scheduling
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds respectively enter the ready queue together. Calculate the waiting
time and turnaround time for each process, and also the average waiting time
and turnaround time.
Example 2
Calculate the waiting time and turnaround time for each process, and also the
average waiting time and turnaround time, for the above example if a new
process P4 with estimated completion time 2 ms enters the Ready queue after
2 ms of execution of P2.
Disadvantages:
Starvation
The next shortest process entering the Ready queue cannot be predicted in
advance

Non-preemptive Scheduling

Priority-based Scheduling
Ex: Windows CE (priority numbers 0-255)

Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds and priorities 0, 3, 2 (0 = highest priority, 3 = lowest priority)
respectively enter the ready queue together. Calculate the waiting time and
turnaround time for each process, and also the average waiting time and
turnaround time.
Example 2
Calculate the waiting time and turnaround time for each process, and also the
average waiting time and turnaround time, for the above example if a new
process P4 with estimated completion time 6 ms and priority 1 enters the Ready
queue after 5 ms of execution of P1.
Disadvantage:
Starvation (tackled by AGING)

Preemptive Scheduling
Preemptive SJF Scheduling / Shortest Remaining Time (SRT)
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds respectively enter the ready queue together. A new process P4
with estimated completion time 2 ms enters the Ready queue after 2 ms.
Calculate the waiting time and turnaround time for each process, and also the
average waiting time and turnaround time.

Preemptive Scheduling
Round Robin (RR) Scheduling

Gives an equal chance to all processes
The first process in the Ready queue is selected for execution
It executes for a pre-defined time (time slice)
The next process is selected when the previous process's time slice elapses
or the process completes

Round Robin (RR) Scheduling implementation for the RTX51 Tiny OS
(General task form: void func (void) _task_ task_id; RTX51 Tiny supports up to 16 tasks)

#include <rtx51tny.h>
int counter0;
int counter1;
void job0 (void) _task_ 0 {
    os_create_task (1);
    while (1) {
        counter0++;
    }
}
void job1 (void) _task_ 1 {
    while (1) {
        counter1++;
    }
}

RTX51 Tiny RR Scheduling in SMART CARD READER

Creating the tasks:
check_card_task: check presence of card
process_card_task: process data received from card and update display
check_serial_io_task: check serial port for command and data
process_serial_data_task: process data received from serial port

void check_card_task (void) _task_ 1
{
}
void process_card_task (void) _task_ 2
{
}
void check_serial_io_task (void) _task_ 3
{
}
void process_serial_data_task (void) _task_ 4
{
}

RTX51 Tiny RR Scheduling in SMART CARD READER


Scheduling of the tasks
void startup_task (void) _task_0
{
os_create_task (1);
os_create_task (2);
os_create_task (3);
os_create_task (4);
os_delete_task (0);
}

RTX51 Tiny RR Scheduling in SMART CARD READER


void check_card_task (void) _task_ 1
{
    while (1)
    {
        if (card_is_present())      /* pseudocode check */
            os_send_signal (2);
    }
}
void process_card_task (void) _task_ 2
{
    while (1)
    {
        os_wait1 (K_SIG);
    }
}

RTX51 Tiny RR Scheduling in SMART CARD READER


void check_serial_io_task (void) _task_ 3
{
    while (1)
    {
        if (data_is_received())     /* pseudocode check */
            os_send_signal (4);
    }
}
void process_serial_data_task (void) _task_ 4
{
    while (1)
    {
        os_wait1 (K_SIG);
    }
}

Preemptive Scheduling
Round Robin (RR) Scheduling
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 6, 4, 2
milliseconds respectively enter the ready queue together in the order
P1, P2, P3. Calculate the waiting time and turnaround time for each
process, and also the average waiting time and turnaround time, using
the RR algorithm with time slice = 2 ms.

Preemptive Scheduling
Priority Based Scheduling
Example 1
Three processes with IDs P1, P2, P3 and estimated completion times 10, 5, 7
milliseconds and priorities 1, 3, 2 (0 = highest priority, 3 = lowest
priority) respectively enter the ready queue together. A new process
P4 with estimated completion time 6 ms and priority 0 enters the
Ready queue after 5 ms of execution of P1. Calculate the waiting
time and turnaround time for each process, and also the average
waiting time and turnaround time.

Threads, Processes and Scheduling


Process 1
#include<windows.h>
#include<stdio.h>
void Task(void)
{
    while (1)
    {
        //Perform task; execution time = 7.5 units
        Sleep(17.5);   /* conceptual value; Sleep() actually takes whole milliseconds */
        //Repeat Task
    }
}

Process 1
void main(void)
{
    DWORD id;
    HANDLE hThread;
    // Create thread with normal priority
    hThread = CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)Task,
        (LPVOID)0, 0, &id);
    if (NULL == hThread)
    {
        printf("Creating thread Failed: Error code = %d", GetLastError());
        return;
    }
    WaitForSingleObject(hThread, INFINITE);
    return;
}

Threads, Processes and Scheduling


Process 2
#include<windows.h>
#include<stdio.h>
void Task(void)
{
    while (1)
    {
        //Perform task; execution time = 10 units
        Sleep(5);
        //Repeat Task
    }
}

Process 2
void main(void)
{
    DWORD id;
    HANDLE hThread;
    // Create thread with above normal priority
    hThread = CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)Task, (LPVOID)0,
        CREATE_SUSPENDED, &id);
    if (NULL == hThread)
    {
        printf("Creating thread Failed: Error code = %d", GetLastError());
        return;
    }

Process 2
SetThreadPriority(hThread,
THREAD_PRIORITY_ABOVE_NORMAL);
ResumeThread(hThread);
WaitForSingleObject(hThread, INFINITE);
return;
}

IDLE PROCESS (TASK)
void Idle_Process (void)
{
    //simply wait, doing nothing
    while (1);
}

Task Communication
Based on the degree of interaction:
1. Co-operating processes
2. Competing processes
Co-operation through sharing (through some shared resources)
Co-operation through communication (communicate for synchronization)
Shared Memory:
Processes write data to and read data from a shared memory region

Task Communication
Implementing Shared Memory:
1. Pipes:
Client-server architecture
Pipe server and pipe client
Unidirectional and bidirectional

Microsoft Windows supports two types of pipes:
Anonymous Pipes (unnamed and unidirectional)
Named Pipes (named; unidirectional or bidirectional)

Memory Mapped Objects

A shared memory technique

HANDLE CreateFileMapping(HANDLE hFile,
    LPSECURITY_ATTRIBUTES lpFileMappingAttributes,
    DWORD flProtect,
    DWORD dwMaximumSizeHigh,
    DWORD dwMaximumSizeLow,
    LPCTSTR lpName)

LPVOID MapViewOfFile(HANDLE hFileMappingObject,
    DWORD dwDesiredAccess,
    DWORD dwFileOffsetHigh,
    DWORD dwFileOffsetLow,
    DWORD dwNumberOfBytesToMap)

Memory Mapped Objects


#include <stdio.h>
#include <windows.h>
void main()
{
    HANDLE hFileMap;
    LPTSTR hMapView;
    hFileMap = CreateFileMapping((HANDLE)-1, NULL,
        PAGE_READWRITE, 0, 0x2000, TEXT("memorymappedobject"));
    if (NULL == hFileMap)
    {
        printf("Failed: Error code = %d\n", GetLastError());
        return;
    }

Memory Mapped Objects


    hMapView = (LPTSTR) MapViewOfFile(hFileMap, FILE_MAP_WRITE, 0, 0, 0);
    if (NULL == hMapView)
    {
        printf("Failed: Error code = %d\n", GetLastError());
        return;
    }
    else
    {
        printf("Virtual Address is %p\n", hMapView);
    }
    UnmapViewOfFile(hMapView);
    CloseHandle(hFileMap);
}

Message Passing

Differences from shared memory:
1. Amount of data transferred
2. Speed

Message queue:
FIFO
send(name of the process to which the message is to be sent, message)
receive(name of the process from which the message is to be received, message)
Synchronous: the sender enters a waiting state until the message is accepted
Asynchronous: no waiting for acceptance

Message Passing

Message queue
PostMessage(HWND hWnd, UINT Msg, WPARAM wParam,
LPARAM lParam)
PostThreadMessage(DWORD idThread, UINT Msg, WPARAM
wParam, LPARAM lParam)
SendMessage(HWND hWnd, UINT Msg, WPARAM wParam,
LPARAM lParam)
CreateMsgQueue(LPCWSTR lpszName, LPMSGQUEUEOPTIONS
lpOptions)

Message Passing
typedef struct MSGQUEUEOPTIONS_OS {
    DWORD dwSize;
    DWORD dwFlags;
    DWORD dwMaxMessages;
    DWORD cbMaxMessage;
    BOOL bReadAccess;
} MSGQUEUEOPTIONS;

Message Passing

Message queue:

OpenMsgQueue()

WriteMsgQueue(HANDLE hMsgQ, LPVOID lpBuffer, DWORD cbDataSize,
    DWORD dwTimeOut, DWORD dwFlags)
dwFlags == MSGQUEUE_MSGALERT

ReadMsgQueue(HANDLE hMsgQ, LPVOID lpBuffer, DWORD cbBufferSize,
    LPDWORD lpNumberOfBytesRead, DWORD dwTimeOut, DWORD *pdwFlags)

GetMsgQueueInfo(HANDLE hMsgQ, LPMSGQUEUEINFO lpInfo)

MailBox

Alternate form of Message queue


Creation and Subscribing
Mailbox server and mailbox client

Signalling

Primitive way of communication between processes


Signals do not carry any data
Eg RTX51 Tiny OS

os_send_signal
os_wait

Remote Procedure Call (RPC) and Sockets

Also known as Remote Invocation or Remote Method Invocation (RMI)
Used in client-server applications and heterogeneous environments
Server and client
Synchronous (blocking) or asynchronous
Authentication (DES, 3DES)
Sockets (INET, UNIX domains):
Stream sockets
Datagram sockets

Task Synchronization

Multiple processes share system resources & variables
Shared memory (write and read) without co-ordination gives unexpected results
Task synchronization is the act of making the processes aware of each other's
access to shared resources, to avoid conflicts

Classic synchronization issues:
1. Racing
2. Deadlock
3. Dining Philosophers Problem
4. Producer-consumer/bounded buffer problem
5. Readers-Writers problem
6. Priority inversion

1. Racing
#include<windows.h>
#include<stdio.h>
char Buffer[10]={1,2,3,4,5,6,7,8,9,10};
short int counter=0;
void Process_A(void) {
int i;
for(i=0; i<5; i++) {
if(Buffer[i] >0)
counter++;
}
}

1. Racing
void Process_B(void) {
int j;
for(j=0; j<5; j++) {
if(Buffer[j] >0)
counter++;
}
}

1. Racing
int main()
{
DWORD id;
CreateThread(NULL,0,
(LPTHREAD_START_ROUTINE)Process_A,(LPVOID) 0, 0, &id);
CreateThread(NULL,0,
(LPTHREAD_START_ROUTINE)Process_B,(LPVOID) 0, 0, &id);
Sleep(100000);
return 0;
}

1. Racing
counter++ compiles to three separate instructions, so two threads can
interleave between them:
mov eax, dword ptr [ebp-4];
add eax, 1;
mov dword ptr [ebp-4], eax;

1. Racing

Multiple processes compete with each other to access and manipulate shared
data concurrently, producing incorrect results
Solution:
Make access and modification of shared variables mutually exclusive

2. Deadlock

A situation in which none of the competing processes can execute

Conditions favoring Deadlock:
Combined occurrence of all four conditions (the Coffman Conditions),
described by E. G. Coffman in 1971

Mutual Exclusion:
Only one process can hold a resource at a time
Hold and Wait:
A process holds a shared resource and waits for additional resources
No Resource Preemption:
The OS cannot take back a resource from the process holding it
Circular Wait:
P0 → P1 → P2 → … → Pn → P0

Deadlock Handling
Ignore Deadlocks:
Assume the system design is deadlock free
Used when the cost of removing a deadlock is large
Ex: UNIX
Detect and Recover Deadlocks:
Similar to the "back up the cars" technique used in traffic jams
The OS keeps a resource graph in memory
and keeps updating the graph
Recovery: terminate a process or preempt the resource

Deadlock Handling
Avoid Deadlocks:
Avoid deadlocks through careful resource allocation techniques
Ex: traffic light mechanisms to avoid traffic jams

Prevent Deadlocks:
Negate one of the four Coffman conditions
Hold and Wait:
A process should request and be allocated all its resources before execution
starts
Allocate a resource to a process only if the process currently holds no
resources
No Resource Preemption:
A process must release its held resources if a request it makes cannot be
fulfilled, and the resource list is updated
The process is rescheduled when it can get both its old resources and the
new resource

Deadlock Handling

Livelock: unlike deadlock, the processes keep changing state, but none
makes progress
Starvation: a process never gets the resources it needs

3. The Dining Philosophers Problem

Five philosophers (in general, n)
Each is Eating, Hungry, or Brainstorming
Only five forks on the table (a philosopher needs two forks to eat)

3. The Dining Philosophers Problem

Three different scenarios:


Scenario 1:

All brainstorm together and then try to eat together
Each acquires one fork and waits for a second, forming a circular chain
Result: starvation and deadlock

3. The Dining Philosophers Problem

Three different scenarios:


Scenario 2:

All philosophers start brainstorming together

Race condition (unexpected results)

3. The Dining Philosophers Problem

Three different scenarios:


Scenario 3:

Philosophers brainstorming together and trying to eat together


Livelock and starvation

3. The Dining Philosophers Problem


Solutions:

Round Robin allocation of the forks
Imposing rules on how the philosophers access the forks
Each philosopher acquires a semaphore (mutex) before picking up any fork

4. Producer-Consumer/Bounded Buffer Problem

Two processes concurrently access a shared buffer of fixed size
The thread/process producing data: Producer thread/process
The thread/process consuming data: Consumer thread/process
Due to lack of synchronization:

Buffer overrun (producer overwrites data not yet consumed)
Buffer under-run (consumer reads data not yet produced)

Both lead to inaccurate data and data loss

4. Producer-Consumer/Bounded Buffer Problem


#include <windows.h>
#include <stdlib.h>
#include <stdio.h>
#define N 20
int buffer[N];
void producer_thread(void) {
    int x;
    while (1) {
        for (x = 0; x < N; x++) {
            buffer[x] = rand() % 1000;
            printf("Produced: Buffer[%d] = %d\n", x, buffer[x]);
            Sleep(25);
        }
    }
}

4. Producer-Consumer/Bounded Buffer Problem


void consumer_thread(void) {
    int y, value;
    while (1) {
        for (y = 0; y < N; y++) {
            value = buffer[y];
            printf("Consumed: Buffer[%d] = %d\n", y, value);
            Sleep(20);
        }
    }
}

4. Producer-Consumer/Bounded Buffer Problem


int main()
{
    DWORD thread_id;
    CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)producer_thread, NULL, 0, &thread_id);
    CreateThread(NULL, 0,
        (LPTHREAD_START_ROUTINE)consumer_thread, NULL, 0, &thread_id);
    Sleep(500);
    return 0;
}

4. Producer-Consumer/Bounded Buffer Problem


ISSUES:
1. Producer thread is scheduled more frequently than the consumer thread
(buffer overrun)
2. Consumer thread is scheduled more frequently than the producer thread
(buffer under-run)

SOLUTIONS:
1. Sleep and wake-up

5. Readers-Writers Problem

Processes competing for limited shared resources, where some read and
some write
Ex: banking system
Multiple processes reading data concurrently: no impact
Multiple processes reading and writing data concurrently: data integrity is
affected

6. Priority Inversion

A high priority task waits for a low priority task to release a resource,
while a medium priority task continues execution by preempting the low
priority task
Binary semaphore:
0: locked
1: unlocked

Priority Inheritance

A low priority task that is accessing a shared resource requested by a high
priority task temporarily inherits the priority of that high priority task,
from the moment the high priority task raises the request
When the resource is released, the priority is set back to the original value

Priority Ceiling

A priority is associated with each shared resource
This priority equals the priority of the highest priority task that may use
the resource (the ceiling priority)
A low priority task accessing the resource is boosted to the ceiling priority
The priority is reset after the task releases the shared resource

Task Synchronization Techniques

Avoiding conflicts in resource access
Ensuring proper sequence of operations across processes
Communicating between the processes

CRITICAL SECTION:
The code memory area holding the program instructions that access a shared
resource
Access to a critical section must be made mutually exclusive

1. Mutual Exclusion through Busy Waiting / Spin Lock

A lock variable is used to implement mutual exclusion
Lock == 1: a process is in the critical section
Lock == 0: no process is in the critical section

//inside main thread
bool bFlag;      //lock variable
bFlag = FALSE;
//inside child thread
if (bFlag == FALSE)
    bFlag = TRUE;   //note: this check-then-set is itself not atomic

2. Mutual Exclusion through Sleep & Wakeup

In busy waiting, the CPU is kept busy checking the lock to see whether the
process can proceed
This is not desirable in embedded systems (it wastes power)
Sleep & wake-up mechanism:
While a process is in the critical section, a new process trying to access the
critical section goes to sleep; the process owning the critical section
wakes up the new process when it leaves the critical section

1. Semaphore:

A process that wants to access a shared resource must first acquire a system
object, indicating to other processes that the shared resource is
currently acquired by it
Analogy: sharing a resource such as a hard disk
Binary Semaphore (mutex)
Counting Semaphore (count ranges between 0 and a maximum value)
Signalled and non-signalled states correspond to acquiring and releasing the
resource
Critical Section Objects:

InitializeCriticalSection(LPCRITICAL_SECTION lpCriticalSection)
EnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection)
Blocks (in a wait queue) if another thread owns the critical section
TryEnterCriticalSection(LPCRITICAL_SECTION lpCriticalSection)
LeaveCriticalSection(LPCRITICAL_SECTION lpCriticalSection)
DeleteCriticalSection(LPCRITICAL_SECTION lpCriticalSection)

DEVICE DRIVERS

Bridge between the OS and the hardware
Abstracts the hardware from user applications
Establishes connectivity, initializes hardware and transfers data
Ex: Wi-Fi module, file systems, storage device interfaces
Protocols differ across devices
Each piece of hardware needs a unique driver component

DEVICE DRIVERS

Ex: NAND Flash memory requires a NAND Flash driver
Built-in (onboard) drivers: loaded during boot and kept in RAM
Installable drivers: loaded on a need basis

HOW TO CHOOSE AN RTOS


FUNCTIONAL REQUIREMENTS:

Processor support
Memory requirements (ROM and RAM)
Real-time capabilities
Kernel and interrupt latency (should be minimal)
IPC and task synchronization support
Modularization support
Support for networking and communication
Development language support

HOW TO CHOOSE AN RTOS


NON-FUNCTIONAL REQUIREMENTS:

Custom developed or off-the-shelf
Cost
Availability of development and debugging tools
Ease of use
