
C++

02/10/2015

Operator overloading:
Refer to the following program:
#include <iostream>
using namespace std;
class loc {
int longitude, latitude;
public:
loc() {} // needed to construct temporaries
loc(int lg, int lt) {
longitude = lg;
latitude = lt;
}
void show() {
cout << longitude << " ";
cout << latitude << "\n";
}
loc operator+(loc op2);
loc operator-(loc op2);
loc operator=(loc op2);

loc operator++();
};
// Overload + for loc.
loc loc::operator+(loc op2)
{
loc temp;
temp.longitude = op2.longitude + longitude;
temp.latitude = op2.latitude + latitude;
return temp;
}
// Overload - for loc.
loc loc::operator-(loc op2)
{
loc temp;
// notice order of operands
temp.longitude = longitude - op2.longitude;
temp.latitude = latitude - op2.latitude;
return temp;
}
// Overload assignment for loc.
loc loc::operator=(loc op2)
{
longitude = op2.longitude;
latitude = op2.latitude;
return *this; // i.e., return object that generated call
}
// Overload prefix ++ for loc.
loc loc::operator++()
{
longitude++;
latitude++;
return *this;
}
int main()
{
loc ob1(10, 20), ob2( 5, 30), ob3(90, 90);
ob1.show();
ob2.show();
++ob1;
ob1.show(); // displays 11 21
ob2 = ++ob1;
ob1.show(); // displays 12 22

ob2.show(); // displays 12 22
ob1 = ob2 = ob3; // multiple assignment
ob1.show(); // displays 90 90
ob2.show(); // displays 90 90
return 0;
}
2. Prefix and postfix operator overloading:
// Prefix increment
type operator++( ) {
// body of prefix operator
}
// Postfix increment
type operator++(int x) {
// body of postfix operator
}
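For example, a postfix version of ++ for the loc class above might look like this (a sketch; it assumes the class body also declares loc operator++(int);):
// Postfix ++ for loc. The int parameter is a dummy that only
// distinguishes postfix from prefix; it receives the value 0.
loc loc::operator++(int)
{
    loc temp = *this; // save the current state
    longitude++;
    latitude++;
    return temp;      // return the value from before the increment
}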
3. Overloading a shorthand operator:
loc loc::operator+=(loc op2)
{
longitude = op2.longitude + longitude;
latitude = op2.latitude + latitude;
return *this;

}
4. Operator overloading restrictions:
4.1 You cannot alter the precedence of an operator.
4.2 You cannot change the number of operands that an operator
takes.
4.3 Except for the function call operator (described later), operator
functions cannot have default arguments.
4.4 Finally, these operators cannot be overloaded: . :: .* ?:
5. Friend Function and operator overloading:
In C++, only member functions and friends can access the private
part of a class.
You can overload an operator for a class by using a nonmember
function, which is usually a friend of the class. Since a friend function is not
a member of the class, it does not have a this pointer. Therefore, an
overloaded friend operator function is passed the operands explicitly. This
means that a friend function that overloads a binary operator has two
parameters, and a friend function that overloads a unary operator has one
parameter. When overloading a binary operator using a friend function,
the left operand is passed in the first parameter and the right operand is
passed in the second parameter.
In the following program, the operator+() function is made into a friend:
#include <iostream>
using namespace std;
class loc {
int longitude, latitude;
public:
loc() {} // needed to construct temporaries
loc(int lg, int lt) {
longitude = lg;
latitude = lt;

}
void show() {
cout << longitude << " ";
cout << latitude << "\n";
}
friend loc operator+(loc op1, loc op2); // now a friend
loc operator-(loc op2);
loc operator=(loc op2);
loc operator++();
};
// Now, + is overloaded using friend function.
loc operator+(loc op1, loc op2)
{
loc temp;
temp.longitude = op1.longitude + op2.longitude;
temp.latitude = op1.latitude + op2.latitude;
return temp;
}
// Overload - for loc.
loc loc::operator-(loc op2)
{

loc temp;
// notice order of operands
temp.longitude = longitude - op2.longitude;
temp.latitude = latitude - op2.latitude;
return temp;
}
// Overload assignment for loc.
loc loc::operator=(loc op2)
{
longitude = op2.longitude;
latitude = op2.latitude;
return *this; // i.e., return object that generated call
}
// Overload ++ for loc.
loc loc::operator++()
{
longitude++;
latitude++;
return *this;
}
int main()

{
loc ob1(10, 20), ob2( 5, 30);
ob1 = ob1 + ob2;
ob1.show();
return 0;
}
Friend function for overloading ++
#include <iostream>
using namespace std;
class loc {
int longitude, latitude;
public:
loc() {}
loc(int lg, int lt) {
longitude = lg;
latitude = lt;
}
void show() {
cout << longitude << " ";
cout << latitude << "\n";
}

loc operator=(loc op2);


friend loc operator++(loc &op);
friend loc operator--(loc &op);
};
// Overload assignment for loc.
loc loc::operator=(loc op2)
{
longitude = op2.longitude;
latitude = op2.latitude;
return *this; // i.e., return object that generated call
}
// Now a friend; use a reference parameter.
loc operator++(loc &op)
{
op.longitude++;
op.latitude++;
return op;
}
// Make op-- a friend; use reference.
loc operator--(loc &op)
{

op.longitude--;
op.latitude--;
return op;
}
int main()
{
loc ob1(10, 20), ob2;
ob1.show();
++ob1;
ob1.show(); // displays 11 21
ob2 = ++ob1;
ob2.show(); // displays 12 22
--ob2;
ob2.show(); // displays 11 21
return 0;
}
Difference between normal overloading and friend function
overloading:
With normal (member) operator overloading, obj1 + 100 is possible
but 100 + obj1 is not. Both are possible with a friend function. With a
member operator the left operand must be an object of the class, since it
is the one that calls the operator function; with a friend function both
operands are passed to the function as arguments, so the compiler can
apply a conversion to either of them.
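A minimal sketch illustrating this (the counter class and its converting constructor are invented for the example):
#include <iostream>
using namespace std;
class counter {
    int val;
public:
    counter(int v) : val(v) {} // also converts an int to a counter
    int get() { return val; }
    friend counter operator+(counter a, counter b);
};
// Both operands are ordinary parameters, so the compiler can convert
// an int on either side into a counter via the constructor above.
counter operator+(counter a, counter b)
{
    return counter(a.val + b.val);
}
int main()
{
    counter c(5);
    cout << (c + 100).get() << "\n"; // displays 105
    cout << (100 + c).get() << "\n"; // displays 105; fails with a member operator+
    return 0;
}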

Inheritance:

When a class inherits another class, the members of the base class
become members of the derived class:
class derived : access-specifier base { };
The access specifier can be public, private or protected.
If public: public members stay public and protected members stay
protected; private members of the base are inherited but remain
inaccessible to the derived class.
If protected: public and protected members become protected; private
members remain inaccessible.
If private: all accessible members become private in the derived class.
The default access specifier for a class is private; for a struct it is
public.
Important points:
1. Multiple inheritance is possible: class derived : public base1, public
base2 { };
2. Constructors and destructors cannot be inherited, but the base class
constructor and destructor are invoked for a derived object. Base classes
are constructed in the order they are listed and destructed in reverse
order, so for the declaration above the output is:
Constructing base1
Constructing base2
Constructing derived
Destructing derived
Destructing base2
Destructing base1
3. Parameters can be passed to the base class constructor as
derived(int x, int y) : base(y)
4. In case of inheritance, it is possible to restore the access level of
some members of the base class. Suppose the derived class inherits the
base class as private, but you want variable i to stay public in the derived
class. This can be done with either a using statement or the access
declaration base-class::member inside the derived class:
class base { public: int i; };
class derived : private base {
public:
base::i; // i stays public in the derived class
};
5. Virtual base class:
// This program contains an error and will not compile.
#include <iostream>
using namespace std;
class base {
public:
int i;
};
// derived1 inherits base.
class derived1 : public base {
public:
int j;
};
// derived2 inherits base.
class derived2 : public base {
public:
int k;
};
/* derived3 inherits both derived1 and derived2.
This means that there are two copies of base

in derived3! */
class derived3 : public derived1, public derived2 {
public:
int sum;
};
int main()
{
derived3 ob;
ob.i = 10; // this is ambiguous,
ob.j = 20;
ob.k = 30;
// i ambiguous here, too
ob.sum = ob.i + ob.j + ob.k;
// also ambiguous, which i?
cout << ob.i << " ";
cout << ob.j << " " << ob.k << " ";
cout << ob.sum;
return 0;
}

In the above program, derived3 contains two copies of the base class.
There are two ways to remove this ambiguity:
1. Use the scope resolution operator, so ob.i = 10 is replaced by
ob.derived1::i = 10.
2. Use a virtual base class, i.e. in the definition of the derived classes
just add virtual before the access specifier:
// derived1 inherits base as virtual.
class derived1 : virtual public base {
public:
int j;
};
// derived2 inherits base as virtual.
class derived2 : virtual public base {
public:
int k;
};
/* derived3 inherits both derived1 and derived2.
This time, there is only one copy of base class. */
class derived3 : public derived1, public derived2 {
public:
int sum;
};

Virtual Function:
For vTable and vptr refer
http://www.go4expert.com/articles/virtual-table-vptr-t16544/

A virtual function is a member function that is declared in a base class
and can be redefined (overridden) by a derived class.
class base {
public:
    virtual void func() { cout << "Base"; }
};
class derived : public base {
public:
    void func() { cout << "Derived"; }
};
int main() {
    base b;
    derived d;
    b.func();  // displays Base
    d.func();  // displays Derived
    return 0;
}
Important Points:

1. A virtual function can be called through a base class pointer or a
base class reference; the call is resolved by the type of the object pointed
to, not the type of the pointer.
Ex:
int main() {
    base *p, b;
    derived d;
    p = &b;     // base class pointer to a base object
    p->func();  // displays Base
    p = &d;     // base class pointer to a derived object
    p->func();  // displays Derived
    return 0;
}
2. The virtual attribute is inherited.
class base { public: virtual void func(); };
class derived1 : public base { public: void func(); };
class derived2 : public derived1 { public: void func(); }; // still virtual, inherited
3. Overriding is redefining a virtual function in a derived class.
4. If a function is declared virtual in the base class but not redefined in
a derived class, then calling it through an object of the derived class
invokes the base class version. In a hierarchy, the most recent redefinition
is used.
Ex: base{virtual void func()} -> derived1{void func()} -> derived2
(no redefinition).
If an object of derived2 calls func(), the definition in derived1 is used.
5. Pure virtual function: virtual void func() = 0; makes the class
abstract, and every concrete derived class must provide its own definition
of the function.
6. Late binding vs early binding: late binding happens for virtual
functions, whose calls are resolved at run time. Early binding happens for
normal functions, which are resolved at compile time.

FAQ: Can we define a virtual function as private?

Templates:
Templates are a feature of C++: they let us create generic classes and
generic functions, reducing the redundant code otherwise written for each
data type.
Generic Function:
The same logic can be reused for different data types from a single
function definition, called a generic function. Depending on the data type,
the compiler instantiates the function at compile time. Generic functions
differ from overloaded functions: in a generic function the action performed
remains the same, i.e. the logic of the function never changes, only the
data type changes.
Definition:
template <class X> void func(X a, X b) { }
where X is the type parameter. X can be int, char, a struct, a class, etc.
Generic class:
The concept is the same: one class definition that can be used with
different data types.
Definition:
template <class T> class class-name { };
Inside main() or any other function it is instantiated as
class-name <data-type> d;
Important Notes:
1. Overloading a function template (generic function) is possible.
template <class X> void func(X a)
template <class X> void func(X a, X b)
2. Explicitly overloading a generic function for a specific type:
template <class X> void func(X a, X b)
void func(int a, int b)
3. A generic function or generic class can be defined with two generic
types.
template <class X1, class X2> void func(X1 a, X2 b)
The same applies to a generic class.
Practical Use:
Sorting of integers or floats can be implemented with a single generic
sort function, as sketched below.
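A minimal sketch of such a generic sort (a bubble sort for brevity; the function name and test arrays are made up):
#include <iostream>
using namespace std;
// Generic bubble sort: the same logic works for any type
// that supports the > operator.
template <class X> void sortArray(X *arr, int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (arr[j] > arr[j + 1]) {
                X tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
}
int main()
{
    int ints[] = { 4, 1, 3 };
    double dbls[] = { 2.5, 0.5, 1.5 };
    sortArray(ints, 3);   // instantiated as sortArray<int>
    sortArray(dbls, 3);   // instantiated as sortArray<double>
    cout << ints[0] << " " << dbls[0] << "\n"; // displays 1 0.5
    return 0;
}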

Exception handling:
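The basic C++ mechanism is try/throw/catch; a minimal sketch (the divide function and its error string are invented for the example):
#include <iostream>
using namespace std;
double divide(double a, double b)
{
    if (b == 0)
        throw "division by zero"; // control jumps to the matching catch
    return a / b;
}
int main()
{
    try {
        cout << divide(10, 2) << "\n"; // displays 5
        cout << divide(1, 0) << "\n";  // throws; this line never prints
    } catch (const char *msg) {
        cout << "caught: " << msg << "\n";
    }
    return 0;
}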

Process:
Reference:
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory/
https://www.corelan.be/index.php/2009/07/19/exploit-writing-tutorial-part-1-stack-based-overflows/
http://www.tutorialspoint.com/operating_system/os_process_scheduling.htm
Process:
A process is a program in execution. The execution of a process must
progress in a sequential fashion. A process is defined as an entity which
represents the basic unit of work to be implemented in the system.
The components of a process are the following:
1. Object Program - the code to be executed.
2. Data - the data to be used for executing the program.
3. Resources - while executing the program, it may require some
resources.
4. Status - verifies the status of the process execution. A process can
run to completion only when all requested resources have been allocated
to the process. Two or more processes could be executing the same
program, each using their own data and resources.

Program:
A program by itself is not a process. It is a static entity made up of
program statements, while a process is a dynamic entity. A program
contains the instructions to be executed by the processor.
A program occupies a single place in main memory and continues to
stay there; it does not perform any action by itself.
As a process executes, it changes state. The state of a process is
defined as the current activity of the process.
A process can be in one of the following five states at a time:
1. New - the process is being created.
2. Ready - the process is waiting to be assigned to a processor. Ready
processes are waiting to have the processor allocated to them by the
operating system so that they can run.
3. Running - process instructions are being executed (i.e. the process
is currently being executed).
4. Waiting - the process is waiting for some event to occur (such as
the completion of an I/O operation).
5. Terminated - the process has finished execution.

Process in memory:

Process Control Block:


Each process is represented in the operating system by a process
control block (PCB), also called a task control block. The PCB is the data
structure used by the operating system to group together all the
information it needs about a particular process. The PCB contains many
pieces of information associated with a specific process, described below.

1. Pointer - points to another process control block; used for
maintaining the scheduling list.
2. Process State - may be new, ready, running, waiting and so on.
3. Program Counter - indicates the address of the next instruction to
be executed for this process.
4. CPU registers - include general purpose registers, stack pointers,
index registers, accumulators etc. The number and type of registers
depends entirely on the computer architecture.
5. Memory management information - may include the value of base
and limit registers, the page tables, or the segment tables, depending on
the memory system used by the operating system. This information is
useful for deallocating memory when the process terminates.
6. Accounting information - includes the amount of CPU and real time
used, time limits, job or process numbers, account numbers etc.

The process control block also includes CPU scheduling, I/O resource
management and file management information. The PCB serves as the
repository for any information which can vary from process to process.
The loader/linker sets flags and registers when a process is created. If the
process gets suspended, the contents of the registers are saved on a stack
and the pointer to the particular stack frame is stored in the PCB. By this
technique, the hardware state can be restored so that the process can be
scheduled to run again.

Process Scheduling and Scheduling algorithm:


Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming
operating system. Such operating systems allow more than one process to
be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
Scheduling queues:
Scheduling queues refer to queues of processes or devices. When a
process enters the system, it is put into a job queue. This queue consists
of all processes in the system. The operating system also maintains other
queues, such as device queues. A device queue is the queue of processes
waiting for a particular I/O device; each device has its own device queue.
In the queuing diagram of process scheduling, each queue is
represented by a rectangular box, the circles represent the resources that
serve the queues, and the arrows indicate the flow of processes in the
system.

Queues are of two types:
1. Ready queue
2. Device queue
A newly arrived process is put in the ready queue, where processes
wait for the CPU to be allocated to them. Once the CPU is assigned to a
process, that process executes. While the process executes, any one of
the following events can occur:
1. The process could issue an I/O request and then be placed in an
I/O queue.
2. The process could create a new sub-process and wait for its
termination.
3. The process could be removed forcibly from the CPU as a result of
an interrupt and put back in the ready queue.
Two-state process model:
The two-state process model refers to the running and not-running
states, described below.
1. Running - when a new process is created by the operating system,
it enters the system in the running state.
2. Not Running - processes that are not running are kept in a queue,
waiting for their turn to execute. Each entry in the queue is a pointer to a
particular process; the queue is implemented using a linked list. The
dispatcher works as follows: when a process is interrupted, it is
transferred to the waiting queue; if the process has completed or aborted,
it is discarded. In either case, the dispatcher then selects a process from
the queue to execute.

Schedulers:
Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to be
submitted into the system and to decide which process to run. Schedulers
are of three types:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler:
Also called the job scheduler, the long term scheduler determines
which programs are admitted to the system for processing. It selects
processes from the queue and loads them into memory for execution, i.e.
it loads processes into memory for CPU scheduling. The primary objective
of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of
multiprogramming: if the degree of multiprogramming is stable, the
average rate of process creation must be equal to the average departure
rate of processes leaving the system.
On some systems, the long term scheduler may be absent or minimal;
time-sharing operating systems have no long term scheduler. The long
term scheduler acts when a process changes state from new to ready.
Short Term Scheduler:
Also called the CPU scheduler, its main objective is to increase system
performance in accordance with the chosen set of criteria. It manages the
change of a process from the ready state to the running state: the CPU
scheduler selects a process among the processes that are ready to execute
and allocates the CPU to it.
The short term scheduler, also known as the dispatcher, executes
most frequently and makes the fine-grained decision of which process to
execute next. The short term scheduler is faster than the long term
scheduler.
Medium Term Scheduler:
Medium term scheduling is part of swapping. It removes processes
from memory and so reduces the degree of multiprogramming. The
medium term scheduler is in charge of handling the swapped-out
processes.

A running process may become suspended if it makes an I/O request.
Suspended processes cannot make any progress towards completion. In
this condition, to remove the process from memory and make space for
another process, the suspended process is moved to secondary storage.
This is called swapping, and the process is said to be swapped out or
rolled out. Swapping may be necessary to improve the process mix.
Comparison between schedulers:
1. The long term scheduler is a job scheduler; the short term
scheduler is a CPU scheduler; the medium term scheduler is a process
swapping scheduler.
2. The long term scheduler is slower than the short term scheduler;
the short term scheduler is the fastest of the three; the medium term
scheduler's speed lies between the other two.
3. The long term scheduler controls the degree of multiprogramming;
the short term scheduler provides lesser control over the degree of
multiprogramming; the medium term scheduler reduces the degree of
multiprogramming.
4. The long term scheduler is almost absent or minimal in time-sharing
systems; the short term scheduler is also minimal in time-sharing
systems; the medium term scheduler is a part of time-sharing systems.
5. The long term scheduler selects processes from the pool and loads
them into memory for execution; the short term scheduler selects among
those processes which are ready to execute; the medium term scheduler
can reintroduce a process into memory so that its execution can be
continued.

Context Switch
A context switch is the mechanism to store and restore the state or
context of a CPU in Process Control block so that a process execution can be
resumed from the same point at a later time. Using this technique a context
switcher enables multiple processes to share a single CPU. Context switching
is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to
execute another, the context switcher saves the content of all processor
registers for the process being removed from the CPU, in its process
descriptor. The context of a process is represented in the process control
block of a process.
Context switch time is pure overhead. Context switching can
significantly affect performance, as modern computers have a lot of
general and status registers to be saved, and context switching times are
highly dependent on hardware support. For example, saving the n and m
registers of the two process control blocks requires (n + m) x b x K time
units, assuming b store operations are required per register and each
store instruction takes K time units.

Some hardware systems employ two or more sets of processor
registers to reduce the context switching time. When the process is
switched, the following information is stored:
1. Program counter
2. Scheduling information
3. Base and limit register values
4. Currently used registers
5. Changed state
6. I/O state
7. Accounting information
Scheduling algorithms:
1. FCFS (First Come First Served):
2. SJF (Shortest Job First):
3. Priority-based scheduling: each process is assigned a priority, and
processes are executed on the basis of priority.
4. Round robin scheduling (sketched after this list):
1. Each process is given a fixed time to execute, called a quantum.
2. Once a process has executed for the given time period, it is
preempted and another process executes for its time period.
3. Context switching is used to save the states of preempted
processes.
5. Multi-queue scheduling:
1. Multiple queues are maintained for processes.
2. Each queue can have its own scheduling algorithm.
3. Priorities are assigned to each queue.
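A minimal round robin sketch in C++ (the burst times and quantum are made-up numbers; context switch cost is ignored):
#include <iostream>
#include <queue>
using namespace std;
// Simulate round robin: each process runs for at most one quantum,
// then goes to the back of the ready queue if it is unfinished.
int main()
{
    int burst[] = { 5, 3, 8 };   // remaining CPU time per process
    const int quantum = 2;
    queue<int> ready;            // ready queue holds process ids
    for (int i = 0; i < 3; i++) ready.push(i);
    int clock = 0;
    while (!ready.empty()) {
        int p = ready.front(); ready.pop();
        int run = burst[p] < quantum ? burst[p] : quantum;
        clock += run;
        burst[p] -= run;
        cout << "P" << p << " ran until t=" << clock << "\n";
        if (burst[p] > 0) ready.push(p); // preempted, back of the queue
    }
    return 0;
}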

Orphan process:
An orphan process is a computer process whose parent process has
finished or terminated, though it remains running itself.
In a Unix-like operating system any orphaned process will be
immediately adopted by the special init system process. This operation is
called re-parenting and occurs automatically. Even though technically the
process has the init process as its parent, it is still called an orphan
process since the process that originally created it no longer exists.
A process can be orphaned unintentionally, such as when the parent
process terminates or crashes. The process group mechanism in most
Unix-like operating systems can be used to help protect against accidental
orphaning: in coordination with the user's shell, it will try to terminate all
the child processes with the SIGHUP signal, rather than letting them
continue to run as orphans.
A process may also be intentionally orphaned so that it becomes
detached from the user's session and left running in the background; usually
to allow a long-running job to complete without further user attention, or to
start an indefinitely running service. Under Unix, the latter kinds of
processes are typically called daemon processes. The Unix nohup command
is one means to accomplish this.
Daemon Process:
In Unix and other multitasking computer operating systems, a daemon
is a computer program that runs as a background process, rather than being
under the direct control of an interactive user. Typically daemon names end
with the letter d: for example, syslogd is the daemon that implements the
system logging facility and sshd is a daemon that services incoming SSH
connections.
In a Unix environment, the parent process of a daemon is often, but
not always, the init process. A daemon is usually created by a process
forking a child process and then immediately exiting, thus causing init to
adopt the child process. In addition, a daemon or the operating system
typically must perform other operations, such as dissociating the process
from any controlling terminal (tty). Such procedures are often
implemented in various convenience routines such as daemon(3) in Unix.
A daemon process is a process orphaned intentionally.
Zombie Process:
On Unix and Unix-like computer operating systems, a zombie
process or defunct process is a process that has completed
execution but still has an entry in the process table. This entry is still
needed to allow the parent process to read its child's exit status. The term
zombie process derives from the common definition of zombie: an undead
person. In the term's metaphor, the child process has "died" but has not yet
been "reaped". Also, unlike normal processes, the kill command has no
effect on a zombie process.

When a process ends, all of the memory and resources associated with
it are deallocated so they can be used by other processes. However, the
process's entry in the process table remains. The parent can read the child's
exit status by executing the wait system call, whereupon the zombie is
removed. The wait call may be executed in sequential code, but it is
commonly executed in a handler for the SIGCHLD signal, which the parent
receives whenever a child has died.
After the zombie is removed, its process identifier (PID) and entry in
the process table can be reused. However, if a parent fails to call wait, the
zombie will be left in the process table. In some situations this may be
desirable: for example, if the parent creates another child process, it
ensures that it will not be allocated the same PID. On modern UNIX-like
systems (that comply with the SUSv3 specification in this respect), the
following special case applies: if the parent explicitly ignores SIGCHLD by
setting its handler to SIG_IGN (rather than simply ignoring the signal by
default) or has the SA_NOCLDWAIT flag set, all child exit status
information will be discarded and no zombie processes will be left.
A zombie process is not the same as an orphan process. An
orphan process is a process that is still executing, but whose parent has
died. They do not become zombie processes; instead, they are adopted
by init (process ID 1), which waits on its children.
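A minimal sketch of reaping a child with waitpid() (standard POSIX calls, compiled as C++):
#include <iostream>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
using namespace std;
int main()
{
    pid_t pid = fork();          // create a child process
    if (pid == 0) {
        return 42;               // child exits; it is a zombie until reaped
    }
    int status;
    waitpid(pid, &status, 0);    // parent reads exit status; zombie removed
    if (WIFEXITED(status))
        cout << "child exited with " << WEXITSTATUS(status) << "\n";
    return 0;
}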
Interprocess Communication:
Shared memory:
Message-passing:
Pipes: (see the sketch after this list)
Named pipes (FIFO):
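Of these, the ordinary pipe is easy to sketch (standard POSIX pipe(), fork(), read() and write()):
#include <iostream>
#include <cstring>
#include <unistd.h>
using namespace std;
int main()
{
    int fd[2];
    pipe(fd);                    // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {           // child writes a message into the pipe
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        return 0;
    }
    close(fd[1]);                // parent reads the message
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    cout << buf << "\n";
    return 0;
}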

Threads:
A thread is a flow of execution through the process code, with its own
program counter, system registers and stack. A thread is also called a
lightweight process. Threads provide a way to improve application
performance through parallelism, and represent a software approach to
improving operating system performance by reducing overhead; a thread
is otherwise equivalent to a classical process.

Each thread belongs to exactly one process, and no thread can exist
outside a process. Each thread represents a separate flow of control.
Threads have been successfully used in implementing network servers and
web servers. They also provide a suitable foundation for parallel execution
of applications on shared memory multiprocessors. The following figure
shows the working of single-threaded and multithreaded processes.

Difference between Process and Thread:
1. A process is heavy weight or resource intensive; a thread is light
weight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system;
thread switching does not need to interact with the operating system.
3. In multiple processing environments each process executes the
same code but has its own memory and file resources; all threads of a
process can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until
the first process is unblocked; while one thread is blocked and waiting, a
second thread in the same task can run.
5. Multiple processes without using threads use more resources;
multithreaded processes use fewer resources.
6. In multiple processes each process operates independently of the
others; one thread can read, write or change another thread's data.

Advantages of threads:
1. Threads minimize context switching time.
2. Use of threads provides concurrency within a process.
3. Efficient communication.
4. Economy: it is more economical to create and context switch
threads.
5. Utilization of multiprocessor architectures to a greater scale and
efficiency.

Types of threads:
Threads are implemented in the following two ways:
1. User Level Threads - user managed threads.
2. Kernel Level Threads - operating system managed threads acting
on the kernel, the operating system core.
User Level Threads:
In this case, the application handles thread management; the kernel is
not aware of the existence of threads. The thread library contains code for
creating and destroying threads, for passing messages and data between
threads, for scheduling thread execution and for saving and restoring
thread contexts. The application begins with a single thread and begins
running in that thread.


ADVANTAGES
1. Thread switching does not require kernel mode privileges.
2. User level threads can run on any operating system.
3. Scheduling can be application specific.
4. User level threads are fast to create and manage.
DISADVANTAGES
1. In a typical operating system, most system calls are blocking.
2. A multithreaded application cannot take advantage of
multiprocessing.
Kernel Level Threads:
In this case, thread management is done by the kernel; there is no
thread management code in the application area. Kernel threads are
supported directly by the operating system, and any application can be
programmed to be multithreaded. All of the threads within an application
are supported within a single process.
The kernel maintains context information for the process as a whole
and for individual threads within the process. Scheduling by the kernel is
done on a thread basis: the kernel performs thread creation, scheduling
and management in kernel space. Kernel threads are generally slower to
create and manage than user threads.
ADVANTAGES
1. The kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
2. If one thread in a process is blocked, the kernel can schedule
another thread of the same process.
3. Kernel routines themselves can be multithreaded.
DISADVANTAGES
1. Kernel threads are generally slower to create and manage than
user threads.
2. Transfer of control from one thread to another within the same
process requires a mode switch to the kernel.
Multithreading Models:
Some operating systems provide a combined user level and kernel
level thread facility; Solaris is a good example of this combined approach.
In a combined system, multiple threads within the same application can
run in parallel on multiple processors, and a blocking system call need not
block the entire process.
Multithreading models are of three types:
1. Many to many
2. Many to one
3. One to one
Many to Many Model:
In this model, many user level threads are multiplexed onto a smaller
or equal number of kernel threads. The number of kernel threads may be
specific to either a particular application or a particular machine. In this
model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.


Many to One Model:
The many to one model maps many user level threads to one kernel
level thread, and thread management is done in user space. When a
thread makes a blocking system call, the entire process is blocked. Only
one thread can access the kernel at a time, so multiple threads are unable
to run in parallel on multiprocessors.
If the user level thread libraries are implemented in the operating
system in such a way that the system does not support them, then the
kernel threads use the many to one relationship mode.


One to One Model:
There is a one to one relationship between user level threads and
kernel level threads. This model provides more concurrency than the many
to one model: it allows another thread to run when a thread makes a
blocking system call, and it supports multiple threads executing in parallel
on multiprocessors. The disadvantage of this model is that creating a user
thread requires creating the corresponding kernel thread. OS/2, Windows
NT and Windows 2000 use the one to one relationship model.


Difference between User Level and Kernel Level Threads:
1. User level threads are faster to create and manage; kernel level
threads are slower to create and manage.
2. User level threads are implemented by a thread library at the user
level; kernel threads are created with operating system support.
3. A user level thread is generic and can run on any operating system;
a kernel level thread is specific to the operating system.
4. With user level threads, a multithreaded application cannot take
advantage of multiprocessing; with kernel level threads, kernel routines
themselves can be multithreaded.

Thread libraries:
1. POSIX Pthreads: for UNIX (sketched below)
2. Windows threads
3. Java thread library
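A minimal POSIX Pthreads sketch (the worker function and its argument are invented; link with -lpthread):
#include <iostream>
#include <pthread.h>
using namespace std;
// Worker executed by the new thread; the void* parameter and return
// value are required by the pthread_create signature.
void *worker(void *arg)
{
    int id = *(int *)arg;
    cout << "hello from thread " << id << "\n";
    return nullptr;
}
int main()
{
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, nullptr, worker, &id); // spawn the thread
    pthread_join(tid, nullptr);                 // wait for it to finish
    return 0;
}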
Multicore and multiprocessor CPUs and hyperthreading:
Threading is preferred and encouraged on multicore processors.
Multiprocessor CPUs: motherboards of this kind are found in
supercomputers and very high performance computers, but they are not
suitable everywhere as the hardware requirement is very high.


Note: Please read ppt on multicore arch which is stored in drive.
Thread Pool:
In a multithreaded web server, whenever the server receives a
request it creates a separate thread to service it. Whereas creating a
separate thread is certainly superior to creating a separate process, a
multithreaded server nonetheless has potential problems. The first issue
concerns the amount of time required to create the thread, together with
the fact that the thread will be discarded once it has completed its work.
The second issue is more troublesome: if we allow all concurrent requests
to be serviced in a new thread, we have not placed a bound on the number
of threads concurrently active in the system, and unlimited threads could
exhaust system resources such as CPU time or memory. One solution to
this problem is to use a thread pool.
The general idea behind a thread pool is to create a number of threads
at process startup and place them into a pool, where they sit and wait for
work. When a server receives a request, it awakens a thread from this
pool if one is available and passes it the request for service. Once the
thread completes its service, it returns to the pool and awaits more work.
If the pool contains no available thread, the server waits until one
becomes free.
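A minimal thread pool sketch using C++ std::thread (the pool size, task type and shutdown protocol are choices of the example, not a canonical implementation):
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>
using namespace std;
// Workers sleep on the condition variable until a task arrives,
// run it, then return to the pool and wait for more work.
int main()
{
    queue<function<void()>> tasks;
    mutex m;
    condition_variable cv;
    bool done = false;
    vector<thread> pool;
    for (int i = 0; i < 4; i++)              // create threads at startup
        pool.emplace_back([&] {
            while (true) {
                function<void()> task;
                {
                    unique_lock<mutex> lock(m);
                    cv.wait(lock, [&] { return done || !tasks.empty(); });
                    if (done && tasks.empty()) return;
                    task = move(tasks.front());
                    tasks.pop();
                }
                task();                      // service the request
            }
        });
    for (int i = 0; i < 8; i++) {            // submit requests
        lock_guard<mutex> lock(m);
        tasks.push([i] { cout << "request " << i << " served\n"; });
        cv.notify_one();
    }
    {
        lock_guard<mutex> lock(m);
        done = true;                         // signal shutdown
    }
    cv.notify_all();
    for (auto &t : pool) t.join();
    return 0;
}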
Threading Issues:
1. The fork() and exec() System Calls:

The fork() system call is used to create a separate, duplicate process.


The semantics of the fork() and exec() system calls change in a
multithreaded program. If one thread in a program calls fork(), does the new
process duplicate all threads, or is the new process single-threaded? Some
UNIX systems have chosen to have two versions of fork(), one that
duplicates all threads and another that duplicates only the thread that
invoked the fork() system call. The exec() system call typically works in the
same way as described in Chapter 3. That is, if a thread invokes the exec()
system call, the program specified in the parameter to exec() will replace the
entire process, including all threads. Which of the two versions of fork() to
use depends on the application. If exec() is called immediately after forking,
then duplicating all threads is unnecessary, as the program specified in the
parameters to exec() will replace the process. In this instance, duplicating
only the calling thread is appropriate. If, however, the separate process
does not call exec() after forking, the separate process should duplicate all
threads.
2. Signal Handling:
A signal is used in UNIX systems to notify a process that a particular
event has occurred. A signal may be received either synchronously or
asynchronously depending on the source of and the reason for the event
being signaled. All signals, whether synchronous or asynchronous, follow the
same pattern:
1. A signal is generated by the occurrence of a particular event.
2. The signal is delivered to a process.
3. Once delivered, the signal must be handled.
Examples of synchronous signals include illegal memory access and
division by zero. If a running program performs either of these actions, a
signal is generated. Synchronous signals are delivered to the same process
that performed the operation that caused the signal (that is the reason
they are considered synchronous).
When a signal is generated by an event external to a running process,
that process receives the signal asynchronously. Examples of such signals
include terminating a process with specific keystrokes (such as
<control><C>) and having a timer expire. Typically, an asynchronous
signal is sent to another process.
A signal may be handled by one of two possible handlers:

1. A default signal handler


2. A user-defined signal handler
Every signal has a default signal handler that the kernel runs when
handling that signal. This default action can be overridden by a userdefined
signal handler that is called to handle the signal. Signals are handled
in
different ways. Some signals (such as changing the size of a window)
are
simply ignored; others (such as an illegal memory access) are handled
by
terminating the program.
Handling signals in single-threaded programs is straightforward:
signals
are always delivered to a process. However, delivering signals is more
complicated in multithreaded programs, where a process may have
several
threads. Where, then, should a signal be delivered?
In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
The method for delivering a signal depends on the type of signal
generated. For example, synchronous signals need to be delivered to the
thread causing the signal and not to other threads in the process.
However, the situation with asynchronous signals is not as clear. Some
asynchronous signals, such as a signal that terminates a process
(<control><C>, for example), should be sent to all threads. The standard
UNIX function for delivering a signal is
kill(pid_t pid, int signal)
This function specifies the process (pid) to which a particular signal
(signal) is to be delivered. Most multithreaded versions of UNIX allow a
thread to specify which signals it will accept and which it will block.
Therefore, in some cases an asynchronous signal may be delivered only to
those threads that are not blocking it. However, because signals need to
be handled only once, a signal is typically delivered only to the first thread
found that is not blocking it. POSIX Pthreads provides the following
function, which allows a signal to be delivered to a specified thread (tid):
pthread_kill(pthread_t tid, int signal)
Although Windows does not explicitly provide support for signals, it
allows us to emulate them using asynchronous procedure calls (APCs). The
APC facility enables a user thread to specify a function that is to be called
when the user thread receives notification of a particular event. As
indicated by its name, an APC is roughly equivalent to an asynchronous
signal in UNIX. However, whereas UNIX must contend with how to deal
with signals in a multithreaded environment, the APC facility is more
straightforward, since an APC is delivered to a particular thread rather
than a process.
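A minimal sketch of installing a user-defined handler with the POSIX sigaction() call (the handler body is invented; only async-signal-safe calls such as write() belong inside a handler):
#include <signal.h>
#include <unistd.h>
// Async-signal-safe handler: avoid printf/cout here.
void on_sigint(int)
{
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}
int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_sigint;       // override the default handler
    sigaction(SIGINT, &sa, nullptr);
    pause();                         // suspend until a signal arrives
    return 0;
}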
3. Thread Cancellation:
Thread cancellation involves terminating a thread before it has
completed. For example, if multiple threads are concurrently searching
through a database and one thread returns the result, the remaining
threads might be canceled. Another situation might occur when a user
presses a button on a web browser that stops a web page from loading
any further. Often, a web page loads using several threads; each image is
loaded in a separate thread. When a user presses the stop button on the
browser, all threads loading the page are canceled.
A thread that is to be canceled is often referred to as the target
thread. Cancellation of a target thread may occur in two different
scenarios:
1. Asynchronous cancellation: one thread immediately terminates the
target thread.
2. Deferred cancellation: the target thread periodically checks whether
it should terminate, allowing it an opportunity to terminate itself in an
orderly fashion.

The difficulty with cancellation occurs in situations where resources
have been allocated to a canceled thread or where a thread is canceled
while in the midst of updating data it is sharing with other threads. This
becomes especially troublesome with asynchronous cancellation. Often,
the operating system will reclaim system resources from a canceled
thread but will not reclaim all resources; therefore, canceling a thread
asynchronously may not free a necessary system-wide resource.
With deferred cancellation, in contrast, one thread indicates that a
target thread is to be canceled, but cancellation occurs only after the
target thread has checked a flag to determine whether or not it should be
canceled. The thread can perform this check at a point at which it can be
canceled safely, as in the sketch below.
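A minimal sketch of the deferred-cancellation idea using an atomic flag with C++ std::thread (the flag check stands in for a pthread cancellation point; timings are made up):
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
using namespace std;
int main()
{
    atomic<bool> cancel_requested(false);
    thread target([&] {
        while (!cancel_requested) {       // periodic check = safe point
            // ... do one unit of work ...
            this_thread::sleep_for(chrono::milliseconds(10));
        }
        cout << "target thread cleaned up and exited\n";
    });
    this_thread::sleep_for(chrono::milliseconds(50));
    cancel_requested = true;              // request, don't force
    target.join();
    return 0;
}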
4. Thread local storage:
Threads belonging to a process share the data of the process; indeed,
this data sharing provides one of the benefits of multithreaded
programming. However, in some circumstances each thread might need
its own copy of certain data. We will call such data thread-local storage
(or TLS). For example, in a transaction-processing system, we might
service each transaction in a separate thread, and each transaction might
be assigned a unique identifier. To associate each thread with its unique
identifier, we could use thread-local storage.
It is easy to confuse TLS with local variables. However, local variables
are visible only during a single function invocation, whereas TLS data are
visible across function invocations. In some ways, TLS is similar to static
data; the difference is that TLS data are unique to each thread. Most
thread libraries, including Windows and Pthreads, provide some form of
support for thread-local storage; Java provides support as well.
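A minimal sketch using the C++ thread_local keyword (the transaction ids are made up):
#include <iostream>
#include <thread>
using namespace std;
// Each thread gets its own copy of this variable, visible across
// all functions that thread calls.
thread_local int transaction_id = 0;
void serve(int id)
{
    transaction_id = id;              // does not affect other threads
    cout << "serving transaction " << transaction_id << "\n";
}
int main()
{
    thread t1(serve, 101), t2(serve, 202);
    t1.join();
    t2.join();
    return 0;
}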

5. Scheduler activations:
A final issue to be considered with multithreaded programs concerns
communication between the kernel and the thread library, which may be
required by the many-to-many and two-level models. Such coordination
allows the number of kernel threads to be dynamically adjusted to help
ensure the best performance.
Many systems implementing either the many-to-many or the two-level
model place an intermediate data structure between the user and kernel
threads. This data structure, typically known as a lightweight process
(LWP), is shown in the figure.
Fig: Lightweight Process
To the user-thread library, the LWP appears to be a virtual processor
on which the application can schedule a user thread to run. Each LWP is
attached to a kernel thread, and it is kernel threads that the operating
system schedules to run on physical processors. If a kernel thread blocks
(such as while waiting for an I/O operation to complete), the LWP blocks
as well. Up the chain, the user-level thread attached to the LWP also
blocks. An application may require any number of LWPs to run efficiently.
Consider a CPU-bound application running on a single processor: only one
thread can run at a time, so one LWP is sufficient. An application that is
I/O-intensive may require multiple LWPs to execute, however. Typically,
an LWP is required for each concurrent blocking system call. Suppose, for
example, that five different file-read requests occur simultaneously. Five
LWPs are needed, because all could be waiting for I/O completion in the
kernel; if a process has only four LWPs, then the fifth request must wait
for one of the LWPs to return from the kernel.
One scheme for communication between the user-thread library and
the kernel is known as scheduler activation. It works as follows: the kernel
provides an application with a set of virtual processors (LWPs), and the
application can schedule user threads onto an available virtual processor.
Furthermore, the kernel must inform an application about certain events.
This procedure is known as an upcall. Upcalls are handled by the thread
library with an upcall handler, and upcall handlers must run on a virtual
processor.
One event that triggers an upcall occurs when an application thread is
about to block. In this scenario, the kernel makes an upcall to the
application informing it that a thread is about to block and identifying the
specific thread. The kernel then allocates a new virtual processor to the
application. The application runs an upcall handler on this new virtual
processor, which saves the state of the blocking thread and relinquishes the
virtual processor on which the blocking thread is running. The upcall handler
then schedules another thread that is eligible to run on the new virtual
processor. When the event that the blocking thread was waiting for occurs,
the kernel makes another upcall to the thread library informing it that the
previously blocked thread is now eligible to run.
The upcall handler for this event also requires a virtual processor, and
the kernel may allocate a new virtual processor or preempt one of the user
threads and run the upcall handler on its virtual processor. After marking the
unblocked thread as eligible to run, the application schedules an eligible
thread to run on an available virtual processor.
Components of Thread:
The general components of a thread include:
1. A thread ID uniquely identifying the thread.
2. A register set representing the status of the processor.
3. A user stack, employed when the thread is running in user mode,
and a kernel stack, employed when the thread is running in kernel mode.
4. A private storage area used by various run-time libraries and
dynamic link libraries (DLLs).
The register set, stacks, and private storage area are known as the
context of the thread.

Process Synchronization:
Process synchronization is required for cooperating processes, where
multiple processes work simultaneously. Consider two processes modifying
a common data structure: the code that accesses that shared data is the
critical section, and unsynchronized access to it can corrupt the data and
affect the other process. To avoid that, process synchronization is
required.
Process synchronization techniques:
Reference:
http://www2.cs.uregina.ca/~hamilton/courses/330/notes/synchro/node3.html
1. Peterson's solution (software approach)
2. Mutex
3. Semaphore (binary or counting), sketched below:
typedef struct {
int value;
struct process *list; // queue of processes blocked on this semaphore
} semaphore;
wait(semaphore *S) {
S->value--;
if (S->value < 0) { // no resource available
add this process to S->list; // block the caller
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) { // some process was waiting
remove a process P from S->list;
wakeup(P); // wake one waiting process
}
}
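A runnable counting-semaphore sketch built from a mutex and a condition variable in C++ (class and method names are mine, not the book's):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;
class Semaphore {
    int value;
    mutex m;
    condition_variable cv;
public:
    explicit Semaphore(int initial) : value(initial) {}
    void wait() {                       // P operation
        unique_lock<mutex> lock(m);
        cv.wait(lock, [&] { return value > 0; });
        value--;
    }
    void signal() {                     // V operation
        lock_guard<mutex> lock(m);
        value++;
        cv.notify_one();
    }
};
int main()
{
    Semaphore sem(1);                   // binary semaphore guarding cout
    auto work = [&](int id) {
        sem.wait();
        cout << "thread " << id << " in critical section\n";
        sem.signal();
    };
    thread t1(work, 1), t2(work, 2);
    t1.join();
    t2.join();
    return 0;
}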
Deadlock and Starvation:
Starvation can happen because of deadlock: a process that is blocked
indefinitely is a starved process.
Classic synchronization problems:
1. Bounded buffer problem
2. Reader-writer problem
3. Dining philosophers problem
Usage of Monitor:

For memory management:
http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/8_MainMemory.html
1. Little endian and big endian (see the sketch below)
2. BST
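A common one-line endianness check, shown as a sketch:
#include <iostream>
using namespace std;
int main()
{
    unsigned int x = 1;
    // On a little endian machine the least significant byte comes first.
    if (*(unsigned char *)&x == 1)
        cout << "little endian\n";
    else
        cout << "big endian\n";
    return 0;
}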
Important and tricky questions
Linked List:
http://www.geeksforgeeks.org/write-a-function-to-reverse-the-nodes-of-a-linked-list/
http://www.geeksforgeeks.org/nth-node-from-the-end-of-a-linked-list/
Operator precedence and associativity (highest to lowest):
Postfix: () [] -> . ++ -- (left to right)
Unary: + - ! ~ ++ -- (type) * & sizeof (right to left)
Multiplicative: * / % (left to right)
Additive: + - (left to right)
Shift: << >> (left to right)
Relational: < <= > >= (left to right)
Equality: == != (left to right)
Bitwise AND: & (left to right)
Bitwise XOR: ^ (left to right)
Bitwise OR: | (left to right)
Logical AND: && (left to right)
Logical OR: || (left to right)
Conditional: ?: (right to left)
Assignment: = += -= *= /= %= >>= <<= &= ^= |= (right to left)
Comma: , (left to right)

Tips to solve linked list problems:
1. Always remember a linked list is a non-contiguous, unidirectional
memory structure. So while solving a problem:
1. First try the basic way, without worrying about complexity.
2. Then optimize, for example by using two pointers or a recursive
function, as in the sketch below.
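A sketch of reversing a singly linked list with the classic three-pointer iteration (the Node layout is an assumption):
#include <iostream>
using namespace std;
struct Node {
    int data;
    Node *next;
};
// Iteratively reverse: walk the list once, flipping each next pointer.
Node *reverseList(Node *head)
{
    Node *prev = nullptr;
    while (head) {
        Node *next = head->next; // remember the rest of the list
        head->next = prev;       // flip the link
        prev = head;
        head = next;
    }
    return prev;                 // prev is the new head
}
int main()
{
    Node c = { 3, nullptr }, b = { 2, &c }, a = { 1, &b };
    for (Node *p = reverseList(&a); p; p = p->next)
        cout << p->data << " ";  // displays 3 2 1
    cout << "\n";
    return 0;
}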
