Why threads?
- Perform multiple tasks at once (reading and writing, computing and receiving input)
- Take advantage of multiple CPUs
- More efficiently use resources
[Diagram: Thread 1 and Thread 2 alternate between Waiting (on I/O) and Running (on the CPU), overlapping I/O waits with computation.]
Quick view
Process
- Isolated with its own virtual address space
- Contains process data like file handles
- Lots of overhead
- Every process has AT LEAST one kernel thread

Kernel threads
- Shared virtual address space
- Contains running state data
- Less overhead
- From the OS's point of view, this is what is scheduled to run on a CPU

User threads
- Shared virtual address space, contains running state data
- Kernel unaware
- Even less overhead
Trade-offs
Processes
- Secure and isolated
- Kernel aware
- Creating a new process (address space!) brings lots of overhead

Kernel threads
- No need to create a new address space
- No need to change address space in a context switch
- Kernel aware
- Still need to enter the kernel to context switch

User threads
- No new address space, no need to change address space
- No need to enter the kernel to switch
- Kernel is unaware: no multiprocessing! Blocking I/O blocks all user threads!
Implicit overheads
Context switching between processes is very expensive because it changes the address space.

But isn't changing the address space simply a register change on the CPU? Yes, but it also requires flushing the Translation Lookaside Buffer.

Context switching between threads has a similar hidden overhead: suddenly the cache will miss a lot.
When to use which?

Processes
- Like in Chrome

Kernel threads
- Multiprocessor
- Heavy CPU use per context switch
- Blocking I/O
- e.g. compiling Linux

User threads
- Single processor or single kernel thread
- Light CPU use per context switch
- Little or no blocking I/O
Context switching
Xsthread_switch:
    pusha               # push all registers onto the old thread's stack
    movl %esp,(%eax)    # save the stack pointer into the old thread's TCB (%eax points at its SP field)
    movl %edx,%esp      # install the new thread's saved stack pointer (passed in %edx)
    popa                # pop the new thread's registers off its stack
    ret                 # return into the new thread
[Diagram sequence: stepping through Xsthread_switch. Initially Thread 1 is running and Thread 2 is ready, with Thread 2's registers saved on its own stack and its TCB holding its SP. pusha pushes Thread 1's registers onto its stack; movl %esp,(%eax) saves the CPU's ESP into Thread 1's TCB; movl %edx,%esp loads Thread 2's saved SP from its TCB into ESP; popa restores Thread 2's registers from its stack; ret ("Done!") returns into Thread 2. Now Thread 1 is ready and Thread 2 is running.]
Adjusting the PC
[Diagram: Thread 1 (stopped) sits inside switch(t1,t2), with the return address on its stack pointing at the following print("test 1"). Thread 2, now running, returns from its own earlier switch(t2,...) call and continues at print("test 2"). The ret instruction pops the saved return address off the new thread's stack, so the PC is adjusted to resume wherever that thread last called switch.]
Threading Models
Between kernel and user threads, a process might use one of three models: many-to-one, one-to-one, or many-to-many.
...it can actually get problematic in its complexity. See Scheduler Activations.

Linux actually runs one-to-one. Windows runs a lazy version of Scheduler Activations.
You must have noticed in your project that you deal with a Linux structure called a "task_struct". Is this a PCB or a TCB?
task_structs
Linux has no explicit concept of a "thread" (or a process), but "tasks". A task is a "context of execution", or COE.
COEs can share anything, nothing, or something in-between: an external "cd" program (shares the fs struct and cwd); "external I/O daemons" (share file descriptors); vfork (shares the address space).
Locks
If you need to protect shared data and critical sections, you need some primitive to work with. But there are lots of design choices in locking and synchronization.
Spinning vs Blocking
Spinning
- If the lock is not free, repeatedly try to acquire the lock.
- Good for small critical sections; also good on multiprocessors.

Blocking
- If the lock is not free, add the thread to the lock's wait queue and context switch.
- If the overhead of the context switch is less than the time spent waiting (spinning), then blocking is preferable.

Spin locks are good for fine-grained work like you might see in your OS. Blocking is good for coarse-grained work like protecting large data structures.
Pessimistic locking commonly uses test_and_set. This ensures that the current thread is the only one operating in the critical section.
Optimistic locking checks that an update will not break the structure. It does this by reading an initial value and then checking that this value hasn't changed with compare_and_swap. If the value has changed, abort and try again. Therefore, any number of threads might be operating in a "critical section."
"Make the common case fast." Pessimistic locking assumes that the common case is contention: we won't waste time trying to run through the critical section if we would only end up aborting. An OS has lots of small, commonly used data structures and critical sections.

Conversely, optimistic locking assumes that most of the time there isn't contention. Optimistic locking is like database transactions: they assume they will not commonly abort. It is also good when data is commonly read but rarely written.
Granularity of locks
Coarse-grained
- Low overhead
- Fewer memory references
- Less concurrency

Fine-grained
- Higher overhead
- More memory references
- Greater capacity for bus contention and cache storms
- Greater concurrency