
Tasks are usually assigned priorities.

At times it is necessary to run a task with a higher priority before the currently running task has finished. The running task is therefore interrupted for some time and resumed later, once the higher-priority task has completed its execution. This is called preemptive scheduling (e.g. round robin). In non-preemptive scheduling, a running task executes until completion and cannot be interrupted (e.g. first in, first out).
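The contrast above can be sketched in a few lines of Python. This is a toy simulation, not a real scheduler: tasks are just (name, burst-length) pairs, and each returned list shows which task owns the CPU at each tick.

```python
from collections import deque

def fifo(tasks):
    """Non-preemptive: each task runs to completion in arrival order."""
    order = []
    for name, burst in tasks:
        order.extend([name] * burst)
    return order

def round_robin(tasks, quantum=2):
    """Preemptive: each task runs for at most `quantum` ticks, then is
    moved to the back of the ready queue if work remains."""
    queue = deque(tasks)
    order = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        order.extend([name] * run)
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, requeued
    return order

tasks = [("A", 3), ("B", 2), ("C", 1)]
print(fifo(tasks))         # ['A', 'A', 'A', 'B', 'B', 'C']
print(round_robin(tasks))  # ['A', 'A', 'B', 'B', 'C', 'A']
```

Note how task A is interrupted after its quantum expires and finishes only after B and C have had their turns.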

The most common names for addressing modes (names may differ among architectures):

Register
  Example: Add R4,R3          Meaning: R4 <- R4 + R3
  When used: when a value is in a register.

Immediate
  Example: Add R4,#3          Meaning: R4 <- R4 + 3
  When used: for constants.

Displacement
  Example: Add R4,100(R1)     Meaning: R4 <- R4 + M[100+R1]
  When used: accessing local variables.

Register deferred
  Example: Add R4,(R1)        Meaning: R4 <- R4 + M[R1]
  When used: accessing using a pointer or a computed address.

Indexed
  Example: Add R3,(R1+R2)     Meaning: R3 <- R3 + M[R1+R2]
  When used: array addressing (R1 = base of array, R2 = index amount).

Direct
  Example: Add R1,(1001)      Meaning: R1 <- R1 + M[1001]
  When used: accessing static data.

Memory deferred
  Example: Add R1,@(R3)       Meaning: R1 <- R1 + M[M[R3]]
  When used: if R3 is the address of a pointer p, this mode yields *p.

Autoincrement
  Example: Add R1,(R2)+       Meaning: R1 <- R1 + M[R2]; R2 <- R2 + d
  When used: stepping through arrays in a loop (R2 = start of array, d = size of an element).

Autodecrement
  Example: Add R1,-(R2)       Meaning: R2 <- R2 - d; R1 <- R1 + M[R2]
  When used: same as autoincrement; both can also be used to implement a stack as push and pop.

Scaled
  Example: Add R1,100(R2)[R3] Meaning: R1 <- R1 + M[100+R2+R3*d]
  When used: indexing arrays; may be applied to any base addressing mode in some machines.
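A toy simulation can make the effective-address calculations above concrete. The register and memory contents below are made-up example values; each helper mirrors one "Meaning" column entry (only the memory operand is computed, not the full Add).

```python
d = 4                                  # element size for the auto modes
R = {"R1": 100, "R2": 8, "R3": 200}    # register file (example values)
M = {100: 7, 104: 9, 108: 11, 200: 300, 300: 42, 1001: 5}  # memory

def displacement(base, disp):   return M[disp + R[base]]      # M[disp+R]
def register_deferred(base):    return M[R[base]]             # M[R]
def indexed(base, index):       return M[R[base] + R[index]]  # M[R1+R2]
def memory_deferred(base):      return M[M[R[base]]]          # M[M[R]]

def autoincrement(base):
    """Fetch M[R], then bump R by the element size d (side effect)."""
    value = M[R[base]]
    R[base] += d
    return value

print(displacement("R1", 4))     # M[104] -> 9
print(register_deferred("R1"))   # M[100] -> 7
print(indexed("R1", "R2"))       # M[100+8] -> 11
print(memory_deferred("R3"))     # M[M[200]] = M[300] -> 42
print(autoincrement("R1"))       # M[100] -> 7; R1 becomes 104
```

The autoincrement helper shows why that mode suits loops: each call both fetches an element and advances the pointer register to the next one.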
A Nassi-Shneiderman diagram (NSD) in computer programming is a graphical design representation for structured programming.[1] This type of diagram was developed in 1972 by Isaac Nassi and his graduate student Ben Shneiderman.[2] These diagrams are also called structograms,[3] as they show a program's structures.

HIPO, for hierarchical input process output, is a "popular 1970s systems analysis design aid and documentation technique"[1] for representing the modules of a system as a hierarchy and for documenting each module.

Flow chart: Symbols

A typical flowchart from older basic computer science textbooks may have the following kinds of symbols:

Start and end symbols
Represented as circles, ovals or rounded (fillet) rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit inquiry" or "receive product".

Arrows
Showing "flow of control". An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to. The line for the arrow can be solid or dashed. The meaning of an arrow with a dashed line may differ from one flowchart to another and can be defined in the legend.

Generic processing steps
Represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes".

Subroutines
Represented as rectangles with double-struck vertical edges; these are used to show complex processing steps which may be detailed in a separate flowchart. Example: PROCESS-FILES. One subroutine may have multiple distinct entry points or exit flows (see coroutine); if so, these are shown as labeled 'wells' in the rectangle, and control arrows connect to these 'wells'.

Input/Output
Represented as a parallelogram. Examples: Get X from the user; display X.

Prepare conditional
Represented as a hexagon. Shows operations which have no effect other than preparing a value for a subsequent conditional or decision step (see below).

Conditional or decision
Represented as a diamond (rhombus) showing where a decision is necessary, commonly a Yes/No question or True/False test. The conditional symbol is peculiar in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True and one corresponding to No or False. (The arrows should always be labeled.) More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken down further or replaced with the "pre-defined process" symbol.

Junction symbol
Generally represented with a black blob, showing where multiple control flows converge in a single exit flow. A junction symbol will have more than one arrow coming into it, but only one going out. In simple cases, one may simply have an arrow point to another arrow instead. These are useful to represent an iterative process (what in computer science is called a loop). A loop may, for example, consist of a connector where control first enters, processing steps, a conditional with one arrow exiting the loop, and one going back to the connector. For additional clarity, wherever two lines accidentally cross in the drawing, one of them may be drawn with a small semicircle over the other, showing that no junction is intended.

Labeled connectors
Represented by an identifying label inside a circle. Labeled connectors are used in complex or multi-sheet diagrams to substitute for arrows. For each label, the "outflow" connector must always be unique, but there may be any number of "inflow" connectors. In this case, a junction in control flow is implied.

Concurrency symbol
Represented by a double transverse line with any number of entry and exit arrows. These symbols are used whenever two or more control flows must operate simultaneously. The exit flows are activated concurrently when all of the entry flows have reached the concurrency symbol. A concurrency symbol with a single entry flow is a fork; one with a single exit flow is a join.

Lexical analysis
The word "lexical" is related to the term lexeme: an abstract unit of morphological analysis in linguistics. Lexical analysis or "tokenization" is the process of breaking a character stream into individual units called tokens. Tokens are strings that represent a category of symbols. Java programmers may know the legacy class StringTokenizer, which splits a string into units based on a set of delimiter characters (the newer String.split method does the same using a regular expression). A lexer or "scanner" is the software tool a compiler uses to tokenize source code. Tokenization is realized through pattern matching.
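The pattern-matching idea behind tokenization can be sketched with a regular-expression lexer. The token categories below (NUMBER, IDENT, OP) are a hypothetical miniature language, chosen only for illustration; real lexers work the same way with larger tables.

```python
import re

# One named group per token category; order matters (NUMBER before IDENT).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Break a character stream into (category, lexeme) tokens."""
    tokens = []
    for match in MASTER.finditer(text):
        kind = match.lastgroup
        if kind != "SKIP":               # whitespace is discarded
            tokens.append((kind, match.group()))
    return tokens

print(tokenize("count = count + 1"))
# [('IDENT', 'count'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '1')]
```

Each token pairs a category with the matched lexeme, which is exactly the stream a parser consumes next.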
In computing, a loader is the part of an operating system that is responsible for loading programs. It is one of the essential stages in starting a program: it places the program into memory and prepares it for execution. Loading a program involves reading the contents of the executable file (the file containing the program text) into memory, and then carrying out any other preparatory tasks required to make the executable runnable. Once loading is complete, the operating system starts the program by passing control to the loaded program code.

All operating systems that support program loading have loaders, apart from systems where code executes directly from ROM and highly specialized computer systems that only have a fixed set of specialised programs. In many operating systems the loader is permanently resident in memory, although some operating systems that support virtual memory may allow the loader to be located in a pageable region of memory.

In operating systems that support virtual memory, the loader may not actually copy the contents of the executable file into memory. Instead, it may simply declare to the virtual memory subsystem that there is a mapping between a region of memory allocated to contain the running program's code and the contents of the associated executable file (see memory-mapped file). The virtual memory subsystem is then made aware that pages in that region of memory need to be filled on demand if and when program execution actually hits those areas of unfilled memory. This means parts of a program's code may not be copied into memory until they are actually used, and unused code may never be loaded into memory at all.

Linker:
In computer science, a linker or link editor is a computer program that takes one or more object files generated by a compiler and combines them into a single executable program.

Assembler:
An assembler is a program that takes basic computer instructions and converts them into the pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language; others use the term assembly language.

Compiler:
A compiler is a special program that processes statements written in a particular programming language and turns them into the machine language, or "code", that a computer's processor uses.

Interpreter:
In computer science, an interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. An interpreter may be a program that either
1. executes the source code directly,
2. translates source code into some efficient intermediate representation (code) and immediately executes this, or
3. explicitly executes stored precompiled code[1] made by a compiler which is part of the interpreter system.

Memory Address Register:
The Memory Address Register (MAR) is a CPU register that stores either the memory address from which data will be fetched to the CPU or the address to which data will be sent and stored. In other words, the MAR holds the memory location of the data that needs to be accessed. When reading from memory, the data addressed by the MAR is fed into the MDR (memory data register) and then used by the CPU. When writing to memory, the CPU writes data from the MDR to the memory location whose address is stored in the MAR.
The Memory Address Register is half of a minimal interface between a microprogram and computer storage; the other half is the memory data register. Far more complex memory interfaces exist, but this is the least that can work.

Memory Buffer Register:
A Memory Buffer Register (MBR) is the register in a computer's processor (central processing unit, CPU) that stores the data being transferred to and from the immediate access store. It acts as a buffer, allowing the processor and memory units to act independently without being affected by minor differences in operation. A data item is copied to the MBR ready for use at the next clock cycle, when it can be either used by the processor or stored in main memory.

This register holds the contents of memory that are to be transferred from memory to other components, or vice versa. A word to be stored must first be transferred to the MBR, from where it goes to the specific memory location; arithmetic data to be processed in the ALU first goes to the MBR, then to the accumulator register, and is then processed in the ALU.
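The MAR/MDR handshake described above can be modelled in a few lines. This is a deliberately simplified sketch: memory is a dictionary, the addresses and data values are invented, and the point is only that the CPU never touches storage directly — addresses travel through the MAR and data through the MDR.

```python
class Memory:
    """Toy memory unit exposing only the MAR/MDR interface."""
    def __init__(self):
        self.cells = {}   # storage cells, address -> word
        self.mar = 0      # Memory Address Register
        self.mdr = 0      # Memory Data Register (buffer register)

    def read(self):
        """Fetch: the data addressed by MAR is placed in the MDR."""
        self.mdr = self.cells.get(self.mar, 0)

    def write(self):
        """Store: MDR contents go to the cell addressed by MAR."""
        self.cells[self.mar] = self.mdr

mem = Memory()
mem.mar, mem.mdr = 42, 99   # CPU loads address and data registers
mem.write()                 # M[42] <- 99
mem.mar = 42
mem.read()                  # MDR <- M[42]
print(mem.mdr)              # 99
```

Because the CPU only ever reads and writes the two registers, the processor and memory units can run at their own pace, which is exactly the buffering role the text attributes to the MBR.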

Types of operating system

Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design, and often aspects of both. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.

Multi-user
A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems, as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.

Multi-tasking vs. single-tasking
A multi-tasking operating system allows more than one program to be running at a time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive or co-operative. In pre-emptive multi-tasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multi-tasking, as does AmigaOS. Cooperative multi-tasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions, both Windows NT and Win9x, used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking.
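Cooperative multi-tasking, where each process must voluntarily give time back to the others, can be sketched with Python generators. This is an illustration of the scheduling idea only, not how any of the operating systems named above implement it: each `yield` is the task's voluntary give-up point, and a task that never yields would hog the CPU forever.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: one unit of work per turn, then it yields."""
    for i in range(steps):
        yield f"{name} step {i}"   # voluntarily hand control back

def cooperative_scheduler(tasks):
    """Round-robin over tasks that cooperate by yielding."""
    ready = deque(tasks)
    trace = []
    while ready:
        current = ready.popleft()
        try:
            trace.append(next(current))  # run until the task yields
            ready.append(current)        # it cooperated; requeue it
        except StopIteration:
            pass                         # task finished, drop it
    return trace

print(cooperative_scheduler([task("A", 2), task("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Contrast with pre-emptive multi-tasking: here the scheduler has no way to interrupt a task mid-step; fairness depends entirely on every task yielding promptly.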
Distributed
A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system.

Embedded

Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are examples of embedded operating systems.

A deadlock situation can arise if and only if all of the following conditions hold simultaneously in a system:[1]

Mutual Exclusion: At least one resource must be non-shareable.[1] Only one process can use the resource at any given instant of time.

Hold and Wait (Resource Holding): A process is currently holding at least one resource and requesting additional resources which are being held by other processes.

No Preemption: The operating system must not de-allocate resources once they have been allocated; they must be released voluntarily by the holding process.

Circular Wait: A process must be waiting for a resource which is being held by another process, which in turn is waiting for the first process to release the resource. In general, there is a set of waiting processes, P = {P1, P2, ..., PN}, such that P1 is waiting for a resource held by P2, P2 is waiting for a resource held by P3, and so on until PN is waiting for a resource held by P1.[1][7]
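The circular-wait condition is exactly a cycle in a "wait-for" graph. A minimal sketch, assuming for simplicity that each process waits on at most one other process (the general case uses the same cycle search on a multigraph); the process names are made-up examples:

```python
def has_circular_wait(waits_for):
    """waits_for maps each blocked process to the process it waits on.
    Returns True if following the chains ever loops back (deadlock)."""
    visited, on_path = set(), set()

    def dfs(p):
        if p in on_path:                       # back to a process on this chain
            return True
        if p in visited or p not in waits_for:  # already checked, or not blocked
            return False
        visited.add(p)
        on_path.add(p)
        cycle = dfs(waits_for[p])
        on_path.discard(p)
        return cycle

    return any(dfs(p) for p in list(waits_for))

# P1 waits on P2, P2 on P3, P3 on P1: the P = {P1, ..., PN} cycle above.
print(has_circular_wait({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
print(has_circular_wait({"P1": "P2", "P2": "P3"}))              # False
```

Deadlock detectors in real operating systems run essentially this search over the resource-allocation graph; breaking any one of the four conditions (for example by imposing a global lock ordering, which makes cycles impossible) prevents deadlock.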

1. Semaphores
2. Types of instructions (register to register, storage to storage)
3. Spooling
4. Virtual memory
5. Caching
6. Types of caches
7. Paging
8. Fragmentation
9. Tables in OS
10. Trojan horse: what type of program
11. Legitimate program means
12. Producer-consumer problem
13. Literal table, identifier table, terminal table
14. Macro processor, translator
15. Executive control modules
16. Reenterable means
17. History of OS
18. Booting
19. Types of computers

20. VSAM file
21. Different types of algorithms in OS
22. Storage placement strategies
23. Block device means
24. Types of software
25. Job control language
26. Relative organisation, key fielding, dynamic reallocation, hashing
27. Batch file contains what?
28. Relocate program means
29. Registers
30. Critical section
31. Latency
32. Transmission time
33. Seek time
