
Lexical Analyzer

CONTENTS
• INTRODUCTION
• AIM OF THE PROJECT
• PURPOSE OF THE PROJECT
• SYSTEM DESIGN
• GOALS
• SCOPE OF PROJECT
INTRODUCTION
• A lexical analyzer converts a stream of input characters into a stream of tokens. The different tokens that our lexical analyzer identifies are as follows:
• KEYWORDS: int, char, float, double, if, for, while, else, switch, struct, printf, scanf, case, break, return,
typedef, void
• IDENTIFIERS: main, fopen, getch, etc.
• NUMBERS: positive and negative integers, positive and negative floating point numbers.
• OPERATORS: +, ++, -, --, ||, *, ?, /, >, >=, <, <=, =, ==, &, &&.
• BRACKETS: [ ], { }, ( ).
• STRINGS: sets of characters enclosed within quotes
• COMMENT LINES: single-line and multi-line comments are recognized and ignored
• To distinguish identifiers from keywords, we use a symbol table that initially contains the predefined keywords. Tokens are read from an input file. If the encountered token is an identifier or a keyword, the lexical analyzer looks it up in the symbol table to check whether an entry for it exists. If an entry exists, we proceed to the next token; if not, that token and its token value are entered into the symbol table. All other tokens are written directly to an output file. A sketch of this lookup is given below.
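The keyword/identifier handling described above can be sketched in C as follows. This is only an illustration of the approach, not the project's actual code; the names symtab_lookup, symtab_install and classify, the table size, and the shortened keyword list are assumptions.

/* Symbol table seeded with keywords; identifiers are installed on first use. */
#include <stdio.h>
#include <string.h>

#define MAX_SYMBOLS 512

enum token_type { TOK_KEYWORD, TOK_IDENTIFIER };

struct symbol {
    char lexeme[64];
    enum token_type type;
};

static struct symbol symtab[MAX_SYMBOLS];
static int symtab_len = 0;

/* Return the index of the lexeme in the table, or -1 if it is absent. */
static int symtab_lookup(const char *lexeme)
{
    for (int i = 0; i < symtab_len; i++)
        if (strcmp(symtab[i].lexeme, lexeme) == 0)
            return i;
    return -1;
}

/* Install a new entry and return its index. */
static int symtab_install(const char *lexeme, enum token_type type)
{
    strncpy(symtab[symtab_len].lexeme, lexeme, sizeof symtab[0].lexeme - 1);
    symtab[symtab_len].type = type;
    return symtab_len++;
}

/* Classify a lexeme: keyword if pre-installed, otherwise an identifier. */
static enum token_type classify(const char *lexeme)
{
    int i = symtab_lookup(lexeme);
    if (i >= 0)
        return symtab[i].type;                  /* already in the table */
    symtab_install(lexeme, TOK_IDENTIFIER);     /* first occurrence: install it */
    return TOK_IDENTIFIER;
}

int main(void)
{
    /* Seed the table with a few of the predefined keywords. */
    const char *keywords[] = { "int", "char", "float", "if", "for", "while" };
    for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++)
        symtab_install(keywords[i], TOK_KEYWORD);

    printf("while -> %d\n", classify("while"));   /* prints 0 (keyword)    */
    printf("main  -> %d\n", classify("main"));    /* prints 1 (identifier) */
    return 0;
}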
AIM OF THE PROJECT
• The aim of the project is to develop a lexical analyzer that generates tokens for further processing by the compiler.
PURPOSE OF THE PROJECT

• The lexical features of a language can be specified using a type-3 (regular) grammar. The job of the lexical analyzer is to read the source program one character at a time and produce as output a stream of tokens. The tokens produced by the lexical analyzer serve as input to the next phase, the parser. Thus, the lexical analyzer's job is to translate the source program into a form more conducive to recognition by the parser.
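For example, an identifier can be specified by the regular definition letter (letter | digit)*, and an unsigned integer by digit digit*; both are type-3 (regular) patterns. As a rough illustration, not the project's code, the identifier pattern can be checked with a short C function such as the hypothetical is_identifier below.

/* Recognize the regular pattern  letter (letter | digit)*  with a simple loop. */
#include <ctype.h>
#include <stdio.h>

static int is_identifier(const char *s)
{
    if (!isalpha((unsigned char)s[0]))
        return 0;                                /* must start with a letter    */
    for (int i = 1; s[i] != '\0'; i++)
        if (!isalnum((unsigned char)s[i]))
            return 0;                            /* then letters or digits only */
    return 1;
}

int main(void)
{
    printf("%d\n", is_identifier("count1"));     /* 1: matches the pattern */
    printf("%d\n", is_identifier("1count"));     /* 0: starts with a digit */
    return 0;
}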
SYSTEM DESIGN
• Process:
• The lexical analyzer is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis. This interaction is summarized schematically in fig. a.

• Upon receiving a “get next token” command from the parser, the lexical analyzer reads the input characters until it can identify the next token.
• Sometimes, lexical analyzers are divided into a cascade of two phases, the first called “scanning”, and the
second “lexical analysis”.
• The scanner is responsible for doing simple tasks, while the lexical analyzer proper does the more complex
operations.
• The lexical analyzer we have designed takes its input from an input file. It reads one character at a time and continues until the end of the file is reached. It recognizes valid identifiers and keywords and specifies the token values of the keywords.
• It also identifies header files, #define statements, numbers, special characters, and various relational and logical operators, while ignoring white space and comments. It writes the output to a separate file, specifying the line number of each token. A sketch of this scanning loop is given below.
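The following is a simplified sketch of such a scanning loop, not the project's actual implementation. The names (get_next_token, the token constants, the file name input.c) are assumptions, and it only distinguishes identifiers, numbers, and single-character tokens while skipping white space and counting lines.

/* Read the input one character at a time and return one token per call. */
#include <ctype.h>
#include <stdio.h>

enum token { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_EOF };

static FILE *src;          /* input file the analyzer reads from        */
static int line_no = 1;    /* current line number, reported with tokens */

/* Reads characters until the next complete token can be identified. */
static enum token get_next_token(char *lexeme, size_t cap)
{
    int c;

    /* Skip white space, counting lines as we go. */
    while ((c = fgetc(src)) != EOF && isspace(c))
        if (c == '\n')
            line_no++;
    if (c == EOF)
        return TOK_EOF;

    size_t n = 0;
    if (isalpha(c)) {                            /* identifier or keyword */
        do {
            if (n + 1 < cap) lexeme[n++] = (char)c;
            c = fgetc(src);
        } while (c != EOF && isalnum(c));
        ungetc(c, src);
        lexeme[n] = '\0';
        return TOK_IDENT;
    }
    if (isdigit(c)) {                            /* integer constant */
        do {
            if (n + 1 < cap) lexeme[n++] = (char)c;
            c = fgetc(src);
        } while (c != EOF && isdigit(c));
        ungetc(c, src);
        lexeme[n] = '\0';
        return TOK_NUMBER;
    }
    lexeme[0] = (char)c;                         /* operators, brackets, etc. */
    lexeme[1] = '\0';
    return TOK_PUNCT;
}

int main(void)
{
    src = fopen("input.c", "r");                 /* hypothetical input file */
    if (!src) return 1;

    char lexeme[64];
    enum token t;
    while ((t = get_next_token(lexeme, sizeof lexeme)) != TOK_EOF)
        printf("line %d: token %d, lexeme \"%s\"\n", line_no, t, lexeme);

    fclose(src);
    return 0;
}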
GOALS
• To create tokens from the given input stream.
SCOPE OF PROJECT

• The lexical analyzer converts the input program's character stream into the valid words of the language, known as tokens.

• The parser examines the sequence of these tokens and identifies the language constructs occurring in the input program. The parser and the lexical analyzer work hand in hand: whenever the parser needs further tokens to proceed, it requests them from the lexical analyzer. The lexical analyzer in turn scans the remaining input stream and returns the next token occurring there. Apart from that, the lexical analyzer also participates in the creation and maintenance of the symbol table, because it is the first module to identify the occurrence of a symbol. If a symbol is being defined for the first time, it needs to be installed in the symbol table, and the lexical analyzer is typically the module that does so, as illustrated below.
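A rough illustration of this hand-in-hand interaction, reusing the hypothetical helpers from the earlier sketches (get_next_token, symtab_lookup, symtab_install), might look like the following; it is not the project's actual parser, and it links only together with those sketches.

#include <stddef.h>

enum token { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_EOF };

/* Hypothetical helpers from the earlier sketches. */
enum token get_next_token(char *lexeme, size_t cap);
int symtab_lookup(const char *lexeme);
int symtab_install(const char *lexeme, int type);

void parse(void)
{
    char lexeme[64];
    enum token t;

    /* Whenever the parser needs another token, it asks the lexical analyzer. */
    while ((t = get_next_token(lexeme, sizeof lexeme)) != TOK_EOF) {
        /* First occurrence of an identifier: install it in the symbol table. */
        if (t == TOK_IDENT && symtab_lookup(lexeme) < 0)
            symtab_install(lexeme, (int)t);
        /* ...grammar rules would consume the token here... */
    }
}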
