Control section in system software

In the numeric format, if the sign appears in the last byte it is known as trailing numeric. If the sign appears in a separate byte preceding the first digit, it is called leading separate numeric.

What are the addressing modes used in the VAX architecture? Register direct, register deferred, autoincrement and autodecrement, program-counter relative, base relative, index register mode and indirect addressing are the addressing modes in the VAX architecture. How do you calculate the actual address in the case of register indirect with immediate index mode? Here the target address is calculated by adding the immediate index (displacement) to the contents of the specified register. The test device (TD) instruction tests whether the addressed device is ready to send or receive a byte of data.

The condition code is set to indicate the result of this test. Define software. Software is a set of programs written in any programming language.

Software is divided into two types: system software and application software. Define the basic functions of an assembler. An assembler translates mnemonic operation codes into their machine-language equivalents and assigns machine addresses to symbolic labels. What is meant by assembler directives? Give an example. These are statements that are not translated into machine instructions, but they provide instructions to the assembler itself. Examples: START, END, BYTE, WORD, RESB, RESW.

What is a forward reference? It is a reference to a label that is defined later in the program. If we attempt to translate the program line by line, we will be unable to process the statement in line 10 because we do not know the address that will be assigned to RETADR; the address is assigned later, in line 80 of the program.
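The two-pass idea above can be sketched in a few lines. The mini-assembly program, word size and symbol names below are invented for illustration; pass 1 assigns an address to every label, and pass 2 translates using the completed symbol table, so forward references resolve cleanly.

```python
# A two-pass sketch of forward-reference resolution on a toy program.
program = [
    ("",       "JSUB", "RETADR"),   # forward reference: RETADR not yet defined
    ("",       "LDA",  "FIVE"),
    ("RETADR", "RESW", "1"),        # RETADR defined here, on a later line
    ("FIVE",   "WORD", "5"),
]

WORD_SIZE = 3  # bytes per statement in this toy machine

# Pass 1: build the symbol table by walking the program once.
symtab, loc = {}, 0
for label, op, operand in program:
    if label:
        symtab[label] = loc
    loc += WORD_SIZE

# Pass 2: every operand label can now be looked up, even forward references.
resolved = [(op, symtab.get(operand, operand)) for _, op, operand in program]
print(symtab)       # {'RETADR': 6, 'FIVE': 9}
print(resolved[0])  # ('JSUB', 6)
```

A one-pass assembler, by contrast, would have to remember the unresolved reference and patch it once RETADR is finally defined.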

What are the three different records used in an object program? The header record, text record and end record are the three records used in an object program. The header record contains the program name, starting address and length of the program.

The text record contains the translated instructions and data of the program. The end record marks the end of the object program and specifies the address in the program where execution is to begin. The operation code table (OPTAB) contains the mnemonic operation codes and their machine-language equivalents. In some assemblers it may also contain information about instruction format and length. OPTAB is usually organized as a hash table, with the mnemonic operation code as the key. What are the symbol-defining statements generally used in assemblers? 'EQU' and 'ORG' are the symbol-defining statements.
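An OPTAB organized as a hash table, as described above, can be sketched directly with a dictionary. The opcodes and formats below follow SIC/XE conventions but should be treated as illustrative entries, not a complete table.

```python
# OPTAB modeled as a hash table keyed by mnemonic (illustrative SIC/XE entries).
OPTAB = {
    "LDA":  {"opcode": 0x00, "format": 3},
    "STA":  {"opcode": 0x0C, "format": 3},
    "ADDR": {"opcode": 0x90, "format": 2},
    "RSUB": {"opcode": 0x4C, "format": 3},
}

def lookup(mnemonic):
    """Return (opcode, instruction format) or raise for an invalid mnemonic."""
    try:
        entry = OPTAB[mnemonic]            # O(1) hash-table lookup
    except KeyError:
        raise ValueError(f"invalid operation code: {mnemonic}")
    return entry["opcode"], entry["format"]

print(lookup("ADDR"))   # (144, 2)
```

In pass 1 the assembler consults OPTAB only to find the instruction length; in pass 2 it retrieves the opcode to generate object code.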

Define relocatable program. An object program that contains the information necessary to perform the required modifications to the object code, depending on the starting location of the program at load time, is known as a relocatable program.

Differentiate absolute expression and relative expression. If the result of an expression is an absolute value (a constant), it is known as an absolute expression; if its value depends on the starting address of the program, it is a relative expression. Write the steps required to translate a source program to an object program. The location counter (LOCCTR) variable is used to assign addresses to the symbols; after each source statement is processed, LOCCTR is incremented by the length of that statement. Define load-and-go assembler.

A one-pass assembler that generates its object code in memory for immediate execution is known as a load-and-go assembler. Here no object program is written out, and hence there is no need for a loader. What are the two different types of jump statements used in the MASM assembler? Near jumps and far jumps. What is the use of the base register table in the AIX assembler? A base register table is used to remember which of the general-purpose registers are currently available as base registers, and also the base addresses they contain.

The USING statement causes an entry to be made in the table, and the DROP statement removes the corresponding table entry. Define modification record and give its format. This record contains information about modifications to be made to the object code during program relocation. Format: Col 1: M; Col 2-7: starting location of the address field to be modified (hexadecimal); Col 8-9: length of the field to be modified, in half-bytes.

State whether the following are performed in PASS 1 or PASS 2 of the assembler: a. Object code generation b. Literals added to literal table c. Listing printed d. Address location of local symbols. Answer: a. Object code generation — PASS 2 b. Literals added to literal table — PASS 1 c. Listing printed — PASS 2 d. Address location of local symbols — PASS 1. What is meant by machine-independent assembler features?

The assembler features that do not depend upon the machine architecture are known as machine-independent assembler features, e.g. program blocks and literals. How are register-to-register instructions translated in an assembler? In the case of register-to-register instructions, the operand field contains the register names. During translation, the mnemonic operation code is first converted into its machine-language equivalent with the help of OPTAB.

What is meant by external references? An assembler program can be divided into many sections known as control sections, and each control section can be loaded and relocated independently of the others. If an instruction in one control section needs to refer to an instruction or data in another control section, such references between control sections are called external references. Define control section. A control section is a part of the program that maintains its identity after assembly; each control section can be loaded and relocated independently of the others.

Control sections are most often used for subroutines. The major benefit of using control sections is increased flexibility. EXTDEF names external symbols that are defined in a particular control section and may be used by other sections. EXTREF names external symbols that are referred to in a particular control section but defined in another control section. What are the basic functions of loaders? Loading — brings the object program into memory for execution. Relocation — modifies the object program so that it can be loaded at an address different from the location originally specified. Linking — combines two or more separate object programs and supplies the information needed to reference them.
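The relocation function named above can be sketched as the application of one modification record. The object code is held as a hex string and the field position and instruction below are made-up examples; a real loader would parse the record columns described earlier.

```python
# A sketch of a relocating loader applying one modification record: the record
# names an address field (a starting position and a width in half-bytes) whose
# value must be increased by the actual load address.
def apply_modification(hexcode, start_nibble, nibbles, load_addr):
    """Add load_addr into the `nibbles`-wide field at `start_nibble`."""
    field = int(hexcode[start_nibble:start_nibble + nibbles], 16)
    field = (field + load_addr) & ((1 << (4 * nibbles)) - 1)  # wrap to width
    patched = format(field, f"0{nibbles}X")
    return hexcode[:start_nibble] + patched + hexcode[start_nibble + nibbles:]

# Relocate the 5-nibble address field of a made-up instruction word when the
# program is loaded at 0x5000 instead of 0x0000:
print(apply_modification("4B101036", 3, 5, 0x5000))   # 4B106036
```

An absolute loader skips this step entirely, which is why it can only load a program at its originally assembled address.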

Define absolute loader. A loader that is used only for loading, with no relocation or linking, is known as an absolute loader. What is meant by a bootstrap loader?

This is a special type of absolute loader which loads the first program to be run by the computer. What are relative (relocating) loaders? Loaders that allow a program to be loaded anywhere in memory, modifying the address-dependent parts of the object program as it is loaded, are called relative (relocating) loaders. Define conditional macro expansion. If the expansion of a macro depends upon some condition in the macro definition, evaluated against the arguments supplied in the macro invocation, it is called conditional macro expansion. What is the use of a macro-time variable? Macro-time variables can be used to store working values during macro expansion.

What are the statements used for conditional macro expansion? IF, ELSE, ENDIF, WHILE and ENDW are the statements used for conditional macro expansion. What is meant by positional parameters? If the parameters and arguments are associated with each other according to their positions in the macro prototype and the macro invocation statement, then the parameters in the macro definition are called positional parameters.

What is a nested macro call? A statement in which a macro calls another macro is called a nested macro call. In a nested macro call, the call is made by the outer macro and the macro called is the inner macro. How is a macro processed using two passes? Pass 1: processing of macro definitions. Pass 2: actual macro expansion. What is meant by a line-by-line macro processor, and what is its advantage? This macro processor reads the source program statements, processes them, and passes the output lines to the language translator as they are generated, instead of writing them to an expanded file.

Give the advantages of general-purpose macro processors. What is meant by general-purpose macro processors? Macro processors that are not dependent on any particular programming language, but can be used with a variety of different languages, are known as general-purpose macro processors. What are the important factors considered while designing a general-purpose macro processor?

What is the symbol used to generate unique labels? The '$' symbol, concatenated with a counter, is used to generate unique labels during macro expansion. How are nested macro calls executed? The execution of nested macro calls follows the LIFO (last-in, first-out) rule: the expansion of the latest macro call is completed first.
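The LIFO behaviour of nested macro calls can be sketched with recursion, which gives the expansion stack implicitly. The macro names and bodies below are invented for illustration; the point is that the inner call's expansion completes before the outer expansion resumes.

```python
# A sketch of LIFO expansion of nested macro calls.
MACROS = {
    "OUTER": ["load", "INNER", "store"],   # OUTER invokes INNER mid-body
    "INNER": ["push", "pop"],
}

def expand(name, out=None):
    out = [] if out is None else out
    for line in MACROS[name]:
        if line in MACROS:         # nested call: recurse (implicit LIFO stack)
            expand(line, out)      # the innermost call finishes first
        else:
            out.append(line)       # ordinary line: emit as-is
    return out

print(expand("OUTER"))   # ['load', 'push', 'pop', 'store']
```

The output shows INNER's lines appearing in the middle of OUTER's expansion, exactly where the nested call occurred.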

Mention the tasks involved in macro expansion. How is the pass structure of a macro assembler designed? To design the structure of a macro assembler, the functions of the macro preprocessor and the conventional assembler are merged, and the merged functions are then structured into the passes of the macro assembler. Define interactive editor. An interactive editor is a computer program that allows a user to create and revise a target document. The term document includes objects such as computer programs, text, equations, tables, diagrams, line art and photographs: anything that one might find on a printed page.

What are the tasks performed in the editing process? Select the part of the target document to be viewed and manipulated; determine how to format this view on-line and how to display it; specify and execute operations that modify the target document; and update the view appropriately. What is the function performed in the editing phase?

In the actual editing phase, the target document is created or altered with a set of operations such as insert, delete, replace, move and copy. Define locator device. The most common locator devices for editing applications are the mouse and the data tablet. What is the function performed by a voice-input device? Voice-input devices, which translate spoken words into their textual equivalents, may prove to be the text input devices of the future. Voice recognizers are currently available for command input on some systems.

What are called tokens? The lexical analyzer traverses the source program one character at a time, grouping the characters of the source program into a sequence of atomic units called tokens.

Name some typical tokens. Identifiers, keywords, constants, operators and punctuation symbols such as commas and parentheses are typical tokens. What is meant by a lexeme? The sequence of characters that forms a token is said to be a lexeme. Mention the main disadvantage of an interpreter.

The main disadvantage of an interpreter is that the execution time of an interpreted program is slower than that of a corresponding compiled object program. What is meant by code optimization? Code optimization is designed to improve the intermediate code, which helps the object program to run faster and take less space. What is an error handler? The error handler is used to check whether there is an error in the program.

If there is any error, it should warn the programmer, with instructions on how to proceed from phase to phase. Name some text editors. What are debug monitors used for? Debug monitors are used to obtain information for the localization of errors. Mention the features of word processors. What are the phases in performing the editing process? a. Traveling phase b. Filtering phase c. Formatting phase d. Editing phase. Define traveling phase. This phase specifies the region of interest.

Traveling is achieved using operations such as next screenful, bottom and find pattern. Filtering phase: the selection of what is to be viewed and manipulated is given by filtering. Editing phase: in this phase, the target document is altered with a set of operations such as insert, delete, replace, move and copy.

Define user interface. A user interface is that which allows the user to communicate with the system in order to perform certain tasks; it is generally designed to make the computer easier to use. Define input device. An input device is an electromechanical device which accepts data from the outside world and translates it into a form the computer can interpret. Define output device. Output devices allow the user to view the elements being edited and the results of the editing operations.

Define editor structure. The command language processor accepts input from the user's input devices and analyzes the tokens and syntactic structure of the commands. Define interactive debugging systems. An interactive debugging system provides programmers with facilities that aid in the testing and debugging of programs. Its key aspects are: 1. Debugging functions and capabilities 2. Relationship with other parts of the system 3. User interface criteria.

What are the basic types of computing environments used in editor functions? i. Time-sharing ii. Stand-alone iii. Distributed. What are the methods in the interaction language of a text editor? a. Typing-oriented or text-command-oriented method b. Function-key interfaces c. Menu-oriented method.

Fragmentation. 1. External fragmentation: total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used. 2. Internal fragmentation: the memory block assigned to a process is larger than requested, so some portion of the memory is left unused, as it cannot be used by another process. External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block.

External fragmentation is avoided by using the paging technique. Paging is a technique in which physical memory is broken into fixed-size blocks called frames, and logical memory into blocks of the same size called pages (the size is a power of 2). When a process is to be executed, its pages are loaded into any available memory frames.
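The page-to-frame mapping just described can be sketched as a simple address translation. The page size, page-table contents and addresses below are invented examples; the key step is splitting a logical address into a page number and an offset.

```python
# A sketch of paging address translation: logical address -> (page, offset),
# then page table maps page number -> frame number.
PAGE_SIZE = 1024                  # must be a power of 2

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (invented)

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # a real MMU would raise a page fault on a miss
    return frame * PAGE_SIZE + offset

print(translate(1050))   # page 1, offset 26 -> frame 2 -> 2074
```

Because the offset is carried over unchanged, pages and frames must be the same size, which is why the page size is fixed system-wide.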

The logical address space of a process can be non-contiguous, and a process is allocated physical memory whenever free memory frames are available. The operating system keeps track of all free frames; it needs n free frames to run a program of size n pages. Segmentation. Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information.

For example, there may be a code segment and a data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging. Buffering copes with speed differences between two devices: a slow device may write data into a buffer, and when the buffer is full, the entire buffer is sent to the fast device all at once. So that the slow device still has somewhere to write while this is going on, a second buffer is used, and the two buffers alternate as each becomes full.

This is known as double buffering. Double buffering is often used in animated graphics, so that one screen image can be generated in one buffer while the other, completed buffer is displayed on the screen. This prevents the user from ever seeing any half-finished screen image.
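The alternating-buffer scheme above can be sketched as follows. The buffer size, byte values and the single-threaded "flush" are simplifications invented for illustration; real double buffering overlaps the producer and consumer in time.

```python
# A sketch of double buffering: the slow producer fills one buffer while the
# other, already-full buffer is drained by the fast consumer.
BUF_SIZE = 4
buffers = [[], []]
active = 0                  # index of the buffer the slow device writes into
consumed = []               # everything the fast device has received so far

def slow_write(byte):
    global active
    buffers[active].append(byte)
    if len(buffers[active]) == BUF_SIZE:   # buffer full: swap roles and flush
        full, active = active, 1 - active  # writer moves to the other buffer
        consumed.extend(buffers[full])     # fast device drains the full one
        buffers[full].clear()

for b in range(10):
    slow_write(b)

print(consumed)   # two full buffers flushed; bytes 8 and 9 still pending
```

Note that the last partial buffer is never flushed here; a real driver would flush the remainder on close or after a timeout.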

Data transfer size differences. Buffers are used in particular in networking systems to break messages up into smaller packets for transfer, and then for re-assembly at the receiving side. To support copy semantics.

For example, when an application makes a request for a disk write, the data is copied from the user's memory area into a kernel buffer. The application can then change its copy of the data, but the data that eventually gets written out to disk is the version at the time the write request was made. Virtual Memory. This section describes the concepts of virtual memory, demand paging and various page replacement algorithms. Virtual memory is a technique that allows the execution of processes which are not completely available in memory.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. The following are situations in which the entire program need not be fully loaded into main memory. Virtual memory is commonly implemented by demand paging.

It can also be implemented in a segmentation system: demand segmentation can likewise be used to provide virtual memory. Page replacement algorithms. Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a new page of memory needs to be allocated.

Paging happens whenever a page fault occurs and a free page cannot be used for allocation, either because no pages are available or because the number of free pages is lower than a required threshold. The time spent waiting for page-ins determines the quality of a page replacement algorithm: the less time spent waiting, the better the algorithm. A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs in primary storage and processor time of the algorithm itself.

There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. Reference string. The string of memory references is called a reference string.
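The evaluation method just described can be sketched for the FIFO algorithm: run it on a reference string with a fixed number of frames and count the faults. The reference string and frame count below are example inputs, not data from any particular system.

```python
# A sketch of FIFO page replacement run on a reference string, counting faults.
from collections import deque

def fifo_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:               # page fault
            faults += 1
            if len(frames) == n_frames:      # memory full: evict oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(ref, 3))   # 10 faults with 3 frames
```

Swapping in a different eviction rule (LRU, optimal) changes only the choice of victim page, so the same harness can compare algorithms on the same reference string.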

Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data. Translation look-aside buffer (TLB): a translation look-aside buffer is a memory cache that stores recent translations of virtual memory addresses to physical addresses for faster retrieval.

When a virtual memory address is referenced by a program, the search starts in the CPU. First, instruction caches are checked. Then the TLB is checked for a quick reference to the location in physical memory. When an address is searched for in the TLB and not found, the page table in physical memory must be consulted, in an operation known as a page walk.

As virtual memory addresses are translated, values referenced are added to TLB. TLBs also add the support required for multi-user computers to keep memory separate, by having a user and a supervisor mode as well as using permissions on read and write bits to enable sharing.
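The behaviour described in the last two paragraphs can be sketched as a small cache in front of the page table. The capacity, mapping and oldest-entry eviction rule below are invented simplifications; real TLBs are associative hardware with their own replacement policies.

```python
# A sketch of a TLB: hits avoid the slower page walk, and translations are
# added to the TLB on every miss.
page_table = {n: n + 100 for n in range(64)}   # invented page -> frame mapping
tlb = {}
TLB_CAPACITY = 4
stats = {"hit": 0, "miss": 0}

def tlb_translate(page):
    if page in tlb:
        stats["hit"] += 1
        return tlb[page]
    stats["miss"] += 1
    frame = page_table[page]                   # the slow "page walk"
    if len(tlb) == TLB_CAPACITY:               # naive eviction: drop oldest entry
        tlb.pop(next(iter(tlb)))
    tlb[page] = frame                          # cache the new translation
    return frame

for p in [1, 2, 1, 1, 3]:
    tlb_translate(p)
print(stats)   # {'hit': 2, 'miss': 3}
```

The hit/miss counts make the cache-thrash point above concrete: a workload that touches more pages than the TLB holds degenerates into constant misses and page walks.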

TLBs can suffer performance issues from multitasking and code errors. This performance degradation is called a cache thrash. Cache thrash is caused by an ongoing computer activity that fails to progress due to excessive use of resources or conflicts in the caching system.

Operating System Security. This section describes various security-related aspects such as authentication, one-time passwords, threats and security classifications. A computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms, etc. We are going to discuss the following topics in this section. One-time passwords provide additional security along with normal authentication.

In a one-time password system, a unique password is required every time a user tries to log into the system. Once a one-time password has been used, it cannot be used again. One-time passwords are implemented in various ways: the system may ask for numbers corresponding to a few randomly chosen alphabets, or for a secret id which is to be generated anew prior to every login. An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes perform malicious tasks, this is known as a program threat.

One of the common examples of a program threat is a program installed on a computer which can store and send user credentials via the network to some hacker. Following is a list of some well-known program threats. A virus is generally a small piece of code embedded in a program, and it is hard to detect. System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network, in what is called a program attack.

Following is a list of some well-known system threats. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require.

Worm processes can even shut down an entire network. This definition motivates a generic model of language-processing activities. We refer to the collection of language processor components engaged in analyzing a source program as the analysis phase of the language processor. Components engaged in synthesizing a target program constitute the synthesis phase. Hardware is just a piece of mechanical device whose functions are controlled by compatible software.

Hardware understands instructions in the form of electronic charge, which is the counterpart of binary language in software programming. Binary language has only two alphabets, 0 and 1. To instruct, the hardware codes must be written in binary format, which is simply a series of 1s and 0s.

It would be a difficult and cumbersome task for computer programmers to write such codes, which is why we have compilers to write such codes. Language Processing System We have learnt that any computer system is made of hardware and software.

The hardware understands a language, which humans cannot understand. So we write programs in high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine. This is known as Language Processing System.

Preprocessors may perform the following functions. Macro processing: a preprocessor may allow a user to define macros that are shorthands for longer constructs. File inclusion: a preprocessor may include header files into the program text.

Rational preprocessors: these preprocessors augment older languages with more modern flow-of-control and data-structuring facilities. An important part of a compiler is showing errors to the programmer. Programmers began to use mnemonic symbols for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language.

Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler is called the source program; the output is a machine-language translation called the object program.

What is an assembler? A tool called an assembler translates assembly language into binary instructions. Symbolic names for operations and locations are one facet of this representation.

An assembler reads a single assembly language source file and produces an object file containing machine instructions and bookkeeping information that helps combine several object files into a program.

Figure 1 illustrates how a program is built. Most programs consist of several files, also called modules, that are written, compiled, and assembled independently. A program may also use prewritten routines supplied in a program library. A module typically contains references to subroutines and data defined in other modules and in libraries.

The code in a module cannot be executed while it contains unresolved references to labels in other object files or libraries. Another tool, called a linker, combines a collection of object and library files into an executable file, which a computer can run.

The assembler provides: a. Access to all of the machine's resources by the assembled program; this includes access to the entire instruction set of the machine. b. A means for specifying run-time locations of program and data in memory. c. Symbolic labels for the representation of constants and addresses. d. Assemble-time arithmetic. e. The use of synthetic instructions. f. Emission of machine code in a form that can be loaded and executed. g. Reporting of syntax errors and production of program listings. h. An interface to the module linker and program loader.

i. Expansion of programmer-defined macro routines. In a pure interpreter, the source is analyzed afresh on every execution, which requires more overhead and makes the process complex. In an impure interpreter, by contrast, the source code is subjected to some initial preprocessing before it is eventually interpreted; the actual analysis overhead is then reduced, enabling faithful and efficient interpretation. Java also uses an interpreter. The process of interpretation can be carried out in the following phases.

1. Lexical analysis 2. Syntax analysis 3. Semantic analysis 4. Direct execution. Loader and link-editor: once the assembler produces an object program, that program must be placed into memory and executed.

The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed, but then the programmer would have to retranslate the program on each execution, wasting translation time. A loader is used to overcome these problems of wasted translation time and memory. It is also expected that a compiler should make the target code efficient and optimized in terms of time and space. Compiler design principles provide an in-depth view of the translation and optimization process.

It includes lexical, syntax and semantic analysis as the front end, and code generation and optimization as the back end. Analysis phase. Known as the front end of the compiler, the analysis phase reads the source program, divides it into core parts and then checks for lexical, grammar and syntax errors. The analysis phase generates an intermediate representation of the source program and a symbol table, which are fed to the synthesis phase as input.

Analysis and Synthesis phase of compiler Synthesis Phase Known as the back-end of the compiler, the synthesis phase generates the target program with the help of intermediate source code representation and symbol table. A compiler can have many phases and passes. Pass : A pass refers to the traversal of a compiler through the entire program.

Phase : A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes and yields output that can be used as input for the next stage. A pass can have more than one phase. A common division into phases is described below.

In some compilers, the ordering of phases may differ slightly, some phases may be combined or split into several phases or some extra phases may be inserted between those mentioned below.

Lexical analysis. This is the initial part of reading and analysing the program text: the text is read and divided into tokens, each of which corresponds to a symbol in the programming language, e.g. a variable name, keyword or number. Syntax analysis. This phase takes the list of tokens produced by the lexical analysis and arranges them in a tree structure called the syntax tree that reflects the structure of the program.

This phase is often called parsing. Type checking. This phase analyses the syntax tree to determine whether the program violates certain consistency requirements, e.g. whether a variable is used but not declared, or is used in a context that its type does not allow. Intermediate code generation. The program is translated into a simple machine-independent intermediate language. Register allocation. The symbolic variable names used in the intermediate code are translated into numbers, each of which corresponds to a register in the target machine code.

In terms of programming languages, words are objects like variable names, numbers, keywords etc. Lexical analysis is the first phase of a compiler. It takes the modified source code from language preprocessors that are written in the form of sentences.

The lexical analyzer breaks these syntaxes into a series of tokens, by removing any whitespace or comments in the source code.

If the lexical analyzer finds a token invalid, it generates an error. The lexical analyzer works closely with the syntax analyzer. It reads character streams from the source code, checks for legal tokens, and passes the data to the syntax analyzer when it demands.

Tokens. Lexemes are the sequences of (alphanumeric) characters making up a token. There are some predefined rules for every lexeme to be identified as a valid token. These rules are defined by grammar rules, by means of a pattern. A pattern explains what can be a token, and these patterns are defined by means of regular expressions.
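The pattern-based token rules just described can be sketched with regular expressions. The token names and patterns below are illustrative choices for a toy language, not a standard set.

```python
# A sketch of a lexical analyzer: regex patterns classify lexemes into tokens,
# and whitespace is discarded.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":          # whitespace is not a token
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("count = old + 12"))
```

Each (token-name, lexeme) pair is exactly the token/lexeme distinction drawn above: "NUMBER" is the token, "12" is its lexeme.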

Syntax Analysis Introduction Syntax analysis or parsing is the second phase of a compiler. In this chapter, we shall learn the basic concepts used in the construction of a parser. We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules. But a lexical analyzer cannot check the syntax of a given sentence due to the limitations of the regular expressions.

Regular expressions cannot check balanced tokens, such as parentheses. Syntax analyzers. A syntax analyzer or parser takes the input from a lexical analyzer in the form of token streams.

The parser analyzes the source code (token stream) against the production rules to detect any errors in the code. The output of this phase is a parse tree. In this way, the parser accomplishes two tasks: parsing the code looking for errors, and generating a parse tree as the output of the phase. Parsers are expected to parse the whole code even if some errors exist in the program.

Parsers use error recovering strategies, which we will learn later in this chapter. Parse Tree A parse tree is a graphical depiction of a derivation.

It is convenient to see how strings are derived from the start symbol. The start symbol of the derivation becomes the root of the parse tree.

Let us see this with an example from the last topic. Types of parsing. Syntax analyzers follow production rules defined by means of a context-free grammar. The way the production rules are implemented (derivation) divides parsing into two types: top-down parsing and bottom-up parsing. Top-down parsing. When the parser starts constructing the parse tree from the start symbol and then tries to transform the start symbol into the input, it is called top-down parsing. Recursive descent parsing is called recursive as it uses recursive procedures to process the input.

Recursive descent parsing suffers from backtracking. This technique may process the input string more than once to determine the right production. Recursive Descent Parsing Recursive descent is a top-down parsing technique that constructs the parse tree from the top and the input is read from left to right. It uses procedures for every terminal and non-terminal entity. This parsing technique recursively parses the input to make a parse tree, which may or may not require back-tracking.

But if the associated grammar is not left-factored, it cannot avoid backtracking. A form of recursive-descent parsing that does not require any backtracking is known as predictive parsing. This parsing technique is regarded as recursive as it uses a context-free grammar, which is recursive in nature. Backtracking. Top-down parsers start from the root node (start symbol) and match the input string against the production rules, replacing them if matched.

So the top-down parser advances to the next input letter. If a production does not match the next input symbol, the parser backtracks and tries an alternative production; once the parser has matched all the input letters in order, the string is accepted. Predictive parser. A predictive parser is a recursive descent parser which has the capability to predict which production is to be used to replace the input string. The predictive parser does not suffer from backtracking.

To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input symbol. To make the parser backtracking-free, the predictive parser puts some constraints on the grammar and accepts only the class of grammars known as LL(k) grammars. Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. The parser refers to the parsing table to take a decision on the input and stack-element combination.

In recursive descent parsing, the parser may have more than one production to choose from for a single instance of input, whereas in predictive parser, each step has at most one production to choose.

There might be instances where no production matches the input string, making the parsing procedure fail. LL grammar is a subset of context-free grammar, but with some restrictions that give a simplified version, in order to achieve easy implementation. LL grammars can be implemented by means of two algorithms: recursive descent and table-driven parsing.
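The recursive-descent option can be sketched for a toy LL(1) grammar. The grammar below (E -> T { '+' T }, T -> NUMBER | '(' E ')') is an invented example; one token of look-ahead decides every step, so no backtracking is needed.

```python
# A sketch of a predictive recursive-descent parser for the toy grammar:
#   E -> T { '+' T }      T -> NUMBER | '(' E ')'
def parse_expr(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":   # look-ahead decides
        right, pos = parse_term(tokens, pos + 1)
        node = ("+", node, right)                     # build the parse tree
    return node, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if tok == "(":                                    # parenthesized sub-expression
        node, pos = parse_expr(tokens, pos + 1)
        assert tokens[pos] == ")", "expected ')'"
        return node, pos + 1
    assert tok.isdigit(), f"unexpected token {tok!r}"
    return int(tok), pos + 1

tree, _ = parse_expr(["1", "+", "(", "2", "+", "3", ")"])
print(tree)   # ('+', 1, ('+', 2, 3))
```

One procedure per nonterminal, with the current token selecting the production, is exactly the predictive-parsing discipline described above; a table-driven LL(1) parser encodes the same decisions in a parsing table instead of code.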

An LL parser is denoted LL(k). The first L in LL(k) stands for parsing the input from left to right, the second L stands for left-most derivation, and k represents the number of look-aheads. Bottom-up parsing. As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the parse tree up to the start symbol. Bottom-up parsing starts from the leaf nodes of a tree and works upward until it reaches the root node.


