
MAVE EDUCATION

Frequently Asked Terminologies from the Internet World


URL
Abbreviation of Uniform Resource Locator, the global address of documents and other resources on the
World Wide Web.

The first part of the address indicates what protocol to use, and the second part specifies the IP address or
the domain name where the resource is located.

For example, the two URLs below point to two different files at the domain pcwebopedia.com. The first
specifies an executable file that should be fetched using the FTP protocol, while the second specifies a
Web page that should be fetched using the HTTP protocol.
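The example URLs themselves did not survive in this copy, so the two in the sketch below are illustrative (the file names are assumptions). The short Python snippet uses the standard urllib.parse module to show how the protocol and the domain parts of a URL are separated:

    # Illustrative URLs matching the description above; the file names are assumptions.
    from urllib.parse import urlparse

    examples = [
        "ftp://pcwebopedia.com/example-program.exe",   # an executable fetched with FTP
        "http://pcwebopedia.com/index.html",           # a Web page fetched with HTTP
    ]

    for url in examples:
        parts = urlparse(url)
        print(parts.scheme, parts.netloc, parts.path)  # protocol, domain, resource path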

RAID
Short for Redundant Array of Independent (or Inexpensive) Disks, a category of disk storage that employs two
or more drives in combination for fault tolerance and performance. RAID is used frequently on servers but is
not generally necessary for personal computers.

OSI
(pronounced as separate letters) Short for Open Systems Interconnection, an ISO standard for worldwide
communications that defines a networking framework for implementing protocols in seven layers. Control is
passed from one layer to the next, starting at the application layer in one station, proceeding down to the
bottom layer, over the channel to the next station, and back up the hierarchy.

At one time, most vendors agreed to support OSI in one form or another, but OSI was too loosely defined
and proprietary standards were too entrenched. Except for the OSI-compliant X.400 and X.500 e-mail and
directory standards, which are widely used, what was once thought to become the universal
communications standard now serves as the teaching model for all other protocols.

Most of the functionality in the OSI model exists in all communications systems, although two or three OSI
layers may be incorporated into one.

OSI is also referred to as the OSI Reference Model or just the OSI Model.
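As a quick reference, the sketch below lists the seven layers in order; the example protocols in the comments are common associations, not part of the OSI standard itself.

    # The seven OSI layers, numbered from the bottom (1) to the top (7).
    OSI_LAYERS = {
        1: "Physical",      # cabling and signalling
        2: "Data Link",     # e.g. Ethernet framing
        3: "Network",       # e.g. IP
        4: "Transport",     # e.g. TCP, UDP
        5: "Session",
        6: "Presentation",  # e.g. character encoding, encryption
        7: "Application",   # e.g. HTTP, SMTP
    }

    # Control passes down the stack on the sending station and back up on the receiver.
    for number in sorted(OSI_LAYERS, reverse=True):
        print(f"Layer {number}: {OSI_LAYERS[number]}")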

VPN

Short for virtual private network, a network that is constructed by using public wires to connect nodes. For
example, there are a number of systems that enable you to create networks using the Internet as the medium
for transporting data. These systems use encryption and other security mechanisms to ensure that only
authorized users can access the network and that the data cannot be intercepted.

OEM

(pronounced as separate letters) Short for original equipment manufacturer, which is a misleading term for a
company that has a special relationship with computer producers. OEMs buy computers in bulk and
customize them for a particular application. They then sell the customized computer under their own name.
The term is really a misnomer because OEMs are not the original manufacturers; they are the customizers.

SSL

(pronounced as separate letters) Short for Secure Sockets Layer, a protocol developed by Netscape for
transmitting private documents via the Internet. SSL works by using encryption keys negotiated between the
browser and the server to encrypt the data that is transferred over the SSL connection. Both Netscape
Navigator and Internet Explorer support SSL, and many Web sites use the protocol to obtain confidential user
information, such as credit card numbers. By convention, URLs that require an SSL connection start with
https: instead of http:.
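As a minimal sketch of the client side, the standard-library code below opens an HTTPS connection (the URL is a placeholder); the ssl module sets up the encrypted connection before any data is exchanged.

    # Fetching a page over HTTPS with Python's standard library; the URL is a placeholder.
    import ssl
    import urllib.request

    context = ssl.create_default_context()          # verifies the server's certificate
    with urllib.request.urlopen("https://example.com/", context=context) as response:
        print(response.status)                      # 200 if the secure request succeeded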


API

Abbreviation of application program interface, a set of routines, protocols, and tools for building software
applications. A good API makes it easier to develop a program by providing all the building blocks. A
programmer puts the blocks together.
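As a toy illustration (the function names below are invented for the example), an API is simply a documented set of callable building blocks that a program can use without knowing how they are implemented.

    # A tiny, hypothetical API: connect() and send() are invented for illustration.
    def connect(host: str) -> dict:
        """Building block 1: open a (simulated) connection."""
        return {"host": host, "open": True}

    def send(connection: dict, message: str) -> int:
        """Building block 2: send a message, returning the number of bytes sent."""
        return len(message) if connection["open"] else 0

    # The programmer assembles the blocks without needing to know their internals.
    conn = connect("example.com")
    print(send(conn, "hello"))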

NAT

Short for Network Address Translation, an Internet standard that enables a local-area network (LAN) to use
one set of IP addresses for internal traffic and a second set of addresses for external traffic. A NAT box
located where the LAN meets the Internet makes all necessary IP address translations.
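The sketch below simulates the translation table such a NAT box keeps; the addresses are illustrative (192.168.x.x is a standard private range, 203.0.113.x is a documentation range), not real hosts.

    # A simplified NAT table: (internal address, internal port) -> external port.
    PUBLIC_IP = "203.0.113.5"            # the single external (public) address
    nat_table = {}                       # (internal_ip, internal_port) -> external_port
    next_port = 40000

    def translate_outgoing(internal_ip, internal_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        global next_port
        key = (internal_ip, internal_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    print(translate_outgoing("192.168.1.10", 51000))   # ('203.0.113.5', 40000)
    print(translate_outgoing("192.168.1.11", 51000))   # ('203.0.113.5', 40001)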

SIP

Short for single in-line package, a type of housing for electronic components in which the connecting pins
protrude from one side.

ODBC

(pronounced as separate letters) Short for Open DataBase Connectivity, a standard database access
method developed by the SQL Access Group in 1992. The goal of ODBC is to make it possible to access any
data from any application, regardless of which database management system (DBMS) is handling the data.
ODBC manages this by inserting a middle layer, called a database driver, between an application and the
DBMS. The purpose of this layer is to translate the application's data queries into commands that the DBMS
understands.
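A minimal sketch of the idea, assuming the third-party pyodbc package is installed and an ODBC data source named MyDataSource (with a customers table) has been configured; the application issues plain SQL and the driver translates it for whatever DBMS sits behind the data source.

    # Requires the third-party pyodbc package; the DSN and table name are assumptions.
    import pyodbc

    connection = pyodbc.connect("DSN=MyDataSource")    # the driver sits between us and the DBMS
    cursor = connection.cursor()
    cursor.execute("SELECT COUNT(*) FROM customers")   # the same SQL regardless of the DBMS
    print(cursor.fetchone()[0])
    connection.close()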

SMTP

(pronounced as separate letters) Short for Simple Mail Transfer Protocol, a protocol for sending e-mail
messages between servers. Most e-mail systems that send mail over the Internet use SMTP to send
messages from one server to another; the messages can then be retrieved with an e-mail client using either
POP or IMAP. In addition, SMTP is generally used to send messages from a mail client to a mail server.
This is why you need to specify both the POP or IMAP server and the SMTP server when you configure your
e-mail application.
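A minimal sketch using Python's standard smtplib module; the server name and the addresses are placeholders.

    # Handing a message to an SMTP server; server name and addresses are placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Sent via SMTP; the recipient would later fetch it with POP or IMAP.")

    with smtplib.SMTP("smtp.example.com", 25) as server:   # connect to the outgoing mail server
        server.send_message(msg)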

Routers

A router is a device that forwards data packets along networks. A router is connected to at least two
networks, commonly two LANs or WANs or a LAN and its ISP's network. Routers are located at gateways,
the places where two or more networks connect.

IP Address

An identifier for a computer or device on a TCP/IP network. Networks using the TCP/IP protocol route
messages based on the IP address of the destination. The format of an IP address is a 32-bit numeric
address written as four numbers separated by periods. Each number can be zero to 255. For example,
1.160.10.240 could be an IP address.
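The 32-bit structure is easy to see with the standard ipaddress module; the address below is the example from the text.

    # An IPv4 address is one 32-bit number, written as four 0-255 octets for readability.
    import ipaddress

    addr = ipaddress.IPv4Address("1.160.10.240")
    print(int(addr))                     # the same address as a single 32-bit integer
    print(format(int(addr), "032b"))     # ...and as its 32 individual bits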

TCP/IP

Short for Transmission Control Protocol/Internet Protocol, the suite of communications protocols used to connect
hosts on the Internet. TCP/IP uses several protocols, the two main ones being TCP and IP. TCP/IP is built
into the UNIX operating system and is used by the Internet, making it the de facto standard for transmitting


data over networks. Even network operating systems that have their own protocols, such as Netware, also
support TCP/IP.
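A minimal sketch of a TCP client using the standard socket module; the host, port and request bytes are placeholders.

    # IP carries the packets; TCP provides the reliable, ordered byte stream on top.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(sock.recv(200))            # the first bytes of the server's reply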

iSCSI

Short for Internet SCSI, an IP-based standard for linking data storage devices over a network and
transferring data by carrying SCSI commands over IP networks. iSCSI supports a Gigabit Ethernet interface
at the physical layer, which allows systems supporting iSCSI interfaces to connect directly to standard
Gigabit Ethernet switches and/or IP routers. When an operating system receives a request, it generates the
SCSI command and then sends it in an IP packet over the Ethernet network.

The Five Generations of Computers


The history of computer development is often discussed in terms of the different generations of
computing devices. Each generation of computer is characterized by a major technological development that
fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more
powerful, more efficient and more reliable devices. Read about each generation and the developments that
led to the devices we use today.

First Generation - 1940-1956: Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often
enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal
of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers
relied on machine language to perform operations, and they could only solve one problem at a time. Input
was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was
the first commercial computer; it was delivered to its first client, the U.S. Census Bureau, in 1951.

Second Generation - 1956-1963: Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was
invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far
superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient
and more reliable than their first-generation predecessors. Though the transistor still generated a great deal
of heat that could damage the computer, it was a vast improvement over the vacuum tube. Second-
generation computers still relied on punched cards for input and printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly,
languages, which allowed programmers to specify instructions in words. High-level programming languages
were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also
the first computers that stored their instructions in their memory, which moved from a magnetic drum to
magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation - 1964-1971: Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors
were miniaturized and placed on silicon chips, made from semiconductor material, which drastically
increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards
and monitors and interfaced with an operating system, which allowed the device to run many different


applications at one time with a central program that monitored the memory. Computers for the first time
became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were
built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of
the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the
central processing unit and memory to input/output controls - on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh.
Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and
more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks, which
eventually led to the development of the Internet. Fourth generation computers also saw the development of
GUIs, the mouse and handheld devices.

Fifth Generation - Present and Beyond: Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are
some applications, such as voice recognition, that are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and
nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation
computing is to develop devices that respond to natural language input and are capable of learning and self-
organization.

What is a Server?
A computer or device on a network that manages network resources. For example, a file server is a
computer and storage device dedicated to storing files. Any user on the network can store files on the
server. A print server is a computer that manages one or more printers, and a network server is a computer
that manages network traffic. A database server is a computer system that processes database queries.

Servers are often dedicated, meaning that they perform no other tasks besides their server tasks. On
multiprocessing operating systems, however, a single computer can execute several programs at once. A
server in this case could refer to the program that is managing resources rather than the entire computer.

If you're not familiar with the functions of a server, but have heard the term in passing, you may think of a
server as some mystical computer beast that performs amazing tasks and generally is a hands-off system.
Before we delve into the inner-workings of a server, let's start by dispelling that "mystical" thing. From a
hardware perspective, a server is simply a computer on your network that is configured to share its
resources or run applications for the other computers on the network. You may have a server in place to
handle file or database sharing between all users on your network, or have a server configured to allow all
users to share a printer, rather than having a printer hooked up to each individual computer in your
organization.

What makes the term server doubly confusing is that it can refer to both hardware and software. That is, it
can be used to describe a specific software package running on a computer or the computer on which that
software is running. The type of server and the software you would use depend on the type of network:
LANs and WANs, for example, typically use file and print servers, while the Internet relies on Web servers.
In this article we provide an overview of some of the more common types of servers, such as application
servers, database servers, mail servers, and Web servers.

Application Server Also called an appserver. A program that handles all application operations between
users and an organization's backend business applications or databases. Application servers are typically
used for complex transaction-based applications. To support high-end needs, an application server has to


have built-in redundancy, monitors for high-availability, high-performance distributed application services
and support for complex database access.

Print Server

Print servers are set up on a network to route print requests from other computer workstations on the
network. The server handles the print file request and sends the file to the requested printer where it is
spooled. A print server allows multiple users on a network to share the printer.

Database Server

A database server is an application that is based on the client/server architecture model. The application
is divided into two parts: a front-end running on a workstation (where users collect and display the
database information) and the back-end running on a server where the tasks such as data analysis and
storage are performed.

Mail Server

Almost as ubiquitous and crucial as Web servers, mail servers move and store mail over corporate
networks (via LANs and WANs) and across the Internet. Today, most people think of mail servers in terms
of the Internet. Mail servers, however, were originally developed for corporate networks (LANs and
WANs).

Web Server

At its core, a Web server serves static content to a Web browser by loading a file from a disk and serving
it across the network to a user's Web browser. This entire exchange is mediated by the browser and
server talking to each other using HTTP. Any computer can be turned into a Web server by installing
server software and connecting the machine to the Internet. There are many Web server software
applications, including public domain software from NCSA and Apache, and commercial packages from
Microsoft, Netscape and others.
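For instance, Python's standard library ships a small Web server of this kind; the sketch below serves whatever files are in the current directory over HTTP on port 8000 (a teaching example, not a production server).

    # Serve the files in the current directory to browsers over HTTP.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Serving on http://localhost:8000/ ...")
    server.serve_forever()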

FTP Server

An FTP server is a software application running the File Transfer Protocol (FTP), the protocol for
exchanging files over the Internet. FTP works in the same way as HTTP for transferring Web pages from
a server to a user's browser and SMTP for transferring electronic mail across the Internet in that, like
these technologies, FTP uses the Internet's TCP/IP protocols to enable data transfer. FTP is most
commonly used to download a file from a server using the Internet or to upload a file to a server (e.g.,
uploading a Web page file to a server).
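A minimal client-side sketch with the standard ftplib module; the host and file name are placeholders and assume an anonymous FTP server.

    # Downloading a file from an FTP server; host and file name are placeholders.
    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:
        ftp.login()                                   # anonymous login
        with open("readme.txt", "wb") as local_file:
            ftp.retrbinary("RETR readme.txt", local_file.write)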

Proxy Server

A server that sits between a client application, such as a Web browser, and a real server. It intercepts all
requests to the real server to see if it can fulfill the requests itself. If not, it forwards the request to the real
server. Proxy servers have two main purposes. First, they can dramatically improve performance for
groups of users, because they save the results of all requests for a certain amount of time. Second, proxy
servers can filter requests to block or disallow specific types of outgoing or incoming requests to the
server.
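Both roles can be sketched in a few lines; this is a deliberate simplification of what a real proxy does, and the blocked host is an invented example.

    # A toy illustration of the two proxy roles: caching and filtering.
    import urllib.request

    cache = {}                                  # URL -> previously fetched content
    BLOCKED_HOSTS = ("ads.example.com",)        # illustrative filter list

    def proxy_fetch(url):
        if any(host in url for host in BLOCKED_HOSTS):
            return None                         # filtering: refuse the request
        if url not in cache:                    # caching: only contact the real server once
            with urllib.request.urlopen(url) as response:
                cache[url] = response.read()
        return cache[url]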


JPG Vs. GIF Vs. PNG


Following are the most commonly used graphics file formats for putting graphics on the World Wide
Web and how each differs from the others.

JPEG/JPG
Short for Joint Photographic Experts Group, the original name of the committee that wrote the
standard. JPG is one of the image file formats supported on the Web. It uses a lossy compression
technique designed to compress color and grayscale continuous-tone images; the information discarded in
the compression is information the human eye cannot detect. JPG images
support 16 million colors and are best suited for photographs and complex graphics. The user typically
has to compromise on either the quality of the image or the size of the file. JPG does not work well on
line drawings, lettering or simple graphics because there is not a lot of the image that can be thrown
out in the lossy process, so the image loses clarity and sharpness.
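The quality/size compromise can be seen directly with the third-party Pillow library (assuming it is installed and a photo.jpg file exists); lower quality settings discard more detail and produce smaller files.

    # Requires the third-party Pillow package; "photo.jpg" is a placeholder input file.
    import os
    from PIL import Image

    image = Image.open("photo.jpg")
    for quality in (95, 60, 25):                       # higher quality -> larger file
        name = f"out_q{quality}.jpg"
        image.save(name, "JPEG", quality=quality)      # lossy: fine detail is discarded
        print(name, os.path.getsize(name), "bytes")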

GIF
Short for Graphics Interchange Format, another of the graphics formats supported by the Web. Unlike
JPG, the GIF format is a lossless compression technique and it supports only 256 colors. GIF is better
than JPG for images with only a few distinct colors, such as line drawings, black and white images and
small text that is only a few pixels high. With an animation editor, GIF images can be put together for
animated images. GIF also supports transparency, where the background color can be set to
transparent in order to let the color of the underlying Web page show through. The compression
algorithm used in the GIF format is owned by Unisys, and companies that use the algorithm are
supposed to license its use from Unisys.

PNG
Short for Portable Network Graphics, the third graphics standard supported by the Web (though not
supported by all browsers). PNG was developed as a patent-free answer to the GIF format but is also
an improvement on the GIF technique. An image in a lossless PNG file can be 5%-25% more
compressed than a GIF file of the same image. PNG builds on the idea of transparency in GIF images
and allows the control of the degree of transparency, known as opacity. Saving, restoring and re-
saving a PNG image will not degrade its quality. PNG does not support animation like GIF does.

The System Boot Process Explained


The typical computer system boots over and over again with no problems, starting the computer's
operating system (OS) and identifying its hardware and software components that all work together to
provide the user with the complete computing experience. But what happens between the time that the
user powers up the computer and when the GUI icons appear on the desktop?

In order for a computer to successfully boot, its BIOS, operating system and hardware components
must all be working properly; failure of any one of these three elements will likely result in a failed boot
sequence.

When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of
clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's
ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction,
which is the instruction to run the power-on self test (POST), in a predetermined memory address.
POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a
battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such
as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other
hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.

Once the POST has determined that all components are functioning properly and the CPU has
successfully initialized, the BIOS looks for an OS to load.


The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs, the OS
loads from the C drive on the hard drive even though the BIOS has the capability to load the OS from
a floppy disk, CD or ZIP drive. The order of drives that the CMOS looks to in order to locate the OS is
called the boot sequence, which can be changed by altering the CMOS setup. Looking to the
appropriate boot drive, the BIOS will first encounter the boot record, which tells it where to find the
beginning of the OS and the subsequent program file that will initialize the OS.

The BIOS then copies the OS's files into memory, and once the OS initializes, it takes over control
of the boot process. Now in control, the OS performs another inventory of the system's memory and
memory availability (which the BIOS already checked) and loads the device drivers that it needs to
control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is
the final stage in the boot process, after which the user can access the system’s applications to
perform tasks.

Disclaimer: The information provided here is to the best of our knowledge. We do not take any responsibility for any errors
that might occur in this information.

