
SELECTED SOFTWARE

Microsoft.NET Framework

The .NET Framework (pronounced "dot net") is a software framework that runs primarily on
Microsoft Windows. It includes a large class library and supports several programming
languages, which allows language interoperability: the .NET library is available to all the
programming languages that .NET supports. Programs written for the .NET Framework execute in
a software environment (as opposed to a hardware environment) known as the Common
Language Runtime (CLR), an application virtual machine that provides important services
such as security, memory management, and exception handling. The class library and the
CLR together constitute the .NET Framework.

DESIGN FEATURES

Interoperability

Because computer systems commonly require interaction between new and older
applications, the .NET Framework provides means to access functionality that is
implemented in programs that execute outside the .NET environment. Access to COM
components is provided in the System.Runtime.InteropServices and
System.EnterpriseServices namespaces of the framework; access to other functionality is
provided using the P/Invoke feature.

Common Language Runtime Engine

The Common Language Runtime (CLR) is the execution engine of the .NET Framework.
All .NET programs execute under the supervision of the CLR, guaranteeing certain properties
and behaviors in the areas of memory management, security, and exception handling.

Language Independence

The .NET Framework introduces a Common Type System, or CTS. The CTS specification
defines all possible datatypes and programming constructs supported by the CLR and how
they may or may not interact with each other, in conformance with the Common Language
Infrastructure (CLI) specification. Because of this feature, the .NET Framework supports the
exchange of types and object instances between libraries and applications written in any
conforming .NET language.

Base Class Library

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of
functionality available to all languages using the .NET Framework. The BCL provides classes
which encapsulate a number of common functions, including file reading and writing, graphic
rendering, database interaction, XML document manipulation and so on.

Simplified Deployment

The .NET Framework includes design features and tools that help manage the installation of
computer software to ensure that it does not interfere with previously installed software, and
that it conforms to security requirements.

Security

The design is meant to address some of the vulnerabilities, such as buffer overflows, that
have been exploited by malicious software. Additionally, .NET provides a common security
model for all applications.

Portability

The design of the .NET Framework allows it theoretically to be platform agnostic, and thus
cross-platform compatible. That is, a program written to use the framework should run
without change on any type of system for which the framework is implemented. While
Microsoft has never implemented the full framework on any system except Microsoft
Windows, the framework is engineered to be platform agnostic, and cross-platform
implementations are available for other operating systems (see Silverlight and the Alternative
implementations section below). Microsoft submitted the specifications for the Common
Language Infrastructure (which includes the core class libraries, Common Type System, and
the Common Intermediate Language), the C# language, and the C++/CLI language to both
ECMA and the ISO, making them available as open standards. This makes it possible for
third parties to create compatible implementations of the framework and its languages on
other platforms.

Common Language Infrastructure (CLI)

The purpose of the Common Language Infrastructure is to provide a language-neutral
platform for application development and execution, including functions for exception
handling, garbage collection, security, and interoperability. By implementing the core
aspects of the .NET Framework within the scope of the CLI, this functionality will not be tied
to a single language but will be available across the many languages supported by the
framework. Microsoft's implementation of the CLI is called the Common Language Runtime,
or CLR.

Common Language Runtime

The Common Language Runtime (CLR) is a special runtime environment that provides
the underlying infrastructure for Microsoft's .NET Framework. In this runtime, the
source code of an application is compiled into an intermediate language called CIL, originally
known as MSIL (Microsoft Intermediate Language). When the program is then run, the CIL
code is translated into the native code of the operating system using a just-in-time (JIT)
compiler. This intermediate language keeps the environment platform-neutral and, as
a result, supports all .NET languages, such as C# or VB.NET.
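This compile-to-intermediate-language pipeline is not unique to .NET. As a loose analogy only (CPython bytecode is not CIL, and CPython interprets rather than JIT-compiles), Python also compiles source to intermediate instructions first, and its standard `dis` module lets us inspect them:

```python
import dis

def add(a, b):
    # Source code is first compiled to intermediate instructions
    # (bytecode here; CIL in .NET), not directly to machine code.
    return a + b

# Inspect the compiled intermediate instructions:
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)
```

The exact opcodes vary between Python versions, but the point stands: the translation to native execution happens later, from this intermediate form, not from the source text.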

General advantages

Note: The following points are taken from Microsoft's paper on the CLR and from
Microsoft's Common Language Runtime Architecture paper.

Portability

Using an intermediate language instead of compiling straight to native code requires n + m
translators instead of n*m translators to implement it in n languages on m platforms.
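The arithmetic behind this claim is simple: without a shared intermediate language, every (language, platform) pair needs its own compiler; with one, each language needs a single front-end and each platform a single back-end. A quick sketch:

```python
def translators_direct(n_languages, m_platforms):
    # one native compiler per (language, platform) pair
    return n_languages * m_platforms

def translators_with_il(n_languages, m_platforms):
    # one front-end per language plus one back-end (JIT) per platform
    return n_languages + m_platforms

# For example, 5 languages on 4 platforms:
print(translators_direct(5, 4))   # 20 compilers without an IL
print(translators_with_il(5, 4))  # 9 translators with a shared IL
```

The advantage grows with every language or platform added.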

Security

High-level intermediate code lends itself more readily to deployment-time and runtime
enforcement of security and typing constraints than native binaries do. On the other hand,
a .NET application (.exe) can be de-compiled back to readable code by tools such as
Reflector, unlike fully compiled native binaries.

Interoperability

Every major .NET language supports the CLR, and all are compiled to CIL. In that intermediate
language, the implementation of services such as security and garbage collection is the same.
This allows a library or application written in one .NET language to inherit implementations
from classes written in another .NET language. This cuts down on the redundant code developers
would have to write to make a system work in multiple languages, allowing for multi-
language system designs and implementations.

Additionally, to keep full component interoperability, the runtime incorporates all metadata
into the component package itself, essentially making it self-describing. As a result, no
separate packages or metadata files need to be kept in sync at all times with the compilation
and the executable.

Flexibility

Combining high-level intermediate code with metadata enables the construction of
(type-safe) meta-programming techniques such as reflection and dynamic code generation.
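As a rough illustration of what metadata-driven reflection buys (shown here with Python's own reflection facilities, not .NET's `System.Reflection` API; the `Account` class is invented), a program can discover a type's members from metadata alone and invoke them by name at run time:

```python
import inspect

class Account:
    """A toy class whose structure is discovered purely via reflection."""
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

# Reflection: enumerate the methods and their signatures from metadata.
methods = {name: str(inspect.signature(fn))
           for name, fn in inspect.getmembers(Account, inspect.isfunction)}
print(sorted(methods))  # ['__init__', 'deposit']

# Dynamic invocation by member name, as reflection APIs allow:
acct = Account("scott")
getattr(acct, "deposit")(10.0)
print(acct.balance)  # 10.0
```

In .NET the same idea works across languages, because every assembly carries the necessary metadata regardless of the source language.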

APPLICATION RELATED ADVANTAGES

The listed features of .NET 4.0 include:

 Automated garbage collection.
 Support for free threading, which allows the creation of multi-threaded, scalable applications.
 Support for uniform exception handling.
 Use of delegates instead of function pointers, for increased type safety and security.

With regards to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not be
able to perform file-access operations, registry-access operations, or other sensitive functions,
even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable
embedded in a Web page can play an animation on screen or sing a song, but cannot access
their personal data, file system, or network. The security features of the runtime thus enable
legitimate Internet-deployed software to be exceptionally feature rich.

The runtime also provides a type- and code-verification infrastructure called the common
type system (CTS). The CTS ensures that all managed code is self-describing. The various
Microsoft and third-party language compilers generate managed code that conforms to the
CTS. This means that managed code can consume other managed types and instances, while
strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages references
to objects, releasing them when they are no longer being used. This automatic memory
management resolves the two most common application errors, memory leaks and invalid
memory references.

The runtime also accelerates developer productivity. For example, programmers can write
applications in their development language of choice, yet take full advantage of the runtime,
the class library, and components written in other languages by other developers. Any
compiler vendor who chooses to target the runtime can do so. Language compilers that target
the .NET Framework make the features of the .NET Framework available to existing code
written in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of today
and yesterday. Interoperability between managed and unmanaged code enables developers to
continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language runtime
provides many standard runtime services, managed code is never interpreted. A feature called
just-in-time (JIT) compiling enables all managed code to run in the native machine language
of the system on which it is executing. Meanwhile, the memory manager removes the
possibilities of fragmented memory and increases memory locality-of-reference to further
increase performance.

Finally, the runtime can be hosted by high-performance, server-side applications, such as
Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure
enables you to use managed code to write your business logic, while still enjoying the
superior performance of the industry's best enterprise servers that support runtime hosting.

Assemblies

The CIL code is housed in .NET assemblies. As mandated by the specification, assemblies are
stored in the Portable Executable (PE) format, common on the Windows platform for all DLL
and EXE files. An assembly consists of one or more files, one of which must contain the
manifest, which holds the metadata for the assembly. The complete name of an assembly (not to
be confused with the filename on disk) contains its simple text name, version number,
culture, and public key token. The public key token is a hash of the public key with which
the assembly is signed, so two assemblies with the same public key token are guaranteed, from
the point of view of the framework, to come from the same publisher. The corresponding private
key, known only to the creator of the assembly, is used for strong naming and guarantees that
a new version of the assembly comes from the same author; strong naming is required to add an
assembly to the Global Assembly Cache.

Security
.NET has its own security mechanism with two general features: Code Access Security
(CAS), and validation and verification. Code Access Security is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the intranet or
Internet). Code Access Security uses evidence to determine the permissions granted to the
code. Other code can demand that calling code be granted a specified permission. The demand
causes the CLR to perform a call-stack walk: the assembly of every method in the call stack
is checked for the required permission; if any assembly is not granted the permission, a
security exception is thrown. The developer, however, has to split the application into
separate appdomains to isolate code of differing trust; this is not done automatically by the CLR.
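The essence of the stack walk can be sketched in a few lines. This is a hypothetical model (all assembly names, the `grants` table, and the `demand` function are invented for illustration; the real CAS machinery is far richer): a demand succeeds only if every assembly on the call stack holds the required permission.

```python
# Hypothetical sketch of a Code Access Security stack walk.

class SecurityException(Exception):
    pass

def demand(call_stack, permission, grants):
    """call_stack: assembly names, caller first; grants: name -> permission set."""
    for assembly in call_stack:
        if permission not in grants.get(assembly, set()):
            raise SecurityException(assembly + " lacks " + permission)

grants = {
    "App.exe": {"FileIO"},
    "TrustedLib.dll": {"FileIO"},
    "WebPlugin.dll": set(),   # code downloaded from the Internet: low trust
}

demand(["App.exe", "TrustedLib.dll"], "FileIO", grants)  # passes silently

try:
    demand(["App.exe", "WebPlugin.dll", "TrustedLib.dll"], "FileIO", grants)
except SecurityException as exc:
    print(exc)  # WebPlugin.dll lacks FileIO
```

Note how the low-trust assembly poisons the whole stack: trusted code cannot be tricked into performing a sensitive operation on behalf of untrusted code.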

Class library

The .NET Framework includes a set of standard class libraries. The class library is organized
in a hierarchy of namespaces. Most of the built in APIs are part of either System.* or
Microsoft.* namespaces. These class libraries implement a large number of common
functions, such as file reading and writing, graphic rendering, database interaction, and XML
document manipulation, among others. The .NET class libraries are available to all CLI
compliant languages. The .NET Framework class library is divided into two parts: the Base
Class Library and the Framework Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library and is the
core set of classes that serve as the basic API of the Common Language Runtime. The classes
in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be
part of the BCL. The BCL classes are available in both the .NET Framework and its
alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and
Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire
class library that ships with .NET Framework. It includes an expanded set of libraries,
including Windows Forms, ADO.NET, ASP.NET, Language Integrated Query, Windows
Presentation Foundation, Windows Communication Foundation among others. The FCL is
much larger in scope than standard libraries for languages like C++, and comparable in scope
to the standard libraries of Java.
Memory management

The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing it when done); instead, it does the memory management itself, although
there are no guarantees as to when the garbage collector will perform its work unless an
explicit call (such as GC.Collect) is issued. To this end, the memory for instantiations
of .NET types (objects) is allocated contiguously from the managed heap, a pool of memory
managed by the CLR. As long as a reference to an object exists, either directly or via a
graph of objects, the object is considered to be in use by
the CLR. When there is no reference to an object, and it cannot be reached or used, it
becomes garbage; however, it still holds on to the memory allocated to it. The .NET Framework
therefore includes a garbage collector, which runs periodically, on a separate thread from the
application's thread, enumerates all the unusable objects, and reclaims the memory
allocated to them.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep


garbage collector. The GC runs only when a certain amount of memory has been used or
there is enough pressure for memory on the system. Since it is not guaranteed when the
conditions to reclaim memory are reached, the GC runs are non-deterministic. Each .NET
application has a set of roots, which are pointers to objects on the managed heap (managed
objects). These include references to static objects and objects defined as local variables or
method parameters currently in scope, as well as objects referred to by CPU registers. When
the GC runs, it pauses the application, and for each object referred to in the root, it
recursively enumerates all the objects reachable from the root objects and marks them as
reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an
object, and then recursively walks them. It then enumerates all the objects on the heap (which
were initially allocated contiguously) using reflection. All objects not marked as reachable
are garbage. This is the mark phase. Since the memory held by garbage is not of any
consequence, it is considered free space. However, this leaves chunks of free space between
objects which were initially contiguous. The objects are then compacted together to make
used memory contiguous again. Any reference to an object invalidated by moving the object
is updated to reflect the new location by the GC. The application is resumed after the garbage
collection is over.
The GC used by the .NET Framework is generational. Objects are assigned a
generation; newly created objects belong to Generation 0. Objects that survive a garbage
collection are tagged as Generation 1, and Generation 1 objects that survive another
collection become Generation 2 objects; the .NET Framework uses at most these three generations.
Higher-generation objects are garbage collected less frequently than lower-generation objects.
This increases the efficiency of garbage collection, as older objects tend to have longer
lifetimes than newer objects. Thus, by removing older (and thus more likely to survive a
collection) objects from the scope of a collection run, fewer objects need to be checked and
compacted.
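The mark phase described above can be sketched compactly. In this toy model (plain strings stand in for objects, and a dict stands in for the metadata-driven object graph walk), everything reachable from the roots is marked; whatever remains on the heap is garbage, including unreachable cycles:

```python
# Sketch of the mark phase: starting from the roots, recursively mark every
# reachable object; whatever remains unmarked on the heap is garbage.

def mark(roots, references):
    reachable = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in reachable:
            reachable.add(obj)
            stack.extend(references.get(obj, []))  # walk the object graph
    return reachable

heap = {"a", "b", "c", "d", "e"}
references = {"a": ["b"], "b": ["c"], "d": ["e"], "e": ["d"]}  # d <-> e cycle
roots = ["a"]  # e.g. a static field or an in-scope local variable

reachable = mark(roots, references)
garbage = heap - reachable
print(sorted(reachable))  # ['a', 'b', 'c']
print(sorted(garbage))    # ['d', 'e'] -- unreachable cycles are garbage too
```

The cycle between `d` and `e` is still collected because neither is reachable from a root; this is the key advantage of tracing collectors over simple reference counting.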

Base Class Library

The Base Class Library (BCL) is a standard library available to all languages using the
.NET Framework. .NET includes the BCL in order to encapsulate a large number of common
functions, such as file reading and writing, graphic rendering, database interaction, and XML
document manipulation, which makes the programmer's job easier. It is much larger in scope
than standard libraries for most other languages, including C++, and is comparable in scope
to the standard libraries of Java. The BCL is sometimes incorrectly referred to as the
Framework Class Library (FCL), which is a superset including the Microsoft.* namespaces.
The BCL is updated with each version of the .NET Framework.

The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed and unmanaged features.
The .NET Framework not only provides several runtime hosts, but also supports the
development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for
managed code. ASP.NET works directly with the runtime to enable Web Forms applications
and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the
form of a MIME type extension). Using Internet Explorer to host the runtime enables you to
embed managed components or Windows Forms controls in HTML documents. Hosting the
runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls)
possible, but with significant improvements that only managed code can offer, such as semi-
trusted execution and secure isolated file storage.

DATABASE:

A database is an organized collection of data for one or more purposes, usually in digital
form. The data are typically organized to model relevant aspects of reality (for example, the
availability of rooms in hotels), in a way that supports processes requiring this information.
However, not every collection of data is a database; the term database implies that the data is
managed to some level of quality (measured in terms of accuracy, availability, usability, and
resilience) and this in turn often implies the use of a general-purpose Database management
system (DBMS). A general-purpose DBMS is typically a complex software system that meets
many usage requirements, and the databases that it maintains are often large and complex.

A database is typically organized according to general data models. A single database may,
for convenience, be viewed through different data models that are mapped onto each other.
Many DBMSs support only one data model, externalized to database developers, but some
allow different data models to be used and combined.

Types of people involved

Three types of people are involved with a general-purpose DBMS:

DBMS developers - These are the people who design and build the DBMS product, and the
only ones who touch its code. They are typically employees of a DBMS vendor (e.g.,
Oracle, IBM, Microsoft) or, in the case of open source DBMSs (e.g., MySQL), volunteers or
people supported by interested companies and organizations. They are typically skilled
systems programmers. DBMS development is a complicated task, and some of the popular
DBMSs have been under development and enhancement (also to follow progress in
technology) for decades.

Application developers and database administrators - These are the people who design
and build an application that uses the DBMS. Database administrators design the needed
database and maintain it; application developers write the application programs of which
the application is composed. Both are familiar with the DBMS product and use its
user interfaces (as well as, usually, other tools) for their work. Sometimes the application itself
is packaged and sold as a separate product, which may include the DBMS inside (see
Embedded database; subject to proper DBMS licensing), or sold separately as an add-on to
the DBMS.

Application end-users (e.g., accountants, insurance people, medical doctors, etc.) - These
people know the application and its end-user interfaces, but need neither to know nor to
understand the underlying DBMS. Thus, though they are the intended and main
beneficiaries of a DBMS, they are only indirectly involved with it.

Data warehouse

Data warehouses archive data from operational databases and often from external sources
such as market research firms. Often operational data undergoes transformation on its way
into the warehouse, getting summarized, anonymized, reclassified, etc. The warehouse
becomes the central source of data for use by managers and other end-users who may not
have access to operational data. For example, sales data might be aggregated to weekly totals
and converted from internal product codes to use UPCs so that it can be compared with
ACNielsen data. Some basic and essential components of data warehousing include
retrieving, analyzing, and mining data, transforming, loading and managing data so as to
make it available for further use.

Operations in a data warehouse are typically concerned with bulk data manipulation, and as
such, it is unusual and inefficient to target individual rows for update, insert or delete. Bulk
native loaders for input data and bulk SQL passes for aggregation are the norm.
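The transformation step described above (summarizing to weekly totals and reclassifying internal product codes to UPCs) can be sketched as a small aggregation. All of the data, the product codes, and the UPC mapping below are invented for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical mapping from internal product codes to UPCs.
code_to_upc = {"P-01": "012345678905", "P-02": "031234567890"}

sales = [  # (sale date, internal product code, amount)
    (date(2011, 3, 7),  "P-01", 100.0),
    (date(2011, 3, 9),  "P-01",  50.0),
    (date(2011, 3, 8),  "P-02",  75.0),
    (date(2011, 3, 14), "P-01",  20.0),  # falls in the following ISO week
]

weekly_totals = defaultdict(float)
for day, code, amount in sales:
    week = day.isocalendar()[1]      # summarize: aggregate to weekly totals
    upc = code_to_upc[code]          # reclassify: internal code -> UPC
    weekly_totals[(week, upc)] += amount

print(dict(weekly_totals))
```

Real warehouses perform this kind of transform-and-load in bulk, not row by row, but the shape of the computation is the same.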

Database Connection

Before going into database connectivity, let us first see what the client–server model is:

Client–Server Model

The client–server model of computing is a distributed application structure that partitions
tasks or workloads between the providers of a resource or service, called servers, and service
requesters, called clients. Often clients and servers communicate over a computer network on
separate hardware, but both client and server may reside in the same system. A server
machine is a host that is running one or more server programs which share their resources
with clients. A client does not share any of its resources, but requests a server's content or
service function. Clients therefore initiate communication sessions with servers, which await
incoming requests.

The client–server characteristic describes the relationship of cooperating programs in an
application. The server component provides a function or service to one or many clients,
which initiate requests for such services.

Functions such as email exchange, web access and database access, are built on the client–
server model. Users accessing banking services from their computer use a web browser client
to send a request to a web server at a bank. That program may in turn forward the request to
its own database client program that sends a request to a database server at another bank
computer to retrieve the account information. The balance is returned to the bank database
client, which in turn serves it back to the web browser client displaying the results to the user.
The client–server model has become one of the central ideas of network computing. Many
business applications being written today use the client–server model. So do the Internet's
main application protocols, such as HTTP, SMTP, Telnet, and DNS.

The interaction between client and server is often described using sequence diagrams.
Sequence diagrams are standardized in the Unified Modeling Language.

Specific types of clients include web browsers, email clients, and online chat clients.

Specific types of servers include web servers, ftp servers, application servers, database
servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web
services are also types of servers.
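The roles described above (a server that awaits requests and shares a resource, a client that initiates the session) can be demonstrated with a minimal TCP exchange. This is an illustrative sketch, not a real banking protocol; the request text and "balance" reply are invented:

```python
import socket
import threading

def serve_once():
    """A tiny server: awaits one request and shares a resource with the client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()          # the server awaits incoming requests
        conn.recv(1024)                 # read the client's request
        conn.sendall(b"balance: 42")    # serve the requested content
        conn.close()
        srv.close()

    threading.Thread(target=handle).start()
    return port

port = serve_once()

# The client initiates the communication session and requests content:
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET balance")
reply = cli.recv(1024)
cli.close()
print(reply.decode())  # balance: 42
```

Here both roles run in one process for convenience; in practice they usually sit on separate hosts connected by a network, but the interaction pattern is identical.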

Database connectivity
A database connection is a facility in computer science that allows client software to
communicate with database server software, whether on the same machine or not. A
connection is required to send commands and receive answers.

Connections are a key concept in data-centric programming. Since some DBMS engines
require considerable time to connect, connection pooling was invented to improve
performance. No command can be performed against a database without an "open and
available" connection to it.

Connections are built by supplying an underlying driver or provider with a connection string,
which is a way of addressing a specific database or server and instance, as well as user
authentication credentials (e.g., Server=sqlbox; Database=Common; User ID=scott;
Pwd=tiger). Once a connection has been built, it can be opened and closed at will, and
properties (such as the command time-out length, or the transaction, if one exists) can be set.
The connection string is composed of a set of key/value pairs, as dictated by the data access
interface and data provider being used.
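The key/value structure of a connection string is easy to see by parsing one. A minimal sketch (the keys shown are illustrative; the exact set of recognized keys, quoting rules, and escaping depend on the data provider):

```python
def parse_connection_string(conn_str):
    """Split 'Key=Value;Key=Value' pairs into a dict (no quoting/escaping)."""
    pairs = {}
    for part in conn_str.split(";"):
        if part.strip():
            key, _, value = part.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

cs = "Server=sqlbox;Database=Common;User ID=scott;Pwd=tiger"
print(parse_connection_string(cs))
# {'Server': 'sqlbox', 'Database': 'Common', 'User ID': 'scott', 'Pwd': 'tiger'}
```

The driver consumes exactly this kind of structure to locate the server, select the database, and authenticate the user before the connection is opened.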

Some databases, such as PostgreSQL, allow only one operation to be performed at a time on each
connection. If a request for data (a SQL SELECT statement) is sent to the database and a result
set is returned, the connection is open but not available for other operations until the client
finishes consuming the result set. Other databases, such as SQL Server 2005 (and later), do not
impose this limitation. However, databases that provide multiple operations per connection
usually incur far more overhead than those that permit only a single operation at a time.

Two technologies are commonly used for database connectivity:

ADO.NET

ADO.NET (ActiveX Data Object for .NET) is a set of computer software components that
programmers can use to access data and data services. It is a part of the base class library that
is included with the Microsoft .NET Framework. It is commonly used by programmers to
access and modify data stored in relational database systems, though it can also access data in
non-relational sources. ADO.NET is sometimes considered an evolution of ActiveX Data
Objects (ADO) technology. ADO.NET is conceptually divided into consumers and data
providers. The consumers are the applications that need access to the data, and the providers
are the software components that implement the interface and thereby provide the data to the
consumer. Functionality exists in the Visual Studio IDE to create specialized subclasses of the
DataSet classes for a particular database schema, allowing convenient access to each field
through strongly typed properties. This helps catch more programming errors at compile time
and makes the IDE's IntelliSense feature more beneficial.

ODBC

In computing, ODBC (Open Database Connectivity) is a standard software interface for
accessing database management systems (DBMS). The designers of ODBC aimed to make it
independent of programming languages, database systems, and operating systems. Thus, any
application can use ODBC to query data from a database, regardless of the platform it is on or
the DBMS it uses. ODBC accomplishes platform and language independence by using an ODBC
driver as a translation layer between the application and the DBMS. The application thus only
needs to know ODBC syntax, and the driver can then pass the query to the DBMS in its
native format, returning the data in a format the application can understand.

ODBC provides a standard software API method for accessing both relational and non-
relational DBMS. Prior to its creation, if an application needed the ability to communicate
with more than a single database, it would have to support and maintain an interface for each.
ODBC provides a universal middleware layer between the application and the DBMS, so
application developers only have to learn a single interface; nor do they have to update
their software when the DBMS specification changes, since only the driver needs updating.
An application that can communicate through ODBC is referred to as ODBC-compliant. Any
ODBC-compliant application can access any DBMS that has a corresponding driver.

Relational database management system

A relational database management system (RDBMS) is a database management system
(DBMS) that is based on the relational model as introduced by E. F. Codd. Most popular
commercial and open source databases currently in use are based on the relational database
model. A short definition of an RDBMS is: a DBMS in which data is stored in tables and the
relationships among the data are also stored in tables. The data can be accessed or
reassembled in many different ways without having to change the table forms.
Most commercial RDBMS's use the Structured Query Language (SQL) to access the
database, although SQL was invented after the development of the relational model and is not
necessary for its use. The leading RDBMS products are Oracle, IBM's DB2 and Microsoft's
SQL Server.
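The "data in tables, relationships in tables" definition can be made concrete with a tiny example. Here Python's built-in sqlite3 module serves as a stand-in RDBMS, and the customer/order schema is invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # an in-memory relational database
cur = con.cursor()

# Data live in tables; the relationship (customer -> order) is itself
# stored as data, via a foreign key column in a table.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES customers(id), total REAL)")
cur.execute("INSERT INTO customers VALUES (1, 'scott')")
cur.execute("INSERT INTO orders VALUES (10, 1, 99.5)")

# SQL reassembles the data in different ways without changing the tables:
row = cur.execute(
    "SELECT c.name, o.total FROM customers c "
    "JOIN orders o ON o.customer_id = c.id").fetchone()
print(row)  # ('scott', 99.5)
```

The JOIN reconstructs the relationship at query time from the stored foreign key; no table layout has to change to support a new way of combining the data.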

DATABASE: MySQL

MySQL is a relational database management system (RDBMS) that runs as a server
providing multi-user access to a number of databases. It is named after developer Michael
Widenius' daughter, My. The SQL phrase stands for Structured Query Language.

The MySQL development project has made its source code available under the terms of the
GNU General Public License, as well as under a variety of proprietary agreements. MySQL
was owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now
owned by Oracle Corporation.

Free-software open source projects that require a full-featured database management system
often use MySQL. For commercial use, several paid editions are available and offer
additional functionality. Applications which use MySQL databases include Joomla,
WordPress, MyBB, phpBB, Drupal and other software built on the LAMP software stack.
MySQL is also used in many high-profile, large-scale World Wide Web products, including
Wikipedia and Google.

The official MySQL Workbench is a free integrated environment developed by MySQL AB
that enables users to graphically administer MySQL databases and visually design database
structures. MySQL Workbench replaces the previous package of software, MySQL GUI Tools.
Similar to other third-party packages, but still considered the authoritative MySQL frontend,
MySQL Workbench lets users manage the following:

 Database design & modeling
 SQL development – replacing MySQL Query Browser
 Database administration – replacing MySQL Administrator
MySQL can be built and installed manually from source code, but this can be tedious so it is
more commonly installed from a binary package unless special customizations are required.
On most Linux distributions the package management system can download and install
MySQL with minimal effort, though further configuration is often required to adjust security
and optimization settings.

It is still most commonly used in small to medium scale single-server deployments, either as
a component in a LAMP based web application or as a standalone database server. Much of
MySQL's appeal originates in its relative simplicity and ease of use, which is enabled by an
ecosystem of open source tools such as phpMyAdmin.

In the medium range, MySQL can be scaled by deploying it on more powerful hardware,
such as a multi-processor server with gigabytes of memory.

There are however limits to how far performance can scale on a single server, so on larger
scales, multi-server MySQL deployments are required to provide improved performance and
reliability. A typical high-end configuration can include a powerful master database which
handles data write operations and is replicated to multiple slaves that handle all read
operations. The master server synchronizes continually with its slaves, so in the event of
failure a slave can be promoted to become the new master, minimizing downtime. Further
improvements in performance can be achieved by caching the results from database queries
in memory using memcached, or breaking down a database into smaller chunks called shards
which can be spread across a number of distributed server clusters.
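The sharding step described above can be sketched in a few lines of Python: each row key is hashed to select one of a fixed set of shards, so rows (and their read/write load) are spread deterministically across servers. The shard names and key format here are invented for illustration; real deployments typically put this logic in a routing layer or proxy.

```python
# Hash-based shard routing: map a row key to one of N shards.
# The shard count and server names below are illustrative assumptions.
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key: str) -> str:
    """Pick a shard deterministically from the row key."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard, so reads find
# the rows that earlier writes placed there.
assert shard_for("user:42") == shard_for("user:42")
```

Because the mapping is a pure function of the key, any application server can compute the shard locally without consulting a central directory.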

Distinguishing Features

MySQL implements the following features, which some other RDBMSs may not:

 Multiple storage engines, allowing the user to choose the engine that is most effective
for each table in the application (in MySQL 5.0, storage engines must be compiled in; in
MySQL 5.1, storage engines can be dynamically loaded at run time):
o Native storage engines (MyISAM, Falcon, Merge, Memory (heap), Federated,
Archive, CSV, Blackhole, Cluster, Berkeley DB, EXAMPLE, Maria, and
InnoDB, which was made the default as of 5.5)
o Partner-developed storage engines (solidDB, NitroEDB, Infobright (formerly
Brighthouse), Kickfire, XtraDB, IBM DB2). InnoDB used to be a partner-
developed storage engine, but with recent acquisitions, Oracle now owns both
MySQL core and InnoDB.
o Community-developed storage engines (memcache engine, httpd, PBXT,
Revision Engine)
o Custom storage engines
 Commit grouping, gathering multiple transactions from multiple connections together
to increase the number of commits per second.

Advantages of using MySQL:

Whether you are a Web developer or a dedicated network administrator with an interest in
building database applications, MySQL is easy to use, yet extremely powerful, secure, and
scalable. And because of its small size and speed, it is the ideal database solution for Web
sites.

 It's easy to use: While a basic knowledge of SQL is required, and most relational
databases require the same knowledge, MySQL is very easy to use. With only a few
simple SQL statements, you can build and interact with MySQL databases.
 It's secure: MySQL includes solid data security layers that protect sensitive data from
intruders. Rights can be set to allow some or all privileges to individuals. Passwords
are encrypted.
 It's inexpensive: MySQL is included for free with NetWare® 6.5 and is available as a
free download from the MySQL Web site.
 It's fast: In the interest of speed, MySQL's designers made the decision to offer fewer
features than other major database competitors, such as Sybase and Oracle.
However, despite having fewer features than the other commercial database products,
MySQL still offers all of the features required by most database developers.
 It's scalable: MySQL can handle large amounts of data, up to 50 million rows or
more. The default file size limit is about 4 GB, but this can be increased to a
theoretical limit of 8 TB of data.
 It manages memory very well: MySQL server has been thoroughly tested to prevent
memory leaks.
 It supports Novell Cluster Services: MySQL on NetWare runs effectively with
Novell® Cluster Services™, letting you add your database solution to a Novell
cluster. If one server goes down, MySQL on an alternate server takes over and your
customers won't know that anything happened.
 It runs on many operating systems: MySQL runs on many operating systems,
including Novell NetWare, Windows, Linux and many varieties of UNIX and others.
 It supports several development interfaces: Development interfaces include JDBC,
ODBC, and scripting (PHP and Perl), letting you create database solutions that run not
only in your NetWare 6.5 environment, but across all major platforms, including
Linux, UNIX, and Windows.

MySQL Tables

The foundation of every Relational Database Management System is a database object called
a table. Every database consists of one or more tables, which store the database's data. Each
table has its own unique name and consists of columns and rows.

The database table columns (also called table fields) have their own unique names and a
pre-defined data type. Table columns can have various attributes defining the column's
functionality (the column is a primary key, there is an index defined on the column, the
column has a certain default value, etc.). While the table columns describe the data types, the
table rows contain the actual data for the columns.

Here is an example of a simple database table containing customer data. The first row, listed
in bold, contains the names of the table columns:
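As a concrete, runnable illustration, such a customers table can be created and inspected with Python's built-in sqlite3 module, used here as a lightweight stand-in for a MySQL connection (the SQL shown is essentially the same in both systems; the column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,  -- unique row identifier
        name    TEXT NOT NULL,
        city    TEXT,
        balance REAL DEFAULT 0.0      -- column with a default value
    )
""")
conn.execute("INSERT INTO customers (name, city) VALUES ('Alice', 'Delhi')")
conn.execute("INSERT INTO customers (name, city) VALUES ('Bob', 'Mumbai')")

# The column names play the role of the bold first row of the example table.
columns = [d[0] for d in conn.execute("SELECT * FROM customers").description]
print(columns)  # ['id', 'name', 'city', 'balance']
```

The rows inserted afterwards carry the actual data, matching the column/row split described above.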

How MySQL Deals with Constraints

MySQL enables you to work both with transactional tables that permit rollback and with
nontransactional tables that do not. Because of this, constraint handling is somewhat different
in MySQL than in other DBMSs. It must handle the case in which you have inserted or
updated many rows in a nontransactional table, where the changes cannot be rolled back
when an error occurs.

The basic philosophy is that MySQL Server tries to produce an error for anything that it can
detect while parsing a statement to be executed, and tries to recover from any errors that
occur while executing the statement.

PRIMARY KEY, FOREIGN KEY AND UNIQUE CONSTRAINTS
Primary Key Constraint

The PRIMARY KEY constraint uniquely identifies each record in a database table. Primary
keys must contain unique values. A primary key column cannot contain NULL values. Each
table should have a primary key, and each table can have only ONE primary key.

A table usually has a column or combination of columns whose values uniquely identify each
row in the table. This column (or columns) is called the primary key of the table and enforces
the entity integrity of the table. You can create a primary key by defining a PRIMARY KEY
constraint when you create or alter a table. A table can have only one PRIMARY KEY
constraint, and a column that participates in the PRIMARY KEY constraint cannot accept
null values. Because PRIMARY KEY constraints ensure unique data, they are often defined
on an identity column.

When you specify a PRIMARY KEY constraint for a table, MySQL enforces data uniqueness
by creating a unique index for the primary key columns. This index also permits fast access to
data when the primary key is used in queries. If a PRIMARY KEY constraint is defined on
more than one column, values may be duplicated within one column, but each combination of
values from all the columns in the PRIMARY KEY constraint definition must be unique.
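This behaviour of a multi-column PRIMARY KEY can be demonstrated directly. The sketch below uses Python's sqlite3 module as a stand-in for MySQL (the enforcement rule is the same in both); the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE enrolment (
        student_id INTEGER,
        course_id  INTEGER,
        PRIMARY KEY (student_id, course_id)  -- composite primary key
    )
""")
conn.execute("INSERT INTO enrolment VALUES (1, 101)")
conn.execute("INSERT INTO enrolment VALUES (1, 102)")  # student_id 1 repeats: allowed
conn.execute("INSERT INTO enrolment VALUES (2, 101)")  # course_id 101 repeats: allowed

duplicate_rejected = False
try:
    conn.execute("INSERT INTO enrolment VALUES (1, 101)")  # same combination again
except sqlite3.IntegrityError:
    duplicate_rejected = True  # only the combination of both columns must be unique

print(duplicate_rejected)  # True
```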

Unique Key

In relational database design, a unique key can uniquely identify each row in a table, and is
closely related to the Superkey concept. A unique key comprises a single column or a set of
columns. No two distinct rows in a table can have the same value or combination of values in
those columns if NULL values are not used. Depending on its design, a table may have
arbitrarily many unique keys but at most one primary key.
Unique keys do not enforce a NOT NULL constraint. Because NULL is not an actual value,
when two rows are compared and both rows have NULL in a column, the column values are
not considered equal. Thus, for a unique key to uniquely identify every row in a table, NULL
values must not be used. (Some database systems permit only a single NULL in a unique key
column; MySQL, by contrast, allows multiple NULLs in a unique index.) A unique key
should uniquely identify all possible rows that can exist in a table, not only the currently
existing rows. Examples of unique keys are Social Security numbers and ISBNs. Telephone
books and dictionaries cannot use names, words, or Dewey Decimal system numbers as
candidate keys because they do not uniquely identify telephone numbers or words.

A table can have at most one primary key, but more than one unique key. A primary key is a
combination of columns which uniquely specify a row. It is a special case of unique keys.
One difference is that primary keys have an implicit NOT NULL constraint while unique
keys do not. Thus, the values in unique key columns may or may not be NULL, and in fact
such a column may contain multiple NULL fields. Another difference is that primary keys
must be defined using another syntax. Unique keys as well as primary keys can be referenced
by foreign keys.
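A short sketch of both rules, uniqueness and NULL handling, again with sqlite3 standing in for MySQL (like MySQL's InnoDB, SQLite allows multiple NULLs in a unique column; the column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE people (
        id  INTEGER PRIMARY KEY,
        ssn TEXT UNIQUE          -- unique key: duplicates rejected, NULLs allowed
    )
""")
conn.execute("INSERT INTO people (ssn) VALUES ('123-45-6789')")
conn.execute("INSERT INTO people (ssn) VALUES (NULL)")
conn.execute("INSERT INTO people (ssn) VALUES (NULL)")  # a second NULL is accepted

duplicate_rejected = False
try:
    conn.execute("INSERT INTO people (ssn) VALUES ('123-45-6789')")
except sqlite3.IntegrityError:
    duplicate_rejected = True  # a duplicate non-NULL value is not

print(duplicate_rejected)  # True
```

This is exactly the difference from a primary key described above: the unique column tolerates NULLs, but never two equal non-NULL values.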

Foreign Key

A foreign key is a field in a relational table that matches a candidate key of another
table. The foreign key can be used to cross-reference tables.

The foreign key identifies a column or set of columns in one (referencing) table that
refers to a column or set of columns in another (referenced) table. The columns in the
referencing table must reference the columns of the primary key or other superkey in the
referenced table. The values in one row of the referencing columns must occur in a single row
in the referenced table. Thus, a row in the referencing table cannot contain values that don't
exist in the referenced table (except potentially NULL). This way references can be made to
link information together and it is an essential part of database normalization. Multiple rows
in the referencing table may refer to the same row in the referenced table. Most of the time, it
reflects the one (parent table or referenced table) to many (child table, or referencing table)
relationship.
The referencing and referenced table may be the same table, i.e. the foreign key refers
back to the same table. A table may have multiple foreign keys, and each foreign key can
have a different referenced table. Each foreign key is enforced independently by the database
system. Therefore, cascading relationships between tables can be established using foreign
keys. Improper foreign key/primary key relationships or not enforcing those relationships are
often the source of many database and data modeling problems.
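A minimal sketch of foreign key enforcement, with sqlite3 standing in for MySQL (note that SQLite enforces foreign keys only after `PRAGMA foreign_keys = ON`, whereas MySQL's InnoDB enforces them by default; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific switch; InnoDB needs none
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id)  -- foreign key to the parent table
    )
""")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders VALUES (10, 1)")       # customer 1 exists: accepted

violation_rejected = False
try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # no customer 99: rejected
except sqlite3.IntegrityError:
    violation_rejected = True

print(violation_rejected)  # True
```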

Referential Integrity

Referential integrity is a property of data which, when satisfied, requires every value of one
attribute (column) of a relation (table) to exist as a value of another attribute in a different (or
the same) relation (table). Less formally, and in relational databases: For referential integrity
to hold, any field in a table that is declared a foreign key can contain only values from a
parent table's primary key or a candidate key. For instance, deleting a record that contains a
value referred to by a foreign key in another table would break referential integrity. Some
relational database management systems (RDBMS) can enforce referential integrity, normally
either by deleting the foreign key rows as well to maintain integrity, or by returning an error
and not performing the delete. Which method is used may be determined by a referential
integrity constraint defined in a data dictionary.
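Both enforcement strategies mentioned above, cascading the delete or returning an error, are expressed as referential actions in the foreign key definition. The sketch below (sqlite3 standing in for MySQL; table names invented) shows the cascading variant:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific; InnoDB enforces by default
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (100, 1)")

# Deleting the parent row cascades to the referencing child rows,
# so no dangling reference is left behind.
conn.execute("DELETE FROM parent WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(remaining)  # 0
```

With `ON DELETE RESTRICT` (the default behaviour in most systems), the same DELETE would instead fail with an integrity error, which is the other strategy described above.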

Data Dictionary

The terms Data Dictionary and Data Repository indicate a more general software utility
than a catalogue. A catalogue is closely coupled with the DBMS software; it provides the
information stored in it to users and the DBA, but it is mainly accessed by the various
software modules of the DBMS itself, such as the DDL and DML compilers, the query
optimiser, the transaction processor, report generators, and the constraint enforcer. A data
dictionary, on the other hand, is a data structure that stores meta-data, i.e., data about data.
The software package for a stand-alone data dictionary or data repository may interact with
the software modules of the DBMS, but it is mainly used by the designers, users and
administrators of a computer system for information resource management. These systems
are used to maintain information on the system's hardware and software configuration,
documentation, applications and users, as well as other information relevant to system
administration.

If a data dictionary system is used only by the designers, users and administrators, and
not by the DBMS software, it is called a passive data dictionary; otherwise it is called an
active data dictionary. An active data dictionary is automatically updated as changes occur
in the database; a passive data dictionary must be updated manually. The data dictionary
consists of record types (tables) created in the database by system-generated command files,
tailored for each supported back-end DBMS. Command files contain SQL statements such as
CREATE TABLE, CREATE UNIQUE INDEX and ALTER TABLE (for referential
integrity), using the specific syntax required by that type of database.

Database users and application developers can benefit from an authoritative data
dictionary document that catalogs the organization, contents, and conventions of one or more
databases. This typically includes the names and descriptions of various tables and fields in
each database, plus additional details, like the type and length of each data element. There is
no universal standard as to the level of detail in such a document.
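The catalogue described above can be inspected programmatically. SQLite exposes its catalogue as the sqlite_master table, and MySQL provides the equivalent INFORMATION_SCHEMA database; a simple data dictionary document could be generated from either. A minimal sketch (table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

# Read table names and their defining SQL straight from the catalogue,
# the raw material of a data dictionary document.
rows = conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
for name, sql in rows:
    print(name, "->", sql)
```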

Data Abstraction

Abstraction is the process of recognizing and focusing on the important characteristics of
a situation or object and filtering out the unwanted characteristics. Take a person as an
example: a hospital abstracts the person as a patient with medical details, while an employer
abstracts the same person as an employee with job details. The process of identifying the
abstractions for a given system is called modelling.

The major purpose of a database system is to provide users with an abstract view of the
data. The system hides certain details of how the data are stored and maintained; this
complexity should be hidden from database users.

There are several levels of abstraction:

1. Physical Level:
o How the data are stored.
o E.g. index, B-tree, hashing.
o Lowest level of abstraction.
o Complex low-level structures described in detail.
2. Conceptual Level:
o Next highest level of abstraction.
o Describes what data are stored.
o Describes the relationships among data.
o Database administrator level.
3. View Level:
o Highest level.
o Describes part of the database for a particular group of users.
o Can be many different views of a database.
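The view level can be made concrete with an SQL VIEW: a particular group of users sees only the part of the database relevant to them, while the lower levels stay hidden. (sqlite3 is again used as a stand-in for MySQL; the table, column and view names are invented.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        id     INTEGER PRIMARY KEY,
        name   TEXT,
        salary REAL          -- sensitive: hidden from the view below
    )
""")
conn.execute("INSERT INTO employees VALUES (1, 'Alice', 90000)")
conn.execute("INSERT INTO employees VALUES (2, 'Bob', 80000)")

# View level: a staff-directory application sees names but not salaries.
conn.execute("CREATE VIEW employee_directory AS SELECT id, name FROM employees")
directory = conn.execute("SELECT * FROM employee_directory").fetchall()
print(directory)  # [(1, 'Alice'), (2, 'Bob')]
```

Many such views can be defined over the same conceptual schema, one per user group, which is exactly the "many different views" point above.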

Computer network

A computer network, often simply referred to as a network, is a collection of computers and
devices interconnected by communications channels that facilitate communication and allow
sharing of resources and information among the interconnected devices. Put more simply,
a computer network is a collection of two or more computers linked together for the purposes
of sharing information, resources, among other things. Computer networking or Data
Communications (Datacom) is the engineering discipline concerned with computer
networks. Computer networking is sometimes considered a sub-discipline of electrical
engineering, telecommunications, computer science, information technology and/or computer
engineering since it relies heavily upon the theoretical and practical application of these
scientific and engineering disciplines.

Networks may be classified according to a wide variety of characteristics, such as the
medium used to transport the data, the communications protocol used, scale, topology, and
organizational scope. A communications protocol defines the formats and rules for
exchanging information via a network. Well-known communications protocols include
Ethernet, a family of protocols used in LANs, and the Internet Protocol Suite, which is used
not only in the eponymous Internet but today nearly ubiquitously in any computer network.

PROPERTIES OF COMPUTER NETWORKS:

Facilitate communications
Using a network, people can communicate efficiently and easily via email, instant
messaging, chat rooms, telephone, video telephone calls, and video conferencing.
Permit sharing of files, data, and other types of information
In a network environment, authorized users may access data and information stored
on other computers on the network. The capability of providing access to data and
information on shared storage devices is an important feature of many networks.
Share network and computing resources
In a networked environment, each computer on a network may access and use
resources provided by devices on the network, such as printing a document on a
shared network printer. Distributed computing uses computing resources across a
network to accomplish tasks.
May be insecure
A computer network may be used by computer hackers to deploy computer viruses or
computer worms on devices connected to the network, or to prevent these devices
from normally accessing the network (denial of service).
May interfere with other technologies
Power line communication strongly disturbs certain forms of radio communication,
e.g., amateur radio. It may also interfere with last mile access technologies such as
ADSL and VDSL.
May be difficult to set up
A complex computer network may be difficult to set up. It may also be very costly to
set up an effective computer network in a large organization or company.
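The first two properties, communication and sharing of information, ultimately reduce to processes exchanging bytes over sockets. A minimal loopback sketch using Python's standard socket module (the host, port selection and message are arbitrary choices for the example):

```python
import socket
import threading

def serve_once(server: socket.socket) -> None:
    """Accept one connection and echo back whatever it sends."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The "client" node sends data over the network channel and reads the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello network")
    reply = client.recv(1024)
server.close()
print(reply)  # b'hello network'
```

Here both endpoints run on one machine, but the identical code works between two hosts on a LAN or WAN; only the address changes.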

Computer networks can be classified according to the hardware and associated software
technology that is used to interconnect the individual devices in the network, such as
electrical cable (HomePNA, power line communication, G.hn), optical fiber, and radio waves
(wireless LAN). In the OSI model, these are located at levels 1 and 2.

A well-known family of communication technologies is collectively known as Ethernet. It is
defined by IEEE 802.3 and utilizes various standards and media that enable communication
between devices. Wireless LAN technology is designed to connect devices without wiring.
These devices use radio waves or infrared signals as a transmission medium.
Wired Technologies
 Twisted pair wire is the most widely used medium for telecommunication. Twisted-
pair cabling consists of copper wires twisted into pairs. Ordinary telephone wires
consist of two insulated copper wires twisted into a pair. Computer networking
cabling (wired Ethernet as defined by IEEE 802.3) consists of four pairs of copper
cabling that can be utilized for both voice and data transmission. Twisting the two
wires together helps to reduce crosstalk and electromagnetic induction. Transmission
speeds range from 2 million bits per second to 10 billion bits per second. Twisted pair
cabling comes in two forms, Unshielded Twisted Pair (UTP) and Shielded Twisted
Pair (STP), each rated in categories manufactured for different scenarios.

 Coaxial cable is widely used for cable television systems, office buildings, and other
work sites for local area networks. The cables consist of copper or aluminum wire
wrapped with an insulating layer, typically of a flexible material with a high dielectric
constant, all of which is surrounded by a conductive layer. The layers of insulation
help minimize interference and distortion. Transmission speeds range from 200
million to more than 500 million bits per second.

 ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and
power lines) to create a high-speed (up to 1 Gigabit/s) local area network.

 Optical fiber cable consists of one or more filaments of glass fiber wrapped in
protective layers that carries data by means of pulses of light. It transmits light which
can travel over extended distances. Fiber-optic cables are not affected by
electromagnetic radiation. Transmission speed may reach trillions of bits per second.
The transmission speed of fiber optics is hundreds of times faster than for coaxial
cables and thousands of times faster than a twisted-pair wire. This capacity may be
further increased by the use of colored light, i.e., light of multiple wavelengths.
Instead of carrying one message in a stream of monochromatic light impulses, this
technology can carry multiple signals in a single fiber.
Wireless Technologies
 Terrestrial microwave – Terrestrial microwaves use Earth-based transmitters and
receivers whose equipment looks similar to satellite dishes. Terrestrial microwaves
use the low-gigahertz range, which limits all communications to line-of-sight; relay
stations are spaced approximately 48 km (30 miles) apart. Microwave antennas are
usually placed on top of buildings, towers, hills, and mountain peaks.

 Communications satellites – Satellites communicate via microwave radio waves,
which are not deflected by the Earth's atmosphere. The satellites are stationed in
space, typically 35,400 km (22,200 miles) above the equator (for geosynchronous
satellites). These Earth-orbiting systems are capable of receiving and relaying voice,
data, and TV signals.

 Cellular and PCS systems – These use several radio communications technologies.
The systems are divided into different geographic areas, each served by a low-power
transmitter or radio relay antenna that relays calls from one area to the next.

 Wireless LANs – Wireless local area networks use a high-frequency radio technology
similar to digital cellular and a low-frequency radio technology. Wireless LANs use
spread spectrum technology to enable communication between multiple devices in a
limited area. IEEE 802.11 is an example of open-standards wireless radio-wave
technology.

 Infrared communication can transmit signals between devices within small distances
of typically no more than 10 meters. In most cases, line-of-sight propagation is used,
which limits the physical positioning of communicating devices.

 A global area network (GAN) is a network used for supporting mobile
communications across an arbitrary number of wireless LANs, satellite coverage
areas, etc. The key challenge in mobile communications is handing off the user
communications from one local coverage area to the next. In IEEE Project 802, this
involves a succession of terrestrial wireless LANs.
DIFFERENT TYPES OF NETWORKS

Networks are often classified by their physical or organizational extent or their purpose.
Usage, trust level, and access rights differ between these types of networks.

Personal area network

A personal area network (PAN) is a computer network used for communication among
computers and different information technological devices close to one person. Some
examples of devices that are used in a PAN are personal computers, printers, fax machines,
telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and
wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually
constructed with USB and FireWire connections, while technologies such as Bluetooth and
infrared communication typically form a wireless PAN.

Local area network

A local area network (LAN) is a network that connects computers and devices in a limited
geographical area such as home, school, computer laboratory, office building, or closely
positioned group of buildings. Each computer or device on the network is a node. Current
wired LANs are most likely to be based on Ethernet technology, although new standards like
ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial
cables, phone lines and power lines).

A typical library network has a branching tree topology and controlled access to resources.
All interconnected devices must understand the network layer (layer 3) because they handle
multiple subnets. The switches inside the library, which have only 10/100 Mbit/s Ethernet
connections to the user devices and a Gigabit Ethernet connection to the central router, could
be called "layer 3 switches" because they only have Ethernet interfaces and must understand
IP. It would be more correct to call them access routers, where the router at the top is a
distribution router that connects to the Internet and to academic networks' customer access
routers.
The defining characteristics of LANs, in contrast to WANs (wide area networks), include
their higher data transfer rates, smaller geographic range, and lack of need for leased
telecommunication lines. Current Ethernet and other IEEE 802.3 LAN technologies operate
at data transfer rates up to 10 Gbit/s, and IEEE has projects investigating the standardization
of 40 and 100 Gbit/s. Local area networks can be connected to a wide area network by using
routers.

Home Network

A home network is a residential LAN which is used for communication between digital
devices typically deployed in the home, usually a small number of personal computers and
accessories, such as printers and mobile computing devices. An important function is the
sharing of Internet access, often a broadband service through a cable TV or Digital Subscriber
Line (DSL) provider.

Wide Area Network

A wide area network (WAN) is a computer network that covers a large geographic area such
as a city or country, or even spans intercontinental distances, using a communications
channel that combines many types of media such as telephone lines, cables, and air waves. A
WAN often uses transmission facilities provided by common carriers, such as telephone
companies. WAN technologies generally function at the lower three layers of the OSI
reference model: the physical layer, the data link layer, and the network layer. A wide area
network is formed by interconnecting local area networks.

Metropolitan Area Network

A metropolitan area network (MAN) is a computer network that usually spans a city or a
large campus. A MAN usually interconnects a number of local area networks
(LANs) using a high-capacity backbone technology, such as fiber-optical links, and
provides up-link services to wide area networks (or WAN) and the Internet.

Virtual Private Network

A virtual private network (VPN) is a computer network in which some of the links between
nodes are carried by open connections or virtual circuits in some larger network (e.g., the
Internet) instead of by physical wires. The data link layer protocols of the virtual network are
said to be tunneled through the larger network when this is the case. One common application
is secure communications through the public Internet, but a VPN need not have explicit
security features, such as authentication or content encryption. VPNs, for example, can be
used to separate the traffic of different user communities over an underlying network with
strong security features.

VPN may have best-effort performance, or may have a defined service level agreement
(SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a
topology more complex than point-to-point.

Internetwork

An internetwork is the connection of two or more private computer networks via a common
routing technology (OSI layer 3) using routers. The Internet is an aggregation of many
connected internetworks spanning the whole earth. Another such global aggregation is the
telephone network.
