
A computing system that is composed of two logical parts: a server, which provides services, and a client, which requests them. The two parts can run on separate machines on a network, allowing users to access powerful server resources from their personal computers. See Local-area networks; Wide-area networks.

Client-server systems are not limited to traditional computers. An example is an automated teller machine (ATM) network. Customers typically use ATMs as clients to interface with a server that manages all of the accounts for a bank. This server may in turn work with servers of other banks (such as when money is withdrawn at a bank at which the user does not have an account). The ATMs provide a user interface, and the servers provide services, such as checking account balances and transferring money between accounts.

To provide access to servers not running on the same machine as the client, middleware is usually used. Middleware serves as the networking between the components of a client-server system; it must run on both the client and the server. It provides everything required to get a request from a client to a server and to get the server's response back to the client. Middleware often facilitates communication between different types of computer systems. This communication provides cross-platform client-server computing and allows many types of clients to access the same data.

The server portion almost always holds the data, and the client is nearly always responsible for the user interface. The application logic, which determines how the data should be acted on, can be distributed between the client and the server. The part of a system with a disproportionately large amount of application logic is termed fat; a thin portion of a system is one with less responsibility delegated to it. Fat-server systems, such as groupware systems and web servers, delegate more responsibility for the application logic to the server, whereas fat-client systems, such as most database systems, place more responsibility on the client.
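The request-and-response cycle described above can be sketched with Python's standard socket library. The "bank balance" service and the account data are hypothetical, chosen only to echo the ATM example; a real system would run the two parts on separate machines, with middleware in between.

```python
# Minimal sketch of the client-server model: the server holds the data
# and the application logic, the client sends requests and displays results.
import socket
import threading

ACCOUNTS = {"alice": 250, "bob": 90}  # hypothetical server-side data

def serve_forever(sock):
    """Server side: answer each client's balance request in turn."""
    while True:
        conn, _ = sock.accept()
        with conn:
            name = conn.recv(1024).decode()      # the client's request
            balance = ACCOUNTS.get(name, 0)      # application logic on the server
            conn.sendall(str(balance).encode())  # the server's response

def request_balance(port, name):
    """Client side: connect, send a request, read the response."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(name.encode())
        return int(conn.recv(1024).decode())

server = socket.socket()
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve_forever, args=(server,), daemon=True).start()

balance = request_balance(port, "alice")  # → 250
```

Here both parts share one process for brevity; the division of labor (data and logic on the server, requests from the client) is the point.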
See Human-computer interaction.

The canonical client-server model assumes two participants in the system. This is called a two-tier system; the application logic must be in the client or the server, or shared between the two. It is also possible to have the application logic reside in a third layer separate from the user interface and the data, turning the system into a three-tier system. Complete separation is rare in actual systems; usually the bulk of the application logic is in the middle tier, but select portions of it are the responsibility of the client or the server. The three-tier model is more flexible than the two-tier model because separating the application logic from the client and the server gives application logic processes a new level of autonomy. The processes become more robust, since they can operate independently of the clients and servers. Furthermore, decoupling the application logic from the data allows data from multiple sources to be used in a single transaction without a breakdown in the client-server model. This advancement in client-server architecture is largely responsible for the notion of distributed data. See Distributed systems (computers).

Standard web applications are the most common examples of three-tier systems. The first tier is the user interface, provided by a web browser's interpretation of Hypertext Markup Language (HTML). The embedded components displayed by the browser reside in the middle tier and provide the application logic pertinent to the system. The final tier is the data, from a web server. Quite often this is a database-style system, but it could be a data-warehousing or groupware system.
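The three-tier separation can be sketched as three independent layers in one short program. The layer names, data, and functions here are illustrative stand-ins: in a real web application the data tier would be a database, the middle tier a server-side application, and the presentation tier a browser rendering HTML.

```python
# Data tier: holds the data (a stand-in for a database).
DATABASE = {"widgets": 3, "gadgets": 7, "sprockets": 0}

# Middle tier: application logic, independent of both UI and storage.
def stock_report(db):
    """Business rule: report only items actually in stock."""
    return {item: count for item, count in db.items() if count > 0}

# Presentation tier: renders the logic's result as HTML for a browser.
def render(report):
    rows = "".join(f"<li>{item}: {count}</li>" for item, count in report.items())
    return f"<ul>{rows}</ul>"

html = render(stock_report(DATABASE))
```

Because each tier only talks to its neighbor, the data source or the rendering can be swapped without touching the application logic, which is the flexibility the three-tier model is credited with above.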

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads among peers. Peers are equally privileged, equipotent participants in the application; they are said to form a peer-to-peer network of nodes. Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model, in which only servers supply (send) and clients consume (receive).
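The symmetry described above, with every peer acting as both supplier and consumer, can be sketched as follows. The `Peer` class and its methods are hypothetical, not drawn from any real P2P library.

```python
# Each peer both serves the files it holds and fetches files from other
# peers directly, with no central server mediating the exchange.
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)    # resources this peer supplies

    def serve(self, filename):
        """Act as a supplier: hand out a file if this peer has it."""
        return self.files.get(filename)

    def fetch(self, filename, others):
        """Act as a consumer: ask the other peers directly."""
        for peer in others:
            data = peer.serve(filename)
            if data is not None:
                self.files[filename] = data  # this peer can now supply it too
                return data
        return None

a = Peer("a", {"song.mp3": b"audio-bytes"})
b = Peer("b", {})
b.fetch("song.mp3", [a])   # b consumes from a...
b.serve("song.mp3")        # ...and can now supply the file itself
```

Note how a completed download enlarges the set of suppliers, which is why total capacity grows as nodes join, as the next paragraph explains.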

In P2P networks, clients provide resources, which may include bandwidth, storage space, and computing power. This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client-server architecture, clients share only their demands with the system, not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client-server based system.

Another important property of peer-to-peer systems is the lack of a system administrator. This leads to a network that is easier and faster to set up and keep running, because a full staff is not required to ensure efficiency and stability. Decentralized networks, however, introduce new security issues because they are designed so that each user is responsible for controlling their own data and resources. Peer-to-peer networks, along with almost all network systems, are vulnerable to unsecured and unsigned code that may allow remote access to files on a victim's computer or even compromise the entire network. A user may encounter harmful data by downloading a file that was originally uploaded as a virus disguised in an .exe, .mp3, .avi, or any other file type. This type of security issue is due to the lack of an administrator to maintain the list of files being distributed.

For example, peer-to-peer networks gained widespread popularity in the late 1990s with several file-sharing services, such as Napster and Gnutella, that enabled peers to exchange files with one another. The Napster system uses an approach similar to the first type described above: a centralized server maintains an index of all files, while the actual exchange of files takes place between the peer nodes.
The Gnutella system uses a technique similar to the second type: a client broadcasts a file request to the other nodes in the system, and the nodes that can service the request respond directly to the client. The future of exchanging files in this way remains uncertain because many of the files are copyrighted (music, for example).
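The two lookup styles can be contrasted in a few lines. This is only an illustration of the idea of a central index versus broadcasting a query; neither Napster's nor Gnutella's actual protocol is reproduced here, and the peer and file names are made up.

```python
# A toy network: which peer holds which files.
peers = {
    "p1": {"songA.mp3"},
    "p2": {"songB.mp3"},
    "p3": {"songA.mp3", "songC.mp3"},
}

# Napster-style: a central server keeps an index mapping files to peers;
# the download itself still happens between the peers.
index = {}
for peer, files in peers.items():
    for f in files:
        index.setdefault(f, []).append(peer)

def napster_lookup(filename):
    """Ask the central index which peers hold the file."""
    return index.get(filename, [])

# Gnutella-style: no index; the client broadcasts the request, and any
# peer holding the file responds directly.
def gnutella_lookup(filename):
    """Query every peer and collect the ones that respond."""
    return [peer for peer, files in peers.items() if filename in files]

print(sorted(napster_lookup("songA.mp3")))   # ['p1', 'p3']
print(sorted(gnutella_lookup("songC.mp3")))  # ['p3']
```

The trade-off mirrors the text above: the central index is fast to query but is a single point of failure (and of legal liability), whereas broadcasting removes the central server at the cost of querying many nodes.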