Multi-User DBMS Architectures
In this section we look at the common architectures that are used to implement multi-user
database management systems, namely teleprocessing, file-server, and client–server.
Teleprocessing
The traditional architecture for multi-user systems was teleprocessing, where there is one
computer with a single central processing unit (CPU) and a number of terminals, as
illustrated in Figure 2.10. All processing is performed within the boundaries of the same
physical computer. User terminals are typically ‘dumb’, incapable of functioning on their own, and are cabled to the central computer. The terminals send messages via the
communications control subsystem of the operating system to the user’s application program,
which in turn uses the services of the DBMS. In the same way, messages are routed
back to the user’s terminal. Unfortunately, this architecture placed a tremendous burden
on the central computer, which not only had to run the application programs and the
DBMS, but also had to carry out a significant amount of work on behalf of the terminals
(such as formatting data for display on the screen).
In recent years, there have been significant advances in the development of high-performance personal computers and networks. There is now an identifiable trend in
industry towards downsizing, that is, replacing expensive mainframe computers with
more cost-effective networks of personal computers that achieve the same, or even better,
results. This trend has given rise to the next two architectures: file-server and client–server.
File-Server Architecture
In a file-server environment, the processing is distributed about the network, typically a
local area network (LAN). The file-server holds the files required by the applications and
the DBMS. However, the applications and the DBMS run on each workstation, requesting
files from the file-server when necessary, as illustrated in Figure 2.11. In this way, the
file-server acts simply as a shared hard disk drive. The DBMS on each workstation sends requests to the file-server for all the data it requires that is stored on disk.
This approach can generate a significant amount of network traffic, which can lead to
performance problems. For example, consider a user request that requires the names of
staff who work in the branch at 163 Main St. We can express this request in SQL (see
Chapter 5) as:
SELECT fName, lName
FROM Branch b, Staff s
WHERE b.branchNo = s.branchNo AND b.street = '163 Main St';
As the file-server has no knowledge of SQL, the DBMS has to request the files corresponding
to the Branch and Staff relations from the file-server, rather than just the staff
names that satisfy the query.
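To make the cost concrete, the following sketch mimics what the workstation-resident DBMS must do under this architecture. It is written in Python purely for illustration and assumes the Branch and Staff relations are stored as delimited files on the file-server's shared drive; the paths and column layout are invented. Both files are read in full across the network, even though only a handful of rows are needed.

import csv

# Invented paths on the file-server's shared drive. Under the file-server
# architecture, each open() pulls the ENTIRE file across the network.
BRANCH_FILE = '//fileserver/dreamhome/branch.csv'  # branchNo, street, ...
STAFF_FILE = '//fileserver/dreamhome/staff.csv'    # staffNo, fName, lName, branchNo, ...

def staff_at_street(street):
    # Step 1: fetch the whole Branch file, keep the matching branch numbers.
    with open(BRANCH_FILE, newline='') as f:
        branch_nos = {row['branchNo'] for row in csv.DictReader(f)
                      if row['street'] == street}
    # Step 2: fetch the whole Staff file, then join and filter locally.
    with open(STAFF_FILE, newline='') as f:
        return [(row['fName'], row['lName']) for row in csv.DictReader(f)
                if row['branchNo'] in branch_nos]

print(staff_at_street('163 Main St'))

A server-resident DBMS, by contrast, would evaluate the join and the predicate itself and return only the two requested columns of the matching rows.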
The file-server architecture, therefore, has three main disadvantages:
(1) There is a large amount of network traffic.
(2) A full copy of the DBMS is required on each workstation.
(3) Concurrency, recovery, and integrity control are more complex because there can be
multiple DBMSs accessing the same files.
Traditional Two-Tier Client–Server Architecture
To overcome the disadvantages of the first two approaches and accommodate an increasingly
decentralized business environment, the client–server architecture was developed.
Client–server refers to the way in which software components interact to form a system.
As the name suggests, there is a client process, which requires some resource, and a
server, which provides the resource. There is no requirement that the client and server
must reside on the same machine. In practice, it is quite common to place a server at one
site in a local area network and the clients at the other sites. Figure 2.12 illustrates the
client–server architecture and Figure 2.13 shows some possible combinations of the
client–server topology.
Data-intensive business applications consist of four major components: the database,
the transaction logic, the business and data application logic, and the user interface. The
traditional two-tier client–server architecture provides a very basic separation of these
components. The client (tier 1) is primarily responsible for the presentation of data to the
user, and the server (tier 2) is primarily responsible for supplying data services to the
client, as illustrated in Figure 2.14. Presentation services handle user interface actions and
the main business and data application logic. Data services provide limited business
application logic, typically validation that the client is unable to carry out due to lack of
information, and access to the requested data, independent of its location. The data can
come from relational DBMSs, object-relational DBMSs, object-oriented DBMSs, legacy
DBMSs, or proprietary data access systems. Typically, the client would run on end-user
desktops and interact with a centralized database server over a network.
A typical interaction between client and server is as follows. The client takes the user’s
request, checks the syntax and generates database requests in SQL or another database
language appropriate to the application logic. It then transmits the message to the server,
waits for a response, and formats the response for the end-user. The server accepts and
processes the database requests, then transmits the results back to the client. The processing
involves checking authorization, ensuring integrity, maintaining the system catalog,
and performing query and update processing. In addition, it provides concurrency and
recovery control. The operations of client and server are summarized in Table 2.1.
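A minimal sketch of the client side of such an interaction, assuming a PostgreSQL database server and the psycopg2 driver (the text names neither; any DB-API-style client follows the same pattern). Only the SQL request and the matching rows cross the network; parsing, authorization, integrity checking, and query processing all happen on the server.

import psycopg2  # assumed driver; host name and credentials are invented

# Connect to the central database server over the network.
conn = psycopg2.connect(host='dbserver.example.com', dbname='dreamhome',
                        user='app', password='secret')

def staff_at_street(street):
    with conn.cursor() as cur:
        # The client generates and transmits the database request ...
        cur.execute('SELECT s.fName, s.lName '
                    'FROM Branch b, Staff s '
                    'WHERE b.branchNo = s.branchNo AND b.street = %s',
                    (street,))
        # ... and waits for the server's response.
        return cur.fetchall()

# The client formats the response for the end-user.
for fname, lname in staff_at_street('163 Main St'):
    print(fname, lname)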
There are many advantages to this type of architecture. For example:
- It enables wider access to existing databases.
- Increased performance – if the clients and server reside on different computers, then different CPUs can process applications in parallel. It should also be easier to tune the server machine if its only task is to perform database processing.
- Hardware costs may be reduced – it is only the server that requires storage and processing power sufficient to store and manage the database.
- Communication costs are reduced – applications carry out part of the operations on the client and send only requests for database access across the network, resulting in less data being sent across the network.
- Increased consistency – the server can handle integrity checks, so that constraints need to be defined and validated only in one place, rather than having each application program perform its own checking (see the sketch after this list).
- It maps onto open systems architecture quite naturally.
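As a small illustration of the consistency point, an integrity constraint can be declared once on the server instead of being re-validated in every application program. The sketch below reuses the assumed psycopg2 connection style from above; the salary column and its range are invented for the example.

import psycopg2  # assumed driver, as in the earlier sketch

conn = psycopg2.connect(host='dbserver.example.com', dbname='dreamhome',
                        user='app', password='secret')
with conn.cursor() as cur:
    # Declared once on the server: every client that inserts or updates
    # Staff rows is checked automatically, with no client-side duplication.
    cur.execute('ALTER TABLE Staff '
                'ADD CONSTRAINT staff_salary_range '
                'CHECK (salary BETWEEN 6000 AND 40000)')
conn.commit()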
Some database vendors have used this architecture to indicate distributed database capability, that is, a collection of multiple, logically interrelated databases distributed over a computer network. However, although the client–server architecture can be used to provide distributed DBMSs, by itself it does not constitute a distributed DBMS.
Three-Tier Client–Server Architecture
The need for enterprise scalability challenged this traditional two-tier client–server model.
In the mid-1990s, as applications became more complex and potentially could be deployed
to hundreds or thousands of end-users, the client side presented two problems that prevented
true scalability:
- A ‘fat’ client, requiring considerable resources on the client’s computer to run effectively. This includes disk space, RAM, and CPU power.
- A significant client-side administration overhead.
By 1995, a new variation of the traditional two-tier client–server model emerged to solve
the problem of enterprise scalability. This new architecture proposed three layers, each
potentially running on a different platform:
(1) The user interface layer, which runs on the end-user’s computer (the client).
(2) The business logic and data processing layer. This middle tier runs on a server and is
often called the application server.
(3) A DBMS, which stores the data required by the middle tier. This tier may run on a
separate server called the database server.
As illustrated in Figure 2.15, the client is now responsible only for the application’s user
interface and perhaps performing some simple logic processing, such as input validation,
thereby providing a ‘thin’ client. The core business logic of the application now resides
in its own layer, physically connected to the client and database server over a local area
network (LAN) or wide area network (WAN). One application server is designed to serve
multiple clients.
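To ground the three layers, here is a hedged sketch of the middle tier only, assuming Flask as the application-server framework and the same invented psycopg2-style access to the database tier; the thin client (tier 1) would be a browser or simple GUI that merely renders the JSON response.

from flask import Flask, jsonify, request  # assumed middle-tier framework
import psycopg2                            # assumed driver for tier 3

app = Flask(__name__)

# Tier 3: the database server, reached over the network from this tier.
# (A single shared connection keeps the sketch short; a real application
# server would use a connection pool.)
conn = psycopg2.connect(host='dbserver.example.com', dbname='dreamhome',
                        user='app', password='secret')

# Tier 2: the business logic lives here and is shared by every thin client.
@app.route('/staff')
def staff_by_street():
    street = request.args.get('street', '')
    if not street:  # simple input validation in the middle tier
        return jsonify(error='street parameter required'), 400
    with conn.cursor() as cur:
        cur.execute('SELECT s.fName, s.lName '
                    'FROM Branch b, Staff s '
                    'WHERE b.branchNo = s.branchNo AND b.street = %s',
                    (street,))
        rows = cur.fetchall()
    # Tier 1 only renders this JSON; no business logic runs on the client.
    return jsonify([{'fName': f, 'lName': l} for f, l in rows])

if __name__ == '__main__':
    app.run()  # behind a separate Web server in the n-tier variant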
The three-tier design has many advantages over traditional two-tier or single-tier designs, including:
- The need for less expensive hardware, because the client is ‘thin’.
- Application maintenance is centralized, with the transfer of the business logic for many end-users into a single application server. This eliminates the concerns of software distribution that are problematic in the traditional two-tier client–server model.
- The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
- Load balancing is easier with the separation of the core business logic from the database functions.
An additional advantage is that the three-tier architecture maps quite naturally to the Web
environment, with a Web browser acting as the ‘thin’ client, and a Web server acting as
the application server. The three-tier architecture can be extended to n-tiers, with additional
tiers added to provide more flexibility and scalability. For example, the middle tier
of the three-tier architecture could be split into two, with one tier for the Web server and
another for the application server.
This three-tier architecture has proved more appropriate for some environments, such as
the Internet and corporate intranets where a Web browser can be used as a client. It is also
an important architecture for Transaction Processing Monitors, as we discuss next.