The architecture of a database system is greatly influenced by the underlying computer system on which it runs, in particular by such aspects of computer architecture as networking, parallelism, and distribution:
- Networking of computers allows some tasks to be executed on a server system and some tasks to be executed on client systems. This division of work has led to client–server database systems.
- Parallel processing within a computer system allows database-system activities to be sped up, allowing faster response to transactions, as well as more transactions per second. Queries can be processed in a way that exploits the parallelism offered by the underlying computer system. The need for parallel query processing has led to parallel database systems.
- Distributing data across sites in an organization allows those data to reside where they are generated or most needed, but still to be accessible from other sites and from other departments. Keeping multiple copies of the database across different sites also allows large organizations to continue their database operations even when one site is affected by a natural disaster, such as flood, fire, or earthquake. Distributed database systems handle geographically or administratively distributed data spread across multiple database systems.
- Teleprocessing Architecture. Teleprocessing is the oldest, traditional architecture: a single mainframe runs the DBMS and all processing, with a number of terminals attached to it. Because the terminals do little work themselves, more of them can be added as needed. Disadvantages : the entire processing burden falls on the central computer, and if the central CPU fails, every workstation is affected.
- Client–Server Systems. As personal computers became faster, more powerful, and cheaper, there was a shift away from the centralized system architecture. Personal computers supplanted terminals connected to centralized systems, and took over the user-interface functionality that used to be handled directly by the centralized systems. As a result, centralized systems today act as server systems that satisfy requests generated by client systems. Figure 17.2 shows the general structure of a client–server system. Functionality provided by database systems can be broadly divided into two parts: the front end and the back end. The back end manages access structures, query evaluation and optimization, concurrency control, and recovery. The front end of a database system consists of tools such as the SQL user interface, forms interfaces, report generation tools, and data mining and analysis tools (see Figure 17.3). The interface between the front end and the back end is through SQL or through an application program. Disadvantages : dependability, since operations cease when the server goes down; lack of mature tools, as it is a relatively new technology and needed tools are lacking; the need for automated client software distribution; lack of scalability, since network operating systems (e.g., Novell NetWare, Windows NT Server) are not very scalable; higher-than-anticipated costs; and potential network congestion.
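The front-end/back-end split above can be sketched in a few lines of Python. This is a minimal illustration, not a real client–server system: an in-memory SQLite database stands in for the back end's storage engine, the socket layer between client and server is omitted, and the class and table names (`BackEnd`, `FrontEnd`, `account`) are invented for the example.

```python
# Sketch of the front-end / back-end split in a client-server database.
# Assumption: an in-memory SQLite database plays the back end; a real
# server would add networking, concurrency control, and full recovery.
import sqlite3

class BackEnd:
    """Back end: manages storage, query evaluation, and transactions."""
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")

    def execute(self, sql, params=()):
        with self.conn:  # commit on success, rollback on error
            return self.conn.execute(sql, params).fetchall()

class FrontEnd:
    """Front end: user-facing tool that talks to the back end via SQL."""
    def __init__(self, backend):
        self.backend = backend

    def report(self):
        rows = self.backend.execute(
            "SELECT id, balance FROM account ORDER BY id")
        return [f"account {i}: {b:.2f}" for i, b in rows]

backend = BackEnd()
backend.execute("INSERT INTO account VALUES (1, 100.0), (2, 250.5)")
front = FrontEnd(backend)
print(front.report())   # ['account 1: 100.00', 'account 2: 250.50']
```

Note that the front end never touches storage directly: its only channel to the back end is SQL, which is exactly the interface boundary the text describes.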
- Server System Architectures. Server systems can be broadly categorized as transaction servers and data servers.
• Transaction-server systems, also called query-server systems, provide an interface to which clients can send requests to perform an action; in response, they execute the action and send back results to the client. Usually, client machines ship transactions to the server systems, where those transactions are executed, and results are shipped back to the clients, which are in charge of displaying the data. Requests may be specified by using SQL or through a specialized application program interface.
• Data-server systems (a generalization of file-server systems) allow clients to interact with the servers by making requests to read or update data in units such as files or pages. For example, file servers provide a file-system interface where clients can create, update, read, and delete files. Data servers for database systems offer much more functionality: they support units of data, such as pages, tuples, or objects, that are smaller than a file. They provide indexing facilities for data, and provide transaction facilities so that the data are never left in an inconsistent state if a client machine or process fails. Disadvantages : large amounts of network traffic; a copy of the DBMS is required on each workstation; and concurrency, recovery, and integrity control are more complex.
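The difference between the two interaction styles can be made concrete with a toy sketch. Everything here is illustrative: the table contents and the class names `TransactionServer` and `DataServer` are assumptions for the example, and the network round trips are only simulated by method calls.

```python
# Contrast of transaction-server vs. data-server interaction styles.
# Assumption: a toy in-memory table stands in for the server's database.
TABLE = {1: ("Alice", 100), 2: ("Bob", 250), 3: ("Carol", 75)}

class TransactionServer:
    """Client ships the whole request; the server computes the result."""
    def total_balance(self):
        return sum(bal for _, bal in TABLE.values())

class DataServer:
    """Client requests raw data units (here, tuples) and computes locally."""
    def read_tuple(self, key):
        return TABLE[key]

# Transaction-server style: one round trip, result computed at the server.
print(TransactionServer().total_balance())   # 425

# Data-server style: one request per tuple; the client does the arithmetic.
client_side_total = sum(DataServer().read_tuple(k)[1] for k in TABLE)
print(client_side_total)                     # 425
```

The data-server style needed three requests where the transaction server needed one, which is one way to see the "large amounts of network traffic" disadvantage noted above.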
- Three-tier architecture. Three-tier is a client–server architecture in which the user interface, functional process logic ("business rules"), computer data storage, and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan at Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts.
The three-tier model is a software architecture pattern.
Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology. For example, a change of operating system in the presentation tier would only affect the user interface code.
Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic that may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe that contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an “n-tier architecture”).
Three-tier architecture has the following three tiers:
- Presentation tier
- This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network. (In simple terms, it is the layer which users can access directly, such as a web page or an operating system's GUI.)
- Application tier (business logic, logic tier, data access tier, or middle tier)
- The logic tier is pulled out from the presentation tier and, as its own layer, controls an application's functionality by performing detailed processing.
- Data tier
- This tier consists of database servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.
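The three tiers above can be sketched as three independent Python modules with well-defined interfaces between them. This is a minimal sketch under stated assumptions: an in-memory dict stands in for the data tier's database server, the flat 10% tax is an invented business rule, and all class and product names are hypothetical.

```python
# Minimal sketch of the three tiers as independent modules.
# Assumptions: an in-memory dict plays the data tier's database server;
# the 10% tax is a made-up business rule for illustration.

class DataTier:
    """Data tier: stores and retrieves data, neutral to business logic."""
    def __init__(self):
        self._products = {"book": 12.50, "pen": 1.20}

    def get_price(self, name):
        return self._products[name]

class ApplicationTier:
    """Application tier: business rules, here a flat 10% tax."""
    TAX_RATE = 0.10

    def __init__(self, data):
        self.data = data

    def price_with_tax(self, name):
        return round(self.data.get_price(name) * (1 + self.TAX_RATE), 2)

class PresentationTier:
    """Presentation tier: formats results for display, no business logic."""
    def __init__(self, app):
        self.app = app

    def show(self, name):
        return f"{name}: ${self.app.price_with_tax(name):.2f}"

ui = PresentationTier(ApplicationTier(DataTier()))
print(ui.show("book"))   # book: $13.75
```

Because each tier talks only to the one beneath it, any tier can be swapped out independently, for example replacing `DataTier` with a real database server, which is exactly the upgrade-in-isolation property claimed for the architecture.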
Advantages : reduced network traffic; hardware costs may be reduced; increased consistency; centralized application maintenance; easier to modify or replace one tier without affecting the others; separating business logic from database functions makes it easier to implement load balancing; maps naturally to the Web environment. Disadvantages : creates an increased need for network traffic management, server load balancing, and fault tolerance; current tools are relatively immature and more complex; maintenance tools are currently inadequate for maintaining server libraries, which is a potential obstacle to simplifying maintenance and promoting code reuse throughout the organization.
And the winner should be : the three-tier architecture
- This architecture is not hard to build; it is as easy to construct as the other architectures, and it can be customized. For example, a TP monitor can serve as the second tier, dividing requests among database servers according to conditions we define.
- Time, place, and energy efficiency. Many companies worldwide use this architecture, for example on Java- and Microsoft-based platforms. Getting a lot of work done in a short time means sending and receiving data and files both frequently and quickly, which three-tier supports well.
- The best architecture overall. We may need to spend more money, but the cost is matched by what we get.