Unit 5 Flashcards
Which factors measure the performance of a distributed system’s software architecture?
The performance of a system can be measured by responsiveness and throughput.
What are the weaknesses of the client–server style?
The central role of the server forms a bottleneck and thus limits the performance of the system. This central role also forms the single point of failure: if the server goes down, the whole system goes down.
What different types of services can be offered by proxy servers?
Proxy servers can provide:
- a caching service by storing local copies of previously visited web pages on the proxy;
- a form of security, by anonymising the IP address details of clients;
- implementation of data content policies, by allowing acceptable content to reach a client and blocking offensive web content.
What is load balancing? Give an example of load balancing for each style of software architecture discussed.
- in the classic client–server system architecture
- in the multiple servers model
- in the proxy server model
- in the peer-to-peer architecture
Load balancing is a strategy for improving the performance of a distributed system by sharing the system's workload among many processes, rather than having it carried out by a single process while other processes inevitably have to wait. For the system architectures discussed:
- in the classic client–server system architecture, load balancing can take place if some of the processing is carried out by the clients;
- in the multiple servers model, the load can be shared among the servers;
- in the proxy server model, the proxy takes over some of the work from the servers, and relays information to the clients that would otherwise have had to be provided by the server;
- in the peer-to-peer architecture, the work is shared among all the peers.
Give the advantages and disadvantages for the two-tier and n-tier architectures.
- **Advantages** of both two-tier and n-tier: can incorporate separate presentations for different types of user.
- **Disadvantages** of two-tier: does not scale well, poor strategy for dealing with failure, difficulty in updating client software for remote users.
- **Advantages** of n-tier compared to two-tier: better scalability, encourages reuse, improved security and availability.
- **Disadvantages** of n-tier: increased complexity, increased security risks.
What are thin and thick clients? Explain which type of client is more likely to be used in the two- and n-tier architectures.
Thin clients (typically used in n-tier architectures) do not carry out much processing and leave this to the server.
Thick clients (more regularly used in two-tier architectures) carry out more processing.
What do the standard three-tier architecture and the Java EE tiered architecture have in common, and how do they differ?
The Java EE tiered architecture is a specific example of the n-tier architecture, with separate database, middle and client tiers.
It differs from the three-tier architecture in that the middle tier is split into further tiers: the web tier and the business tier.
Which form of communication, i.e. synchronous or asynchronous, requires a buffer? Explain why.
In asynchronous message passing, a process sends a message with no regard as to whether the recipient is in a position to accept it or not. If it arrives before the recipient is ready to receive it, the message must be stored somewhere, i.e. buffered, until the recipient can read it.
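A minimal Java sketch of this idea, using a BlockingQueue as the buffer (the queue and the class name here are illustrative assumptions, not course code): the sender deposits its message and continues, and the receiver takes it from the buffer whenever it is ready.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BufferedMessagingDemo {
    public static void main(String[] args) throws InterruptedException {
        // The queue acts as the buffer: messages wait here until the
        // receiver is ready to read them.
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        Thread sender = new Thread(() -> {
            // The sender deposits its message and continues immediately,
            // regardless of whether the receiver is ready.
            buffer.offer("hello");
            System.out.println("Sender has moved on");
        });

        Thread receiver = new Thread(() -> {
            try {
                // The receiver picks the message up from the buffer
                // whenever it gets around to it.
                String message = buffer.take();
                System.out.println("Received: " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        sender.start();
        receiver.start();
        sender.join();
        receiver.join();
    }
}
```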
In an asynchronous message-passing system, the process that sends a message can continue processing. A receiving process is normally blocked until the receive method has completed. Explain how a non-blocking receive could be implemented.
By creating a separate thread that deals with the receive, the main thread can continue processing until it arrives at the point when it requires the data from the receive thread. The main thread and the receive thread would then be joined again.
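A minimal sketch of this approach (the receive() helper below is hypothetical, standing in for whatever blocking receive the messaging system provides):

```java
public class NonBlockingReceiveSketch {

    // Hypothetical blocking receive; stands in for the messaging
    // system's own blocking call.
    private static String receive() throws InterruptedException {
        Thread.sleep(1000);           // simulate waiting for a message
        return "incoming data";
    }

    public static void main(String[] args) throws InterruptedException {
        final String[] result = new String[1];

        // The receive is delegated to its own thread, so the main
        // thread is not blocked while waiting.
        Thread receiver = new Thread(() -> {
            try {
                result[0] = receive();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        System.out.println("Main thread carries on with other work...");

        // Only when the main thread actually needs the data does it
        // join with the receive thread.
        receiver.join();
        System.out.println("Main thread now has: " + result[0]);
    }
}
```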
How can deadlock arise in communication? How can deadlock be resolved?
Deadlock can be the result of programming errors, or of misunderstandings about the ordering of receive and send operations, with two or more processes each waiting to receive from another before they can continue. Most deadlock problems are overcome by using timeouts, so that a call is abandoned if it has not succeeded within a specified time period.
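As an illustration of the timeout approach, java.net.Socket provides setSoTimeout, which causes a blocking read to be abandoned with a SocketTimeoutException after the given time. The host, port, and class name below are assumptions for the sketch:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutReceiveSketch {
    public static void main(String[] args) {
        // Host and port are placeholders for whichever server the
        // client is talking to.
        try (Socket socket = new Socket("localhost", 4444)) {
            // Abandon any blocking read that has not completed
            // within five seconds, instead of waiting forever.
            socket.setSoTimeout(5000);

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String reply = in.readLine();
            System.out.println("Received: " + reply);
        } catch (SocketTimeoutException e) {
            // The timeout fires here: the call is abandoned and the
            // process can take some recovery action.
            System.out.println("No reply within the time limit - giving up");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```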
Describe the difference between a ServerSocket object and a Socket object.
A ServerSocket object listens at a particular port for requests from a client.
A Socket object forms part of the connection by means of which client and server communicate.
Both the client and the server have a socket which, together, form the connection. Each socket provides input and output streams that enable the client and the server to receive data from and send data to the connection.
Describe how the connection between a client and a server is made. How do client and server communicate once this connection is set up?
The client creates a socket based on the internet address of the server’s computer and the port on the server’s computer at which the connection will be made. This act of creating the socket can be thought of as requesting a connection with the server. The server creates a ServerSocket object associated with the port on its own machine at which the connection with the client is to be made. The method accept is invoked on this server socket, which causes the server to wait for the client’s request. When this arrives, accept makes the connection with the client and returns the server’s connection socket object through which it will communicate with the client. The client and the server use the output and input streams associated with their own socket to the connection for sending data to and from each other.
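A minimal sketch of this exchange (class names, the choice of port 4444, and the use of localhost are assumptions, not the course's own code):

```java
import java.io.*;
import java.net.*;

public class MinimalServer {
    public static void main(String[] args) throws IOException {
        // Listen at an agreed port (4444 is an arbitrary choice here).
        try (ServerSocket listener = new ServerSocket(4444)) {
            // accept blocks until a client requests a connection,
            // then returns the server's connection socket.
            Socket connection = listener.accept();
            PrintWriter out = new PrintWriter(connection.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            out.println("Hello from server: " + in.readLine());
            connection.close();
        }
    }
}

class MinimalClient {
    public static void main(String[] args) throws IOException {
        // Creating the socket requests a connection to the server's
        // machine at the agreed port.
        try (Socket connection = new Socket("localhost", 4444)) {
            PrintWriter out = new PrintWriter(connection.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            out.println("Hello from client");
            System.out.println(in.readLine());
        }
    }
}
```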
Explain whether the DateServer application is synchronous or asynchronous.
It is asynchronous because the sending process can send its message, irrespective of whether the receiving process is ready for it. An example of this is the statement
out.write(message);
executed by the DateServer, which is equivalent to a send. Similarly, the receiving process can pick up the message as and when it requires it (the message may have been buffered in the meantime).
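The DateServer code itself is not reproduced here, but a minimal sketch of a server of this kind (class name, port, and the use of println are assumptions) shows the point: the write completes and the server moves on, whether or not the client has read the message yet.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class DateServerSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(4444)) {
            while (true) {
                Socket connection = listener.accept();
                PrintWriter out = new PrintWriter(connection.getOutputStream(), true);
                // Equivalent to a send: the server writes the message and
                // carries on, regardless of when the client reads it.
                out.println(new Date().toString());
                connection.close();
            }
        }
    }
}
```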
What are the advantages of having a multithreaded server, rather than a single-threaded server?
If a server could handle only one client at a time, it would be rather unsatisfactory for most applications: each client trying to communicate with the server would have to wait until the previous client had finished, making the system very unresponsive. A multithreaded server avoids this by serving multiple concurrent clients, each handled by its own thread.
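A minimal sketch of a multithreaded server (class name and port are assumptions): the main loop only accepts connections, and each client's conversation is handed to its own thread.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class MultithreadedServerSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(4444)) {
            while (true) {
                // accept returns as soon as a client connects...
                Socket connection = listener.accept();
                // ...and the conversation with that client is handed to
                // its own thread, so the main loop can immediately go
                // back to waiting for the next client.
                new Thread(() -> handleClient(connection)).start();
            }
        }
    }

    private static void handleClient(Socket connection) {
        try {
            PrintWriter out = new PrintWriter(connection.getOutputStream(), true);
            out.println("You are being served by " + Thread.currentThread().getName());
            connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```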
In the case of implementing a client for a chat server, why does it make sense to implement this with two threads?
In the case of a chat client, there are two quite separate actions going on:
1 forwarded messages from fellow chat clients (as sent by the chat server) are being received and printed to the screen;
2 contributions from the chat client itself are being sent to the server so that they can be forwarded.
These two actions go on continually, and it cannot be determined beforehand exactly when the client will want to make a contribution, nor when any of the fellow chatters will; handling each action in its own thread allows both to proceed independently.
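A minimal sketch of such a two-threaded chat client (host, port, and class name are assumptions): one thread prints whatever the server forwards, while the main thread sends the user's keyboard input to the server.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.Scanner;

public class ChatClientSketch {
    public static void main(String[] args) throws IOException {
        // Host and port of the chat server are assumptions for the sketch.
        Socket connection = new Socket("localhost", 4444);
        BufferedReader fromServer = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        PrintWriter toServer = new PrintWriter(connection.getOutputStream(), true);

        // Thread 1: continually receive forwarded messages and print them.
        Thread listener = new Thread(() -> {
            try {
                String line;
                while ((line = fromServer.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (IOException e) {
                System.out.println("Connection closed");
            }
        });
        listener.start();

        // Thread 2 (the main thread): continually read the user's own
        // contributions from the keyboard and send them to the server.
        Scanner keyboard = new Scanner(System.in);
        while (keyboard.hasNextLine()) {
            toServer.println(keyboard.nextLine());
        }
        connection.close();
    }
}
```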