Wednesday, September 1, 2021

Performance optimization and load balancing

 

What is Scalability? 

Scalability is the ability of a system, network, or process to handle a growing amount of load by adding more resources. Resources can be added in two ways:

  • Scaling Up 

This involves adding more resources to the existing nodes, for example more RAM, storage, or processing power. 

  • Scaling Out 

This involves adding more nodes to support more users. 

Either approach can be used to scale an application; however, the cost of adding resources (per user) may change as the volume increases. Adding resources to the system should increase the application's ability to handle load in proportion to the resources added. 

An ideal application would serve a high level of load with fewer resources. In practice, however, a linearly scalable system may be the best that is achievable. 

Poorly designed applications can have a very high cost of scaling up or out, since they require more resources per user as the load increases. 


What is a Cluster? 

A cluster is a group of machines that can each run the software individually. Clusters are typically used to achieve high availability for server software.  
 
Clustering is used in many types of servers for high availability. 

  • App Server Cluster 

An app server cluster is a group of machines that run an application server and can be relied upon with minimal downtime. 

  • Database Server Cluster 

A database server cluster is a group of machines that run a database server and can be relied upon with minimal downtime. 


Why do you need Clustering? 

Clustering is needed to achieve high availability for server software. The main purpose of clustering is to achieve 100% availability, or zero downtime, in service.  

Typical server software runs on one machine and can serve requests as long as there is no hardware or other failure.  

By creating a cluster of more than one machine, we reduce the chances of the service becoming unavailable if one of the machines fails.  

Clustering does not guarantee that the service will be 100% available, since there is still a chance that all the machines in a cluster fail at the same time. However, this is not very likely if you have many machines and they are located in different locations or supported by their own resources. 


What is Middle Tier Clustering? 

Middle tier clustering is simply a cluster used to serve the middle tier of an application. This is popular because many clients may depend on the middle tier, and it may carry a lot of heavy load, which requires it to be highly available.  

Failure of the middle tier can cause multiple clients and systems to fail, which is why clustering at the middle tier of an application is a common approach.  

In the Java world, it is common to have EJB server clusters that are used by many clients. In general, any application whose business logic can be shared across multiple clients can use a middle tier cluster for high availability. 


What is Load Balancing? 

Load balancing is a simple technique for distributing workloads across multiple machines or clusters.  
The most common and simplest load balancing algorithm is round robin. In this type of load balancing, requests are distributed in circular order, ensuring all machines get an equal number of requests and no single machine is overloaded or underloaded. 
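
As a rough sketch, round robin can be as simple as an incrementing counter taken modulo the number of servers. The Java example below is illustrative only (the class and server names are made up); a real load balancer implements the same idea at the network or HTTP level.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {

    private final List<String> servers;               // backend addresses, e.g. "app1:8080"
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = List.copyOf(servers);
    }

    // Each call returns the next server in circular order, so requests are
    // spread evenly across all backends.
    public String nextServer() {
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(List.of("app1:8080", "app2:8080", "app3:8080"));
        for (int i = 1; i <= 6; i++) {
            System.out.println("Request " + i + " -> " + lb.nextServer());
        }
    }
}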
 
The purpose of load balancing is to: 

  • Optimize resource usage (avoid overloading or under-loading any machine) 

  • Achieve maximum throughput 

  • Minimize response time 

The most common load balancing techniques in web-based applications are: 

  1. Round robin 

  2. Session affinity or sticky session 

  3. IP address affinity 

What is Session Replication? 

Session replication is used in application server clusters to achieve session failover. A user session is replicated to other machines of the cluster every time the session data changes. If a machine fails, the load balancer can simply send incoming requests to another server in the cluster. The user can be sent to any server in the cluster, since every machine has a copy of the session.  
 
Session replication gives your application session failover, but it comes at an extra cost in terms of memory and network bandwidth. 
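
As a rough illustration, in a Java servlet container the objects you put in the session generally need to be serializable so the container can copy them to other nodes. The sketch below is illustrative (the CartItem class is hypothetical) and is not tied to any specific container's replication setup.

import java.io.Serializable;

// Hypothetical session attribute: it implements Serializable so a replicating
// servlet container can copy it to the other machines in the cluster.
public class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String productId;
    private final int quantity;

    public CartItem(String productId, int quantity) {
        this.productId = productId;
        this.quantity = quantity;
    }
}

// In a servlet or controller, storing the attribute in the session is what a
// replication-enabled container propagates to the rest of the cluster:
//   HttpSession session = request.getSession();
//   session.setAttribute("cartItem", new CartItem("SKU-123", 2));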


What is Sticky Session (Session Affinity) Load Balancing? What do you mean by 'session affinity'? 

Sticky session, or session affinity, is another popular load balancing technique that requires a user session to always be served by an allocated machine.  

Why Sticky Session? 

In a load-balanced server application where user information is stored in the session, the session data would otherwise need to be available to all machines. This can be avoided by always serving requests for a particular user session from one machine. 

How Is It Done? 

A machine is associated with a session as soon as the session is created. All requests in that session are always directed to the associated machine. This ensures the user data lives on only one machine while the load is still shared. 
 
In the Java world, this is typically done using the jsessionid cookie. The cookie is sent to the client on the first request, and every subsequent request from the client must contain that same cookie to identify the session. 
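
A simplified sketch of sticky routing, assuming the balancer reads the JSESSIONID cookie value and remembers which node it first assigned (the class and method names here are made up for the example):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class StickySessionRouter {

    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger();
    // Remembers which server was chosen for each session id.
    private final Map<String, String> sessionToServer = new ConcurrentHashMap<>();

    public StickySessionRouter(List<String> servers) {
        this.servers = List.copyOf(servers);
    }

    private String nextServer() {
        return servers.get(Math.floorMod(counter.getAndIncrement(), servers.size()));
    }

    // The first request of a session is assigned a server (round robin here);
    // every later request carrying the same JSESSIONID goes to that same server.
    public String route(String jsessionId) {
        if (jsessionId == null) {
            return nextServer();          // no cookie: fall back to plain round robin
        }
        return sessionToServer.computeIfAbsent(jsessionId, id -> nextServer());
    }
}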

What Are The Issues With Sticky Session? 

There are a few issues you may face with this approach: 

  • The client browser may not support cookies, in which case your load balancer will not be able to identify which session a request belongs to. This may cause strange behavior for users whose browsers do not accept cookies. 

  • If one of the machines fails or goes down, the user information served by that machine will be lost and there will be no way to recover those user sessions. 


What is the IP Address Affinity Technique for Load Balancing? 

IP address affinity is another popular way to do load balancing. In this approach, the client IP address is associated with a server node. All requests from a client IP address are served by one server node. 
 
This approach is easy to implement, since the client IP address is available with every request and no additional settings need to be configured.  
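
A minimal sketch of IP address affinity, assuming the balancer simply hashes the client IP into the server list (the class and server names are illustrative):

import java.util.List;

public class IpAffinityBalancer {

    private final List<String> servers;

    public IpAffinityBalancer(List<String> servers) {
        this.servers = List.copyOf(servers);
    }

    // Requests from the same client IP always map to the same server node,
    // because the node is derived from a hash of the IP address.
    public String serverFor(String clientIp) {
        int index = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        IpAffinityBalancer lb = new IpAffinityBalancer(List.of("app1", "app2", "app3"));
        System.out.println(lb.serverFor("203.0.113.10"));   // same IP -> same node every time
        System.out.println(lb.serverFor("203.0.113.10"));
    }
}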
 
This type of load balancing can be useful if your clients are likely to have cookies disabled.  

However, there is a downside to this approach. If many of your users are behind a NATed IP address, all of them will end up on the same server node, which may cause uneven load on your server nodes.  

NATed IP addresses are very common; in fact, any time you are browsing from an office network, you and all your coworkers are likely using the same NATed IP address. 


What is Failover? 

Failover means switching to another server when one of the servers fails.  

Failover is an important technique for achieving high availability. Typically, a load balancer is configured to fail over to another machine when the main machine fails.  

To minimize downtime, most load balancers support a heartbeat check. This ensures that the target machine is responding. As soon as a heartbeat signal fails, the load balancer stops sending requests to that machine and redirects them to other machines or clusters.
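
A minimal heartbeat sketch, assuming a plain TCP connect counts as "alive" (real load balancers usually offer configurable HTTP or TCP health checks; the class here is illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatChecker {

    private final Map<String, Boolean> healthy = new ConcurrentHashMap<>();

    // Periodically try to open a TCP connection to each node; a node that
    // stops answering is marked unhealthy so the balancer can skip it.
    public void start(List<InetSocketAddress> nodes) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            for (InetSocketAddress node : nodes) {
                boolean up;
                try (Socket socket = new Socket()) {
                    socket.connect(node, 2000);   // 2 second timeout per check
                    up = true;
                } catch (IOException e) {
                    up = false;                   // heartbeat failed: stop routing here
                }
                healthy.put(node.toString(), up);
            }
        }, 0, 5, TimeUnit.SECONDS);
    }

    public boolean isHealthy(InetSocketAddress node) {
        return healthy.getOrDefault(node.toString(), false);
    }
}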

Web Distributed Systems Design


Principles of Web Distributed Systems Design 

  • Availability: The uptime of a website is absolutely critical to the reputation and functionality of many companies. For some of the larger online retail sites, being unavailable for even minutes can result in thousands or millions of dollars in lost revenue, so designing their systems to be constantly available and resilient to failure is both a fundamental business and a technology requirement. High availability in distributed systems requires the careful consideration of redundancy for key components, rapid recovery in the event of partial system failures, and graceful degradation when problems occur. 

  • Performance: Website performance has become an important consideration for most sites. The speed of a website affects usage and user satisfaction, as well as search engine rankings, a factor that directly correlates to revenue and retention. As a result, creating a system that is optimized for fast responses and low latency is key. 

  • Reliability: A system needs to be reliable, such that a request for data will consistently return the same data. In the event the data changes or is updated, then that same request should return the new data. Users need to know that if something is written to the system, or stored, it will persist and can be relied on to be in place for future retrieval. 

  • Scalability: When it comes to any large distributed system, size is just one aspect of scale that needs to be considered. Just as important is the effort required to increase capacity to handle greater amounts of load, commonly referred to as the scalability of the system. Scalability can refer to many different parameters of the system: how much additional traffic can it handle, how easy is it to add more storage capacity, or even how many more transactions can be processed. 

  • Manageability: Designing a system that is easy to operate is another important consideration. The manageability of the system equates to the scalability of operations: maintenance and updates. Things to consider for manageability are the ease of diagnosing and understanding problems when they occur, ease of making updates or modifications, and how simple the system is to operate. (i.e., does it routinely operate without failure or exceptions?) 

  • Cost: Cost is an important factor. This obviously can include hardware and software costs, but it is also important to consider other facets needed to deploy and maintain the system. The amount of developer time the system takes to build, the amount of operational effort required to run the system, and even the amount of training required should all be considered. Cost is the total cost of ownership. 

