SQL Server cluster design: One big cluster vs. small clusters

In a Microsoft Cluster Server (MSCS) failover cluster, SQL Server instances can be moved from one node to another to balance utilization and to schedule planned maintenance without downtime.

This cluster design uses software heartbeats to detect failed applications or servers. In the event of a server failure, it automatically transfers ownership of resources (such as disk drives and IP addresses) from the failed server to a live server. Note that there are also methods to keep the heartbeat connections themselves highly available, for example to guard against a general failure at a site.

MSCS does not require any special software on client computers, so the user experience during failover depends on the nature of the client side of the client-server application. Client reconnection is often transparent, because MSCS has restarted the applications, file shares and so on, at exactly the same IP address. Furthermore, the nodes of a cluster can reside in separate, distant sites for disaster recovery.

SQL Server on a cluster server

SQL Server 2000 can be configured on a cluster with up to four nodes, while SQL Server 2005 can be clustered on up to eight nodes. When a SQL Server instance is clustered, its disk resources, IP address and services form a cluster group to allow for failover. For more technical information, refer to the article How to cluster SQL Server 2005.
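As a quick check once the instance is clustered, you can query the instance itself to confirm that it is clustered and to see which node currently owns it. The following command-line sketch assumes a SQL Server 2005 instance with the placeholder virtual network name SQLVS1 (on SQL Server 2000 you would use osql instead of sqlcmd):

REM SQLVS1 is a placeholder for your clustered instance's virtual network name.
sqlcmd -S SQLVS1 -E -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode"

A result of 1 for IsClustered confirms the instance is running on a cluster, and CurrentNode shows the node that is currently hosting it.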

SQL Server 2000 allows installation of up to 16 instances on a single cluster. According to Books Online (BOL), SQL Server 2005 supports up to 50 SQL Server instances on a single server, but on a cluster only 25 hard drive letters can be used for the clustered disk resources, so drive letters have to be planned accordingly if you need many instances.

Note that the failover period of a SQL Server instance is the amount of time it takes for the SQL Server service to start, which can vary from a few seconds to a few minutes. If you need higher availability, consider using other methods, such as log shipping and database mirroring. For more information about disaster recovery and HA for SQL Server, go to Disaster recovery features in SQL Server 2005 and Microsoft’s description of disaster recovery options in SQL Server.

One big SQL Server cluster vs. small clusters

Here are the advantages of having one big cluster, consisting of more nodes:

  • Higher availability (more nodes to fail over to)
  • More load-balancing options for performance (more nodes)
  • Cheaper maintenance costs
  • Growth agility: up to four or eight nodes, depending on the SQL Server version
  • Improved manageability and a simplified environment (less to manage)
  • Maintenance with less downtime (more options for failover)
  • Failover performance unaffected by the number of nodes in the cluster

    Here are the disadvantages of having one big cluster:

  • Limited number of clustered nodes (what if a ninth node is needed?)
  • Limited number of SQL Server instances on a cluster
  • No protection against shared storage failure: if the disk array fails, no failover can take place
  • Failover clustering cannot be configured at the database level or database object level, such as a table

    Virtualization and clustering

    Virtual machines can also participate in a cluster, and virtual and physical machines can be clustered together without a problem. SQL Server instances can reside on a virtual machine, but performance may be impacted, depending on the instance's resource consumption. Before installing your SQL Server instance on a virtual machine, stress test it to verify that it can hold the necessary load.

    In this flexible architecture, you can load balance SQL Server between a virtual machine and a physical box when the two are clustered together. For example, develop applications using a SQL Server instance on a virtual machine. Then fail over to a stronger physical box within the cluster when you need to stress test the development instance.

    Important links describing Windows and/or SQL Server clustering

  • SQL Server clustering resources (This article contains important links and information about clustering).

    A cluster server can be used for high availability, disaster recovery, scalability and load balancing in SQL Server. It’s often better to have one bigger cluster, consisting of more nodes, than to have smaller clusters with fewer nodes. A big cluster allows a more flexible environment where instances can move from one node to the other for load balancing and maintenance.

    ABOUT THE AUTHOR:

    Michelle Gutzait works as a senior database consultant for ITERGY International Inc., an IT consulting firm specializing in the design, implementation, security and support of Microsoft products in the enterprise. Gutzait has been involved in IT for 20 years as a developer, business analyst and database consultant. For the last 10 years, she has worked exclusively with SQL Server. Her skills include SQL Server infrastructure design, database design, performance tuning, security, high availability, VLDBs, replication, T-SQL/packages coding, and more.
    Copyright 2007 TechTarget





    Configure Sticky Sessions for Your Classic Load Balancer

    By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user’s session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.

    The key to managing sticky sessions is to determine how long your load balancer should consistently route the user’s request to the same instance. If your application has its own session cookie, then you can configure Elastic Load Balancing so that the session cookie follows the duration specified by the application’s session cookie. If your application does not have its own session cookie, then you can configure Elastic Load Balancing to create a session cookie by specifying your own stickiness duration.

    Elastic Load Balancing creates a cookie, named AWSELB, that is used to map the session to the instance.

    Requirements:

  • An HTTP/HTTPS load balancer.

  • At least one healthy instance in each Availability Zone.

    The RFC for the path property of a cookie allows underscores. However, Elastic Load Balancing URI encodes underscore characters as %5F because some browsers, such as Internet Explorer 7, expect underscores to be URI encoded as %5F. Because of the potential to impact browsers that are currently working, Elastic Load Balancing continues to URI encode underscore characters. For example, if the cookie has the property path=/my_path, Elastic Load Balancing changes this property in the forwarded request to path=/my%5Fpath.

    You can’t set the secure flag or HttpOnly flag on your duration-based session stickiness cookies. However, these cookies contain no sensitive data. Note that if you set the secure flag or HttpOnly flag on an application-controlled session stickiness cookie, it is also set on the AWSELB cookie.

    If you have a trailing semicolon in the Set-Cookie field of an application cookie, the load balancer ignores the cookie.

    Duration-Based Session Stickiness

    The load balancer uses a special cookie to track the instance for each request to each listener. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the request is sent to the instance specified in the cookie. If there is no cookie, the load balancer chooses an instance based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that instance. The stickiness policy configuration defines a cookie expiration, which establishes the duration of validity for each cookie. After a cookie expires, the session is no longer sticky.

    If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance, and chooses a new healthy instance based on the existing load balancing algorithm. The request is routed to the new instance as if there is no cookie and the session is no longer sticky.

    If a client switches to a listener with a different backend port, stickiness is lost.

    To enable duration-based sticky sessions for a load balancer using the console

    1. On the navigation pane, under LOAD BALANCING, choose Load Balancers.

    2. Select your load balancer.

    3. On the Description tab, choose Edit stickiness.

    4. On the Edit stickiness page, select Enable load balancer generated cookie stickiness.

    5. (Optional) For Expiration Period, type the cookie expiration period, in seconds. If you do not specify an expiration period, the sticky session lasts for the duration of the browser session.

    To enable duration-based sticky sessions for a load balancer using the AWS CLI

    Use the following create-lb-cookie-stickiness-policy command to create a load balancer-generated cookie stickiness policy with a cookie expiration period of 60 seconds:
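    A representative AWS CLI invocation, with my-loadbalancer and my-duration-cookie-policy used here as placeholder names, looks like this:

    aws elb create-lb-cookie-stickiness-policy --load-balancer-name my-loadbalancer --policy-name my-duration-cookie-policy --cookie-expiration-period 60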

    Use the following set-load-balancer-policies-of-listener command to enable session stickiness for the specified load balancer:
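    For example, using the same placeholder names and assuming the listener is on port 443:

    aws elb set-load-balancer-policies-of-listener --load-balancer-name my-loadbalancer --load-balancer-port 443 --policy-names my-duration-cookie-policy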

    The set-load-balancer-policies-of-listener command replaces the current set of policies associated with the specified load balancer port. Every time you use this command, specify the --policy-names option to list all policies to enable.

    (Optional) Use the following describe-load-balancers command to verify that the policy is enabled:
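    For example (placeholder load balancer name):

    aws elb describe-load-balancers --load-balancer-names my-loadbalancer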

    The response includes the following information, which shows that the policy is enabled for the listener on the specified port:
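    With the placeholder names above, the relevant fragment of the JSON output looks roughly like this:

    "ListenerDescriptions": [
        {
            "Listener": {
                "Protocol": "HTTPS",
                "LoadBalancerPort": 443,
                "InstanceProtocol": "HTTPS",
                "InstancePort": 443
            },
            "PolicyNames": [
                "my-duration-cookie-policy"
            ]
        }
    ]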

    Application-Controlled Session Stickiness

    The load balancer uses a special cookie to associate the session with the instance that handled the initial request, but follows the lifetime of the application cookie specified in the policy configuration. The load balancer only inserts a new stickiness cookie if the application response includes a new application cookie. The load balancer stickiness cookie does not update with each request. If the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued.

    If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance, and chooses a new healthy instance based on the existing load balancing algorithm. The load balancer treats the session as now “stuck” to the new healthy instance, and continues routing requests to that instance even if the failed instance comes back.

    To enable application-controlled session stickiness using the console

    1. On the navigation pane, under LOAD BALANCING, choose Load Balancers.

    2. Select your load balancer.

    3. On the Description tab, choose Edit stickiness.

    4. On the Edit stickiness page, select Enable application generated cookie stickiness.

    5. For Cookie Name, type the name of your application cookie.

    To enable application-controlled session stickiness using the AWS CLI

    Use the following create-app-cookie-stickiness-policy command to create an application-generated cookie stickiness policy:
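    A representative invocation, with my-loadbalancer, my-app-cookie-policy and my-app-cookie as placeholder names (the cookie name must match the cookie your application sets):

    aws elb create-app-cookie-stickiness-policy --load-balancer-name my-loadbalancer --policy-name my-app-cookie-policy --cookie-name my-app-cookie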

    Use the following set-load-balancer-policies-of-listener command to enable session stickiness for a load balancer:
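    For example, again assuming a listener on port 443 and the placeholder names above:

    aws elb set-load-balancer-policies-of-listener --load-balancer-name my-loadbalancer --load-balancer-port 443 --policy-names my-app-cookie-policy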

    The set-load-balancer-policies-of-listener command replaces the current set of policies associated with the specified load balancer port. Every time you use this command, specify the --policy-names option to list all policies to enable.

    (Optional) Use the following describe-load-balancers command to verify that the sticky policy is enabled:
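    The check is the same as for the duration-based policy (placeholder name again):

    aws elb describe-load-balancers --load-balancer-names my-loadbalancer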

    The response includes the following information, which shows that the policy is enabled for the listener on the specified port:
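    In this case the listener's PolicyNames list contains the application cookie policy, roughly like this:

    "PolicyNames": [
        "my-app-cookie-policy"
    ]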





    EtherChannel Load Balancing on Catalyst Switches

    Introduction:

    In this document you will learn about EtherChannel load balancing on the Catalyst 6500 (Cat 6K), 7600, 4500 and 3750 platforms.

    Catalyst 6k and 7600:

    How is it implemented on this platform?

    EtherChannel load balancing works by having the switch assign a hash result from 0-7, based on the configured hash method (load-balancing algorithm) for the type of traffic. This hash result is commonly called the Result Bundle Hash (RBH).

    For example, if the port-channel algorithm configured on the box is src-dst ip, then the source and destination IP addresses of the packet are fed to the hash algorithm (a complicated 17th-degree polynomial) to calculate the RBH. Each RBH value is mapped to exactly one physical port in the port-channel, whereas one physical port can be mapped to multiple RBH values (see the following example for further clarification).

    Let us consider that the configured load-balancing algorithm is src-mac and the switch is trying to send packets from three different source MACs a, b and c over the EtherChannel (ports 5/1-2).

    For packets from “a” the hash algorithm computes an RBH of 6, for “b” an RBH of 5, and for “c” an RBH of 4.

    It is possible that port 5/1 is mapped to RBH values 6 and 4 while RBH 5 is mapped to 5/2, but one RBH value can be mapped to only one physical port. It is not possible for an RBH value (say 3) to be mapped to both 5/1 and 5/2.

    Things to check/how to check

    1. What is the configured load balancing algorithm?

    From the SP (Switch Processor), issue “show etherchannel load-balance”.

    For Gi3/2, bits 1, 3, 5 and 7 are set, so RBH values of 1, 3, 5 and 7 will choose Gi3/2.

    For Gi3/1, bits 0, 2, 4 and 6 are set, so RBH values of 0, 2, 4 and 6 will choose Gi3/1.

    From these outputs you can observe that four bits are set for each of the two interfaces. Hence, when there are two links in the EtherChannel, each link has an equal probability of being utilized.

    However, for three links in an EtherChannel, the test etherchannel output will look similar to this:

    6500-20#show interface port-channel 1 etherchannel

    Port-channel1 (Primary aggregator)

    Age of the Port-channel = 0d:01h:05m:54s

    Logical slot/port = 14/1 Number of ports = 2

    HotStandBy port = null

    Port state = Port-channel Ag-Inuse

    Note: This table lists only the number of hash values that a particular port accepts. You cannot control which port a particular flow uses; you can only influence the load balancing by choosing a frame distribution method that results in the greatest variety.

    Per-module load balancing is also supported for DFC-enabled line cards, where you can define the load-balancing algorithm on a per-module basis.

    For this implementation, keep in mind that the hash decision is taken on the ingress line card. If you have configured the EtherChannel load-balancing algorithm only on the DFC where the physical links of the EtherChannel reside, load balancing might not work as you desired, because the ingress line card decides the egress physical port. By default, any line card (with or without a DFC) load balances traffic based on the algorithm configured on the PFC. To run the “test etherchannel” command, open a session to the DFC module and then issue the command.

    For Cat 4500:

    How is it implemented on this platform?

    On this platform the concept of an Agg2PhyPort mapping table is used. The Agg2PhyPort table is an array of eight elements, each of which can contain a port number; for two ports a and b, the table holds a at index 0 and b at index 1.

    The hash function calculates the index into that array based on the input information, so in this case it will be either 0 or 1 (the index is 0-based).

    Here’s an example:

    Imagine you use three links in a bundle (say ports 5, 10 and 20); the Agg2PhyPort table would then look like this:

    index 0 = port 5, index 1 = port 10, index 2 = port 20

    max-length=3 (number of ports in a bundle)

    Now, say the hash algorithm produces 7 (for the configured input parameters); the index is then calculated as 7 % 3 = 1, and port 10 (index 1) is selected.
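    As a rough illustration of the lookup (not the actual switch code), the selection reduces to a modulo index into the port array; a minimal shell sketch using the example values above:

    # Hypothetical sketch of the Agg2PhyPort lookup with the example values above.
    ports=(5 10 20)                      # Agg2PhyPort table: index 0 -> 5, 1 -> 10, 2 -> 20
    hash=7                               # value produced by the hash function for this flow
    index=$(( hash % ${#ports[@]} ))     # 7 % 3 = 1
    echo "egress port: ${ports[$index]}" # prints "egress port: 10"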

    What to check/how to check?

    1. How to check Agg2PhyPort mapping table?

    “show platform mapping port” is the command; however, it is usually not worth running it, because the command shown in step 4 gives you the egress port all the time.

    2. How to check the o/p of hashing algorithm?

    Not worth checking, for the same reason. The hash value on the 4500 is calculated via a “rolling XOR”, which is Cisco confidential.

    3. Check the configured load balancing algorithm by using the command “show etherchannel load-balance”.

    4. Use the command “show platform software etherchannel port-channel 1 map” to find the egress interface.

    BGL-4500-12#show platform software etherchannel port-channel 1 map ip 1.1.1.1 2.2.2.2

    Map port for Ip 1.1.1.1, 2.2.2.2 is Gi2/1(Po1)

    NOTE: Software forwarded traffic will use Gi2/1(Po1)

    While using the above command, please keep in mind bug CSCtf75400 (registered customers only).

    If you hit this bug, then unfortunately you have to rely on a sniffer capture to get the actual egress interface.

    In the K5-based architecture, the unequal load-balancing problem for 3, 5, 6 or 7 links has actually been eliminated. As mentioned in this document, 8 bits of hash result are used to determine the load balancing; in a scenario with three physical links in the EtherChannel, 3 bits are assigned to link 1, 3 to link 2 and 2 to link 3, so the EtherChannel load-balancing probability is 3:3:2. In K5, however, only the last 3 bits of the hash result are used for 3 links in the EC (5 bits for 5 links, 6 bits for 6 links, and so on). This way, all the links in the EC have an equal probability of carrying the traffic. In K5, in order to improve load-balancing determination and flow distribution, the “modulo” approach was abandoned and load balancing is based on a pre-programmed hardware mapping table.

    For Cisco 3750:

    On the 3750, a similar 8-bit hashing algorithm is used, and hence traffic distribution is more even when the number of links in the EtherChannel is 2, 4 or 8 (please look at the common scenario section for details).

    The command to check the egress interface in the port-channel is:

    test etherchannel load-balance interface port-channel <port-channel #> {mac | ip} <source address> <destination address>
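    For example, following the syntax above, to see which member link a flow from 10.0.0.1 to 15.0.0.1 would use on Port-channel 1 (sample addresses from the flows discussed below):

    test etherchannel load-balance interface port-channel 1 ip 10.0.0.1 15.0.0.1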

    Ether channel not load-balancing properly?

    To understand the scenario, it is important to determine all the flows that the EtherChannel is handling. The number of flows depends on the configured load-balancing algorithm. Let us take an example.

    Flow 1: source 10.0.0.1 (MAC a.a.a) sends a TCP stream to 15.0.0.1 (MAC b.b.b) with a source TCP port of 50 and destination port 2000.

    Flow 2: source 10.0.0.1 (MAC a.a.a) sends a TCP stream to 15.0.0.2 (MAC c.c.c) with a source TCP port of 60 and destination port 2000.

    If the configured load-balancing algorithm is SRC_MAC, then the number of flows = 1.

    If the configured load-balancing algorithm is DST_MAC, then the number of flows = 2.

    If the configured load-balancing algorithm is DST_PORT, then the number of flows = 1.

    The ways you can capture the flows are:

    – Sniffer: difficult and tedious.

    – NetFlow: relatively easier.

    – An external monitoring tool.

    Once you have a good idea of the flows then check which flow will take which physical interface. Use the tools discussed above to determine the physical interface.

    This step will help you to explain why we see unequal distribution of traffic over the physical interfaces.

    Here are a few scenarios that can cause unequal distribution:

    1. Let us consider that we have two flows and two physical interfaces in the EtherChannel. It is possible that one flow is more talkative than the other.

    Now consider five flows, out of which one is a super talker; this flow can overwhelm the others. Whichever physical interface this flow chooses will have relatively higher utilization than the others.

    Resolution: flow-control the super talker; this needs to be addressed from the host side.

    2. One very common problem is that you do not have enough flows, and out of the small number of flows that you do have, most hash to the same physical interface.

    Resolution: increase the number of flows, or try changing the hashing algorithm to the one most appropriate for the traffic.

    3. When you have 3, 5, 6 or 7 physical links in the EtherChannel, some links will have a higher probability of taking traffic than the others (based on the number of hashing bits assigned to each physical interface), and hence there is an inherent chance that the traffic will be unequally distributed.

    Resolution: use 2, 4 or 8 links in the EtherChannel.






    Balancing the Goals of Health Care Provision

    A desirable system for providing and financing health care would achieve three goals: (1) preventing the deprivation of care because of a patient’s inability to pay; (2) avoiding wasteful spending; and (3) allowing care to reflect the different tastes of individual patients. Although it is not possible to realize all three of these goals fully, they can condition and inform the design of a good system for financing health care. This paper discusses the application of these goals in more detail and uses them to consider a reform of the system of Health Savings Accounts that was enacted as part of the 2003 Medicare legislation and, separately, the challenge posed by the very expensive treatments for rare diseases that are becoming more common.

    The NBER Bulletin on Aging and Health provides summaries of publications like this. You can sign up to receive the NBER Bulletin on Aging and Health by email.

    Digital Object Identifier (DOI): 10.3386/w12279

    Published: Feldstein, Martin. “Balancing the Goals of Health Care Provision and Financing.” Health Affairs, Vol. 25, No. 6, pp. 1603-1611, November/December 2006.
