Load Testing Web Services with soapUI

There are many tools available for testing web services. One that we use, which is easy to pick up and has a fairly rich feature set, is soapUI (did I mention that it's also free and open source?). With this tool you can easily test SOAP or RESTful services, either by manually adding the links and messages or by importing a WSDL or WADL. In either case, entering a request and receiving a response is a fairly trivial task and, in the interest of time, will not be discussed here. You can find more information on how to get started with soapUI on their website.

Setting up a Test Suite

Another capability of soapUI that you might not know about is the ability to set up Test Suites for automated unit testing and, a step beyond that, to run simulated load tests against your web services. Setting up a Test Suite with a Test Case is as simple as right-clicking on an existing request and selecting "Add to Test Case". When the dialog appears, you can either pick an existing Test Suite or add a new one. The Test Case editor will then appear and let you configure the Test Case with the request details (input message, headers, etc.) and the assertions you want to perform. soapUI offers quite a few options for building assertions, such as XQuery, XPath, contains, not contains, schema compliance, and SLA Response. More details on how to set up test cases can be found here.

For example, a simple Test Case should have at least one assertion that checks the response message for correctness and another for the SLA Response Time. One of your assertions could use an XPath expression to test the number of XML nodes returned, or perhaps the content of a JSON response. Setting up an SLA Response Time assertion is straightforward, but it matters when you are running load tests, and it requires some thought as to how long each Test Case should take. A service request that hits the database with a primary-key search should be much faster than one that uses wildcards. These kinds of scenarios should be considered when determining the desired response time. Once you set up your assertions and request details, run your Test Case to make sure it passes. Run it several times and check the "Request Log" tab to get an idea of the actual response time, and adjust the assertion if necessary.
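As a rough illustration of what those two assertions check, here is a minimal Python sketch; the sample XML, the node count, and the timing numbers are invented for the example (in soapUI itself you would configure the same checks in the assertion dialogs):

```python
import xml.etree.ElementTree as ET

# Hypothetical response payload, standing in for what the service would return.
SAMPLE_RESPONSE = """
<results>
    <item id="1"/>
    <item id="2"/>
    <item id="3"/>
</results>
"""

def check_response(xml_text, expected_items, sla_millis, elapsed_millis):
    """Mimic two soapUI assertions: an XPath node-count check and an SLA time check."""
    root = ET.fromstring(xml_text)
    node_count = len(root.findall("./item"))       # XPath-style match on child nodes
    assert node_count == expected_items, "wrong number of nodes returned"
    assert elapsed_millis <= sla_millis, "SLA Response Time exceeded"
    return True

print(check_response(SAMPLE_RESPONSE, expected_items=3, sla_millis=500, elapsed_millis=120))
```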

Load Testing

Once you have your Test Suite set up with at least one Test Case, you can perform a Load Test; this is where you find out how your service handles a lot of messages all at once. To create a Load Test, simply right-click on the Load Tests option under the Test Suite and select "New Load Test". Once you get to the Load Test editor screen, you will see that soapUI was nice enough to add all of your Test Cases (or steps) to the new Load Test. Now you can configure how long the Load Test will run, how many threads to use, the delay between requests, and the strategy (the strategy drives further test options depending on which you choose). soapUI offers several load testing strategies that simulate different scenarios and types of load. Here are a few you might want to try (you can find a detailed list here):

  • Simple – Runs a specified number of threads with a randomized delay between requests. This is good for setting a performance baseline for your service. Run this test first to make sure your service performs well under normal circumstances.
  • Variance – Runs a varied number of threads in a "sawtooth" pattern (a randomly increasing/decreasing number of threads; think of an ECG trace of your heartbeat). This is good for stress testing, and potentially for baseline testing based on what you believe will be the "normal" load on your service.
  • Burst – Runs a specified number of threads all at once for a specified amount of time, then waits for the configured delay until the next "burst" begins. This is good for recovery testing, i.e., if your service is completely slammed, how does it recover?
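For a concrete picture of the Burst strategy, here is a small Python sketch of its fire-everything-then-wait pattern; the 10 ms simulated "service call", the thread counts, and the delays are arbitrary stand-ins (soapUI issues real requests instead):

```python
import threading
import time

def burst_load(target, threads_per_burst, bursts, delay_s):
    """Fire all threads at once, wait out the delay, then repeat --
    a rough analogue of soapUI's Burst strategy."""
    results = []
    lock = threading.Lock()

    def worker():
        started = time.perf_counter()
        target()                                   # one simulated request
        elapsed = time.perf_counter() - started
        with lock:
            results.append(elapsed)

    for _ in range(bursts):
        workers = [threading.Thread(target=worker) for _ in range(threads_per_burst)]
        for w in workers:
            w.start()                              # the whole burst starts together
        for w in workers:
            w.join()
        time.sleep(delay_s)                        # quiet period between bursts
    return results

# Stand-in "service call"; a real test would issue an HTTP request here.
timings = burst_load(lambda: time.sleep(0.01), threads_per_burst=5, bursts=2, delay_s=0.05)
print(len(timings))  # 10 simulated requests (5 threads x 2 bursts)
```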

Putting your Service to the Test

Once you have a load test setup, the challenge becomes setting the thresholds for your service that will help to meet the desired SLA Response Time. This means different things for different applications. In a Java or .NET application, it might mean adjusting the number of threads running in parallel. It could also mean you need more hardware (i.e. Memory, CPU) to handle the expected load. Whatever the case may be, it is always fun (and useful) to get a before and after snapshot of the performance of the service. This way you can make sure your changes actually worked!

In the following example, we have hosted HTTP RESTful Services using IBM’s Message Broker (version 7). Message Broker makes it very easy to scale the number of threads by adjusting the number of additional instances of a flow that is deployed into the runtime. Each instance of the flow is essentially an additional thread that can be utilized for handling requests. The default is 0 additional instances which means, in this case, only one (1) HTTP request can be handled at a time. When you hit the service with multiple threads, the subsequent threads will wait for the previous one to finish. As you can see, this is not a good setup for a production web service:

You can see in the image above that during a 60 second time period, 51 runs of the Search_ById service resulted in an assertion error, which, in this case, means the SLA Response Time was not met. This was simply the default settings of the Simple strategy, which should not be a problem for a typical service to handle. At this point, running any other testing strategy is pointless until we make some pretty drastic changes to the service. So we will increase the number of instances of the flow in Message Broker to 5 and re-run our test:

After this change, you can plainly see that the service handled the load much better. There was only 1 error and the SLA set on the “by Id” search is pretty aggressive, so even the max response of 2,347 milliseconds is still acceptable. You can also see that it executed 236 more tests in the same 60 second time period. This is because the requests did not have to wait for the previous request to finish. This service is now acceptable to run in a production environment. Just for fun, let’s try the service with a more aggressive test strategy and see how it does:

In this scenario, we started with 8 threads, set the interval to 60, and the variance to 0.8, which means the number of threads will increase from 8 to 14 within the first 15 seconds, decrease back to 8 and continue down to 2 threads after 45 seconds, and finally climb back to 8 after 60 seconds. You can see that the service performed fairly well overall during this test. We might want to relax the expected SLA on the "by Id" search service to a slightly less aggressive number, but again, 1,968 ms for the max response is acceptable.
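The numbers above follow from scaling the base thread count by ±(base × variance) over the interval. A sketch of that sawtooth, assuming linear ramps and rounding (the exact curve soapUI generates may differ slightly):

```python
def variance_threads(base, variance, interval, t):
    """Thread count at time t for a sawtooth load profile: ramp from the base
    up to base*(1+variance) at interval/4, down through the base to
    base*(1-variance) at 3*interval/4, then back to the base."""
    quarter = interval / 4.0
    phase = t % interval
    if phase <= quarter:                         # ramp up to the peak
        frac = phase / quarter
    elif phase <= 3 * quarter:                   # ramp down through base to the trough
        frac = 1 - (phase - quarter) / quarter
    else:                                        # recover back to the base
        frac = -1 + (phase - 3 * quarter) / quarter
    return round(base + base * variance * frac)

for t in (0, 15, 30, 45, 60):
    print(t, variance_threads(8, 0.8, 60, t))   # 8, 14, 8, 2, 8 threads
```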

Taking it a Step Further

With the latest release of soapUI, you have the ability to run your load tests from another tool called loadUI. This tool uses the Test Suites and Load Tests you have set up in soapUI but provides much more detail and better graphs and charts for analyzing your results. I have not used this tool, but the soapUI website recommends it for benefits such as distributed tests in real time, a better UI, scalable testing, live analytics, and more. It is free and open source and integrates seamlessly with soapUI, so why not give it a shot? If you have already used loadUI or decide to try it out, leave some feedback below.

Spring Balancers (Load Balancers / Tool Balancers)

Typical applications: portable hand tools, welder guns, spray guns, shot blast guns, drifting attachments in mines and quarries, jute and synthetic bag closing machines, gauges, jigs and fixtures, pendant stations, switch buttons of hoists, and the raising and lowering of work in de-oxidation and water washing of plating operations.

Design of Load Balancer

POWERMASTER Overhead Spring Balancers are specially designed to free the operator from the weight of hand tools. When properly balanced by adjusting the spring tension, the balanced tools become almost weightless in the operator's hands and can be moved up and down with very little effort.

Safety Features of Load Balancers

Prior to the safety initiatives and modifications undertaken by Powermaster, older-design Spring Balancers had only limited safety provisions, which made them prone to operational hazards and industrial accidents leading to huge compensation claims.
Powermaster has introduced several safety features into its newly designed Spring Balancers while retaining all the existing safety features.
Powermaster Load Balancers have thus become more user friendly, eliminating the chance of accidents.

Exclusive Safety Features of Load Balancers Introduced:

Spring Balancers from other suppliers have openings in both the body and the cover, which can cause accidental crushing of the operator's fingers between the revolving drum and the body of the Spring Balancer. To prevent this, the side windows on the body and cover of the load balancing tools have been eliminated, preventing such industrial accidents.

    In the unfortunate event of the suspended weight being accidentally dislodged, or the bottom hook / wire rope breaking, the spring tension would pull back the wire rope, resulting in a whipping action likely to hurt personnel nearby.
    A locking mechanism in the load balancer now locks the revolving drum pulley so that the wire rope assembly stops within a maximum travel of 20 cm. This prevents the wire rope assembly from whipping and avoids injury to the operator or anyone nearby.
    In order to prevent wear of the slot of the Spring Balancer body, a split type Nylon liner for wire rope has been introduced in the new design Spring Balancers.
    This liner can be removed and installed on site without any need for dismantling of the wire rope.
    This will prevent wear of the costly body resulting in overall saving and reduced down time of equipment.
    In Spring Balancers from other manufacturers, the worm gear box for adjusting spring tension worked on the horizontal axis. The gear box position has been rotated through 90 degrees so that the spring tension can be adjusted on the vertical axis.
    This facilitates adjustment from ground level without any need to climb up to the level at which the Spring Balancer hangs. This reduces down time, facilitates easy operation and reduces the chance of any mishap while an operator climbs to the level of the balancer.
    A small slot has been provided to facilitate removal and Re-installation of the wire rope without any need for dismantling of the Spring Balancers. This will reduce the down time of the equipment. This facility is not available for long range Spring Balancers.
    No longer do you need to handle open springs which are difficult and dangerous to handle with sharp edges. All springs are now supplied in concealed containers for easy replacement. With containerised springs accidents are avoided during maintenance and replacements of springs.

Advantages of POWERMASTER Spring Balancer / Load Balancer / Tool Balancer

  • Increases Productivity
    Balancers keep tools poised for action, minimizing the motions required to bring a tool from rest to the work position.

  • Extends Tool Life
    Balancers eliminate pick-up and lay down wear and prevent damage from dropping.
  • Reduces Operator Fatigue
    Load Balancers make the heaviest tool feel light as a feather. Operator effort can be directed to controlling the tool rather than supporting it.
  • No Need of Power
    There is no need of electrical or mechanical power.
  • Increases Safety
    Balancers keep the work area uncluttered, reducing the chance of damage to accessories or accidental start-up of tools during handling.
  • Effective Use of Space
    With the workspace kept clear and easy to clean, production is carried out smoothly.

SQL Server cluster design: One big cluster vs. small clusters

    In a cluster, you can move workload between nodes to balance utilization and to schedule planned maintenance without downtime.

    This cluster design uses software heartbeats to detect failed applications or servers. In the event of a server failure, it automatically transfers ownership of resources (such as disk drives and IP addresses) from a failed server to a live server. Note that there are methods to maintain higher availability of the heartbeat connections, such as in case of a general failure at a site.

    MSCS does not require any special software on client computers, so the user experience during failover depends on the nature of the client side of the client-server application. Client reconnection is often transparent, because MSCS has restarted the applications, file shares and so on, at exactly the same IP address. Furthermore, the nodes of a cluster can reside in separate, distant sites for disaster recovery.

    SQL Server on a cluster server

    SQL Server 2000 can be configured on a cluster with up to four nodes, while SQL Server 2005 can be clustered on up to eight nodes. When a SQL Server instance is clustered, the disk resources, IP address and services form a cluster group to allow failover. For more technical information, refer to the article How to cluster SQL Server 2005.

    SQL Server 2000 allows installation of 16 instances on a single cluster. According to Books Online (BOL), SQL Server 2005 supports up to 50 SQL Server instances on a single server, but only 25 hard drive letters can be used, so these have to be planned accordingly if you need many instances.

    Note that the failover period of a SQL Server instance is the amount of time it takes for the SQL Server service to start, which can vary from a few seconds to a few minutes. If you need higher availability, consider using other methods, such as log shipping and database mirroring. For more information about disaster recovery and HA for SQL Server, go to Disaster recovery features in SQL Server 2005 and Microsoft’s description of disaster recovery options in SQL Server.

    One big SQL Server cluster vs. small clusters

    Here are the advantages of having one big cluster, consisting of more nodes:

  • Higher Availability (more nodes to failover to)
  • More load balancing options for performance (more nodes)
  • Cheaper maintenance costs
  • Growth agility. Up to four or eight nodes, depending on SQL version
  • Improved manageability and simplified environment (less to manage)
  • Maintenance with less downtime (more options for failover)
  • Failover performance unaffected by the number of nodes in the cluster

    Here are the disadvantages of having one big cluster:

  • Limited number of clustered nodes (what if a ninth node is needed?)
  • Limited number of SQL instances on a cluster
  • No protection against failure — if disk array fails, no failover can take place
  • Can’t create failover clusters at the database level or database object level, such as a table, with failover clustering

    Virtualization and clustering

    Virtual machines can also participate in a cluster. Virtual and physical machines can be clustered together with no problem. SQL Server instances can reside on a virtual machine, but performance may be impacted, depending on the resource consumption by the instance. Before installing your SQL Server instance on a virtual machine, you should stress test to verify that it can hold the necessary load.

    In this flexible architecture, you can load balance SQL Server between a virtual machine and a physical box when the two are clustered together. For example, develop applications using a SQL Server instance on a virtual machine. Then fail over to a stronger physical box within the cluster when you need to stress test the development instance.

    Important links describing Windows and/or SQL Server clustering

  • SQL Server clustering resources (This article contains important links and information about clustering).

    A cluster server can be used for high availability, disaster recovery, scalability and load balancing in SQL Server. It’s often better to have one bigger cluster, consisting of more nodes, than to have smaller clusters with fewer nodes. A big cluster allows a more flexible environment where instances can move from one node to the other for load balancing and maintenance.


    Michelle Gutzait works as a senior database consultant for ITERGY International Inc., an IT consulting firm specializing in the design, implementation, security and support of Microsoft products in the enterprise. Gutzait has been involved in IT for 20 years as a developer, business analyst and database consultant. For the last 10 years, she has worked exclusively with SQL Server. Her skills include SQL Server infrastructure design, database design, performance tuning, security, high availability, VLDBs, replication, T-SQL/packages coding, and more.
    Copyright 2007 TechTarget

STATNAMIC LOAD TESTING – presentation transcript

    2 Presentation Outline
    Pile Load Testing – background
    Brief Statnamic Introduction
    Recent activities in the US
    Statnamic Theory and Analysis
    Recent activities in Taiwan
    20MN testing at the TFC project, Taiwan
    Other notable jobs
    Standardisation of "RAPID" Load Testing

    3 Quick Statnamic Facts 21 Statnamic devices world-wide
    12 Statnamic testing companies Over 1200 contract Statnamic load tests performed in 16 countries – more than one test every day, somewhere in the world! Over 80 published papers, including papers from 2 International Statnamic Seminars More than 10 Universities currently researching Statnamic (USA – Auburn, USF, BYU, Umass, John Hopkins, plus others) Acceptance by 16 State DOT’s in the US, US Army Corps of Engineers, FHWA, and Japanese Geotechnical Society





    12 The Idea of Statnamic
    Note: The JGS defines a Rapid Load Test as 5 ≤ tr ≤ 500, where tr is the number of times a stress wave will travel up and down the pile during the loading event

    13 Inertial Load Testing (Bermingham – 1987)
    This type of test was clearly different from a Dynamic Load Test: A NEW WORD WAS REQUIRED.
    Inertial Load Testing (Bermingham)
    STATNAMIC (Middendorp – 1989)
    Pseudo-static (Fundex PS PLT – early 1990's)
    Kinetic (Holeyman)
    Rapid Load Test (Japanese Study Group)
    Transient Long-period (Janes – 1997)
    Slow dynamic (Goble, Rausche)
    Others – impulse, kinematic, push, etc.

    14. a global perspective. In March of 2000, the Japanese Geotechnical Society added “Rapid Load Testing” to their national standard for pile testing. In the year 2000, it is estimated that there will be more than 500 Statnamic Load Tests on foundations around the world.












    40 Lateral Test Programs in the US
    New Bern, North Carolina DOT (50 tons)
    Brigham Young University (200 tons) – Utah DOT & CALTRANS
    Auburn University, Alabama (250 tons) (FHWA)
    Pascagoula, Mississippi DOT (800 tons, over-water)
    Providence, Rhode Island DOT (400 tons, over-water)
    San Juan, Puerto Rico Trans Authority (400 tons)
    New Bern, North Carolina DOT (1200 tons, over-water)

    42 Foundation Types Tested in the USA Using Statnamic
    Drilled Shafts – tested up to 3500 tons laterally and axially
    Driven Piles (all types)
    Pile Groups – tested laterally and axially
    Stone Columns
    Auger-Cast Piles – conventional and 'displacement' types
    Spread Footings and Plates
    Other types of "Ground Modification"

    43 Background Statnamic Theory and Analysis
    GOAL: to derive the STATIC load displacement behavior from a STATNAMIC load test (usual goal for axial compression testing)

    47 Physical Model (m, c, k, F, u)
    In a STATNAMIC LOAD TEST:
    F = Applied force from the Statnamic device (measured by a load cell)
    m = Pile mass (easy to calculate)
    c = Pile/soil damping (UNKNOWN)
    k = Pile and soil stiffness (the term we need to find)
    u, v, a = Displacement, velocity and acceleration, measured by an optical sensor and/or accelerometer

    48 Physical Model: F = ma + cv + ku
    GENERAL LIMITATIONS: This equation makes the following assumptions:
    1. Inertia (mass x acceleration) – assumes that a single value of 'm' (the pile mass) represents all of the moving mass in the system
    2. Damping (damping coefficient x velocity) – assumes that a single value of 'c' is valid throughout the entire load test, and that the damping force is directly proportional to velocity
    3. Stiffness (stiffness coefficient x displacement) – the calculated stiffness is the stiffness of the pile and soil system under a RAPID load; no correction is made for long-term, time-dependent pile behavior, which includes effects such as changes in pore pressure and creep

    50 Physical Model: F = ma + cv + ku
    EQUATION OF MOTION: This equation describes the equilibrium between some forcing function and the 3 forces:
    Inertia (mass x acceleration)
    Damping (damping coefficient x velocity)
    Stiffness (stiffness coefficient x displacement)
    This equation forms the basis for describing the motion of any single degree of freedom system.

    51 Analysis Assuming that stress-waves can be ignored, the analysis of a Statnamic Load Test is greatly simplified in comparison to a dynamic load test. Although stress-waves may be ignored, the ‘dynamic’ effects of INERTIA and DAMPING CANNOT! Result: a detailed model, which includes pile and soil properties IS NOT NEEDED. A simple physical model can be used to remove the effects of damping and inertia from the measured signals – no information about the soil is needed, and subjective judgement is minimized.
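    Under that single-degree-of-freedom model, removing inertia and damping from the measured signals is just a rearrangement of F = ma + cv + ku. A minimal Python sketch; the numerical values are hypothetical, and a real analysis must first estimate c (for example via the unloading point, where velocity is zero) rather than assume it:

```python
def static_stiffness(F, m, a, c, v, u):
    """Rearrange F = m*a + c*v + k*u for k: subtract the inertia and damping
    forces from the measured Statnamic force, then divide by displacement."""
    return (F - m * a - c * v) / u

# Hypothetical measurements: force in kN, mass in Mg, acceleration in m/s^2,
# damping in kN*s/m, velocity in m/s, displacement in m.
k = static_stiffness(F=5000.0, m=40.0, a=20.0, c=800.0, v=0.5, u=0.01)
print(k)  # (5000 - 800 - 400) / 0.01, i.e. about 380000 kN/m
```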















    79 Standardisation of RAPID Load Testing
    Recommendations on STN testing of PILES in soil and rock (FHWA) Japanese Geotechnical Society, Standard for Rapid Load Testing (2000) ASTM – Standard for Rapid Axial Compressive Load (2008) Florida LRFD Design Guidelines

Citrix: EdgeSight for load testing best practices – My Virtual Vision

    I've been involved with quite a few Citrix XenApp/XenDesktop projects, and a recurring phase in all of these projects is scaling/load testing. When starting a project there's an initial phase where you try to answer all of the questions concerning scalability and sizing. So I've built a load test a couple of times now; the problem with these load tests is that no load test is equal to another. Every company has its own applications, settings and user types, so these load tests are custom made and can only partly be re-used for other customers.

    Citrix offers EdgeSight for load testing when you buy the right Citrix XenApp licenses (see load testing services):

    At one of my recent projects I had to write a load testing script containing an "average user" with "average applications". My first problem was to define what an average user is for this organization. The second problem was to define what the average applications were for this average user. To tackle this I had a couple of meetings with key users (who normally aren't average users, by the way), application admins and the sys admins. I then created a matrix of users and their generated workload based on the outcome of these meetings. The applications were fairly easy to determine: the core applications for the different user types were chosen, and based on input from users and sys admins we decided which applications to use.

    The start of the project was to determine what to test and the user type (work load). After this we should determine what we want to measure. In a best practices document for XenDesktop I found the following parameters:

    1. Provisioning Server process
    2. Desktop Delivery Controller process

    Is this what you need to monitor for your own environment? It depends; it covers a great deal of the most important parameters, but you'll still need to fine-tune your own dataset and analyze the outcome.

    So the most important things you ll need to know when defining a load test:

    • What applications do I want to test?
    • At what load should these applications be tested?
    • What parameters do I need to analyze?

    A couple of tips while writing the load tests:

    • Make sure you have a description of the steps you want to test. Create manuals for user actions so you can reproduce the load test and are able to show what you've tested, even after a couple of years (create a baseline).
    • Define the load, create different load profiles for the different types of users.
    • Install Operating System Updates and make sure driver installations are finished.
    • Ensure that any environment-specific changes are made to the golden image at the start. Disk cache optimizations, memory usage limitations and firewall settings are just a few examples. Also ensure that local and domain policies will work as-planned.
    • Tackle First-Time User Log Ons so that the load test will be completed successfully.
    • Build your load test foolproof: use the Windows shortcut keys as much as possible, and make sure the timing is right!
    • When recording the load test, EdgeSight records the names of the screens; make sure these are generic, otherwise the script will fail when it checks the screen names and finds a different value.
    • Make sure that all of the users used for the test can access the applications and run the processes defined to that user type.

    This can probably be applied to all types of load testing, so you can use it when preparing for any load testing software. And if you've got additional comments on preparing for and conducting load tests, please let me know and I will add them!

How to Implement Load Balancing to Distribute Workload
    I am doing the same without any modification, but I am getting this exception:

    at Eneter.Messaging.MessagingSystems.TcpMessagingSystem.TcpOutputConnector.OpenConnection(Action`1 responseMessageHandler) in C:\EneterMessaging702\EneterMessagingFramework\MessagingSystems\TcpMessagingSystem\TcpOutputConnector.cs:line 155
    at Eneter.Messaging.MessagingSystems.SimpleMessagingSystemBase.DefaultDuplexOutputChannel.OpenConnection() in C:\EneterMessaging702\EneterMessagingFramework\MessagingSystems\SimpleMessagingSystemBase\DefaultDuplexOutputChannel.cs:line 105
    at Eneter.Messaging.Infrastructure.Attachable.AttachableDuplexOutputChannelBase.AttachDuplexOutputChannel(IDuplexOutputChannel duplexOutputChannel) in C:\EneterMessaging702\EneterMessagingFramework\Infrastructure\Attachable\AttachableDuplexOutputChannelBase.cs:line 41
    at Client.Form1.OpenConnection() in e:\Q3\LoadBalancing\LoadBalancing\Client\Form1.cs:line 49
    at Client.Form1..ctor() in e:\Q3\LoadBalancing\LoadBalancing\Client\Form1.cs:line 26
    at Client.Program.Main() in e:\Q3\LoadBalancing\LoadBalancing\Client\Program.cs:line 18
    at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()

    Here is the code where the exception occurred:

    using System;
    using System.Windows.Forms;
    using Eneter.Messaging.EndPoints.TypedMessages;
    using Eneter.Messaging.MessagingSystems.MessagingSystemBase;
    using Eneter.Messaging.MessagingSystems.TcpMessagingSystem;

    namespace Client
    {
        public partial class Form1 : Form
        {
            // Requested calculation range.
            public class Range
            {
                public double From;
                public double To;
            }

            public Form1()
            {
                InitializeComponent();
                OpenConnection();
            }

            public void OpenConnection()
            {
                // Create TCP messaging for the communication.
                // Note: Requests are sent to the balancer that will forward them
                //       to available services.
                IMessagingSystemFactory myMessaging = new TcpMessagingSystemFactory();
                IDuplexOutputChannel anOutputChannel =
                    myMessaging.CreateDuplexOutputChannel("tcp://");

                // Create sender to send requests.
                IDuplexTypedMessagesFactory aSenderFactory = new DuplexTypedMessagesFactory();
                mySender = aSenderFactory.CreateDuplexTypedMessageSender<double, Range>();

                // Subscribe to receive response messages.
                mySender.ResponseReceived += OnResponseReceived;

                // Attach the output channel to be able to send messages and receive responses.
                mySender.AttachDuplexOutputChannel(anOutputChannel);
            }

            private void Form1_FormClosed(object sender, FormClosedEventArgs e)
            {
                // Detach the output channel and stop listening for responses.
                mySender.DetachDuplexOutputChannel();
            }

            private void CalculatePiBtn_Click(object sender, EventArgs e)
            {
                myCalculatedPi = 0.0;

                // Split the calculation of PI into 400 ranges and send them for calculation.
                for (double i = -1.0; i <= 1.0; i += 0.005)
                {
                    Range anInterval = new Range() { From = i, To = i + 0.005 };
                    mySender.SendRequestMessage(anInterval);
                }
            }

            private void OnResponseReceived(object sender, TypedResponseReceivedEventArgs<double> e)
            {
                // Receive responses (calculations for ranges) and calculate PI.
                myCalculatedPi += e.ResponseMessage;

                // Display the number.
                // Note: The UI control can be used only from the UI thread.
                InvokeInUIThread(() => ResultTextBox.Text = myCalculatedPi.ToString());
            }

            // Helper method to invoke some functionality in the UI thread.
            private void InvokeInUIThread(Action uiMethod)
            {
                // If we are not in the UI thread then we must synchronize via the invoke mechanism.
                if (InvokeRequired)
                {
                    Invoke(uiMethod);
                }
                else
                {
                    uiMethod();
                }
            }

            private IDuplexTypedMessageSender<double, Range> mySender;
            private double myCalculatedPi;
        }
    }

    Configure Sticky Sessions for Your Classic Load Balancer – Elastic Load Balancing



    By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.

    The key to managing sticky sessions is to determine how long your load balancer should consistently route the user’s request to the same instance. If your application has its own session cookie, then you can configure Elastic Load Balancing so that the session cookie follows the duration specified by the application’s session cookie. If your application does not have its own session cookie, then you can configure Elastic Load Balancing to create a session cookie by specifying your own stickiness duration.

    Elastic Load Balancing creates a cookie, named AWSELB, that is used to map the session to the instance.

    Prerequisites:

    An HTTP/HTTPS load balancer.

    At least one healthy instance in each Availability Zone.

    The RFC for the path property of a cookie allows underscores. However, some browsers, such as Internet Explorer 7, expect underscores to be URI encoded as %5F, so Elastic Load Balancing URI encodes underscore characters as %5F. Because of the potential to impact browsers that currently work, Elastic Load Balancing continues to URI encode underscore characters. For example, if the cookie has the property path=/my_path, Elastic Load Balancing changes this property in the forwarded request to path=/my%5Fpath.

    You can’t set the secure flag or HttpOnly flag on your duration-based session stickiness cookies. However, these cookies contain no sensitive data. Note that if you set the secure flag or HttpOnly flag on an application-controlled session stickiness cookie, it is also set on the AWSELB cookie.

    If you have a trailing semicolon in the Set-Cookie field of an application cookie, the load balancer ignores the cookie.

    Duration-Based Session Stickiness

    The load balancer uses a special cookie to track the instance for each request to each listener. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the request is sent to the instance specified in the cookie. If there is no cookie, the load balancer chooses an instance based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that instance. The stickiness policy configuration defines a cookie expiration, which establishes the duration of validity for each cookie. After a cookie expires, the session is no longer sticky.

    If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance, and chooses a new healthy instance based on the existing load balancing algorithm. The request is routed to the new instance as if there is no cookie and the session is no longer sticky.

    If a client switches to a listener with a different backend port, stickiness is lost.

    To enable duration-based sticky sessions for a load balancer using the console

    On the navigation pane, under LOAD BALANCING, choose Load Balancers.

    Select your load balancer.

    On the Description tab, choose Edit stickiness.

    On the Edit stickiness page, select Enable load balancer generated cookie stickiness.

    (Optional) For Expiration Period, type the cookie expiration period, in seconds. If you do not specify an expiration period, the sticky session lasts for the duration of the browser session.

    To enable duration-based sticky sessions for a load balancer using the AWS CLI

    Use the following create-lb-cookie-stickiness-policy command to create a load balancer-generated cookie stickiness policy with a cookie expiration period of 60 seconds:
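    The command itself is missing from this copy; a sketch of the documented invocation, with the load balancer and policy names as placeholder assumptions:

    ```shell
    # Sketch (names are placeholders): create a duration-based stickiness
    # policy with a 60-second cookie expiration.
    aws elb create-lb-cookie-stickiness-policy \
        --load-balancer-name my-load-balancer \
        --policy-name my-duration-cookie-policy \
        --cookie-expiration-period 60
    ```
    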

    Use the following set-load-balancer-policies-of-listener command to enable session stickiness for the specified load balancer:
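    The invocation is missing here as well; a sketch, with placeholder names and port 80 assumed for the listener:

    ```shell
    # Sketch (names/port are placeholders): attach the stickiness policy
    # to the port 80 listener of the load balancer.
    aws elb set-load-balancer-policies-of-listener \
        --load-balancer-name my-load-balancer \
        --load-balancer-port 80 \
        --policy-names my-duration-cookie-policy
    ```
    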

    The set-load-balancer-policies-of-listener command replaces the current set of policies associated with the specified load balancer port. Every time you use this command, specify the --policy-names option to list all policies to enable.

    (Optional) Use the following describe-load-balancers command to verify that the policy is enabled:

    The response includes the following information, which shows that the policy is enabled for the listener on the specified port:
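    The command and response excerpt are missing from this copy; a sketch of the verification command, with a placeholder name:

    ```shell
    # Sketch (load balancer name is a placeholder): list the load balancer
    # and inspect its policies and listener descriptions.
    aws elb describe-load-balancers --load-balancer-names my-load-balancer
    ```

    In the output, the listener's PolicyNames list and the LBCookieStickinessPolicies section (with its CookieExpirationPeriod) would show whether the policy is attached; the exact JSON is omitted here.
    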

    Application-Controlled Session Stickiness

    The load balancer uses a special cookie to associate the session with the instance that handled the initial request, but follows the lifetime of the application cookie specified in the policy configuration. The load balancer only inserts a new stickiness cookie if the application response includes a new application cookie. The load balancer stickiness cookie does not update with each request. If the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued.

    If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance, and chooses a new healthy instance based on the existing load balancing algorithm. The load balancer treats the session as now “stuck” to the new healthy instance, and continues routing requests to that instance even if the failed instance comes back.

    To enable application-controlled session stickiness using the console

    On the navigation pane, under LOAD BALANCING, choose Load Balancers.

    Select your load balancer.

    On the Description tab, choose Edit stickiness.

    On the Edit stickiness page, select Enable application generated cookie stickiness.

    For Cookie Name, type the name of your application cookie.

    To enable application-controlled session stickiness using the AWS CLI

    Use the following create-app-cookie-stickiness-policy command to create an application-generated cookie stickiness policy:
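    Again the command is missing; a sketch of the documented invocation, with placeholder names:

    ```shell
    # Sketch (names are placeholders): tie stickiness to an application
    # cookie named "my-app-cookie".
    aws elb create-app-cookie-stickiness-policy \
        --load-balancer-name my-load-balancer \
        --policy-name my-app-cookie-policy \
        --cookie-name my-app-cookie
    ```
    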

    Use the following set-load-balancer-policies-of-listener command to enable session stickiness for a load balancer:
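    A sketch of the missing invocation, with placeholder names and port 80 assumed:

    ```shell
    # Sketch: enable the application-cookie policy on the port 80 listener.
    aws elb set-load-balancer-policies-of-listener \
        --load-balancer-name my-load-balancer \
        --load-balancer-port 80 \
        --policy-names my-app-cookie-policy
    ```
    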

    The set-load-balancer-policies-of-listener command replaces the current set of policies associated with the specified load balancer port. Every time you use this command, specify the --policy-names option to list all policies to enable.

    (Optional) Use the following describe-load-balancers command to verify that the sticky policy is enabled:

    The response includes the following information, which shows that the policy is enabled for the listener on the specified port:

    Website Speed Test #website #load #testing #tools #free


    Free Website Speed Test

    Test your Website Performance.

    How fast does your website load?

    When it comes to delivering the best website experience to your users, slow and steady page-load times just won’t do. With the free Uptrends Website Speed Test, you can put the performance of any web page to the test.

    Simply enter a website URL, select any one of our 35* available global checkpoints, and hit ‘Start.’

    Our speedy, high-tech website performance monitoring robots will then check your webpage and display the Resolve, TCP Connect, HTTPS Handshake, Send, Wait, and Receive times.

    *Paid Website Monitoring accounts have access to over 165 checkpoints

    Reading the speed test results

    The following colors/descriptions represent different element states displayed in the test result waterfall chart.

    Element states

    Resolve: The browser is performing a DNS Lookup

    TCP Connect: The browser is trying to set up a TCP Connection

    HTTPS handshake: The browser performs an HTTPS handshake

    Send: The browser is sending data to the web server

    Wait: The browser is waiting for data from the server

    Receive: The browser is receiving data from the server

    Try Uptrends free for four weeks.

    No commitment. No credit card. Just 24/7 website monitoring.

    Always know how your website is performing.

    Interested in capturing your live website performance data on a regular basis? By signing up for a free 4-week trial of Uptrends Website Monitoring, you can see for yourself how valuable having a constant data stream can be when it comes to maximizing your website uptime and performance. This is the only Synthetic Monitoring toolkit you need!

    Web Application Monitoring

    Monitor the performance of your shopping cart, login, search queries and other website interactions with Uptrends’ Web Application Monitoring .

    Always be in the know

    When a downtime or performance related event occurs, you’ll be the first to know, with configurable SMS, phone, e-mail, and push alerts that reach your whole team as needed.

    Your data, presented your way

    Data is no good to you if you can’t understand it. Uptrends compiles your data into easy-to-read, configurable dashboards. You can even make your own!

    Built to handle your team

    Website management isn’t a one-person job. Uptrends accommodates your entire team with unlimited contacts, customizable escalation levels, and maintenance times.

    More than 20,000 users use Uptrends to monitor their websites and servers. Join them today. Get started for free. No commitment • No credit card • Risk free

    Free Load Boards #load #boards, #truck #load #boards, #load #board, #truckload #board,


    Dispatch Software

    Trulos is using technology to make trucking great.
    Every day, truckers, shippers, and brokers use our free load board and dispatch software to make money.
    Our mission is to provide a simple, easy-to-use tool that enables anyone to easily manage trucks, drivers, and customers. Our Transportation Management System (TMS) is really awesome.

    Our TMS program enables users to organize their business quickly and keep track of revenue. It does a lot. Here are a few of the important features:

    • Organize Shipments
    • Manage Customers
    • Manage Drivers and or Carriers
    • Generate BOL, Rate, Invoices
    • Invoice Customers
    • Pay carriers and drivers
    • Track revenue
    • View historic data

    Basically, this program makes your work much, much easier than working without it.
    Log in / Register More info

    1. Terms

    By accessing this web site, you are agreeing to be bound by these web site Terms and Conditions of Use, all applicable laws and regulations, and agree that you are responsible for compliance with any applicable local laws. If you do not agree with any of these terms, you are prohibited from using or accessing this site. The materials contained in this web site are protected by applicable copyright and trade mark law.

    2. Use License

    1. Permission is granted to temporarily download one copy of the materials (information or software) on Trulos Transportation’s web site for personal, non-commercial transitory viewing only. This is the grant of a license, not a transfer of title, and under this license you may not:
      1. modify or copy the materials;
      2. use the materials for any commercial purpose, or for any public display (commercial or non-commercial);
      3. attempt to decompile or reverse engineer any software contained on Trulos Transportation’s web site;
      4. remove any copyright or other proprietary notations from the materials; or
      5. transfer the materials to another person or “mirror” the materials on any other server.
    2. This license shall automatically terminate if you violate any of these restrictions and may be terminated by Trulos Transportation at any time. Upon terminating your viewing of these materials or upon the termination of this license, you must destroy any downloaded materials in your possession whether in electronic or printed format.

    3. Disclaimer

    1. The materials on Trulos Transportation’s web site are provided “as is”. Trulos Transportation makes no warranties, expressed or implied, and hereby disclaims and negates all other warranties, including without limitation, implied warranties or conditions of merchantability, fitness for a particular purpose, or non-infringement of intellectual property or other violation of rights. Further, Trulos Transportation does not warrant or make any representations concerning the accuracy, likely results, or reliability of the use of the materials on its Internet web site or otherwise relating to such materials or on any sites linked to this site.

    4. Limitations

    In no event shall Trulos Transportation or its suppliers be liable for any damages (including, without limitation, damages for loss of data or profit, or due to business interruption,) arising out of the use or inability to use the materials on Trulos Transportation’s Internet site, even if Trulos Transportation or a Trulos Transportation authorized representative has been notified orally or in writing of the possibility of such damage. Because some jurisdictions do not allow limitations on implied warranties, or limitations of liability for consequential or incidental damages, these limitations may not apply to you.

    5. Revisions and Errata

    The materials appearing on Trulos Transportation’s web site could include technical, typographical, or photographic errors. Trulos Transportation does not warrant that any of the materials on its web site are accurate, complete, or current. Trulos Transportation may make changes to the materials contained on its web site at any time without notice. Trulos Transportation does not, however, make any commitment to update the materials.

    6. Links

    Trulos Transportation has not reviewed all of the sites linked to its Internet web site and is not responsible for the contents of any such linked site. The inclusion of any link does not imply endorsement by Trulos Transportation of the site. Use of any such linked web site is at the user’s own risk.

    7. Site Terms of Use Modifications

    Trulos Transportation may revise these terms of use for its web site at any time without notice. By using this web site you are agreeing to be bound by the then current version of these Terms and Conditions of Use.

    8. Governing Law

    Any claim relating to Trulos Transportation’s web site shall be governed by the laws of the State of Texas without regard to its conflict of law provisions.

    General Terms and Conditions applicable to Use of a Web Site.

    Etherchannel Loadbalancing on Catalyst Switches #etherchannel #load #balancing


    Etherchannel Loadbalancing on Catalyst Switches


    In this document you will learn about EtherChannel load balancing on the Catalyst 6K, 7600, 4500, and 3750.

    Catalyst 6k and 7600:

    How is it implemented on this platform?

    The way EtherChannel load balancing works is that the switch assigns a hash result from 0-7 based on the configured hash method (load balancing algorithm) for the type of traffic. This hash result is commonly called the Result Bundle Hash (RBH).

    For example, if the port-channel algorithm configured on the box is src-dst ip, then the source and destination IP addresses of the packet are fed to the hash algorithm (a complicated 17th-degree polynomial) to calculate the RBH. Each RBH value is mapped to exactly one physical port in the port-channel, whereas one physical port can be mapped to multiple RBH values (please look at the following example for further clarification).

    Let us consider that the configured load balancing algorithm is src-mac and the switch is trying to send packets from 3 different source MACs a, b, and c over the EtherChannel (ports 5/1-2).

    Now, for packets from “a” the hash algorithm computes an RBH of 6, 5 for “b”, and 4 for “c”.

    It is possible that RBH 6 is mapped to port 5/1 while RBH values 4 and 5 are mapped to 5/2, but one RBH value can be mapped to only one physical port. It is not possible for an RBH value (say 3) to be mapped to both 5/1 and 5/2.
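    The mapping above can be sketched as a simple lookup. This is illustrative only: the real hash is a Cisco-internal polynomial, and the RBH values are the ones assumed in the example.

    ```python
    # Illustrative sketch: each RBH value maps to exactly one physical port,
    # while one physical port may own several RBH values.
    rbh_to_port = {6: "5/1", 5: "5/2", 4: "5/2"}   # RBH -> physical port
    src_mac_rbh = {"a": 6, "b": 5, "c": 4}         # assumed hash results

    for mac, rbh in sorted(src_mac_rbh.items()):
        print(f"packets from {mac} (RBH {rbh}) egress on {rbh_to_port[rbh]}")
    ```
    
    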

    Things to check/how to check

    1. What is the configured load balancing algorithm?

    From the SP, run “show etherchannel load-balance”.

    For Gi3/1, bits 1, 3, 5, and 7 are set, so RBH values of 1, 3, 5, and 7 will choose Gi3/1.

    For Gi3/2, bits 0, 2, 4, and 6 are set, so RBH values of 0, 2, 4, and 6 will choose Gi3/2.

    From the outputs you can observe that 4 bits are set for each of the two interfaces. Hence, when there are 2 links in the EtherChannel, each link has an equal probability of being utilized.

    However, for 3 links in the EtherChannel, the output will look similar to this.

    6500-20#show interface port-channel 1 etherchannel

    Port-channel1 (Primary aggregator)

    Age of the Port-channel = 0d:01h:05m:54s

    Logical slot/port = 14/1 Number of ports = 2

    HotStandBy port = null

    Port state = Port-channel Ag-Inuse

    Note: This table only lists the number of values, which the hash algorithm calculates, that a particular port accepts. You cannot control the port that a particular flow uses. You can only influence the load balance with a frame distribution method that results in the greatest variety.

    We also support “per module load balancing” for DFC line cards, where you can define the load balancing algorithm on a per-module basis.

    For this implementation, keep in mind that the hash decision is taken on the INGRESS line card. If you have configured EtherChannel load balancing on the DFC where the actual physical links of the EtherChannel exist, your load balancing might not work as desired, because the ingress line card decides the egress physical port. By default, any line card (with or without a DFC) will load balance traffic based on the algorithm configured on the PFC. To run the “test etherchannel” command, open a session to the DFC module and then issue the command there.

    For Cat 4500:

    How is it implemented on this platform?

    On this platform we use the concept of an Agg2PhyPort mapping table. The Agg2PhyPort table is an array of 8 elements, each of which can contain a port number, say a and b for 2 ports.

    The hash function calculates the index into that array based on the input information, so for 2 ports it will be either 0 or 1 (the index is 0-based).

    Here’s an example:

    Imagine you use 3 links in a bundle (say ports 5, 10, and 20); then the Agg2PhyPort table would look like:

    max-length=3 (number of ports in a bundle)

    Now, say the hash algorithm produces 7 for the configured input parameters; the index is calculated as 7 % 3 = 1, and port 10 (at index 1) is selected.
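    The selection step above can be sketched in a few lines (the function name is my own; the actual hash computation is Cisco Confidential, so the hash result is taken as given):

    ```python
    # Sketch of the Catalyst 4500 Agg2PhyPort selection: the egress port is
    # the bundle entry at index (hash result) modulo (number of ports).
    def select_port(bundle_ports, hash_result):
        return bundle_ports[hash_result % len(bundle_ports)]

    print(select_port([5, 10, 20], 7))   # 7 % 3 = 1 -> port 10
    ```
    
    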

    What to check/how to check?

    1. How to check Agg2PhyPort mapping table?

    “show platform mapping port” is the command; however, it is rarely worth running, because the command in step 4 gives you the egress port every time.

    2. How to check the o/p of hashing algorithm?

    Not worth checking, for the same reason. Also, the hash value on the 4500 is calculated via a ‘rolling XOR’ that is Cisco Confidential.

    3. Check the configured load balancing algorithm by using the command “show etherchannel load-balance”.

    4. Use the command “show platform software etherchannel port-channel 1 map “ to find the egress interface.

    BGL-4500-12#show platform software etherchannel port-channel 1 map ip

    Map port for Ip, is Gi2/1(Po1)

    NOTE: Software forwarded traffic will use Gi2/1(Po1)

    While using the above command, please keep in mind bug CSCtf75400 (registered customers only).

    If you hit this bug then, unfortunately, you have to rely on a sniffer capture to get the actual egress interface.

    The K5-based architecture actually eliminates the unequal load balancing problem when the number of links is 3, 5, 6, or 7. As mentioned in this doc, we use 8 bits of the hash result to determine the load balancing; in a scenario where we have 3 physical links in the EtherChannel, 3 bits are chosen for link 1, 3 for link 2, and 2 for link 3, so the EtherChannel load balancing probability is 3:3:2. In K5, however, we use only the last 3 bits of the hash result for 3 links in the EtherChannel (5 bits for 5 links, 6 bits for 6 links, and so on). This way we ensure that all the links in the EtherChannel have an equal probability of taking the traffic. In K5, in order to improve load-balancing determination and flow distribution, we stepped away from the “modulo” approach; load balancing is based on a pre-programmed hardware mapping table.

    For Cisco 3750:

    On the 3750 we use a similar 8-bit hashing algorithm, and hence traffic distribution is more even when the number of links in the EtherChannel is 2, 4, or 8 (please look at the common scenario section for details).

    The command to check the egress interface in the port-channel is:

    test etherchannel load-balance interface port-channel <number> {mac | ip} <source address> <destination address>

    Ether channel not load-balancing properly?

    To understand the scenario, it is important to determine all the flows that the EtherChannel is handling. The number of flows will depend on the configured load balancing algorithm. Let us take an example:

    Source (MAC a.a.a) sends a TCP stream to (MAC b.b.b) with a source TCP port of 50 and destination port 2000.

    Source (MAC a.a.a) sends a TCP stream to (MAC c.c.c) with a source TCP port of 60 and destination port 2000.

    If the configured load balancing algorithm is SRC_MAC, then the number of flows = 1.

    If the configured load balancing algorithm is DST_MAC, then the number of flows = 2.

    If the configured load balancing algorithm is DST_PORT, then the number of flows = 1.

    The ways you can capture the flows are:

    – Sniffer – difficult and hectic.

    – Netflow – relatively easier.

    – External monitoring tool.

    Once you have a good idea of the flows then check which flow will take which physical interface. Use the tools discussed above to determine the physical interface.

    This step will help you to explain why we see unequal distribution of traffic over the physical interfaces.

    Here are a few scenarios that can cause unequal distribution:

    1. Let us consider that we have two flows and two physical interfaces in the EtherChannel. It is possible that one flow is more talkative than the other.

    Now consider that I have 5 flows, out of which one is super talkative; this flow can overwhelm the others. Whichever physical interface this flow chooses will have relatively higher utilization than the others.

    Resolution: flow-control the super talker; this needs to be looked at from the host side.

    2. One very common problem is that you do not have enough flows, and most of the small number of flows that you do have hash to the same physical interface.

    Resolution: Increase the number of flows. Try changing the hashing algorithm to the one most appropriate to the traffic.

    3. When you have 3, 5, 6, or 7 physical links in the EtherChannel, a few links will have a higher probability of taking the traffic than the others (based on the number of hashing bits assigned to each physical interface), and hence there is an inherent chance that the traffic will be unequally distributed.

    Resolution: Use 2, 4, or 8 links in the EtherChannel.


    How To Use HAProxy to Set Up HTTP Load Balancing on an






    How To Use HAProxy to Set Up HTTP Load Balancing on an Ubuntu VPS

    Posted September 26, 2013 · 410.8k views · Scaling, Ubuntu

    About HAProxy

    HAProxy (High Availability Proxy) is an open source load balancer that can load balance any TCP service. It is particularly suited for HTTP load balancing, as it supports session persistence and Layer 7 processing.

    With the introduction of Shared Private Networking in DigitalOcean, HAProxy can be configured as a front end to load balance two VPS through private network connectivity.


    We will be using three VPS (droplets) here:

    Droplet 1 – Load Balancer
    Hostname: haproxy
    OS: Ubuntu
    Public IP:
    Private IP:

    Droplet 2 – Node 1
    Hostname: lamp1
    OS: LAMP on Ubuntu
    Private IP:

    Droplet 3 – Node 2
    Hostname: lamp2
    OS: LAMP on Ubuntu
    Private IP:

    Installing HAProxy

    Use the apt-get command to install HAProxy.
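    The command itself is missing from this copy; the standard invocation would be:

    ```shell
    # Install HAProxy from the Ubuntu repositories (run as root or with sudo).
    apt-get install haproxy
    ```
    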

    We need to enable HAProxy to be started by the init script.

    Set the ENABLED option to 1
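    The file is not named in this copy; on a stock Ubuntu package the flag lives in /etc/default/haproxy (an assumption worth verifying on your system):

    ```
    # /etc/default/haproxy (path assumed from the stock Ubuntu package)
    ENABLED=1
    ```
    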

    To check if this change is done properly execute the init script of HAProxy without any parameters. You should see the following.

    Configuring HAProxy

    We’ll move the default configuration file aside and create our own.

    Create and edit a new configuration file:
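    The commands are missing here; a sketch, with the backup filename as my own choice:

    ```shell
    # Keep the stock config as a backup, then edit a fresh file.
    mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
    nano /etc/haproxy/haproxy.cfg
    ```
    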

    Let us begin by adding configuration block by block to this file:

    The log directive mentions a syslog server to which log messages will be sent. On Ubuntu rsyslog is already installed and running but it doesn’t listen on any IP address. We’ll modify the config files of rsyslog later.

    The maxconn directive specifies the number of concurrent connections on the frontend. The default value is 2000 and should be tuned according to your VPS’ configuration.

    The user and group directives changes the HAProxy process to the specified user/group. These shouldn’t be changed.
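    The configuration block itself was stripped from this copy; a minimal sketch of a global section consistent with the directives described above (the syslog address and log level are assumptions, per the rsyslog discussion later in the article):

    ```
    global
        log 127.0.0.1 local0 notice   # send log messages to local rsyslog
        maxconn 2000                  # frontend connection ceiling; tune per VPS
        user haproxy
        group haproxy
    ```
    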

    We’re specifying default values in this section. The values to be modified are the various timeout directives. The connect option specifies the maximum time to wait for a connection attempt to a VPS to succeed.

    The client and server timeouts apply when the client or server is expected to acknowledge or send data during the TCP process. HAProxy recommends setting the client and server timeouts to the same value.

    The retries directive sets the number of retries to perform on a VPS after a connection failure.

    The option redispatch directive enables session redistribution in case of connection failures, so session stickiness is overridden if a VPS goes down.
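    The defaults block was also stripped; a sketch covering the directives just described (the timeout values are assumptions, not recommendations):

    ```
    defaults
        log     global
        mode    http
        retries 3                 # retries after a connection failure
        option  redispatch        # re-dispatch sessions if a VPS goes down
        timeout connect  5000     # max wait (ms) for a backend connection
        timeout client  50000     # client inactivity timeout (ms)
        timeout server  50000     # server timeout (ms); keep equal to client
    ```
    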

    This contains configuration for both the frontend and backend. We are configuring HAProxy to listen on port 80 for appname which is just a name for identifying an application. The stats directives enable the connection statistics page and protects it with HTTP Basic authentication using the credentials specified by the stats auth directive.

    This page can be viewed with the URL mentioned in stats uri, so in this case it is ;
    a demo of this page can be viewed here.

    The balance directive specifies the load balancing algorithm to use. Options available are Round Robin ( roundrobin ), Static Round Robin ( static-rr ), Least Connections ( leastconn ), Source ( source ), URI ( uri ) and URL parameter ( url_param ).
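    The listen block itself is missing from this copy; a sketch reconstructed from the directives discussed in this section (credentials and the backend private IPs are placeholders, and option httpclose/forwardfor are included because later sections refer to them):

    ```
    listen appname 0.0.0.0:80
        mode http
        stats enable
        stats uri /haproxy?stats
        stats auth someuser:somepassword   # HTTP Basic credentials (placeholder)
        balance roundrobin
        option httpclose                   # discussed in the keepalives section
        option forwardfor                  # adds the X-Forwarded-For header
        server lamp1 <lamp1-private-IP>:80 check
        server lamp2 <lamp2-private-IP>:80 check
    ```
    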

    Information about each algorithm can be obtained from the official documentation .

    The server directive declares a backend server, the syntax is:
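    The syntax line was lost in this copy; in general form it is:

    ```
    server <name> <address>[:<port>] [parameters ...]
    ```
    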

    The name we mention here will appear in logs and alerts. There are many parameters supported by this directive, and we’ll be using the check and cookie parameters in this article. The check option enables health checks on the VPS; otherwise, the VPS is always considered available.

    Once you’re done configuring start the HAProxy service:
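    The command is missing; on Ubuntu of this era it would be:

    ```shell
    service haproxy start
    ```
    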

    Testing Load Balancing and Failover

    To test this setup, create a PHP script on all your web servers (backend servers – LAMP1 and LAMP2 here).

    Now we will use curl and request this file multiple times.
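    The curl invocation was lost; a sketch, with the load balancer address and the PHP file name as placeholders (the article does not give the actual filename):

    ```shell
    # Repeat this request a few times to watch Round Robin alternate
    # between LAMP1 and LAMP2 (URL parts are placeholders).
    curl http://<load-balancer-public-IP>/<test-script>.php
    ```
    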

    Notice here how HAProxy alternately toggled the connection between LAMP1 and LAMP2; this is how Round Robin works. The client IP we see here is the private IP address of the load balancer, and the X-Forwarded-For header is your IP.

    To see how failover works, go to a web server and stop the service:

    Send requests with curl again to see how things work.

    Session Stickiness

    If your web application serves dynamic content based on users’ login sessions (which application doesn’t?), visitors will experience odd things due to continuous switching between VPS. Session stickiness ensures that a visitor sticks to the VPS that served their first request. This is made possible by tagging each backend server with a cookie.

    We’ll use the following PHP code to demonstrate how session stickiness works.

    This code creates a PHP session and displays the number of page views in a single session.

    Cookie insert method

    In this method, all responses from HAProxy to the client will contain a Set-Cookie: header with the name of a backend server as its cookie value. So going forward the client (web browser) will include this cookie with all its requests and HAProxy will forward the request to the right backend server based on the cookie value.

    For this method, you will need to add the cookie directive and modify the server directives under listen
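    The configuration lines were stripped from this copy; a sketch matching the SRVNAME/S1/S2 description that follows (private IPs are placeholders):

    ```
        cookie SRVNAME insert
        server lamp1 <lamp1-private-IP>:80 check cookie S1
        server lamp2 <lamp2-private-IP>:80 check cookie S2
    ```
    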

    This causes HAProxy to add a Set-Cookie: header with a cookie named SRVNAME having its value as S1 or S2 based on which backend answered the request. Once this is added restart the service:

    and use curl to check how this works.

    This is the first request we made, and it was answered by LAMP1, as we can see from Set-Cookie: SRVNAME=S1; path=/. Now, to emulate what a web browser would do for the next request, we add these cookies to the request using the --cookie parameter of curl.

    Both of these requests were served by LAMP1 and the session was properly maintained. This method is useful if you want stickiness for all files on the web server.

    Cookie Prefix Method

    On the other hand, if you want stickiness only for specific cookies or don’t want to have a separate cookie for session stickiness, the prefix option is for you.

    To use this method, use the following cookie directive:
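    The directive itself is missing from this copy; based on the PHPSESSID discussion that follows, it would be along these lines:

    ```
        cookie PHPSESSID prefix
    ```
    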

    The PHPSESSID can be replaced with your own cookie name. The server directive remains the same as the previous configuration.

    Now let’s see how this works.

    Notice how the server cookie S1 is prefixed to the session cookie. Now, let’s send two more requests with this cookie.

    We can clearly see that both the requests were served by LAMP1 and the session is perfectly working.

    Configure Logging for HAProxy

    When we began configuring HAProxy, we added the line log local0 notice, which sends syslog messages to the localhost IP address. But by default, rsyslog on Ubuntu doesn’t listen on any address. So we have to make it do so.

    Edit the config file of rsyslog.

    Add/Edit/Uncomment the following lines:
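    The lines themselves were lost; a sketch using rsyslog’s legacy directives, with the listen address assumed to be 127.0.0.1 since the text says the messages go to the localhost IP address:

    ```
    # /etc/rsyslog.conf — enable the UDP syslog input on localhost:514
    $ModLoad imudp
    $UDPServerAddress 127.0.0.1
    $UDPServerRun 514
    ```
    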

    Now rsyslog will work on UDP port 514 on address but all HAProxy messages will go to /var/log/syslog so we have to separate them.

    Create a rule for HAProxy logs.

    Add the following line to it.
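    The rule itself is missing; since HAProxy logs to facility local0, a one-line rule (placed in a new file such as /etc/rsyslog.d/haproxy.conf, name assumed) would be:

    ```
    local0.* /var/log/haproxy.log
    ```
    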

    Now restart the rsyslog service:

    This writes all HAProxy messages and access logs to /var/log/haproxy.log .

    Keepalives in HAProxy

    Under the listen directive, we used option httpclose which adds a Connection: close header. This tells the client (web browser) to close a connection after a response is received.

    If you want to enable keep-alives on HAProxy, replace the option httpclose line with:
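    The replacement line was lost in this copy; in HAProxy of this vintage the usual choice is option http-server-close together with a keep-alive timeout (the directives are standard HAProxy; the value is an assumption, per the warning that follows):

    ```
        option http-server-close
        timeout http-keep-alive 3000   # keep-alive timeout in ms (value assumed)
    ```
    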

    Set the keep-alive timeout wisely so that a few connections don’t drain all the resources of the load balancer.

    Testing Keepalives

    Keepalives can be tested using curl by sending multiple requests at the same time. Unnecessary output will be omitted in the following example:

    In this output, we have to look for the line: Re-using existing connection! (#0) with host which indicates that curl used the same connection to make subsequent requests.


    Load Balancing ADFS on Windows 2012 R2 #sticky #sessions #load #balancer


    Load Balancing ADFS on Windows 2012 R2

    Greetings, everyone! I ran across this issue recently with a customer’s Exchange Server 2007 to Office 365 migration and wanted to pass along the lessons learned.

    The Plan

    It all started so innocently: the customer was going to deploy two Exchange Server 2013 hybrid servers into their existing Exchange Server 2007 organization for a Hybrid organization using directory synchronization and SSO with ADFS. They’ve been investing a lot of work into upgrading their infrastructure and have been upgrading systems to newer versions of Windows, including some spiffy new Windows Server 2012 Hyper-V servers. We decided that we’d deploy all of the new servers on Windows Server 2012 R2, the better to future-proof them. We were also going to use Windows NLB for the ADFS and ADFS proxy servers instead of using their existing F5 BIG-IP load balancer, as the network team is in the middle of their own projects.

    The Problem

    There were actually two problems. The first, of course, was the combination of Hyper-V and Windows NLB. Unicast was obviously no good, multicast has its issues, and because we needed to get the servers up and running as fast as possible we didn’t have time to explore using IGMP with Multicast. Time to turn to the F5. The BIG-IP platform is pretty complex and full of features, but F5 is usually good about documentation. Sure enough, the F5 ADFS 2.0 deployment guide (Deploying F5 with Microsoft Active Directory Federation Services ) got us most of the way there. If we had been deploying ADFS 2.0 on Server 2012 and the ADFS proxy role, I’d have been home free.

    In Windows 2012 R2 ADFS, you don’t have the ADFS proxy role any more – you use the Web Application Proxy (WAP) role service component of the Remote Access role. However, that’s not the only change. If you follow this guide with Windows Server 2012 R2, your ADFS and WAP pools will fail their health checks (F5 calls them monitors) and the virtual server will not be brought online because the F5 will mistakenly believe that your pool servers are down. OOPS!

    The Resolution

    So what’s different and how do we fix it?

    ADFS on Windows Server 2012 R2 is still mostly ADFS 2.0, but some things have changed – out with the ADFS proxy role, in with the WAP role service. That's the most obvious change, but the real sticking point is under the hood, in the guts of the Windows Server 2012 R2 HTTP server. In Windows Server 2012 R2, IIS and the underlying Web server engine have a new architecture that supports the SNI extension to TLS. SNI is insanely cool: the connecting client announces the host name it's trying to reach as part of the TLS handshake, so one IP address can host multiple HTTPS sites with different certificates, just as HTTP/1.1's Host: header did for plain HTTP.
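    You can watch SNI in action from the command line with openssl and a throwaway self-signed certificate; this is just an illustration, and the CN, port, and file names below are all made up for the sketch:

    ```shell
    # Create a throwaway self-signed certificate (the CN is illustrative).
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
      -subj "/CN=adfs.example.com" -keyout key.pem -out cert.pem 2>/dev/null
    # Serve it on a local port, then connect with an explicit SNI name;
    # -servername puts the host name into the TLS ClientHello, before any
    # HTTP bytes are exchanged.
    openssl s_server -key key.pem -cert cert.pem -accept 8443 -www >/dev/null 2>&1 &
    SRV_PID=$!
    sleep 1
    SUBJECT=$(openssl s_client -connect 127.0.0.1:8443 -servername adfs.example.com \
      </dev/null 2>/dev/null | openssl x509 -noout -subject)
    kill $SRV_PID
    echo "$SUBJECT"
    ```

    On a server hosting several certificates, the name sent via -servername decides which certificate comes back.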

    But the fact that Windows Server 2012 R2 uses SNI gets in the way of the HTTPS health checks that the F5 ADFS 2.0 deployment guide has you configure. We were able to work around it by replacing the HTTPS health checks with TCP Half Open checks, which send a SYN to the target TCP port on each pool server and wait for the SYN-ACK. If they receive it, the server is marked up.
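    For comparison, here is a crude TCP-layer probe from the shell. It completes the full handshake rather than stopping at the SYN-ACK the way F5's Half Open monitor does, but the pass/fail signal is the same: "is something listening on this port?" The host and port are placeholders, and the local Python server is just a stand-in listener:

    ```shell
    # A full-handshake TCP reachability probe using bash's /dev/tcp device.
    check_port() {
      timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo up || echo down
    }
    # Stand-in listener so the probe has something to hit.
    python3 -m http.server 8090 >/dev/null 2>&1 &
    SRV=$!
    sleep 1
    check_port 127.0.0.1 8090   # a listening port
    check_port 127.0.0.1 8099   # nothing listening here
    kill $SRV
    ```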

    For long-term use, the HTTPS health checks are better; they let you probe a specific URL and require a specific response back before a server in the pool is declared healthy. That beats ICMP or TCP checks, which only confirm that the machine answers pings or that a TCP port is open. It's entirely possible for a machine to be up on the network, with IIS answering connections, while WAP or ADFS is misconfigured and the service isn't actually viable. Good health checks save debugging time.
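    The content-aware idea can be sketched with curl: demand the expected body, not just an open port. ADFS publishes its federation metadata at /FederationMetadata/2007-06/FederationMetadata.xml; the local Python server and file below are just a stand-in backend so the sketch is self-contained:

    ```shell
    # Stand-in backend: serve a dummy metadata document locally (real ADFS
    # publishes it at /FederationMetadata/2007-06/FederationMetadata.xml).
    mkdir -p www && echo '<EntityDescriptor/>' > www/metadata.xml
    python3 -m http.server 8081 --directory www >/dev/null 2>&1 &
    SRV=$!
    sleep 1
    # A useful health check requires the expected content in the response,
    # not merely a TCP connection or a ping reply.
    BODY=$(curl -s --max-time 5 http://127.0.0.1:8081/metadata.xml)
    kill $SRV
    if echo "$BODY" | grep -q "EntityDescriptor"; then echo healthy; else echo unhealthy; fi
    ```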

    The Real Fix

    As far as I know there's no easy, supported way to turn SNI off, nor would I really want to; it's a great standard that needs to be widely deployed and supported, because it lets servers conserve IP addresses by hosting multiple HTTPS sites on fewer IP/port combinations with multiple certificates, instead of relying on big, heavy SAN certificates. Ultimately, load balancer vendors and clients need to ship SNI-aware fixes for their gear.

    If you’re an F5 user, the right way is to read and follow this F5 DevCentral blog post Big-IP and ADFS Part 5 – “Working with ADFS 3.0 and SNI” to configure your BIG-IP device with a new SNI-aware monitor; you’re going to want it for all of the Windows Server 2012 R2 Web servers you deploy over the next several years. This process is a little convoluted – you have to upload a script to the F5 and pass in custom parameters, which just seems really wrong (but is a true measure of just how powerful and beastly these machines really are) – but at the end of the day, you have a properly configured monitor that not only supports SNI connections to the correct hostname, but uses the specific URI to ensure that the ADFS federation XML is returned by your servers.

    What do you do if you don't have an F5 load balancer and your vendor doesn't offer an SNI-aware monitor? Remember when I said that there's no way to turn SNI off? That's not totally true. You can go mess with the SNI configuration and change the SSL bindings in a way that seems to mimic the old behavior, but you run the risk of really messing things up. What you can do is follow the process in the TechNet blog post How to support non-SNI capable Clients with Web Application Proxy and AD FS 2012 R2.
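    For reference, that workaround boils down to adding a fallback SSL binding on the wildcard address alongside the existing host-name (SNI) binding, so clients that don't send a server name still get a certificate. A rough sketch with netsh; the thumbprint and application ID are placeholders you'd copy from the existing binding, and the TechNet post has the authoritative steps:

    ```
    REM Inspect the existing (hostname/SNI-based) bindings on the ADFS or WAP server.
    netsh http show sslcert

    REM Add a fallback binding on 0.0.0.0:443 with the same certificate.
    REM certhash and appid below are placeholders, not real values.
    netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint> appid={guid-from-existing-binding} certstorename=MY
    ```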


    As a side note, almost everyone seems to be calling the ADFS flavor on Windows Server 2012 R2 "ADFS 3.0." Everyone, that is, except for Microsoft. It's not a 3.0; as I understand it, the biggest differences are in the underlying server architecture, not in the ADFS functionality on top of it per se. So don't call it that, but recognize that most other people will. It's just AD FS 2012 R2.
