How many sessions are enough for your application servers?


How much sand can you pour through a funnel? Perhaps you have tried to do this at some time.

If you have, you know that to do it successfully, you must control how much sand is directed at the funnel.

If you try to force too much sand into the funnel, it just clogs up and gets stuck. As a result, very little sand gets through the funnel at all.

Now imagine an entire array of funnels, and you must get a truckload of sand through them.

If you just back up the truck and dump all of the sand on the funnels, do you think that will work?

As you have guessed by now, just trying to dump all of the sand on the array of funnels will fail if the goal is to get all of the sand through them.

So, what does this have to do with databases?

Here is a very similar situation that is all too commonly seen.

A database is running on a server with a finite number of processors; let's say there are 64 non-hyperthreaded cores (leaving hyperthreading out of the mix to keep it simple).

Those 64 cores are required to do a massive amount of work: a batch job must be run that will cause non-stop database activity.

Which of the following scenarios is likely to work?

Use a lot of sessions

Start 640 sessions from 10 application servers, each application server running 64 sessions.
The reasoning behind this: we usually have 3,000 sessions connected to the database, and it seems OK.
We want this job to run quickly, so we will use many sessions to get it done.

Use a limited number of sessions

Reasoning that there are only so many resources (64 CPU cores available), and that other users on the system may also be trying to get work done, we run this job with 30 sessions.
Having once tried to stuff sand through funnels when we were children, we realize that trying to force too much work through the system will not work very well.

OK, that previous sentence is a dead giveaway as to what the correct choice is.

If you use a total of 30 sessions to complete this job, it will finish much sooner than if 640 sessions were used.

Why is that?

Think of the CPUs in the database server as if they were that array of funnels discussed earlier.

If the choice is made to use 640 sessions, each CPU is trying to serve 10 very busy database sessions, and the OS must spend a significant amount of time context-switching among them.

To oversimplify the process a bit:

– push the current session's values onto the stack
– pop the next session's values off the stack and into memory
– do some work (very little work)
– repeat until finished

Let’s assume the cost of managing all of this extra overhead is 20% of CPU time.

For each second, that leaves 800 milliseconds in total to be divided among 10 active database sessions, so each session gets 80 milliseconds of CPU time per second – that is, 8% of a CPU.
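To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python; the 20% scheduling overhead is the assumption from above, not a measured value:

```python
# Back-of-the-envelope math for 640 busy sessions on 64 cores.
# The 20% scheduling overhead is an assumed figure, not a measurement.
sessions_per_core = 640 / 64                    # 10 busy sessions per core
usable_ms = 1000 * (1 - 0.20)                   # 800 ms of useful CPU per second
per_session_ms = usable_ms / sessions_per_core  # 80 ms per session per second

print(f"{per_session_ms:.0f} ms/second per session "
      f"({per_session_ms / 1000:.0%} of a CPU)")
```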

But what if 30 sessions were used instead of 640?

Immediately, the extra overhead of scheduling sessions onto the CPUs is eliminated.

If you are doing the math right now, you may be thinking to yourself, 'Hold on a minute – the 640 sessions should still get more work done than the 30 sessions!'

Let’s take a look.

With 30 sessions, let's assume each session gets 100% of a CPU for the duration of the job.
Let's see how many CPU seconds of work are performed in an hour using these simple metrics.

30 sessions × 100% CPU × 3,600 seconds = 108,000 CPU seconds of work.

(640 sessions × 0.08 CPU seconds per second) × 3,600 seconds = 184,320 CPU seconds of work.
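Expressed as a quick sketch of the same simplified model (all figures are the assumptions used above):

```python
# The same simplified model: total CPU seconds of work performed in one hour.
SECONDS_PER_HOUR = 3600

# 30 sessions, each assumed to get 100% of a CPU, with no scheduling overhead.
work_30 = 30 * 1.00 * SECONDS_PER_HOUR    # 108,000 CPU seconds

# 640 sessions, each getting 0.08 CPU seconds per second (the 80 ms from above).
work_640 = 640 * 0.08 * SECONDS_PER_HOUR  # 184,320 CPU seconds

print(f"30 sessions:  {work_30:>9,.0f} CPU seconds of work")
print(f"640 sessions: {work_640:>9,.0f} CPU seconds of work")
```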

What we have considered so far is quite an oversimplification, as it accounts only for the CPU time available.

The extra 610 sessions will cause quite a bit of extra work to take place, as well as extra waits.

Here are just a few things that will also cause the jobs to require more time:

– free buffer waits and read by other session waits
– I/O contention – there will be much more of it with too many sessions
– memory management overhead
– network congestion (possibly)
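The practical remedy is to throttle at the application tier rather than letting every session pile onto the database at once. Here is a minimal sketch using Python's standard library; run_chunk() and the chunk count are hypothetical stand-ins for your batch logic, and 30 is the deliberately limited session count from the scenario above:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_SESSIONS = 30   # the deliberately limited session count from the scenario
TOTAL_CHUNKS = 640  # hypothetical: the batch job split into units of work

def run_chunk(chunk_id: int) -> None:
    """Hypothetical stand-in: open one database session, process one
    chunk of the batch job, then close the session."""
    ...

# The pool caps concurrency: no matter how many chunks are queued,
# at most MAX_SESSIONS database sessions are active at any moment.
with ThreadPoolExecutor(max_workers=MAX_SESSIONS) as pool:
    futures = [pool.submit(run_chunk, i) for i in range(TOTAL_CHUNKS)]
    for future in futures:
        future.result()  # surface any errors raised by the workers
```

A fixed maximum pool size in a connection pool accomplishes the same throttling; with 10 application servers, each server's pool would be capped at 3 sessions to stay at the 30-session total.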

If you would like to see an excellent graphical presentation of this in action, I highly recommend the following two videos from the Oracle Real-World Performance team.

Real-World Performance – 13 – Large Dynamic Connection Pools – Part 1

Real-World Performance – 14 – Large Dynamic Connection Pools – Part 2

If you can spare 30 minutes to watch these two presentations, it will be well worth your time, as they clearly show how reducing the number of application server connections actually allows more work to be done in the database.



About the Author

Oracle experience: started with Oracle 7.0.13.
Programming experience: Perl, PL/SQL, Shell, SQL, plus some other odds and ends that are no longer useful.
Systems: networking, storage, OS to varying degrees. Fond memories of DG/UX.
