The Internal Workflow of an E-Business Suite Concurrent Manager Process


Concurrent processing is one of the key elements of any E-Business Suite system. It provides scheduling and queuing functionality for background jobs, and it’s used by most of the application modules. As many things depend on concurrent processing, it’s important to make sure that the configuration is tuned for your requirements and hardware specification.

This is the first article in a series about the performance of concurrent processing. We’ll take a closer look at the internals of concurrent managers, the settings that affect their performance, and the ways of diagnosing performance and configuration issues. Today we’ll start with an overview of the internal workflow of a concurrent manager process. Enjoy the reading!

There are a few things that need to be clear before we start:

  1. This is only about how the concurrent processing framework (the concurrent managers) works and not about the concurrent requests executed in the framework.
  2. There are multiple types of concurrent managers in EBS – internal manager, conflict resolution manager, workflow agent listener service, standard manager, etc. Their roles differ, and in this post I’ll discuss only the managers that pick up scheduled concurrent requests from the “queue” and execute them – specifically, all managers that have the type “Concurrent Manager” set in the Concurrent Managers definition form. Typical examples of these managers are “Standard Manager”, “Inventory Manager”, and probably your own custom concurrent managers created for processing specific types of concurrent requests.
“Concurrent Managers” Form

It is important to understand the internal workflow of a concurrent manager, because otherwise it’s hard to see how a configuration change actually affects the system. Several years ago, I had to implement an online change of a specialization rule, and it triggered a bounce of all Standard Manager processes – that’s when I realized I had to understand how it worked, and since then I have spent many hours looking into the internals of concurrent managers. I’m not saying that everything is 100% clear to me now – there are too many little things that matter in certain situations. This series of posts will be more about concepts. I hope you’ll find it useful.

So how does a concurrent manager process work? Here is a diagram I created to explain it:

Internal workflow of a concurrent manager process

I’ve numbered each step of the diagram to provide more details about them:

  1. This is where the story begins. There is no EXIT state in the diagram as the managers normally process requests in an infinite loop. Obviously, there is a way for a concurrent manager process to receive the command to quit when the managers need to be shut down, but that’s not included here for simplicity.
  2. The Internal Concurrent Manager (ICM) requests the Service Manager (FNDSM) to start up the concurrent manager process. For the Standard Manager processes, the binary executable FNDLIBR is started; for the Inventory Manager, it’s INVLIBR. There are others too.
  3. The manager process connects to the database and reads the settings (e.g., profile options, sleep seconds, cache size).
  4. The process saves information about itself in the FND_CONCURRENT_PROCESSES table (OS process ID, database name, instance name, DB session identifiers, logfile path and name, and others). It also updates FND_CONCURRENT_QUEUES by increasing the value of RUNNING_PROCESSES.
  5. The concurrent manager process collects information from the database to build the SQL for querying the FND_CONCURRENT_REQUESTS table. The query will be used every time the manager process looks for scheduled concurrent requests. This is the only time the manager process reads the Specialization Rules (which programs it is allowed to execute) from the database. Keep in mind that if the specialization rules are changed while the managers are running, they are bounced without warning, as that is the only way to update the specialization rules cached by the manager processes.
  6. The SQL (from step 5) is executed to collect information about pending concurrent requests from the FND_CONCURRENT_REQUESTS table.
  7. The results are checked to verify if any requests are pending for execution.
  8. If no requests are pending for execution, the manager process sleeps and then goes to step 6. The “Sleep Seconds” parameter of the “Work Shifts” settings of the concurrent manager determines how long the process sleeps before the FND_CONCURRENT_REQUESTS table is queried again. This is the only time the “sleep seconds” setting is used.
  9. If there is at least one concurrent request pending for execution, the concurrent manager process caches rowids for the FND_CONCURRENT_REQUESTS rows of pending concurrent requests. The “Cache Size” setting of the concurrent manager specifies how many rowids to cache.
  10. The cached list of rowids is checked to verify if there are any unprocessed concurrent requests (rows in the FND_CONCURRENT_REQUESTS table) left. If none are left, the processing returns to step 6 and the FND_CONCURRENT_REQUESTS table is queried again.
  11. The next unprocessed rowid is picked from the process cache, and the processing starts.
  12. The concurrent manager process executes a SELECT-for-UPDATE statement to lock the row in the FND_CONCURRENT_REQUESTS table for the request it’s about to process. This is the mechanism that ensures each concurrent request is executed only once and by only one manager process, even if many processes are running simultaneously. The SELECT-for-UPDATE statement can complete with “ORA-00054: resource busy and acquire with NOWAIT specified” or “0 rows updated” if another manager process has already started processing the request.
  13. If the row was locked successfully, the concurrent manager executes the concurrent request. The processing then moves to step 10, where the cached list of concurrent requests (rowids) is checked again.
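The loop in steps 5–13 can be sketched in Python. This is a toy model, not EBS code: `fetch_pending`, `try_lock`, and `execute` are hypothetical stand-ins for the manager’s real SQL (the pending-requests query and the SELECT-for-UPDATE), and the `cycles` argument bounds the normally infinite loop so the sketch terminates.

```python
import time

def run_manager_loop(fetch_pending, try_lock, execute,
                     sleep_seconds, cache_size, cycles):
    """Simplified sketch of one concurrent manager process's main loop.

    fetch_pending() stands in for the SQL built in step 5 and executed in
    step 6; try_lock() stands in for the SELECT-for-UPDATE in step 12.
    """
    executed = []
    for _ in range(cycles):
        # Steps 6-7: query for pending requests.
        pending = fetch_pending()
        if not pending:
            # Step 8: nothing to do - sleep, then query again.
            time.sleep(sleep_seconds)
            continue
        # Step 9: cache at most `cache_size` rowids.
        cache = pending[:cache_size]
        # Steps 10-13: walk the cache, locking and executing each request.
        for rowid in cache:
            if try_lock(rowid):            # SELECT-for-UPDATE succeeded
                executed.append(execute(rowid))
            # else: another manager claimed it first - skip silently
    return executed

# Tiny demo: a shared "queue" of five pending request rowids.
queue = list(range(5))

def fetch_pending():
    return list(queue)

def try_lock(rowid):
    if rowid in queue:                     # still unclaimed
        queue.remove(rowid)
        return True
    return False

done = run_manager_loop(fetch_pending, try_lock, lambda r: r,
                        sleep_seconds=0, cache_size=3, cycles=3)
print(done)  # → [0, 1, 2, 3, 4]
```

With a cache size of 3, the five requests are processed over two querying cycles – the manager drains its cache before going back to the table, which is exactly why a larger cache means fewer queries but staler information about what is pending.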

The workflow is not very complex, but it’s important to remember that there are normally multiple concurrent manager processes running at the same time, all competing for the requests to run. This competition introduces another dimension of tuning for settings such as the number of concurrent manager processes, the sleep seconds, and the cache size. Stay tuned for the next post in the series to find out more!
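That competition can be illustrated with a small multi-threaded simulation – again a toy model with invented names, where threads stand in for manager processes and `claim()` mimics the row lock taken in step 12. The point it demonstrates: each request is executed exactly once, no matter how many “managers” are scanning the queue at the same time.

```python
import threading

class RequestQueue:
    """Toy model of FND_CONCURRENT_REQUESTS shared by competing managers."""

    def __init__(self, n_requests):
        self._lock = threading.Lock()
        self._pending = set(range(n_requests))

    def claim(self, request_id):
        # Mimics SELECT-for-UPDATE ... NOWAIT: either acquire the row
        # immediately or fail because another manager got there first.
        with self._lock:
            if request_id in self._pending:
                self._pending.remove(request_id)
                return True
            return False

    def pending(self):
        with self._lock:
            return sorted(self._pending)

def manager(queue, executed, executed_lock):
    # Each manager repeatedly queries the queue and tries to claim rows,
    # independently of the others - just like each FNDLIBR process does.
    while True:
        batch = queue.pending()
        if not batch:
            return
        for request_id in batch:
            if queue.claim(request_id):
                with executed_lock:
                    executed.append(request_id)

queue = RequestQueue(100)
executed, executed_lock = [], threading.Lock()
workers = [threading.Thread(target=manager,
                            args=(queue, executed, executed_lock))
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Every request ran exactly once despite four competing "managers".
print(len(executed), len(set(executed)))  # → 100 100
```

In the real system the failed `claim()` corresponds to the ORA-00054 / “0 rows updated” outcome above – wasted work that grows with the number of manager processes, which is one reason adding processes does not scale linearly.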




About the Author

Maris Elsins is an experienced Oracle Applications DBA currently working as Lead Database Consultant at The Pythian Group. His main areas of expertise are troubleshooting and performance tuning of Oracle Database and e-Business Suite systems. He is a blogger and a frequent speaker at Oracle related conferences such as UKOUG, Collaborate, Oracle OpenWorld, HotSos, and others. Maris is an Oracle ACE, an Oracle Certified Master, and a co-author of “Practical Oracle Database Appliance” (Apress, 2014). He's also a member of the board at Latvian Oracle User Group.

20 Comments

Great post Maris! Looking forward to the next one in the series..


Nice article Maris. If you can add how the concurrent manager starts in a PCP environment to this article, it would be helpful.

Maris Elsins
March 22, 2013 3:34 am

Thanks Santhosh,

I think the startup process of Concurrent Managers in a PCP-enabled environment does not really differ much from single-node implementations – the internal concurrent manager (ICM) connects to the apps listener on each concurrent node and starts up the Service Managers (FNDSM). When that’s done, the ICM uses remote procedure calls (RPC) to communicate with the FNDSM processes and “asks” them to start up the necessary managers on each node.
But there are definitely interesting questions around PCP. For example, how are node failures monitored and how do concurrent managers “know” when they need to be restarted on the secondary node?
Your comment generated a few ideas for the next posts about concurrent manager internals (e.g., the startup sequence of concurrent managers, internode communications in a PCP-enabled concurrent processing environment). Stay tuned!



In case of PCP there is an internal monitor process as well, which controls the ICM. Waiting for your next post Maris :)


Good stuff Maris,

May I suggest considering a blog post on which process tracks specialization rule changes and how it restarts Concurrent Managers. My understanding is that just one concurrent manager queue gets restarted, and if one concurrent manager from that queue is running a long concurrent request, then we may run into trouble, as Oracle waits until the request completes. I wonder if we can cancel the restart in such a case and get everything back to normal? Then just one concurrent manager would not be aware of the change in the specialization rules.

Keep them going,
I will read them for sure :)


Maris Elsins
March 22, 2013 3:13 am

Hi Yury,

You’re absolutely right. Only the concurrent manager queues affected by the change of a specialization rule are restarted. And yes, if there is a concurrent manager process running a long request, it will block the startup of the other concurrent manager processes.
This topic is indeed worth a separate blog post; I’ve added it to the queue :)




Liked this article, thanks! However, I have a small remark. A SELECT .. FOR UPDATE may have an OF clause. In my understanding this is used to indicate which table rows to lock in a multi-table query, as locking is never at a finer level than a row. In this case I believe the concurrent requests table (rows) are locked. The column chosen in the OF clause is a way of documenting which data is going to be updated. It is completely arbitrary in the FOR UPDATE OF clause as far as locking is concerned, though.
I was actually looking for the meaning of MANAGER_TYPE in the concurrent queues and concurrent processes tables when it is not in the set of lookup values provided by the lookup type ‘CP_MANAGER_TYPE’. I am looking at a 12.1.2 installation with MANAGER_TYPE values between 1007 and 1082. Any ideas?


Thanks Paul,

You’re right, it really makes sense that whole rows are locked and not individual columns. Now I recall how locks are represented in data block dumps, and there’s no other way but the row level.

I just checked 12.1.3, and MANAGER_TYPE>1000 values are not “decrypted” there either. They’re also not decoded in the “Managers -> Define” form. I don’t know where these IDs are from, but it looks like all Java-based managers have them, plus a few others, so you could probably trace how a Java-type manager is created (i.e., add another OPP manager from OAM) to find out where these IDs come from.



Thanks a Lot


Thanks Maris! for the great article.


Hi Maris.

Thanks for article.

Regarding the sleep time of the ICM: ”The sleep time parameter indicates the seconds that the ICM should wait between checking for requests (fnd_concurrent_requests) that are waiting to run”.

Does it mean the ICM checks requests in fnd_concurrent_requests and assigns them to other managers?

Maris Elsins
March 2, 2016 3:32 am

Hi Abhijeet,

Not sure where you’ve got that quote from. It’s theoretically correct, but misleading. The ICM doesn’t distribute requests to other managers; each concurrent manager checks the queue by itself (just as explained in this post). However, the ICM runs some requests too (shutdown concurrent managers, verify, etc.) that it picks up from the same FND_CONCURRENT_REQUESTS table, and the sleep time applies to this too.

I would rewrite that statement the following way to make it more accurate: ”The sleep time parameter indicates the seconds that a concurrent manager should wait between checking for requests (fnd_concurrent_requests) that are waiting to run”.



You mention the auto-reboot “feature” of changing specialization rules. I’ve been looking for a reference as to what you can change live (cache, sleep, etc.) without impacting a running system. Do you know of such a reference, or which settings can be changed live for tuning on a busy prod system? The issue is that we cannot replicate the transaction activity of prod anywhere else.


Hi Jeff,

I don’t recall these things being specifically outlined anywhere.
So I think your best bet is testing these changes in a test system and observing how they impact the managers.



Excellent note and a very good description for a beginner.
Thank you Maris


Hi Manoj,

happy to hear it was useful!



If the concurrent manager is down, what happens on the application server?


Hi Raju,

that’s not enough information to tell anything about what happened.



Excellent Article!! Thanks Maris.

