Salesforce Apex Flex Queue Governor Limit

Framework to fix – Governor Limit of 100 jobs in Flex Queue

Recently, I inherited an Org with huge customization, which enqueues 100+ Batch Jobs in a few scenarios. Now, don't ask me why!

I remember that a few years back, Salesforce had a limit of 5 Batch Apex jobs that could execute at a time, but we had expectations and demands!!! Salesforce introduced the Flex Queue, so now we can have 100 Batch Apex jobs waiting to be executed, and still we are not happy. After all, man is a wanting animal.

After 100 Batch Apex jobs, all further submissions were failing with the error "System.AsyncException: You have exceeded the limit of 100 jobs in the flex queue". As I explained previously in this post, I had to fix this issue as well.

The right way to fix it would have been to analyze the existing code, perform a code review, and question why we even needed so much customization. However, time was crucial, and I had to do something quickly.

The following framework was used to fix the issue:

  1. Create a Custom Object to hold information about the batch job
  2. Use an intermediate Apex class to queue Batch Apex and Queueable Apex. If there is no room in the Flex Queue, simply save the job in the custom object.
  3. Use a Scheduler that runs every 10 minutes, checks the Flex Queue, and submits batches from the custom object if capacity is available
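The first two steps above can be sketched as follows. This is only a hedged illustration of the idea, not the framework's exact internals: the AsyncQueue__c field names (Job_Type__c, Class_Name__c, Job_Content__c, Batch_Size__c, Status__c) and the helper names are my assumptions.

```apex
// Sketch of steps 1–2: try the Flex Queue first, otherwise persist the job.
public class AsyncApexFrameworkSketch {

    // Jobs waiting in the Flex Queue have Status = 'Holding'; at most 100 are allowed.
    private static Integer openFlexQueueSlots() {
        Integer holding = [SELECT COUNT() FROM AsyncApexJob WHERE Status = 'Holding'];
        return 100 - holding;
    }

    public static void submitBatch(Database.Batchable<SObject> job, Integer batchSize) {
        if (openFlexQueueSlots() > 0) {
            Database.executeBatch(job, batchSize);
        } else {
            // No room: persist the job so the scheduler can submit it later.
            insert new AsyncQueue__c(
                Job_Type__c    = 'Batch',                           // assumed field
                Class_Name__c  = String.valueOf(job).split(':')[0], // assumed field
                Job_Content__c = JSON.serialize(job),               // assumed field
                Batch_Size__c  = batchSize,                         // assumed field
                Status__c      = 'Queued'                           // assumed field
            );
        }
    }
}
```

Serializing the job instance with JSON.serialize and storing its class name lets the scheduler later reconstruct the same instance with Type.forName and JSON.deserialize.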
Async Queue Framework in Salesforce to address governor limits

As you can see in the code below, the syntax to submit Batch Apex is almost the same.

Instead of

ID batchProcessId = Database.executeBatch(batchApexInstance, 200);

we would be using

ID batchProcessId = AsyncApexFramework.submitBatch(batchApexInstance, batchSize, priority, isRetry);
AsyncApexFramework.flush();

Where,

  • @Param 1 – Instance of the Batch Apex class
  • @Param 2 – Batch size / scope size
  • @Param 3 – Priority to process if there are many jobs in the queue. If null, the default is 99.
  • @Param 4 – If the Batch Apex fails with an error, should it be retried? Use this option carefully; make sure your design is not negatively impacted if the same batch runs multiple times.

In the same way, instead of using

Id jobId = System.enqueueJob(queueableClassInstance);

we would be using

Id jobId = AsyncApexFramework.submitQueueable(queueableClassInstance,priority,isRetry);

AsyncApexFramework.flush();

Where,

  • @Param 1 – Instance of the Queueable class
  • @Param 2 – Priority to process if there are many jobs in the queue. If null, the default is 99.
  • @Param 3 – If the Queueable job fails with an error, should it be retried? Use this option carefully; make sure your design is not negatively impacted if the same job runs multiple times.
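One design detail worth noting: the submit methods only buffer the requests, and AsyncApexFramework.flush() performs the actual work in one place at the end of the transaction. A hedged sketch of that buffering pattern follows; the buffer shape and the AsyncQueue__c field names are my assumptions, not the framework's exact internals:

```apex
public class AsyncBufferSketch {
    // Requests collected during the transaction (illustrative shape).
    private static List<AsyncQueue__c> pending = new List<AsyncQueue__c>();

    public static void submitQueueable(Object job, Integer priority, Boolean isRetry) {
        pending.add(new AsyncQueue__c(
            Job_Type__c    = 'Queueable',                     // assumed field
            Job_Content__c = JSON.serialize(job),             // assumed field
            Priority__c    = priority == null ? 99 : priority,// assumed field
            Retry__c       = isRetry                          // assumed field
        ));
    }

    // One flush at the end of the transaction: a single bulk DML statement
    // instead of one insert (and one limit check) per submission.
    public static void flush() {
        if (!pending.isEmpty()) {
            insert pending;
            pending.clear();
        }
    }
}
```

Buffering like this is what makes the 300-job loop below possible without blowing DML or SOQL governor limits inside the loop.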

The sample code below shows how we can add 300+ jobs in a single transaction using this framework:

// Code snippet to submit Batch and Queueable jobs
for (Integer i = 0; i < 300; i++) {
    BatchDemo b = new BatchDemo('#B');
    AsyncApexFramework.submitBatch(b, 1, 99, true);
    AsyncApexFramework.submitQueueable(new QueueableDemo(), 99, true);
}
AsyncApexFramework.flush();
Custom Object AsyncQueue__c
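Step 3, the scheduler that drains the custom object, could look roughly like the sketch below. Again, this is an illustration under assumptions: the field names, the Status values, and the rescheduling approach are mine, not necessarily the framework's.

```apex
global class AsyncQueueScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Free Flex Queue slots = 100 minus jobs currently on hold.
        Integer freeSlots = 100 - [SELECT COUNT() FROM AsyncApexJob WHERE Status = 'Holding'];
        if (freeSlots <= 0) return;

        List<AsyncQueue__c> submitted = new List<AsyncQueue__c>();
        // Most urgent first (lower number = higher priority, default 99).
        for (AsyncQueue__c job : [SELECT Id, Class_Name__c, Job_Content__c, Batch_Size__c, Status__c
                                  FROM AsyncQueue__c
                                  WHERE Status__c = 'Queued'
                                  ORDER BY Priority__c ASC
                                  LIMIT :freeSlots]) {
            // Rebuild the original batch instance from its stored class name and JSON body.
            Type t = Type.forName(job.Class_Name__c);
            Database.Batchable<SObject> instance =
                (Database.Batchable<SObject>) JSON.deserialize(job.Job_Content__c, t);
            Database.executeBatch(instance, job.Batch_Size__c.intValue());
            job.Status__c = 'Submitted';
            submitted.add(job);
        }
        update submitted;
    }
}
```

Salesforce cron expressions cannot express "every 10 minutes" in a single job, so one common workaround is scheduling six copies, one per 10-minute offset, e.g. System.schedule('AsyncQueue ' + m, '0 ' + m + ' * * * ?', new AsyncQueueScheduler()) for m = 0, 10, ..., 50.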

Feel free to add your feedback and findings on this.

Source Code of the Framework:


Comments

15 responses to “Framework to fix – Governor Limit of 100 jobs in Flex Queue”

  1. Guru Kalle

    This is one solution for handling the Batch Jobs in Salesforce.

    Having worked with Mainframe jobs for some time, where it is not uncommon to have more than 1000 batch jobs running for an organization at any time, I think that providing a similar environment to run batch jobs becomes Salesforce's responsibility, especially when many customers want to migrate their applications to Salesforce because of its GUI, Sales Cloud and Service Cloud features, and the ability to quickly develop applications. 100 jobs in the Flex Queue is a good start; however, this is not a sufficient number.

    Salesforce should develop facilities such as Auto Scaling, Load Balancing etc to take care of similar situations (as provided by Amazon Web Services etc).

    GUI is a strength of Salesforce. However, it appears like, it needs to strengthen batch capabilities.

    1. Tim Clair

      Hey Guru, while I appreciate a different perspective from your Mainframe work experience, you cannot really compare AWS and Salesforce. For most enterprise customers, the former is more of a private cloud, compared to a public one like Salesforce. There is a reason we have governor limits on a multi-tenant platform like Salesforce. If you really want to compare AWS capabilities, then Heroku (which runs on AWS, surprise) would be a good platform.

      1. Guru Kalle

        Hey Tim,

        You are right. We cannot compare Salesforce with Mainframe or even AWS. I just meant that some of the useful features from other domains could be added to Salesforce to make it more worthwhile.

        Yes, Salesforce is a far better platform than AWS and comes with lots of features that make it easier to use. Since it is a public cloud with lots of customers residing on it, features like governor limits need to be there. Probably, more relaxed governor limits could be offered for different licence types.

        Thanks for your reply

  2. Chris Mattison

    Pretty freaking cool, Jitendra!

  3. Tim Clair

    This is great, Jitendra. I came across your blog for the first time, and this is one of the cleanest and nicest ways to approach this problem.

  4. Sandesh kulkarni

    Nice solution Jitendra! Great work.

  5. david cereghetti

    this is exactly what I’m looking for. thanks for sharing.

  6. Nelson Chisoko

    Thank you so much Jitendra, where can I get the object definition for the AsyncQueue object, especially for the picklists etc?

  7. Zachary Alexander

    You’re my hero, Jitendra.

    I noticed that (if the Flex Queue is full) records were getting created with the Status and Retry Count fields as null. Modifying the code, or otherwise defaulting those fields to “Queued” and “0” respectively, seems to solve the problem.

    Thank you again for this awesome program. If you see this message, please let me know if I’ve made a mistake in my assessment.

    1. Ankit Agarwal

      Hello Alexander/Jitendra,

      @Zachary Alexander
      I also noticed the same thing you noticed here. I also wonder: do we need to update the status of all records to Failed when the available limit hits zero in the submitBatch method? I think we also need to update the value of error_collection_status to “not collected” in submitBatch. Could you please share your updated code?

      Thank you in advance! please help me

  8. Nazrul Amin

    Hi Jitendra,

    Would the above resolve this similar issue for me, please: “System.AsyncException: You have exceeded the maximum number (100) of Apex scheduled jobs.”

    Thanks,
    Nazrul

  9. Lydia Sharpin

    Thank you for this – I have forwarded this blog post to many people. I can tell you from experience that this approach works and really does solve a serious limitation in Enterprise Edition of Salesforce.

  10. David

    Hi, I’ve used your fix and I don’t get the error now, but I can only run 100 of them; I cannot use more than that. Maybe you know why?

  11. Ravi

    That’s great. Coming from a Mainframe background, this looks simple; from Salesforce’s perspective it’s really a great solution to overcome the governor limits.

  12. q0156

    That’s great
