This is a work of fiction. Names, characters, businesses, places, events and incidents are either the products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.
Ha ha ha, just joking... it's a combination of many real stories from my previous projects, and a few readers might be from my team itself 😉. Leave a comment, my brave teammates, if you are able to relate!
I have tried to summarize as many governor limit errors as possible, along with ways to get out of them. Most of the scenarios described below may not be the first choice of Technical Architects; however, for many reasons like budget, resource skills, available tools, and compliance, bad decisions get taken that cause a ripple effect on Salesforce scalability.
Before starting, I would encourage readers to leave a comment on this post and let everyone know: in this suspenseful story, did you catch the governor limits and solutions before they were exposed? In this blog post, the answers are mostly in invisible/white font, which will not be visible until you highlight it. This is for readers who don't want spoilers.
Let's start the story.
Part 1
There was a huge Salesforce implementation project that had been going on for the last 18 months. I joined this project almost at the end, when all major decisions had been taken and the implementation was done. There was one particular screen where a total of 14 API calls were made. The Salesforce team tried their best to convert this to a single API call and move everything else to an ETL tool; however, that was ruled out because of many non-disclosable limitations and reasons. For the sake of simplicity, let's assume the Visualforce page is designed in such a way that each section on the page needs two API callouts. The user can navigate the sections one by one or jump to the last section. If the user jumps directly to the last section, all 14 API calls will be made in a single transaction. One thing to consider here is that the API calls are dependent: you cannot call API number 2 before API 1.
Have you already started imagining problems? Well, let's wait for the invisible text.
Problem 1
You need to call two APIs in sequence, but save the response from the first API, do some processing, and use it as the request for the second API. Guess the problem? By default, callouts aren't allowed after DML operations in the same transaction, because DML operations result in pending uncommitted work that prevents callouts from executing. (Select the previous invisible text for the problem.)
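To make the trap concrete, here is a minimal sketch of the failing pattern; the object (Api_Log__c) and the Named Credential endpoints are invented for illustration:

```apex
// Sketch of the failing sequence: callout -> DML -> callout.
public with sharing class SequentialCalloutDemo {
    public static void callBothApis() {
        HttpRequest first = new HttpRequest();
        first.setEndpoint('callout:First_API/resource'); // hypothetical Named Credential
        first.setMethod('GET');
        HttpResponse firstRes = new Http().send(first);

        // Saving the intermediate response with DML leaves uncommitted
        // work pending in this transaction...
        insert new Api_Log__c(Response__c = firstRes.getBody());

        HttpRequest second = new HttpRequest();
        second.setEndpoint('callout:Second_API/resource');
        second.setMethod('POST');
        second.setBody(firstRes.getBody());
        // ...so this second callout throws System.CalloutException:
        // "You have uncommitted work pending. Please commit or rollback..."
        new Http().send(second);
    }
}
```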
Solution
Guess the solution? Create an Apex-based REST API that performs the DML operations. (Select the previous invisible text for the solution.) This is not a recommended solution; however, it was the quick way that looked feasible at that time.
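For what it's worth, here is a hedged sketch of that workaround (the service, object, and field names are hypothetical): an Apex REST resource owns the DML, and the page controller invokes it as a callout, so the uncommitted work lives and commits in a separate transaction:

```apex
// Hypothetical Apex REST service: the insert happens in its own
// transaction, so the caller's transaction never carries uncommitted work.
@RestResource(urlMapping='/apilog/*')
global with sharing class ApiLogService {
    @HttpPost
    global static Id saveLog(String responseBody) {
        Api_Log__c log = new Api_Log__c(Response__c = responseBody);
        insert log;
        return log.Id;
    }
}
```

The controller then swaps its `insert` for an HTTP POST to `/services/apexrest/apilog` (authorized with `UserInfo.getSessionId()`, with the org URL whitelisted as a Remote Site), after which the second callout proceeds without complaint.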
Problem 2
So everything was fine; the code was implemented and tested successfully. However, in the stress environment, guess what obvious error the QA team started facing? Any guess? The error was "Concurrent requests limit exceeded". Check this blog post to know the cause of this error message.
Solution
We could not use Batch, Future, or Queueable Apex, because the response needed to be returned on the Visualforce page; the user needs to change some choices before hitting the next section. So the best solution found, without hitting the concurrent request limit error, was to use the Continuation object. You can refer to this blog post to read more about it. This blog post explains how to use it in a Lightning Component.
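As a minimal sketch of what that looks like in a Visualforce controller (the endpoint and names are placeholders, not the project's actual code):

```apex
public with sharing class SectionController {
    private String requestLabel;
    public String result { get; set; }

    // Action method: returns the Continuation instead of blocking a
    // concurrent-request slot while the long-running callout executes.
    public Object loadSection() {
        Continuation con = new Continuation(60); // timeout in seconds
        con.continuationMethod = 'processResponse';
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Section_API/data'); // hypothetical endpoint
        req.setMethod('GET');
        this.requestLabel = con.addHttpRequest(req);
        return con;
    }

    // Callback: runs when the external response arrives.
    public Object processResponse() {
        HttpResponse response = Continuation.getResponse(this.requestLabel);
        this.result = response.getBody();
        return null; // re-render the page
    }
}
```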
Problem 3
Finally, we were able to make our system scalable, and long-running requests were not a problem anymore. Oh boy, we were saved. It was time to go play table tennis, foosball, and all your favorite games over the weekend.
But wait, you get a call from your technical lead saying, "Sir, the new design is not working in one particular scenario." If the user jumps directly to the last section, we have to make 14 API callouts back to back from Apex. We got an error; guess what it is this time? You cannot use more than 3 chained continuations in a single request. We had used 4 continuations. (Select the previous invisible text for the error.)
Solution
Was there any solution here? It's a Salesforce governor limit. If we don't use the Continuation object, we hit the concurrent request limit error; it's a catch-22 situation. But here is the solution: Chain Unlimited Continuation Objects.
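The linked post walks through the full recipe; the core idea, sketched below with illustrative names, is to stop chaining inside Apex and let the browser kick off each step, so every server request carries exactly one Continuation and the limit of three chained continuations is never reached:

```apex
public with sharing class MultiStepController {
    private String requestLabel;
    private Integer currentStep = 0;
    private static final Integer TOTAL_STEPS = 7; // e.g. 14 callouts, 2 per step
    public Boolean hasMoreSteps { get; private set; }

    // Invoked once per browser round trip via markup like:
    //   <apex:actionFunction name="callStep" action="{!loadNextStep}"
    //       oncomplete="if ({!hasMoreSteps}) callStep();" reRender="panel"/>
    public Object loadNextStep() {
        Continuation con = new Continuation(60);
        con.continuationMethod = 'processResponse';
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Section_API/step/' + currentStep);
        req.setMethod('GET');
        this.requestLabel = con.addHttpRequest(req);
        return con; // exactly one Continuation per request: limit never hit
    }

    public Object processResponse() {
        HttpResponse response = Continuation.getResponse(this.requestLabel);
        // ...store the response; it becomes the input of the next step...
        currentStep++;
        hasMoreSteps = currentStep < TOTAL_STEPS;
        return null; // ends this request; the page's oncomplete re-enters
    }
}
```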
Part 2
We were able to solve some governor limit errors, and the system was at least in a working state. The problem was that it was taking around 4-6 minutes to complete the end-to-end transaction, which was unacceptable. So Part 2 for me was to improve performance without impacting existing functionality. I thought I would check the source code. Oh boy, I am not sure which design pattern was used, but almost 8-9 classes were involved, and there was one class I remember that had more than 6k lines of code. Anyway, the point is, I had very little time to address this, as customers were unhappy and had threatened to abandon the system. So my next quest was analyzing debug logs. For one click, around 7 debug log files were generated. After spending hours, I found the first issue in the code. Let's repeat the question: the system was very slow, and there were two objects with 6 million+ records.
Problem 4
Any guess why Salesforce was slow? It should not be difficult to guess 🙂 It was an unindexed SOQL query. The problem was not only an indexing issue but also the way the database was designed. The code was fetching information from 5 levels of objects, and the conditions were written on relationship fields. The object on which the SOQL was primarily written had 6 million+ records. The ideal solution would have been to convert these 5 objects into one flat object (a denormalized table) with indexed fields. However, the application had already been in use for the last 2 years, and the impact would have been huge. What else could have been done to address the problem ASAP?
Solution 4
We searched the whole organization using the Force.com IDE to identify SOQL queries on the same object and which parent fields were used in conditions. We created actual fields on the child object and indexed them. (Select the previous line to know the answer.)
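To illustrate with made-up names, the change looks roughly like this; the relationship-path filter that the query optimizer cannot index becomes a filter on a local, indexable field:

```apex
// Before: the condition travels 4 relationship hops (5 objects), so no
// index on Leaf__c can serve it, and 6M+ rows get scanned.
List<Leaf__c> slow = [
    SELECT Id FROM Leaf__c
    WHERE Parent__r.Parent__r.Parent__r.Parent__r.Region__c = 'EMEA'
];

// After: the same value copied down into an indexed custom field on the
// leaf object (e.g. marked as External ID, or indexed by Salesforce support).
List<Leaf__c> fast = [
    SELECT Id FROM Leaf__c
    WHERE Region_Copy__c = 'EMEA'
];
```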
Problem 5
So in Solution 4, we identified all the fields that needed to be created on the flat object. We treated the leaf object as the flat object, because it will always have parent records (5 levels). The problem was: how do we update these fields without impacting existing code? We didn't need these fields in the current transaction, so we decided to use Asynchronous Apex, specifically Queueable Apex. The advantage of this approach was that we didn't touch any existing trigger or Apex class. Do you smell a problem here? We started getting sporadic errors: "Unable to lock row - Record currently unavailable". (Read the problem by selecting the previous text.)
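Conceptually, the Queueable looked something like this (names invented); the comment marks where the lock trouble hides:

```apex
// Hypothetical Queueable populating the denormalized copy fields.
public class PopulateFlatFieldsJob implements Queueable {
    private Set<Id> leafIds;
    public PopulateFlatFieldsJob(Set<Id> leafIds) { this.leafIds = leafIds; }

    public void execute(QueueableContext ctx) {
        List<Leaf__c> leaves = [
            SELECT Id, Parent__r.Parent__r.Parent__r.Parent__r.Region__c
            FROM Leaf__c WHERE Id IN :leafIds
        ];
        for (Leaf__c l : leaves) {
            l.Region_Copy__c =
                l.Parent__r?.Parent__r?.Parent__r?.Parent__r?.Region__c;
        }
        // 7-8k rows updated here hold their locks until this job commits;
        // past ~10 seconds, other transactions touching the same rows get
        // "UNABLE_TO_LOCK_ROW: unable to obtain exclusive access".
        update leaves;
    }
}
// Enqueued from existing code: System.enqueueJob(new PopulateFlatFieldsJob(ids));
```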
Solution 5
If we move our code into an asynchronous block and that code takes more than 10 seconds to complete, other areas of the application may receive this error. Unfortunately, in our case, around 7-8k records were going through DML at the same time, holding locks for more than 10 seconds. So, as a solution, we did what we had been trying to avoid: writing a trigger on the flat object to populate the indexed fields.
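A hedged sketch of that trigger, again with placeholder names; because it runs before insert/update, the copy fields piggyback on DML that is already happening, so there is no extra update and no extra lock:

```apex
// Placeholder names; before insert/update, resolve the 5-level value once
// per chunk and stamp it on the records already being saved.
trigger LeafTrigger on Leaf__c (before insert, before update) {
    Set<Id> parentIds = new Set<Id>();
    for (Leaf__c l : Trigger.new) parentIds.add(l.Parent__c);

    // One query climbs the remaining levels from the immediate parent.
    Map<Id, Parent__c> parents = new Map<Id, Parent__c>([
        SELECT Id, Parent__r.Parent__r.Parent__r.Region__c
        FROM Parent__c WHERE Id IN :parentIds
    ]);

    for (Leaf__c l : Trigger.new) {
        Parent__c p = parents.get(l.Parent__c);
        if (p != null) {
            // No extra DML: the field is set on the in-flight record.
            l.Region_Copy__c = p.Parent__r?.Parent__r?.Parent__r?.Region__c;
        }
    }
}
```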
Problem 6
There was some Apex code where around 7-8k records were upserted in the same transaction. This object was a vanilla object without any triggers. We added a trigger on this object to populate the fields before insert or update of the records. Guess the possible governor limit in this approach? QA and UAT testing was successful; however, our best friend, the Apex test classes, saved us by sounding an early alarm about one possible error: "Apex CPU time limit exceeded". (Select the previous invisible text for the error.) Check this blog post for best practices.
Solution 6
When we insert/update around 7k records in a single transaction, even a small piece of code can cause the "Apex CPU time limit exceeded" error. First of all, we should have avoided this huge DML in a single transaction. But the implementation was like a spider web, with huge dependencies on each other, and we didn't have much time left to perform an impact analysis, as the old code was already in production.
So how did we solve it? We broke the upsert into separate update and insert operations by introducing an extra SOQL query. On top of this, we did a dirty check before performing any update operation. (Select the previous line to know the answer.) Believe me, the dirty check removed around 70% of the DML needed in update operations.
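Roughly, with hypothetical names, both tricks look like this; one extra SOQL does the matching that the upsert did implicitly, and the dirty check keeps unchanged rows away from DML entirely:

```apex
public class LeafSaver {
    // One extra SOQL + a dirty check replace the original blanket upsert.
    public static void saveLeaves(List<Leaf__c> incoming) {
        Set<String> keys = new Set<String>();
        for (Leaf__c l : incoming) keys.add(l.External_Key__c);

        // The extra SOQL: do the matching an upsert would have done implicitly.
        Map<String, Leaf__c> existingByKey = new Map<String, Leaf__c>();
        for (Leaf__c l : [SELECT Id, External_Key__c, Region_Copy__c
                          FROM Leaf__c WHERE External_Key__c IN :keys]) {
            existingByKey.put(l.External_Key__c, l);
        }

        List<Leaf__c> toInsert = new List<Leaf__c>();
        List<Leaf__c> toUpdate = new List<Leaf__c>();
        for (Leaf__c l : incoming) {
            Leaf__c existing = existingByKey.get(l.External_Key__c);
            if (existing == null) {
                toInsert.add(l);
            } else if (existing.Region_Copy__c != l.Region_Copy__c) { // dirty check
                l.Id = existing.Id;
                toUpdate.add(l); // only rows that actually changed reach DML
            }
        }
        insert toInsert;
        update toUpdate;
    }
}
```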
A few important best practices I would like to share here:
- We cannot use a static variable to avoid recursive trigger execution if 200+ records get inserted via Apex. The static variable approach will work with Data Loader, but in the case of Apex, the trigger will not execute after the first 200 records unless it is handled specially (see the sketch after this list).
- A dirty check of records is very important. In your Apex code, before performing any update DML, check whether the data has even changed. Around 90% of the time, I have seen code where DML is done on a record that already holds the same values in the database. All you have to do is issue one extra SOQL query and compare. It saves CPU time as well as the DML governor limit. It's a game-changing approach in LDV (Large Data Volume) orgs.
- Upsert statements are very slow, even slower than a combined insert and update. Upsert gives lots of advantages; however, this needs to be considered when working with LDV. Upsert is even worse in terms of performance when we create parent-child relationships using the parent record's external Id.
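On the first point, one way to "handle it specially" (an illustrative sketch, not the only option) is to track processed record Ids instead of flipping a boolean, so chunks after the first 200 still get processed:

```apex
// Illustrative guard: a plain static Boolean set in the first chunk would
// silently skip records 201+ of the same transaction. Tracking Ids doesn't.
public class TriggerGuard {
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only the records this transaction hasn't processed yet.
    public static List<Leaf__c> unprocessed(List<Leaf__c> records) {
        List<Leaf__c> fresh = new List<Leaf__c>();
        for (Leaf__c l : records) {
            if (l.Id == null || !processedIds.contains(l.Id)) {
                if (l.Id != null) processedIds.add(l.Id);
                fresh.add(l);
            }
        }
        return fresh;
    }
}
```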
Problem 7
This will be the last problem, for the sake of keeping this blog post simple. All the above solutions worked, but there was a problem: the leaf object was growing exponentially. After the code stabilized, it reached 10 million+ records. This object was only used to display around 4-5k records on screen, in read-only format, with the help of server-side pagination. We still wanted to show the data on screen but reduce the total number of records being created in the object. Any guess?
Solution 7
We saved these 4-5k records as a single JSON file in an Attachment. On the controller side, we used a wrapper class to deserialize the records and show them back on the page. (Select the previous line to know the answer.)
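A minimal sketch of the idea with a hypothetical wrapper: one Attachment holds the entire page's data as JSON, and the controller inflates it back for server-side pagination:

```apex
public class DisplayArchive {
    // Hypothetical wrapper for one display row.
    public class DisplayRow {
        public String name;
        public Decimal amount;
    }

    // Write path: 4-5k rows become one Attachment instead of 4-5k records.
    public static void archiveRows(Id parentId, List<DisplayRow> rows) {
        insert new Attachment(
            ParentId = parentId,
            Name = 'display-rows.json',
            ContentType = 'application/json',
            Body = Blob.valueOf(JSON.serialize(rows))
        );
    }

    // Read path (controller side): inflate the wrapper list for the page.
    public static List<DisplayRow> readRows(Id parentId) {
        Attachment att = [SELECT Body FROM Attachment
                          WHERE ParentId = :parentId
                          AND Name = 'display-rows.json'
                          ORDER BY CreatedDate DESC LIMIT 1];
        return (List<DisplayRow>) JSON.deserialize(
            att.Body.toString(), List<DisplayRow>.class);
    }
}
```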
I hope this blog post gives you a different perspective while designing solutions on the Force.com platform. If you've reached this point, I am pretty sure you worked out more than half the solutions yourself.
Let me know the tale of governor limits in your project, and how you handled it, in the comment section below.