Salesforce provides many ways to integrate with external systems, such as SOAP, REST, Bulk API, User Interface API, and so on. One useful way to integrate an existing web application with Salesforce is using Canvas.
For the sake of this post, I'm using a Node.js application; the complete source code can be found here, on my GitHub repository. It can be deployed on Heroku easily; however, I used my local computer to run the canvas app. That also proves the point that the integration happens via the browser, and therefore the canvas application can be hosted on premises and does not necessarily need to sit in the DMZ layer.
Step 1: Create Connected App in Salesforce
Enable OAuth in the connected app and provide any callback URL. A canvas app does not use the callback URL; however, we DO need the scope.
The next step is enabling canvas on the connected app itself, along with the locations where we would be using it.
In your CI/CD process, it is a very common scenario that you need to know the names of all files that are part of a pull request. For example, in Salesforce you may want to perform a delta deployment with only the components that are part of a user story.
The shell script below demonstrates how we can read all the file names and iterate through them. For demo purposes, I'm just adding a white space at the end of each file; however, you can do anything as per your continuous integration pipeline requirements.
#!/bin/bash
# File name - AddWhitespace.sh
# Read the list of all unique files in a pull request and store it as an array
echo "Provide Pull Request Number"
read prNumber
echo "You entered $prNumber"

# Read all files that are part of commits referencing the pull request number
fileNames=$( git log origin/remoteBranchName --grep "$prNumber" --pretty=format: --name-only | grep -v -e "^$" | sort | uniq )

# Convert the variable to an array (one entry per line)
IFS=$'\n' array=($fileNames)

echo "------ Printing file names"
for element in "${array[@]}"
do
    echo "Trying to add white space in $element"
    printf " " >> "$element"
done
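To run the script, make it executable and invoke it from the repository root (the remote branch name inside the script is a placeholder and should point at your actual branch):

chmod +x AddWhitespace.sh
./AddWhitespace.sh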
The command below can be executed from anywhere on your system; it tells Git to store your credentials in the Windows Credential Manager.
git config --global credential.helper wincred
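To confirm the helper has been set, you can read the value back with a standard Git command, shown here just as a quick sanity check:

git config --global --get credential.helper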
Turn off the warning _LF will be replaced by CRLF_
git config core.autocrlf true
or
git config --global core.autocrlf true
On Unix systems, the end of a line is represented with a line feed (LF). On Windows, a line is represented with a carriage return (CR) and a line feed (LF), thus CRLF. When you get code from Git that was uploaded from a Unix system, it will only have LF line endings.
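For reference, these are the commonly used values of core.autocrlf; which one you pick depends on your platform and team conventions:

# Windows: convert LF to CRLF on checkout and CRLF back to LF on commit
git config --global core.autocrlf true
# Unix / Mac: leave files untouched on checkout, convert CRLF to LF on commit
git config --global core.autocrlf input
# No conversion at all
git config --global core.autocrlf false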
I have read many posts and watched videos to understand microservices precisely; however, I found Martin Fowler's explanation of microservices the most helpful. This blog post is just a recap and summary of what I understood about Microservices Architecture.
Characteristics of Microservices
Build services in the form of components
Components are independently replaceable and upgradeable
Components can be a combination of libraries and services
Services can be built in different languages, and services can communicate with each other
Organized keeping business rules in mind
Traditionally (monolithic), services were organized around technical aspects, with different services related to UI, database, server, etc.
Microservices suggest grouping them by business capabilities like Shipping, Order, Catalog, etc.
Smart endpoints and dumb pipes
In an ESB (aka spaghetti box 😉 lol), we tend to add all the smartness in the ESB itself, and the endpoint is just a dumb receiver where the consumer gets preprocessed data
Microservices, on the other hand, encourage dumb pipes (the ESB) and smart endpoints
Decentralized Governance or Data Governance
Every service should be responsible for its own database and persistence
Services can't communicate with other services' databases directly; it should be via APIs only (mostly inspired by Amazon's two-pizza team model)
Every service can use different languages or tools
Infrastructure Automation
Continuous Delivery is very important for each service to make sure there is minimal or no downtime
Top-class monitoring capabilities to analyze degraded performance or downtime
Important to have a rollback plan and the ability to spin up a new server in case a server or service fails
Design for failure
As there could be many microservices, it is inevitable that some of them will fail.
Companies like Netflix have an application (Chaos Monkey) which goes out and deliberately fails their microservices at random
It is important to perform these kinds of exercises to understand how resilient the network and microservices are.
A Data Warehouse is also known as an Enterprise Data Warehouse (EDW). The Data Warehouse is used as the source for Business Intelligence reporting and analysis. A Data Warehouse system collects data from multiple sources and contains historical data for trend-analysis reporting. ETL tools are mostly used to build the Data Warehouse and the interfaces around it. The Data Warehouse acts as the single version of truth.
Data warehouse overview (From Wikipedia)
2. Operational Data Store (ODS)
The Operational Data Store is frequently confused with the Data Warehouse, and their definitions overlap. Some of my clients used the word ODS instead of Data Warehouse, which confused me on a number of occasions. As per my understanding and research, an ODS is used to integrate data from multiple systems and feed it to the Data Warehouse. The Data Warehouse contains the complete history of data, whereas the ODS contains the latest or most recent data (a short window of data). The data load frequency for an ODS is mostly hourly, whereas the data load frequency for a Data Warehouse is mostly nightly because of data volume. The most important reason to have an ODS in your company is the ability to run reports in near real time when the source system does not have the required reporting capabilities.
3. Data Mart
A Data Warehouse can contain many Data Marts. Mostly, a Data Mart is created per business line or per system that needs data from the Data Warehouse. Indirectly, we can say a Data Mart is the access layer used by other systems to get data out of the Data Warehouse.
4. Data Lake
The term Data Lake was coined by James Dixon, CTO of Pentaho, to contrast with the Data Mart. As per James, Data Marts have several problems, mostly related to data silos. A Data Lake is a method of storing data from sources in its actual or raw format, which could be relational data, XML, flat files, or even binary files. Other tools, like ETL, access the Data Lake as needed for reporting or analysis purposes.
This is not an educational, technical, or Salesforce-related post. It is the source code of the family fun game Tambola.
Free online Tambola Game
This weekend, all of us friends gathered together to play one of the most famous Indian games, Tambola. This game is also known as Housie or Bingo. Normally there is a central pot where numbers between 1 and 90 are placed on pieces of paper or plastic balls. The host reveals numbers one by one by randomly selecting those balls or pieces of paper from the pot. The problem for us was that we all have children, and there was no way they would allow us to play that way; instead of the host picking numbers, our kids would have played with them. So what I did was build a simple HTML page with a random number generator between 1 and 90. I plugged my laptop into a Chromecast and voila, everyone in the room was playing a traditional game in a modern way 😉
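The actual game ran as an HTML page, but as a rough sketch of the same draw logic, the following few lines (assuming a system with the shuf utility from GNU coreutils) print the numbers 1 to 90 in random order without repetition, one per key press:

# shuf shuffles the full range 1-90, so every number appears exactly once
for number in $(shuf -i 1-90); do
    read -p "Press Enter to reveal the next number... "
    echo "Number: $number"
done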
Recently, for one of my clients, I needed to set up Git on their intranet. Being in the healthcare industry with compliance concerns, source code could not leave the company network, and therefore an in-house solution was needed for Source Code Management (SCM) and automated builds.
In this blog post, I will explain how we set up a Git server that is accessible from anywhere within the company network.
We need the below two pieces of software installed on the system which will act as the Git server.
A few months back I bought a new high-end laptop with an i7 processor and 16 GB of RAM. I decided to give an SSD a shot over a conventional hard disk. The performance of my system is incredibly fast; I have SQL Server, Jenkins, command-line Data Loader jobs, and an Apex static code analyzer all running at almost the same time. The Windows OS boots up in only 2-3 seconds, compared to 15-25 seconds previously. However, because of the SSD decision, I had to compromise on storage capacity. My "C" drive is only around 150 GB; however, the D drive has a lot of space. After analyzing many folders, I found that the Google Chrome browser creates its temporary folder on the "C" drive even though I installed it explicitly on the "D" drive. I wanted to move the "AppData" folder of Google Chrome to the "D" drive to make sure I have enough space on the "C" drive.
I came across the "symbolic link" concept in operating systems. With a symbolic link, a folder points to another location, and it is a very useful technique to solve storage problems. We can create symbolic links for many folders from the "C" drive to any other location where we have enough space.
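As an illustration only (the folder paths below are examples and should be adjusted to your own user profile), moving Chrome's data folder and linking it back could look like this from an elevated Command Prompt:

:: Move the existing Chrome data folder to the D drive
robocopy "C:\Users\<user>\AppData\Local\Google\Chrome" "D:\ChromeData" /E /MOVE
:: Create a directory symbolic link so Chrome still finds its data at the old path
mklink /D "C:\Users\<user>\AppData\Local\Google\Chrome" "D:\ChromeData"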
Recently, I came across a few Git errors and found them very time consuming to fix. Let's discuss what those errors are and how we can fix them.
Error: Permission denied (publickey). fatal: could not read from remote repository
Git permission denied error
This error came up while trying to push changes to a remote repository using SSH keys. The error means we need to provide information about the SSH key; this can be done by setting the environment variable GIT_SSH.
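As an example (the paths below are illustrative; point GIT_SSH at whichever SSH client you actually use), the variable can be set like this:

:: On Windows, e.g. when using PuTTY's plink as the SSH client
set GIT_SSH=C:\Program Files\PuTTY\plink.exe

# On Unix / Mac
export GIT_SSH=/usr/bin/ssh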
We have already discussed the basics of Selenium and how we can use this tool for automated testing. Here we will see how we can take advantage of Selenium to test a workflow field update. In this article we will create a simple workflow rule on the Lead object and update the "Description" field by adding the fields "Number of Employees" and "Number of Locations". We can then use Selenium to test whether the workflow is working or not.