Dynamics Custom Jobs – Supercharged – Overview | Mage Series
Dynamics Custom Jobs is a robust engine that runs custom-scheduled jobs in CRM and provides a centralised view of all jobs.
Dynamics has out-of-the-box recurring system jobs. It provides the ability to create a scheduled deletion job, but that’s the extent of it for on-premise deployments.
It is possible to create a recurring job on the Power Platform using Power Automate or a web job; however, if you want to create a scheduled job in an on-premise deployment, you need to install or develop a custom solution.
Custom Jobs Solution
I created a solution that defines the job parameters, centralises control and logging, and is far more robust than the common approaches described at the end of this article.
It includes a solution installed in CRM and an optional Windows service.
- Recurrence: timer or detailed schedule, with an option to take into account working hours
- Platforms/engines: integrated into CRM (using workflows, ‘WFs’) or a Windows service
- Load balancing and contingency processes for failure recovery
- Action: either an action or a workflow
- Failure handling: retry schedule, and failure actions
- Target of execution: define a query (FetchXML)
- Supports recurrence exclusions (holidays and such)
- Detailed, centralised logging
Import the solution found at Dynamics365-YsCommonSolution.
Optionally, install the service found at DynamicsCrm-CustomJobs on one or more servers, and configure the parameters in its NLog.config file.
Defining the job
Navigate to the Yagasoft app. Create an engine instance, and then navigate to the Custom Jobs table.
Create a new job.
Let’s go through the most notable configuration parameters.
Either define a CRM WF or an Action to call; each record returned by the query is passed to a separate run of the WF or Action.
If a URL is defined, jobId and targetId are added as query string parameters, and the Serialised Input Params are sent as the request body. The URL is called using the POST HTTP verb.
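For illustration, the request the engine sends would take roughly this shape (the host, path, body, and content type below are placeholders, not values prescribed by the solution):

```http
POST https://example.com/job-callback?jobId=<custom job GUID>&targetId=<target record GUID> HTTP/1.1
Content-Type: application/json

{ "example": "contents of the job's Serialised Input Params field" }
```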
A single target can be defined through its GUID.
Multiple targets can be defined using a FetchXML query. The built-in editor can be used to aid in defining the query. I recommend using the ‘minimal editor’.
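As an illustration, a target query might look like the following FetchXML (the entity and attributes are an arbitrary example, not prescribed by the solution):

```xml
<!-- Example: all active contacts that are missing an email address -->
<fetch>
  <entity name="contact">
    <attribute name="contactid" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
      <condition attribute="emailaddress1" operator="null" />
    </filter>
  </entity>
</fetch>
```

Each record returned by the query becomes a target of the job's WF, Action, or URL call.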
In case of failure, we can do the following:
- Failure Action: define an action to be executed on failure
- Continue On Error: ignore errors and move on to the next record (in case of multiple records)
- Ignore Sub-Jobs Failure: ignore recurrent execution errors and reschedule as usual
- Retry Schedule: how often to retry the failed job
- Max Retry Count: how many times to retry the job
- Retry Expiry Action: define an action to be executed once the retry count is consumed
Define schedules to run the job. The schedule relationship is many-to-many, so a schedule can be linked to other jobs as well. A job can have multiple schedules, in which case it executes at all of their intervals combined.
It supports the following:
- Per minute, hour, day, week, and month execution
- For the weekly pattern, choose the days
- For the monthly pattern, choose the months, and days or weekly occurrence
- Define execution exceptions; for example, holidays and such
- Define exception groups; for example, group holidays with server maintenance
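The combination rules above can be sketched in Python (a toy model; the schedules and exclusions below are hypothetical examples, not part of the solution): a job fires when any of its linked schedules matches the current time and no exclusion matches it.

```python
from datetime import datetime

def should_run(now, schedules, exclusions):
    # A job fires when ANY linked schedule matches `now`
    # and NO exclusion (e.g. a holiday or maintenance group) matches it
    if any(excludes(now) for excludes in exclusions):
        return False
    return any(matches(now) for matches in schedules)

# Hypothetical schedules: weekdays at 09:00, plus the 1st of every month
weekday_9am = lambda t: t.weekday() < 5 and t.hour == 9
first_of_month = lambda t: t.day == 1
# Hypothetical exclusion: New Year's Day
new_year = lambda t: (t.month, t.day) == (1, 1)

print(should_run(datetime(2024, 3, 6, 9, 0),
                 [weekday_9am, first_of_month], [new_year]))  # a regular Wednesday at 09:00
```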
Some parameters need to be defined for the Custom Jobs solution in the Common Configuration table.
The custom jobs can be run in two modes: CRM or Service.
CRM mode
I do not recommend running in CRM mode unless absolutely necessary. It is an old mode that uses WFs to stay in a loop and execute jobs through Dynamics’ async process.
I explain it in detail in the General Approach section at the end of this article.
When you choose this mode, you have to create an ‘engine instance’.
After creating the instance (only one is supported), make sure to start the ‘engine’.
Service mode
This is the preferred mode. After installing the service on a server, configure the following parameters:
- ServiceId: give the installed service a unique ID, in case you intend to use more than one service installation to load balance the executions
- Connections (nodes):
  - List the connections to use if you want to load balance across multiple CRM front-end servers
  - Or, simply point to CRM’s load balancer and use one connection
- How many internal connections to create per node:
  - The more connections, the better the performance
  - I recommend a max of 20 connections
- The percentage of ‘waiting’ jobs to grab from CRM for this service:
  - Those jobs are reserved for this service for a time, after which they time out and become free for other services
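To illustrate the reservation mechanic, here is a toy model (not the actual service code): each service grabs its configured share of the ‘waiting’ jobs, and reservations that outlive the timeout fall back into the shared pool for other services to claim.

```python
import time

class WaitingJobPool:
    """Toy model of the shared pool of 'waiting' jobs in CRM."""

    def __init__(self, job_ids):
        self.waiting = set(job_ids)
        self.reserved = {}  # job_id -> (service_id, reservation deadline)

    def claim(self, service_id, percentage, timeout_s, now=None):
        now = time.monotonic() if now is None else now
        # Reservations past their timeout fall back into the pool
        for job_id, (_, deadline) in list(self.reserved.items()):
            if now >= deadline:
                del self.reserved[job_id]
                self.waiting.add(job_id)
        # Grab this service's configured share of the waiting jobs
        count = int(len(self.waiting) * percentage / 100)
        grabbed = sorted(self.waiting)[:count]
        for job_id in grabbed:
            self.waiting.remove(job_id)
            self.reserved[job_id] = (service_id, now + timeout_s)
        return grabbed
```

Two services configured at 50% each would split the pool between them, and a crashed service's jobs are picked up automatically once its reservations expire.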
Next, configure the parameters in CRM in the Common Configuration table:
- Job Check Interval: the interval, in minutes, to check for ‘waiting’ jobs in CRM
- Max Jobs in Parallel: max number of jobs to run in parallel
- Job Timeout: time, in minutes, to wait for a job to execute, after which it is released into the queue again
- Target Execution Mode: used when a query is defined in the job
- Sequential: trigger the action on each record one by one
- ExecuteMultiple (recommended): build an ExecuteMultiple request for each group of 1000 records
- Threaded: trigger the action on each record using threads to speed things up (similar to Sequential, but threaded)
- Max Degree of Parallelism: max number of records to run the action in parallel (in threaded mode)
The maximum number of parallel calls to CRM will always be the minimum of Max Degree of Parallelism and the number of connections available to the service.
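The three target execution modes and the parallelism cap can be sketched as follows (a simplified Python model; `action` stands in for the job's WF/Action call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_sequential(records, action):
    # Sequential: call the action on each record, one by one
    return [action(record) for record in records]

def run_batched(records, action, batch_size=1000):
    # ExecuteMultiple-style: process records in groups of up to `batch_size`;
    # in CRM, each group would be sent as a single ExecuteMultiple request
    results = []
    for i in range(0, len(records), batch_size):
        results.extend(action(record) for record in records[i:i + batch_size])
    return results

def run_threaded(records, action, max_degree_of_parallelism, connection_count):
    # Threaded: concurrent calls are capped by the smaller of
    # Max Degree of Parallelism and the available connections
    workers = min(max_degree_of_parallelism, connection_count)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(action, records))
```

Batching trades per-record feedback for fewer round trips, which is why ExecuteMultiple is the recommended mode for large result sets.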
Availability and load-balancing
To achieve higher availability or load balance among servers:
- Install more than one service on different servers
- Give each service a unique ID by setting the ServiceId parameter
- Allocate each service a share of the jobs by setting the percentage of ‘waiting’ jobs to grab
General Approach
One way to execute recurring logic is to write external code in a console app, completely outside of Dynamics. The code can be triggered using any method; for example, the Windows task scheduler. The downside is not using Dynamics’ UI for defining logic, and not centralising control over the jobs.
There are three main ways to implement a job scheduler in Dynamics, but all require the following:
- A table to hold the job info:
  - How often to execute, or when to execute
  - What to execute
  - Which data to execute on
  - How to handle errors
- An engine to run the job with the info above:
  - It has to be robust enough to guarantee execution
  - It has to log somewhere to aid in troubleshooting
- A dashboard or view to monitor execution
Looping workflows
This is the classic, simplest approach: create a WF that waits at least 1 hour, looks for jobs to run, and then re-triggers itself to stay in a 1-hour loop.
Dynamics has an ‘infinite loop’ validation for custom code: it throws an error if the same code triggers itself 8 or more times in a row. The reason for the 1-hour wait is that after 1 hour of idle time, CRM resets the ‘infinite loop’ counter.
To execute jobs at a finer interval, you can use two or more WFs that loop around each other.
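The timing trick can be checked with a quick sketch (illustrative Python, not part of the solution): n hourly-looping WFs, started 60/n minutes apart, fire every 60/n minutes overall.

```python
def fire_times(num_workflows, period_minutes=60, horizon_minutes=180):
    # Each workflow loops every `period_minutes`; workflow w starts
    # offset by w * period / n, so the combined firing interval is period / n
    times = []
    for w in range(num_workflows):
        t = w * period_minutes // num_workflows
        while t < horizon_minutes:
            times.append(t)
            t += period_minutes
    return sorted(times)

print(fire_times(2))  # two hourly WFs offset by 30 minutes fire every half hour
```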
The downside to this approach is that it depends on Dynamics’ async service, a Windows process that tends to fail at random and crash the scheduler engine you created, which then requires manual intervention to restart. In addition, it puts extra load on the async service.
Windows Task Scheduler
In this approach, we simply create code that is triggered by the Windows task scheduler to read and execute the jobs defined in CRM. The downside is that every run pays the cost of initialising a connection to CRM, which hurts performance.
Windows Service
The final approach is to create a full-on Windows service. The service keeps checking CRM for jobs and optimises its own resources to execute them. This is by far the best approach, offering the most balanced pros and cons.
The Custom Jobs solution fills a gap between running decentralised periodic external custom code that calls CRM and defining scheduled jobs on the Power Platform.