Salesforce Lifecycle and Tooling: Testing on Multiple Org Types with Salesforce DX and CircleCI Workflows

Let’s suppose you’re running a successful continuous integration program, using Salesforce DX and CircleCI or another continuous integration provider. Your automated testing is in place and working well. But the code you’re building has to work in a number of different environments. You might be an ISV, an open-source project, or an organization with multiple Salesforce instances and a shared codebase, and you need to make sure your tests pass in a standard Enterprise edition org as well as in a Person Accounts instance, a Multi-Currency org, a Professional edition org, or any number of other combinations of Salesforce editions and features.

Salesforce DX and CircleCI make it very easy to automate running tests against these different Salesforce environments, and to do so in efficient, parallel, isolated testing streams. The process is built in three steps:

  1. Define organization types and features in Salesforce DX Scratch Org Definition Files in JSON format.
    • Optionally, define additional Metadata API or package installation steps to complete preparation of a specific org for testing.
  2. Define jobs in the CircleCI configuration file, either by scripting each environment’s unique setup individually or by referencing a common build sequence in the context of each org type.
  3. Define a workflow in the CircleCI configuration file that runs these jobs in parallel.

This article assumes that you’ve followed Salesforce Lifecycle and Tooling: CircleCI and Salesforce DX and are using a fairly similar config.yml. However, the principles are transferable to other continuous integration environments and build sequences.

Defining Organization Types and Features

Salesforce DX scratch org definitions don’t need to be complex, and are defined in JSON. This is a simple example that adds a feature (Sites) to the default configuration:

    "orgName": "David Reed",
    "edition": "Enterprise",
    "features": ["Sites"],
    "orgPreferences" : {
        "enabled": ["S1DesktopEnabled"]

The feature set that is accessible through the org definition file is still somewhat in flux. New features are being added, and some important facets are still not available. The best references for what is available are the Salesforce DX Developer Guide and the Salesforce DX group in the Trailblazer Community.

Org definition files live in the config directory in a DX project. When you create a scratch org, you provide a definition file with the -f switch; you’re free to add multiple definition files to your repository.
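For example, assuming a Dev Hub already authorized under the alias DevHub and the developer.json definition file used later in this article, creating a scratch org locally from that definition might look like this (the scratch org alias here is arbitrary):

    sfdx force:org:create -v DevHub -s -f config/developer.json -a dev-scratch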

Note that we’re not discussing the Org Shape feature here, which is currently in pilot. Once Org Shape becomes generally available, it will offer additional capabilities for defining and creating types of environment.

Define Jobs in CircleCI

Each organization definition we want to test against is represented as a job entry in the CircleCI config.yml.

version: 2
jobs:
  build-enterprise:
    # ... build steps for the Enterprise org ...
  build-developer:
    # ... build steps for the Developer org ...

We can define an arbitrary number of these jobs.

If we define jobs by copying and pasting the core SFDX build job, our config.yml can become unwieldy and difficult to maintain. If there’s a lot of setup work that differs significantly between the org definitions, though, that approach might be necessary nonetheless.

However, if the job definitions vary by little more than the name of the scratch org definition file, we can take advantage of YAML’s aliasing feature to template our core build instructions into each job, while using environment variables to define the differences between them.

Here’s what it looks like. (The complete config.yml file is available on GitHub.)

job-definition: &jobdef
        - image: circleci/node:latest

&jobdef defines an alias, a name to which we can refer to include the following material, which we’ve factored out from the core config.yml developed previously. To that core build sequence, we make just one change, in the “Create Scratch Org” step:

    - run: 
        name: Create Scratch Org
        command: |
            node_modules/sfdx-cli/bin/run force:auth:jwt:grant --clientid $CONSUMERKEY --jwtkeyfile assets/server.key --username $USERNAME --setdefaultdevhubusername -a DevHub
            echo "Creating scratch org with definition $SCRATCH_DEF"
            node_modules/sfdx-cli/bin/run force:org:create -v DevHub -s -f "config/$SCRATCH_DEF" -a scratch

Note that we’re using a new environment variable, $SCRATCH_DEF, to store the name of the definition file we want to use. We’ll take advantage of that when we template this core mechanic into the individual jobs that define builds for each type of org.

Below this alias definition, at the top level of config.yml, we’ll start our jobs list:

version: 2
jobs:
  build-enterprise:
     <<: *jobdef
     environment:
        SCRATCH_DEF: project-scratch-def.json
  build-developer:
     <<: *jobdef
     environment:
        SCRATCH_DEF: developer.json

Here, we define two jobs, one per scratch org. Each one pulls in the entire core build sequence &jobdef, including all of the build steps we’ve defined. Within each job, we assign a value to the environment variable $SCRATCH_DEF, which the build will use to create its unique scratch org.

Each of these jobs will run in a separate, isolated container, and each will use its own scratch org. We’ll get independent test results for each org definition, ensuring that our code’s behavior is correct in each org separately from the others.

This form can be extended even if your different org definitions require more configuration than is possible through the definition file. For example, each org might require installation of a different managed package with (for example) sfdx force:package:install -i $PACKAGE_ID. Or you might need to perform a different Metadata API deployment with sfdx force:mdapi:deploy -d "$MD_API_SRC_DIR" -w 10. Provided the build processes are structurally similar, templating and environment variables can help express them concisely and make the build easy to maintain.
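Here’s a sketch of how that might look with the templated approach: each job supplies its own extra variable alongside $SCRATCH_DEF, and the shared build sequence gains a corresponding step. (The package Id below is only a placeholder, and the step shown is hypothetical; adjust it to whatever your orgs actually need.)

  build-enterprise:
     <<: *jobdef
     environment:
        SCRATCH_DEF: project-scratch-def.json
        PACKAGE_ID: 04t...   # placeholder: package version Id to install for this org's tests

# ...and, inside the steps of &jobdef, an additional step such as:
    - run:
        name: Install Managed Package
        command: |
            node_modules/sfdx-cli/bin/run force:package:install -i "$PACKAGE_ID"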

There’s always the option, though, of copying and modifying our core build sequence into any individual job or set of jobs, making as many modifications as necessary. CircleCI will run them all the same, whichever route we take.

Complete the Process with CircleCI Workflow

The final step is to create a workflow entry in config.yml. The workflow ties together the different build jobs and expresses any dependencies between them. Lacking dependencies, the jobs will run in parallel, using as many containers as you have available.

workflows:
  version: 2
  build_and_test:
    jobs:
      - build-enterprise
      - build-developer
      - static-analysis

Here, we define a three-job workflow - one each for the two org definitions against which we want to test, and a third job for our PMD static analysis (see Integrating Static Analysis with PMD in the Salesforce Development Lifecycle). When we push to Git, CircleCI will initiate these three jobs in parallel. Each will succeed or fail individually, and you’ll get status indicators in GitHub for each job.
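If you do want to express a dependency - say, running the org-specific builds only after static analysis passes - CircleCI’s requires key handles it. Here’s a minimal sketch of that variant (the workflow name is arbitrary, and this is not the configuration used in the build above):

workflows:
  version: 2
  build_and_test:
    jobs:
      - static-analysis
      - build-enterprise:
          requires:
            - static-analysis
      - build-developer:
          requires:
            - static-analysis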

(Image: GitHub results, showing a status check for each workflow job.)

The workflow as a whole shows success or failure aggregated from its component jobs, and you can rerun the entire workflow or individual failed jobs as needed.

So there we have it: our code, tested instantly and efficiently against as many different Salesforce orgs as we need - subject, of course, to your organization’s scratch org limits!

Everyday Salesforce Patterns: Child-Parent SOQL on Task and Event

Performing child-parent SOQL is more complex than usual when the Task and Event objects are involved. That’s because these objects include polymorphic lookup fields, WhoId and WhatId, which can point to any one of a number of different objects.

A feature called SOQL polymorphism, which would offer a way to obtain details about Task and Event parents in pure SOQL, is currently in Developer Preview; unless and until it’s made generally available, Apex is required to query parent details for these objects beyond a tiny subset of Name-related fields. Below is an example of this pattern as it might be applied in a trigger. The core of the pattern is the following steps:

  1. Iterating over the Task or Event records and accumulating the WhatId or WhoId values on a per-object basis;
  2. performing a single SOQL query per parent object type;
  3. creating a Map<Id, sObject> to allow indexing the WhoId/WhatId into the query results;
  4. and finally iterating a second time over the Task or Event records to perform work with the parent information available through the Map.

This skeleton implementation sets a checkbox field called High_Priority__c on the Task when its WhatId is either an open Opportunity or an Account whose AnnualRevenue is greater than one million dollars. For an Account, we also set a field to indicate that a high-priority task is present on the parent. (This requirement, of course, is contrived.) Note that the pattern works the same way whether we’re looking at WhoId or WhatId, and whether or not we’re in a trigger context.

trigger TaskTrigger on Task (before insert) {
    // In production, we would use a trigger framework; this is a simple example.

    // First, iterate over the set of Tasks and look at their WhatIds.
    // Relationship fields like What aren't populated on Trigger.new, so we identify
    // which parent object each WhatId corresponds to with Id.getSobjectType()
    // (we could also cast the Id to a String and check its first three characters
    // against the object's key prefix).
    // We'll accumulate the WhatIds in Sets to query (1) for Account and (2) for Opportunity.
    Set<Id> accountIds = new Set<Id>();
    Set<Id> oppIds = new Set<Id>();

    for (Task t : Trigger.new) {
        if (t.WhatId != null) {
            if (t.WhatId.getSobjectType() == Account.sObjectType) {
                accountIds.add(t.WhatId);
            } else if (t.WhatId.getSobjectType() == Opportunity.sObjectType) {
                oppIds.add(t.WhatId);
            }
        }
    }

    // We will query into Maps so that we can easily index into the parent with our WhatIds.
    Map<Id, Account> acts;
    Map<Id, Opportunity> opps;
    Map<Id, Account> actsToUpdate = new Map<Id, Account>();

    // Now we can query for the parent objects.
    // Here, the parent object logic is entirely contained in the query;
    // it could also be implemented in the loop below.
    acts = new Map<Id, Account>([SELECT Id FROM Account WHERE Id IN :accountIds AND AnnualRevenue > 1000000]);
    opps = new Map<Id, Opportunity>([SELECT Id FROM Opportunity WHERE Id IN :oppIds AND IsClosed = false]);

    // We re-iterate over the Tasks in the trigger set and alter their fields based on the information
    // queried from their parents. Note that this is a before insert trigger, so no DML is required
    // to save the changes to the Tasks themselves.
    for (Task t : Trigger.new) {
        if (acts.containsKey(t.WhatId) || opps.containsKey(t.WhatId)) {
            // With more complex requirements, we could source data from the parent object
            // rather than simply making a decision based upon the logic in the parent queries.
            t.High_Priority__c = true;

            // We also want to update the parent object if it's an Account.
            if (acts.containsKey(t.WhatId)) {
                Account a = acts.get(t.WhatId);

                a.Has_High_Priority_Task__c = true;
                actsToUpdate.put(a.Id, a);
            }
        }
    }

    update actsToUpdate.values();
}

This example of the pattern assumes we’re starting from the Task and making some decision based on information in the parent. In other situations, we might query first for a set of Tasks in which we’re interested (perhaps applying a filter on WhatId or WhoId, or What.Type or Who.Type), follow a similar pattern to source parent information, and then update the parent records - or a different object entirely. The skeleton of the solution, however, will remain the same.
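As a minimal sketch of that query-first variant - runnable as anonymous Apex, reusing the contrived Has_High_Priority_Task__c field from above, and with purely illustrative Task filter criteria - it might look something like this:

// Flag Accounts that have open, high-priority Tasks logged against them.
Set<Id> accountIds = new Set<Id>();

// Query first for the Tasks of interest, filtering on the polymorphic lookup's Type.
for (Task t : [SELECT Id, WhatId
               FROM Task
               WHERE What.Type = 'Account' AND IsClosed = false AND Priority = 'High']) {
    accountIds.add(t.WhatId);
}

// Then query and update the parent records (or another object entirely).
List<Account> parents = [SELECT Id FROM Account WHERE Id IN :accountIds];
for (Account a : parents) {
    a.Has_High_Priority_Task__c = true;
}
update parents;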

Continuous Integration Talk at PhillyForce

I’m delighted to announce that I will be presenting a talk on continuous integration practices and principles using Salesforce DX at the 6th annual PhillyForce conference, part of Philly Tech Week 2018!

Registration for the free conference is available now. I’m very excited to be part of this great event.

Deduplicating File Trees with Python

I have three computers and two phones, and I use Dropbox to sync photos between all of my devices. But from time to time I’ve broken the Dropbox sync for some reason, or changed my partition table, or set up a machine not to sync, or started (and perhaps not finished) an organization project. The end result was a solid half-dozen different versions of my photo archive, which had diverged from one another not only in content and editing status but also in organization.

I needed a way to merge these archives together without losing edited versions of photos or undoing the album organization that was present in some, but not all, of the directories. To solve that issue, I wrote dedupe_trees.

This Python tool walks a set of directories and deduplicates them in a configurable way, letting you prefer newer or older file versions, versions from one tree over another, versions located deeper or shallower in your folder hierarchy, and drop versions that appear to be simple copies. It can remove duplicates, label them for your handling, or move them to a separate file tree. Most importantly, you can apply a sequence of duplicate resolvers - like “first remove all obvious copies, then prefer the version sorted deeper in the tree, then choose the most recently modified version”.

dedupe_trees works on Linux and Mac OS X (provided Python 3 is installed) and is under the MIT License. And yes, I did test it on my photo archive (after all the unit tests passed). It worked.

Everyday Salesforce Patterns: The Wrapper Class

The Salesforce platform has great reference documentation, great intro training through Trailhead, and some excellent books and resources on enterprise design patterns. What’s less canonically covered, in the resources I’m familiar with, are the everyday patterns: the idiomatic implementation tools that experienced developers use and adapt every day. I want to contribute to filling this void with this series, starting with some discussion of wrapper classes.

It’s extremely common in Visualforce development to need to perform some transformation on data queried out of Salesforce before displaying that data. This can include enriching one object with data from another (which is not its parent or child); ‘framing’, or presenting multiple unrelated objects in a single flat list, such as an <apex:pageBlockTable>; applying a mapping table to values in an object’s fields; or appending summary data calculated in Apex.

An apt solution to all of these needs is the wrapper class pattern. Every wrapper class looks a little different, because it’s highly specific to the individual use case and Visualforce page. The overall pattern often looks rather like this example, which wraps either a Contact or a Lead in an inner class, called Wrapper, of the page controller. Wrapper classes do not have to be inner classes, but the use of an inner class is common and effective.

Wrapper classes are in some ways similar to structures, union types, or algebraic types provided by other languages, to the extent such patterns are possible with the limited introspection and dynamism available in Apex.

public with sharing class ApexController {
    public class Wrapper implements Comparable {
        public Lead ld { get; private set; }
        public Contact ct { get; private set; }
        // Included for easier conditional rendering.
        public String dataType { get; private set; }
        public String calculatedTitle { get; private set; }
        // and so on... include more calculated fields here.

        public Wrapper(Lead l) {
            ld = l;
            dataType = 'Lead';
            // Note that we are assuming the Name field is queried.
            // We must enforce Field-Level Security ourselves here (see below).
            if (Schema.sObjectType.Lead.fields.Name.isAccessible()) {
                calculatedTitle = 'Lead: ' + ld.Name;
            } else {
                calculatedTitle = 'Lead';
            }
        }

        public Wrapper(Contact c) {
            ct = c;
            dataType = 'Contact';
            if (Schema.sObjectType.Contact.fields.Name.isAccessible()) {
                calculatedTitle = 'Contact: ' + ct.Name;
            } else {
                calculatedTitle = 'Contact';
            }
        }

        public Integer compareTo(Object other) {
            Wrapper o = (Wrapper)other;

            // Perform some comparison logic here, such as comparing
            // the `SystemModStamp` of the embedded sObjects, or
            // comparing the `calculatedTitle` properties.
            // You can even sort by type by inspecting the dataType field,
            // or, if storing sObject instances, with their
            // `sobjectType` field.

            return 0;
        }
    }

    // Declare our public property for Visualforce as a list of
    // Wrappers (not List<sObject>)
    public List<Wrapper> wrappers { get; private set; }

    public ApexController() {
        List<Contact> cts = [SELECT Id, Name, Account.Name
                             FROM Contact
                             WHERE LastName LIKE 'Test%'];

        wrappers = new List<Wrapper>();

        for (Contact ct : cts) {
            wrappers.add(new Wrapper(ct));
        }

        List<Lead> leads = [SELECT Id, Name, LeadSource
                            FROM Lead
                            WHERE LeadSource = 'Web'];

        for (Lead l : leads) {
            wrappers.add(new Wrapper(l));
        }
    }
}
In your Visualforce page, you can use conditional rendering to select which set of data points to show based on whether you have a Contact or a Lead inside each wrapper. For example:

<apex:repeat value="{! wrappers }" var="w">
    <!-- We can dynamically select CSS classes based on what each wrapper contains -->
    <div class="{! IF(w.dataType = 'Lead', 'div-class-lead', 'div-class-contact') }">
        <apex:outputText value="{! w.calculatedTitle }" /><br />
        <apex:outputText rendered="{! w.dataType = 'Lead' }" value="{! 'Lead Source: ' + w.ld.LeadSource }" />
        <apex:outputText rendered="{! w.dataType = 'Contact' }" value="{! 'Account: ' + w.ct.Account.Name }" />
    </div>
</apex:repeat>

This is a very simple example; you can do quite a bit with conditional rendering to present the wrapper’s information most appropriately.

In the case that you’re using an <apex:pageBlockTable> with <apex:column> entries, you might simplify your presentation by creating more calculated properties within your wrapper and keying your columns directly to those Wrapper instance variables, rather than using extremely complex conditional rendering. For example, if you were presenting a synthetic Contact “timeline” composed of Activities and Campaign Member records, your wrapper object might consist almost entirely of a set of calculated properties like Title, Date, and Description - you might not even store the sObjects at all!
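Here’s a compressed sketch of what such a timeline wrapper might look like; the field choices and formatting are purely illustrative, and it could just as easily be an inner class of the page controller:

public class TimelineEntry implements Comparable {
    public String title { get; private set; }
    public Date entryDate { get; private set; }
    public String description { get; private set; }

    public TimelineEntry(Task t) {
        title = t.Subject;
        entryDate = t.ActivityDate;
        description = t.Description;
    }

    public TimelineEntry(CampaignMember cm) {
        // Assumes Campaign.Name was included in the query.
        title = 'Campaign: ' + cm.Campaign.Name;
        entryDate = cm.CreatedDate.date();
        description = cm.Status;
    }

    public Integer compareTo(Object other) {
        // Sort newest entries first; entries without a date sort last.
        TimelineEntry o = (TimelineEntry)other;
        if (entryDate == o.entryDate) return 0;
        if (entryDate == null) return 1;
        if (o.entryDate == null) return -1;
        return entryDate > o.entryDate ? -1 : 1;
    }
}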

It’s important to bear in mind that processing sObject results into other data structures, like wrapper classes, makes it necessary to manually enforce CRUD and FLS permissions. The automatic support provided by Visualforce relies upon using sObjects and fields directly, so wrapper objects that restructure this data come with a requirement to enforce these permissions appropriately. An example of this enforcement is present on the calculatedTitle field of the wrapper object in this pattern.