Three Routes to Time-Based Rollup Summary Fields

Rollup Summary Fields are great. Users love them because they offer at-a-glance insight at the top level of the data hierarchy, without needing to run a report or drill down in the user interface to child records. Unfortunately, native Rollup Summary Fields come with a variety of limitations on what data you can roll up, where, and applying which criteria. In particular, time-based rollup summary fields are a common need that’s tricky to meet with this native functionality.

These fields come from business requirements like this:

As a Sales representative, I need to see on the Account a running total of Opportunities on a year-to-date and month-to-date basis.

Naïve Solutions and Why They Don’t Work

Most naïve solutions to this class of requirements fail because they’re based on formula fields or date literals. One might try creating a formula like this, for example, for a Checkbox field denoting whether or not to include a specific record:

MONTH(CloseDate) = MONTH(TODAY()) && YEAR(CloseDate) = YEAR(TODAY())

Immediately, though, we find that this new field isn’t available for selection in our Rollup Summary Field criteria screen.

Likewise, we can’t save a Rollup Summary Field whose criteria use a date literal (such as CloseDate equals THIS MONTH) the way we might in a report filter.

The same underlying limitation gives rise to both of these restrictions: native Rollup Summary Fields require a trigger event on the rolled-up object in order to update the rolled-up value. Rollup Summary Fields are evaluated and updated as part of the Trigger Order of Execution (steps 16 and 17).

Formula fields, though, don’t have any underlying storage, and their values are not persisted to the database. Rather, their values are generated at the time the record is viewed. For this reason, there’s no event when a formula field’s value “changes”. In fact, some formula fields can be nondeterministic and incorporate date or time fields, such that their value is essentially always in flux.

λέγει που Ἡράκλειτος ὅτι ‘πάντα χωρεῖ καὶ οὐδὲν μένει’
Herakleitos says that ‘everything changes, and nothing stands still’
— Plato, Cratylos 402a

Likewise, there’s no trigger event on any record when the current day changes to the next. That’s why we cannot embed date literals in our Rollup Summary Field criteria - when today ticks over to tomorrow, not only is there no trigger event, but a recalculation job could be very computationally expensive.

The approaches we will take to solve this requirement ultimately take both tacks: creating a trigger event based on the date, and arranging for broad-based recalculation on a schedule.

Approach 1: Native Rollup Summary Fields with Time-Based Workflow or Process

Our first approach maximizes use of out-of-the-box functionality, requiring no Apex and no additional packages. However, it can scale poorly and is best suited for smaller Salesforce orgs.

We can’t create a native Rollup Summary Field, as we’ve seen, based on a formula field. But we can roll up based on a Checkbox field, and we can use Time-Based Workflow and Processes to update that Checkbox on a schedule, using formula fields to get the dates right.

Here’s how it works:

  1. We create two Date formula fields on the child object (here, Opportunity). One defines the date when the record enters rollup scope and the other the date when it exits rollup scope.
    First Day in Rollup Scope
    Last Day in Rollup Scope
  2. We create a Checkbox field on the child object. This is what our Time-Based Workflow Actions will set and unset, giving us a triggerable event for the Rollup Summary Field.
  3. We create a Rollup Summary Field on the parent object (here, Account). We use the criterion that our Checkbox field is set to True.
    Rollup Summary Field

  4. We create a Workflow Rule with two Time-Based Actions attached to it: one sets the Checkbox when each record’s First Day in Rollup Scope arrives, and the other unsets it once its Last Day in Rollup Scope has passed.
    Workflow Rule
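For the month-to-date Opportunity requirement above, the two date formulas might look like the following sketch (the field labels are illustrative; ADDMONTHS handles the year boundary, so December works correctly):

```text
First Day in Rollup Scope (Formula: Date)
    DATE(YEAR(CloseDate), MONTH(CloseDate), 1)

Last Day in Rollup Scope (Formula: Date)
    ADDMONTHS(DATE(YEAR(CloseDate), MONTH(CloseDate), 1), 1) - 1
```

Each Opportunity enters rollup scope on the first day of its Close Date’s month and exits after the last day of that month.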

This approach works well for most sorts of time-based rollup requirements. Because it uses formula fields to define when a record enters and exits the rolled-up period, it’s not limited to “this month”, “this year”, and other simple definitions. The time period doesn’t even need to be the same for each record!

Costs and Limitations

This solution, as noted above, is best suited for smaller Salesforce orgs with moderate data volume and small numbers of time-based rollups. Each time-based rollup will consume three custom fields on the child and one Rollup Summary Field on the parent (the limit for which is 25 per object), as well as a Process or a Workflow Rule with two attached Actions.

Because of the mechanics of time-based workflow actions, updates made won’t roll up instantaneously, like with a trigger. Rather, they’ll be on a small delay as the workflow actions are enqueued, batched, and then processed:

Time-dependent actions aren’t executed independently. They’re grouped into a single batch that starts executing within one hour after the first action enters the batch.

More important, though, is the platform limitation on throughput of time-based workflow actions. From the same document:

Salesforce limits the number of time triggers an organization can execute per hour. If an organization exceeds the limits for its Edition, Salesforce defers the execution of the additional time triggers to the next hour. For example, if an Unlimited Edition organization has 1,200 time triggers scheduled to execute between 4:00 PM and 5:00 PM, Salesforce processes 1,000 time triggers between 4:00 PM and 5:00 PM and the remaining 200 time triggers between 5:00 PM and 6:00 PM.

Data volume is therefore the death of this solution. Imagine, for example, that your org is rolling up Activities on a timed basis - say “Activities Logged This Month”. Your email-marketing solution logs Tasks for various events, like emails being sent, opened, or links clicked. Your marketing campaigns do well, and around 30,000 emails or related events are logged each month.

That immediately poses a problem. When the clock ticks over to the first of the month, for example, we suddenly have 30,000 emails that just went out of scope for our “Activities Logged This Month” rollup. Salesforce now begins processing the 30,000 enqueued workflow actions that have accumulated in the queue for this date, but it can only process 1,000 per hour. Worse, the limit is 1,000 per hour across the whole org - not per workflow. So we now have a backlog 30 hours deep that impacts both our Rollup Summary Field and any other functionality based on timed actions across our org.
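To make the scale problem concrete, here’s a quick back-of-the-envelope sketch in Python (the 1,000-per-hour figure is the documented throughput limit; 30,000 is our hypothetical monthly email volume):

```python
def backlog_hours(pending_actions, hourly_limit=1000):
    """Hours needed to drain a queue of time-based workflow actions,
    given the org-wide hourly execution limit."""
    return pending_actions / hourly_limit

# 30,000 Tasks leave rollup scope at once on the first of the month:
print(backlog_hours(30000))  # 30.0
```

Anything else in the org that relies on time-based actions waits in the same queue.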

If you have a relatively small number of rollups to implement and relatively little data volume, this solution is both effective and expedient. Other organizations should read on for more scalable solutions.

Approach 2: DLRS in Realtime + Scheduled Mode

Declarative Lookup Rollup Summaries (DLRS) can help address many limitations of native Rollup Summary Fields. DLRS doesn’t have inherent support for time-based rollups, but it does have features we can combine to achieve that effect.

Instead of using a native Rollup Summary Field, we start by defining a DLRS Rollup Summary. We can use any SOQL criteria to limit our rolled-up objects, including date literals. As a result, we don’t need formula fields on the child object - and we also win freedom from some of the other limitations of native Rollup Summary Fields.

Here’s how we might configure our putative Opportunity rollup in DLRS:
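As a sketch, the configuration might look like this (the labels mirror the DLRS setup screen; the aggregate result field Opportunities_This_Month__c is a hypothetical custom field on Account):

```text
Parent Object:          Account
Child Object:           Opportunity
Relationship Field:     AccountId
Relationship Criteria:  CloseDate = THIS_MONTH
Field to Aggregate:     Amount
Aggregate Operation:    Sum
Aggregate Result Field: Opportunities_This_Month__c
Calculation Mode:       Realtime
```

Note that the Relationship Criteria is a SOQL WHERE fragment, which is why the date literal is legal here.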


To start with, this Rollup Summary will only work partially. It’ll yield correct results if we run a full calculate job by clicking Calculate. If we configure it to run in Realtime mode and deploy DLRS’s triggers, we’ll see our rollup update as we add and delete Opportunities.

What won’t work, though, is the shift from one time period to the next. On the first of the month, all of our Opportunities from last month are still rolled up - there was no trigger event that would cause them to be reevaluated against our criteria.

With DLRS, rather than using time-based workflow actions to create a trigger event by updating a field, we recalculate the rollup value across every parent record when each time period rolls over. Here, we’ve scheduled a full recalculation of the rollup for the first day of each month.

DLRS Scheduler

Because DLRS also deploys triggers dynamically to react to record changes in real time, we get in some sense the best of both worlds: instant updates when we add and change records on a day-to-day basis, with recalculation taking place at time-period boundaries.

Costs and Limitations

Each DLRS rollup that’s run in Scheduled mode consumes a scheduled Apex job, the limit for which is 100 across the org - and that limit encompasses all Scheduled Apex, not just DLRS jobs.

Running rollups in real time requires the deployment of dynamically-generated triggers for the parent and child objects. This may challenge deployment processes or even cause issues with existing Apex development in rare cases.

Processing rollup recalculations can be expensive, long-running operations. A full rollup recalculation must touch each and every parent record in your org.

In general, this solution offers the greatest breadth and flexibility, but also demands the greatest awareness of your org’s unique challenges and performance characteristics.

Approach 3: Native Rollup Summary Fields with Scheduled Batch Apex

We can scale up Approach 1 by replacing the limited Time-Based Workflow Actions with a scheduled Batch Apex class. We retain the native Rollup Summary Field based on a Checkbox on the child record, but we don’t require the formula fields on the child.

The Batch Apex class we use is very simple: all it does is query for records whose Checkbox value does not match the position of the relevant Date field vis-a-vis the rollup’s defined time interval, at the time the scheduled class is executed.

Continuing the example developed above, where we’re rolling up Opportunities for the current month only, our batch class’s start() method would run a query like this:

SELECT Id, This_Month_Batch__c
FROM Opportunity
WHERE (CloseDate = THIS_MONTH AND This_Month_Batch__c = false) 
      OR (CloseDate != THIS_MONTH AND This_Month_Batch__c = true)

Here we just locate those Opportunities that are not marked as in-scope but should be (where CloseDate = THIS_MONTH - note that we have the freedom to use date literals here), or which are marked as in scope but should not be.

Then, the work of our execute() method is extremely simple: all it does is reverse the value of the Checkbox field This_Month_Batch__c on each record:

public void execute(Database.BatchableContext bc, List<Opportunity> scope) {
    for (Opportunity o : scope) {
        o.This_Month_Batch__c = !o.This_Month_Batch__c;
    }

    update scope;
}

We’d schedule this batch class to run every night after midnight. When executed, it’ll update any records moving into or out of rollup scope that day, allowing the native Rollup Summary Field machinery to do the work of recalculating the Account-level totals.
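Setting up that nightly run from Anonymous Apex might look like this sketch (ThisMonthRollupSchedulable is a hypothetical Schedulable wrapper that starts the batch from its execute() method):

```apex
// Apex cron format: seconds minutes hours day-of-month month day-of-week.
// Run at 00:05 each night, just after the date ticks over.
System.schedule(
    'Nightly Rollup Scope Update',
    '0 5 0 * * ?',
    new ThisMonthRollupSchedulable()
);
```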

Costs and Limitations

While it’s not Apex-intensive, this is the only solution of the three that is not declarative in nature, and all of the attendant costs of code-based solutions are present.

Organizations with high data volume or data skew may need to experiment carefully to ensure queries are properly optimized.

As with DLRS, each rollup using this strategy consumes one scheduled Apex job. Unlike DLRS, we need to run our scheduled class nightly, because we’re not combining it with a trigger for (partial) real-time operation. We could add more Apex to do so if desired.


Time-based rollup summary fields are something many organizations need, and there are many ways to get there. Beyond the three approaches discussed here, one could explore other, more customized and site-specific options - like a full-scale Apex implementation using triggers and Scheduled Apex, or applying a data warehouse or middleware solution to handle rollups and analytics. Each org’s ideal approach will depend on the resources available, its preference for code-based, declarative, or hybrid solutions, and important considerations around data volume and performance.

Amaxa: A Multi-Object Data Loader for Salesforce

I’ve just released Amaxa, an open-source project I’ve been working on for several months. Amaxa is a multi-object ETL tool/data loader for Salesforce. It’s designed to extract and load whole networks of records, like a selected set of Accounts with all of their Contacts, Opportunities, Contact Roles, and Campaigns, in a single repeatable operation while preserving the relationships between those records.

Core use cases for Amaxa include sandbox seeding, data migration, and retrieving connected data sets. The current release of Amaxa is version v0.9.2, in beta. Feedback and bug reports, via GitLab Issues, are welcome. There’s a lot more to come.

Amaxa is built in Python with Salesforce DX, simple_salesforce, and salesforce_bulk. Instructions and examples are included in the GitLab repository.

Locating Salesforce Compound and Component Fields in Apex and Python

One of the odder corners of the Salesforce data model is the compound fields. Coming in three main varieties (Name fields, Address fields, and Geolocation fields), these fields are accessible both under their own API names and in the forms of their component fields, which have their own API names. The compound field itself is always read-only, but the components may be writeable.

For example, on the Contact object is a compound address field OtherAddress. (There are a total of four standard Address fields spread across the Contact and Account objects, with a handful of others across Lead, Order, and so on). The components of OtherAddress are

  • OtherStreet
  • OtherCity
  • OtherState
  • OtherPostalCode
  • OtherCountry
  • OtherStateCode
  • OtherCountryCode
  • OtherLatitude
  • OtherLongitude
  • OtherGeocodeAccuracy.

Similarly, Contact has a compound Name field, as do Person Accounts, with components like FirstName and LastName.

So, if we’re working in dynamic Apex or building an API client, how do we acquire and understand the relationships between these compound and component fields?


In the REST API, the Describe resource for the sObject returns metadata for the object’s fields as well. This makes it easy to acquire all the data we need in one go.

GET /services/data/v43.0/sobjects/Contact/describe

yields, on a lightly customized Developer Edition, about 250KB of JSON. Included is a list under the key "fields", which contains the data we need (abbreviated here to omit irrelevant data points):

"fields": [
    {
        "compoundFieldName": null,
        "label": "Contact Id",
        "name": "Id"
    },
    {
        "compoundFieldName": null,
        "label": "Name",
        "name": "Name"
    },
    {
        "compoundFieldName": "Name",
        "label": "First Name",
        "name": "FirstName"
    },
    ...
]

Each field includes its API name ("name"), its label, other metadata, and "compoundFieldName". The value of this last key is either null, meaning that the field we’re looking at is not a component field, or the API name of the parent compound field. There’s no marker indicating that a field is compound.

This structure can be processed easily enough in Python or other API client languages to yield compound/component mappings. Given some JSON response (parsed with json.loads()), we can do

def get_compound_fields(response):
    return {
        field["compoundFieldName"]
        for field in response["fields"]
        if field["compoundFieldName"] is not None
    }

Likewise, we can get the components of any given field:

def get_component_fields(response, api_name):
    return [field["name"] for field in response["fields"] if field["compoundFieldName"] == api_name]

Both operations can be expressed in various ways, including uses of map() and filter(), or can be implemented at a higher level if the describe response is processed into a structure, such as a dict keyed on field API name.
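For instance, a single pass over the describe can build a components-by-compound-field mapping up front (a sketch; the miniature response below mirrors the abbreviated describe output shown earlier):

```python
from collections import defaultdict

def build_compound_map(response):
    """Map each compound field's API name to the API names of its components."""
    compound_map = defaultdict(list)
    for field in response["fields"]:
        parent = field["compoundFieldName"]
        if parent is not None:
            compound_map[parent].append(field["name"])
    return dict(compound_map)

# A miniature describe response in the shape shown above:
describe = {
    "fields": [
        {"name": "Id", "compoundFieldName": None},
        {"name": "Name", "compoundFieldName": None},
        {"name": "FirstName", "compoundFieldName": "Name"},
        {"name": "LastName", "compoundFieldName": "Name"},
    ]
}

print(build_compound_map(describe))  # {'Name': ['FirstName', 'LastName']}
```

With the mapping in hand, both lookups become dictionary accesses rather than repeated scans of the field list.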


The situation in Apex is rather different because of the way Describe information is returned to us. Rather than a single, large blob of information covering an sObject and all of its fields, we get back individual describes for an sObject (Schema.DescribeSobjectResult) and each field (Schema.DescribeFieldResult). (We can, of course, call out to the REST Describe API in Apex, but this requires additional work and an API-enabled Session Id).

Moreover, Schema.DescribeFieldResult does not include the critical compoundFieldName property.

… or rather, it isn’t documented to include it. In point of fact, it does contain the same data returned for a field in the API Describe call, as we can discover by inspecting the JSON result of serializing a Schema.DescribeFieldResult record.

Unlike some JSON-enabled Apex magic, we can get to this hidden value without actually using serialization. Even though it’s undocumented, references like the following compile and execute as expected:

Schema.DescribeFieldResult dfr = Contact.FirstName.getDescribe();
System.debug(dfr.compoundFieldName); // 'Name'
This makes it possible to construct Apex utilities like those we built in Python to source compound fields and their components. In Apex, we’ll necessarily be a bit more verbose than in Python, and performance is a concern in broad-based searches. Both finding compound fields on one sObject and locating component fields for one compound field take between 0.07 and 0.1 seconds in unscientific testing. Your performance may vary.

public class CompoundFieldUtil {
    public static List<SObjectField> getCompoundFields(SObjectType objectType) {
        Map<String, SObjectField> fieldMap = objectType.getDescribe().fields.getMap();
        List<SObjectField> compoundFields = new List<SObjectField>();
        Set<String> compoundFieldNames = new Set<String>();

        for (String s : fieldMap.keySet()) {
            Schema.DescribeFieldResult dfr = fieldMap.get(s).getDescribe();

            if (dfr.compoundFieldName != null && !compoundFieldNames.contains(dfr.compoundFieldName)) {
                compoundFields.add(fieldMap.get(dfr.compoundFieldName.toLowerCase()));
                compoundFieldNames.add(dfr.compoundFieldName);
            }
        }

        return compoundFields;
    }

    public static List<SObjectField> getComponentFields(SObjectType objectType, SObjectField field) {
        Map<String, SObjectField> fieldMap = objectType.getDescribe().fields.getMap();
        List<SObjectField> components = new List<SObjectField>();
        String thisFieldName = field.getDescribe().getName();

        for (String s : fieldMap.keySet()) {
            if (fieldMap.get(s).getDescribe().compoundFieldName == thisFieldName) {
                components.add(fieldMap.get(s));
            }
        }

        return components;
    }
}

System.debug(CompoundFieldUtil.getComponentFields(Contact.sObjectType, Contact.OtherAddress));


14:15:14:523 USER_DEBUG [1] DEBUG (OtherStreet, OtherCity, OtherState, OtherPostalCode, OtherCountry, OtherStateCode, OtherCountryCode, OtherLatitude, OtherLongitude, OtherGeocodeAccuracy)
Similarly,

System.debug(CompoundFieldUtil.getCompoundFields(Contact.sObjectType));

yields

22:15:30:089 USER_DEBUG [1] DEBUG (Name, OtherAddress, MailingAddress)

Simple modifications could support the use of API names rather than SObjectField tokens, building maps between compound fields and components, and similar workflows.

This post developed out of a Salesforce Stack Exchange answer I wrote, along with work on a soon-to-be-released data loader project.

Building Unit Test Learning Resources on Salesforce Stack Exchange

Unit testing and code coverage are perennial topics on Salesforce Stack Exchange. It’s not hard to see why: unit testing can be profoundly unintuitive for new developers (on any platform), and involves mastering techniques and concepts that are quite different from those needed to be productive in building applications.

Growing out of discussion about community challenges in managing lots of duplicative basic questions, I started a project to write up canonical answers to common questions on unit tests, and bring together the most important resources.

Here are the first fruits of that project, all of which were posted (by me and other writers) to Stack Exchange as community wikis. These articles can be expanded and evolved by the Salesforce Stack Exchange community to give learners a basic conceptual framework and connect them to the best resources available on how to test effectively on the Salesforce platform.

Community Wiki Questions on Unit Testing

Running Reports as Selected Users with JWT OAuth and the Reports and Dashboards API

Salesforce reporting introduces some fascinating complexities to data visibility and exposure, particularly for organizations using Private Organization-Wide Defaults.

The key complicating factor is this: when a Salesforce report is run, it’s run in the context of some user or another, and the records that are shown on the report are the ones that are visible to that user. This means that report distribution solutions have to be very careful to only show each user a report run in their own context - not someone else’s.

Suppose your organization builds a critical report that many users will need to review. It’s built to show “My Opportunities”, so each user will see only their own Opportunities, and the Opportunity Organization-Wide Default is Private. You add a criterion to the report to only show Opportunities that have your internal “Needs Attention” Checkbox set. Now: how do you make sure your users are regularly updated when they have Opportunities that require their review?

A naive solution would create one subscription to this report, say for Frank Q. Exec, and add all of the users who need to receive it as recipients:

Lightning Subscription

But this runs afoul of the principle mentioned above: the report’s context user is Frank, and the recipients of the report will see data as if they were Frank. From Salesforce:

IMPORTANT Recipients see emailed report data as the person running the report. Consider that they may see more or less data than they normally see in Salesforce.

This is unlikely to be an acceptable outcome.

Further, we can’t simply have Frank create many subscriptions to the same report, adding one user as both the recipient and the running user to each: Frank only gets five total report subscriptions, and he can only have one subscription to each report.

Of course, users can schedule reports themselves, in their own context, and they can run them manually, and we can build dynamic dashboards (which come with their own limits). But what if we really need to create these subscriptions for our users automatically, or allow our admins to manage them for thousands of users at a time? What if, in fact, we want to offer the users a bespoke user interface to let them select subscriptions to standard corporate reports, or run reports in their contexts to feed into an external reporting or business intelligence solution?

This is a question I’ve struggled with before, and I was excited to see Martin Borthiry propose the issue on Salesforce Stack Exchange. Here, I’d like to expand on the solution I sketched out in response to Martin’s question.


There are two report subscription functionalities on Salesforce, and they work rather differently. Report subscriptions are summarized in the Salesforce documentation under Schedule and Subscribe to Reports.

On Classic, one can “Subscribe” to a report, and one can “Schedule Future Runs”. The nomenclature here is confusing: a Classic “Subscribe” asks Salesforce to notify us if the report’s results meet certain thresholds, but it’s not for regularly receiving copies of the report. We’re not going to look at this feature. “Schedule Future Runs” is equivalent to a report subscription in Lightning and is the feature corresponding to the business problem discussed above.

Classic Subscription

On Lightning, we simply have an option to Subscribe, as we saw above. There’s no Lightning equivalent to the Classic “Subscribe” feature.

So what happens when we subscribe to a report?

Both the Classic “Schedule Future Runs” and the Lightning “Subscribe” functionality are represented under the hood as CronTrigger and CronJobDetail records with the CronJobDetail.JobType field set to 'A', for Analytics Notification. You can find them from the Developer Console or Workbench via queries like

SELECT CronExpression, OwnerId, CronJobDetail.Name FROM CronTrigger WHERE CronJobDetail.JobType = 'A'

Unfortunately, knowing this doesn’t help us very much. Neither CronTrigger nor CronJobDetail can be created directly in Apex or via the API, and the objects provide very little detail about existing report subscriptions. The Report Id, for example, is notable by its absence, and the Name field is just a UUID.

A more promising avenue for our use case is the Reports and Dashboards API, because it offers an endpoint to create an Analytics Notification.

POST /services/data/vXX.0/analytics/notifications

with a JSON body like this

{
  "active" : true,
  "createdDate" : "",
  "deactivateOnTrigger" : false,
  "id" : "",
  "lastModifiedDate" : "",
  "name" : "New Notification",
  "recordId" : "00OXXXXXXXXXXXXXXX",
  "schedule" : {
    "details" : {
      "time" : 3
    },
    "frequency" : "daily"
  },
  "source" : "lightningReportSubscribe",
  "thresholds" : [ {
    "actions" : [ {
      "configuration" : {
        "recipients" : [ ]
      },
      "type" : "sendEmail"
    } ],
    "conditions" : null,
    "type" : "always"
  } ]
}

The feature set shown here in JSON is at parity with the user interface, and has the same limitations. Adding a recipient for the subscription over the API, for example, suffers from the same visibility flaws as doing so in the UI. And the API doesn’t let us do what we truly want to - create report subscriptions for other users that run as those other users - because we cannot set the owner of the subscription programmatically.

… or can we?

While the Reporting and Analytics API doesn’t support setting the context user for a subscription, it always takes action as the user as whom we authenticate to the API. And that we can control.

While an admin can Login As a user to create a one-off subscription, we’re more interested here in industrial-strength solutions that can support thousands of users. So we’re going to build a script to create subscriptions by talking to the Reports and Dashboards API, using the JSON Web Token (JWT) OAuth authentication mechanism. Why? Because the JWT flow is our only route to seamlessly authenticating as any (admin-approved) user, with no manual intervention or setup required on a per-user basis.

Setup: Connected Apps and Certificates

Setting up the JWT flow involves building a Connected App in Salesforce, under which our scripts will authenticate. JWT is secured using a certificate and associated public key/private key pair - Salesforce holds the public key, our script holds the private key.

This is the same mechanism used for authentication in many Continuous Integration solutions. I’m not going to rehash all of the details here, because they’re well-covered elsewhere. You can follow Salesforce’s steps in using SFDX for continuous integration, or read through my own article about setting up CircleCI with Salesforce DX.

When you’re finished building the Connected App, add the Profiles of each of the users who are to be subscribed to reports to the Connected App as a pre-approved Profile, or assign all of those users a Permission Set and assign that Permission Set as pre-approved on the Connected App. This ensures that we can authenticate to the API as those users without any intervention.

Building the Scripts

We’re going to stick to sketching out a solution here that can be adapted to many different business problems, as we discussed earlier. For simplicity, we’ll use Salesforce DX to handle the JWT authentication, even though we’re not using SFDX for development here. Because it’s my preferred scripting workflow, I’ll be using Python with simple_salesforce, but you could just as easily achieve this in Ruby, Java, JavaScript, or even just bash and curl.

The main job of our script is to login as a user and create a report subscription for them. We might build this towards a specific business process by adding scaffolding to, for example, query a custom object out of Salesforce to define which reports should be subscribed automatically for which users, but we’ll leave that elaboration to a later date. Once we’ve got that core functionality achieved, we can wrap it in the logic we need for specific applications.

Let’s put the private key from our JWT setup in a file called server.key. Put the username of the user we want to subscribe (who must be pre-authorized to the Connected App) in the environment variable $USERNAME and the Connected App’s Consumer Key in $CONSUMERKEY.

Then we can get an Access Token to make an API call into Salesforce, letting SFDX do the heavy lifting:

sfdx force:auth:jwt:grant --clientid $CONSUMERKEY --jwtkeyfile server.key --username $USERNAME -a reports-test
export INSTANCE_URL=$(sfdx force:org:display --json -u reports-test | python -c "import json; import sys; print(json.load(sys.stdin)['result']['instanceUrl'])")
export ACCESS_TOKEN=$(sfdx force:org:display --json -u reports-test | python -c "import json; import sys; print(json.load(sys.stdin)['result']['accessToken'])")

(If you have jq installed, you can simplify these one-liners).

Now we’ve established an authenticated session as $USERNAME, even though we do not have that user’s credentials or any setup for that user besides preauthorizing their profile on the Connected App, and we have the values we need (the Access Token and Instance URL) stored in our environment.

Now we’ll switch over to Python. A quick script grabs those environment variables and uses simple_salesforce to make an API call to generate the report subscription.

import simple_salesforce
import os
import sys

outbound_json = """{
  "active" : true,
  "createdDate" : "",
  "deactivateOnTrigger" : false,
  "id" : "",
  "lastModifiedDate" : "",
  "name" : "New Notification",
  "recordId" : "%s",
  "schedule" : {
    "details" : {
      "time" : 3
    },
    "frequency" : "daily"
  },
  "source" : "lightningReportSubscribe",
  "thresholds" : [ {
    "actions" : [ {
      "configuration" : {
        "recipients" : [ ]
      },
      "type" : "sendEmail"
    } ],
    "conditions" : null,
    "type" : "always"
  } ]
}"""

# Use an Access Token and Report Id to add a Lightning report subscription for this user
# such that the report will run as that user.

access_token = os.environ['ACCESS_TOKEN']
instance_url = os.environ['INSTANCE_URL']

report_id = sys.argv[1]

sf = simple_salesforce.Salesforce(session_id=access_token, instance_url=instance_url)

sf.restful(
    'analytics/notifications',
    method='POST',
    data=outbound_json % report_id
)

Execute the script (here we assume it’s been saved as subscribe_report.py):

python subscribe_report.py $REPORTID

where $REPORTID is the Salesforce Id of the report you wish to subscribe the user to. If we then log in as that user in the UI, we’ll find a shiny new Lightning report subscription established for them.

Lightning Final Subscription

Note that it’s set for daily at 0300, as specified in the example JSON.

Next Steps

We’ve got a proof-of-concept in place showing that we can in fact schedule reports for users, run as those users. In an article to follow soon, we’ll look at operationalizing this approach and building out business processes atop it.