Publications Round-Up

I’ve been part of a number of articles and presentations over the last few months that have not yet appeared here. Want to catch up with the latest in free and open source Salesforce DevOps? Here are some pieces I’d love to share.

  • Find Bugs Earlier with Second-Generation Packaging, on the Salesforce Architect Blog (with Brandon Parker). At Salesforce.org Release Engineering, we’ve come up with a strategy to use the flexibility of second-generation packaging (2GP) to cure many of the pains of first-generation packaging (1GP), without package migration!
  • I chatted with the fantastic Josh Birk about CumulusCI, broadening the value proposition of DevOps, and my journey in the Salesforce landscape on the Salesforce Developer Podcast.
  • I was also interviewed by Atlas Can for SalesforceBen about How Salesforce.org Uses DevOps, where we dug into the details of how and why to build a DevOps practice on CumulusCI.
  • I joined Mohith Shrivastava in the Salesforce Interchange series to explore how to Extend NPSP with 2GP Unlocked Packages using CumulusCI.

"What's the best translation of...?"

I’ve launched a new app, Bibliothekai, to help answer this question.

I’ve been wanting to build this for many years, stemming from my time as a desk librarian hearing the question “What’s the best translation of X?” all the time, and my delight in collecting unusual translations of Homer.

Bibliothekai is my contribution towards an answer. Bibliothekai currently tracks over 900 translations of more than 275 ancient source texts. Coverage of extant translations is nearly complete for Homer (20th cent. and later), Herodotus, and Thucydides, and growing for authors from Plato to Virgil to Apollonius (of Rhodes and of Perga!). There’s much, much more to do, though.

Bibliothekai makes it easy to sort and filter translations (like those of the Iliad) for specific qualities and resources, to evaluate translations through professional and user reviews, and to compare selected translations with one another and with the original source text. For example, here’s a side-by-side comparison of two prominent translations of the Iliad, those of Richmond Lattimore and Robert Fagles, or one can compare the Pamela Mensch translation of Herodotus with the Greek.

Bibliothekai is built on a Django backend, with a hybrid frontend using both Django templating and open source Lightning Web Components. I’ll be writing more about how the site’s been built in the coming weeks:

  • Adding LWCs to a templated Django site running on Heroku.
  • Creating your own LWC base components and using Bootstrap styling with open source LWCs.
  • Building custom wire adapters to source data from Django Rest Framework and Graphene-Django for GraphQL.
  • GraphQL integrations with Lightning Web Components.

Bibliothekai (which means “bookcases” in Ancient Greek) has been a fantastic opportunity to build something I’ve been passionate about for a long time, while teaching myself new technologies. I’m excited to share more about how it’s built and the new features I’m working on.

Working with JSON in Apex: A Primer

This post is adapted from a community wiki I created for Salesforce Stack Exchange.

Apex provides multiple routes to achieving JSON serialization and deserialization of data structures. This post summarizes use cases and capabilities of untyped deserialization, typed (de)serialization, manual implementations using JSONGenerator and JSONParser, and tools available to help support these uses. The objective is to provide an introduction and overview of routes to working effectively with JSON in Apex, and links to other resources.

Summary

Apex can serialize and deserialize JSON to strongly-typed Apex classes and also to generic collections like Map<String, Object> and List<Object>. In most cases, it’s preferable to define Apex classes that represent data structures and utilize typed serialization and deserialization with JSON.serialize()/JSON.deserialize(). However, some use cases require applying untyped deserialization with JSON.deserializeUntyped().

The JSONGenerator and JSONParser classes are available for manual implementations and should be used only where automatic (de)serialization is not practicable, such as when keys in JSON are reserved words in Apex, or when low-level access is required.

The key documentation references are the JSON class in the Apex Developer Guide and the section JSON Support. Other relevant documentation is linked from those pages.

Complex Types in Apex and JSON

JSON offers maps (or objects) and lists as its complex types. JSON lists map to Apex List objects. JSON objects can map to either Apex classes, with keys mapping to instance variables, or Apex Map objects. Apex classes and collections can be intermixed freely to construct the right data structures for any particular JSON objective.

Throughout this post, we’ll use the following JSON as an example:

{
	"errors": [ "Data failed validation rules" ],
	"message": "Please edit and retry",
	"details": {
		"record": "001000000000001",
		"record_type": "Account"
	}
}

This JSON includes two levels of nested objects, as well as a list of primitive values.

Typed Serialization with JSON.serialize() and JSON.deserialize()

The methods JSON.serialize() and JSON.deserialize() convert between JSON and typed Apex values. When using JSON.deserialize(), you must specify the type of value you expect the JSON to yield, and Apex will attempt to deserialize to that type. JSON.serialize() accepts both Apex collections and objects, in any combination that’s convertible to legal JSON.

These methods are particularly useful when converting JSON to and from Apex classes, which is in most circumstances the preferred implementation pattern. The JSON example above can be represented with the following Apex class:


public class Example {
	public List<String> errors;
	public String message;
	
	public class ExampleDetail {
		public Id record;
		public String record_type;
	}
	
	public ExampleDetail details;
}

To parse JSON into an Example instance, execute

Example ex = (Example)JSON.deserialize(jsonString, Example.class);

Alternately, to convert an Example instance into JSON, execute

String jsonString = JSON.serialize(ex);

Note that nested JSON objects are modeled with one Apex class per level of structure. It’s not required for these classes to be inner classes, but it is a common implementation pattern. Apex only allows one level of nesting for inner classes, so deeply-nested JSON structures often convert to Apex classes with all levels of structure defined in inner classes at the top level.

JSON.serialize() and JSON.deserialize() can be used with Apex collections and classes in combination to represent complex JSON data structures. For example, JSON that stores Example instances as the values for higher-level keys:

{
	"first": { /* Example instance */ },
	"second": { /* Example instance */},
	/* ... and so on... */
}

can be serialized from, and deserialized to, a Map<String, Example> value in Apex.
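
For instance, assuming the Example class above and a jsonString variable holding JSON of this shape, the whole structure round-trips with a single call:

Map<String, Example> examples =
	(Map<String, Example>)JSON.deserialize(jsonString, Map<String, Example>.class);

Example first = examples.get('first');
String roundTripped = JSON.serialize(examples);

Note that the parameterized type token Map<String, Example>.class is passed directly to JSON.deserialize(); no per-key handling is needed.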

For more depth on typed serialization and deserialization, review the JSON class documentation. Options are available for:

  • Suppression of null values
  • Pretty-printing generated JSON
  • Strict deserialization, which fails on unexpected attributes
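
As a brief sketch of these options, using the Example class above (the field values here are illustrative):

Example ex = new Example();
ex.message = 'Please edit and retry';

// Suppress null values: errors and details are omitted from the output.
String compact = JSON.serialize(ex, true);

// Pretty-print generated JSON with indentation and line breaks.
String pretty = JSON.serializePretty(ex);

// Strict deserialization fails if the JSON contains attributes
// that are not defined on the target Apex class.
Example strict = (Example)JSON.deserializeStrict('{"message": "ok"}', Example.class);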

Untyped Deserialization with JSON.deserializeUntyped()

In some situations, it’s most beneficial to deserialize JSON into Apex collections of primitive values, rather than into strongly-typed Apex classes. For example, this can be a valuable approach when the structure of the JSON may change in ways that aren’t compatible with typed deserialization, or which would require features that Apex does not offer like algebraic or union types.

Using the JSON.deserializeUntyped() method yields an Object value, because Apex doesn’t know at compile time what type of value the JSON will produce. It’s necessary when using this method to typecast values pervasively.

Take, for example, this JSON, which comes in multiple variants tagged by a "scope" value:

{
	"scope": "Accounts",
	"data": {
		"payable": 100000,
		"receivable": 40000
	}
}

or

{
	"scope": {
		"division": "Sales",
		"organization": "International"
	},
	"data": {
		"closed": 400000
	}
}

JSON input that varies in this way cannot be handled with strongly-typed Apex classes because its structure is not uniform. The values for the keys scope and data have different types.

This kind of JSON structure can be deserialized using JSON.deserializeUntyped(). That method returns an Object, an untyped value whose actual type at runtime will reflect the structure of the JSON. In this case, that type would be Map<String, Object>, because the top level of our JSON is an object. We could deserialize this JSON via

Map<String, Object> result = (Map<String, Object>)JSON.deserializeUntyped(jsonString);

The untyped nature of the value we get in return cascades throughout the structure, because Apex doesn’t know the type at compile time of any of the values (which may, as seen above, be heterogeneous) in this JSON object.

As a result, to access nested values, we must write defensive code that inspects values and typecasts at each level. The typecast in the example above will throw a TypeException if the JSON does not yield the expected type.

To access the data for the first element in the above JSON, we might do something like this:

Object result = JSON.deserializeUntyped(jsonString);

if (result instanceof Map<String, Object>) {
	Map<String, Object> resultMap = (Map<String, Object>)result;

	if (resultMap.get('scope') == 'Accounts' &&
	    resultMap.get('data') instanceof Map<String, Object>) {
		Map<String, Object> data = (Map<String, Object>)resultMap.get('data');

		if (data.get('payable') instanceof Integer) {
			Integer payable = (Integer)data.get('payable');

			AccountsService.handlePayables(payable);
		} else {
			// handle error
		}
	} else {
		// handle error
	}
} else {
	// handle error
}

While there are other ways of structuring such code, including catching JSONException and TypeException, the need to be defensive is a constant. Code that fails to be defensive while working with untyped values is vulnerable to JSON changes that produce exceptions and failure modes that won’t manifest in many testing practices. Common exceptions include NullPointerException, when carelessly accessing nested values, and TypeException, when casting a value to the wrong type.

Manual Implementation with JSONGenerator and JSONParser

The JSONGenerator and JSONParser classes allow your application to manually construct and parse JSON.

Using these classes entails writing explicit code to handle each element of the JSON. Using JSONGenerator and JSONParser typically yields much more complex (and much longer) code than using the built-in serialization and deserialization tools. However, it may be required in some specific applications. For example, JSON that includes Apex reserved words as keys may be handled using these classes, but cannot be deserialized to native classes because reserved words (like type and class) cannot be used as identifiers.
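
As an illustrative sketch, a JSONParser loop can extract a reserved-word key like type into a field with a legal Apex name (the RecordInfo class and its field name here are hypothetical):

public class RecordInfo {
	public String recordType; // holds the JSON "type" key, renamed for Apex
}

// Elsewhere, parse the JSON manually:
JSONParser parser = JSON.createParser('{"type": "Account"}');
RecordInfo info = new RecordInfo();
while (parser.nextToken() != null) {
	if (parser.getCurrentToken() == JSONToken.FIELD_NAME
			&& parser.getText() == 'type') {
		parser.nextToken(); // advance from the field name to its value
		info.recordType = parser.getText();
	}
}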

As a general guide, use JSONGenerator and JSONParser only when you have a specific reason for doing so. Otherwise, strive to use native serialization and deserialization, or use external tooling to generate parsing code for you (see below).

Generating Code with JSON2Apex

JSON2Apex is an open source Heroku application. It allows you to paste in JSON and generates corresponding Apex code to parse that JSON. The tool defaults to creating native classes for serialization and deserialization, and it automatically detects many situations where explicit parsing is required, generating JSONParser code to deserialize the JSON to native Apex objects.

JSON2Apex does not solve every problem related to using JSON, and generated code may require revision and tuning. However, it’s a good place to start an implementation, particularly for users who are just getting started with JSON in Apex.

Automate the App Lifecycle with CumulusCI

Recently, I’ve been delighted to visit the Salesforce Developer Groups in Denver, Colorado, where I live, and in Kitchener, Ontario, to talk about Salesforce.org’s Portable Automation toolchain for continuous integration and automation throughout the application lifecycle.

Thanks to Sudipta Deb, leader of the Kitchener group, I’m able to share video from my April 9 presentation, “Automate the App Lifecycle with CumulusCI”.

I’m so excited about the tools I get to work on, apply to nonprofit solutions, and share with the community. CumulusCI, MetaCI, Metecho, and MetaDeploy can transform the way development teams work on the Salesforce platform, and they’re all free and open source software! If your Salesforce community group is interested in learning about practicing continuous integration, application lifecycle management, automated testing, and more using the Portable Automation toolchain, please get in touch with me.

Below are some resources for getting into our tooling. I’m especially proud of our Trailhead content, for which I was a contributing writer and editor.

Future Talks

  • I’ll be visiting the Munich, Germany developer group to talk about continuous integration with CumulusCI on May 6 (link forthcoming).
  • I’ll also be speaking at the inaugural Virtual Dreamin’ conference about CumulusCI and Portable Automation.

Resources for Learning More

  • Salesforce.org Products and the Open Source Commons

On Writing Good Exception Handlers

This post is adapted from an answer I wrote on Salesforce Stack Exchange.

This post follows a previous discussion about what makes a bad exception handler. I’d like to talk a little bit about what good exception handling patterns look like and where in an application one ought to use them.

What it means to handle an exception is to take an exceptional situation - something bad and out of the ordinary happened - and allow the application to safely move back into an anticipated pathway of operation, preserving

  • The integrity of the data involved.
  • The experience of the user.
  • The outcome of the process, if possible.

Exceptional situations, as befits them, are often very case-specific, which can make it challenging to talk about principles that apply broadly. I like to approach it with two lenses in mind: one is understanding where we’re operating in the application relative to the user and the user’s understanding of what operations are taking place, and the other is situating the error relative to the commitments we must make to data integrity and user trust.

Let’s look at a couple of examples of how this plays out in different layers of a Salesforce implementation.

Triggers

Suppose we’re writing a trigger. The trigger takes data modified by the user, processes it, and makes updates elsewhere. It performs DML and does not use partial-success methods, such as Database.update(records, false). This means that failures will throw a DmlException. (If we did use partial-success methods, the same principles apply; they just play out differently because errors return to us in Result objects instead of exceptions.)

Here, we have to answer at least two critical questions:

  • Are the failures we encounter amenable to being fixed?
  • What is the nature of the data manipulation we’re doing? If it fails, does that mean that the entire operation (including the change the user made) is invalid? That is, if we allow the user changes to go through without the work we’re doing, have we harmed the integrity of the user data?

These questions determine how we’ll respond to the exception.

If we know a particular exception can be thrown in a way that we can fix, our handler should just fix it and try again. That would be a genuine “handling” of the exception. In Apex, where exceptions aren’t typically used as flow control, this situation is somewhat less common than in a language like Python, but it does occur. For example, if we’re building a Queueable Apex class that’s designed to gain exclusive access to a particular record using a FOR UPDATE clause, we might catch a QueryException (indicating that we weren’t able to obtain the lock) and handle that exception by chaining into another Queueable, allowing us to successfully complete processing once the record becomes available.
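
A minimal sketch of that retry pattern might look like the following (the class name is hypothetical, and production code would cap the number of retries rather than chaining indefinitely):

public class RecordLocker implements Queueable {
	private Id recordId;

	public RecordLocker(Id recordId) {
		this.recordId = recordId;
	}

	public void execute(QueueableContext ctx) {
		try {
			// FOR UPDATE throws a QueryException if the lock can't be obtained.
			Account acct = [SELECT Id, Name FROM Account WHERE Id = :recordId FOR UPDATE];
			// ... process the record while holding the lock ...
		} catch (QueryException e) {
			// Handle the exception by chaining into another attempt.
			System.enqueueJob(new RecordLocker(recordId));
		}
	}
}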

But in most cases, that’s not our situation when building in Apex. It’s the second question, about data integrity, that tends to be determinative of the appropriate implementation pattern, and it’s why I advocate for eschewing exception handlers in many cases.

The most important job our code has is not to damage the integrity of the user’s data. For that reason, in most cases where an exception is related to data manipulation, I advocate for not catching it in backend code at all unless it can be meaningfully handled. Otherwise, it’s best to let higher-level functionality (below) catch it, or allow the exception to remain unhandled and to cause the whole transaction to be rolled back, preserving data integrity.

To make this concrete: suppose we’re building a trigger whose job is to update the dollar value of an Opportunity when the user updates a related Payment. Our Opportunity update might throw a DmlException; what do we do?

Ask the questions: Can we fix the problem in Apex alone? No, we can’t.

If we let the Opportunity update fail while the Payment update succeeds, do we lose data integrity? Yes. The data will be wrong, and invariants that our business users rely upon will be violated.

Let the exception be raised and dealt with at a higher level, or allowed to cause a rollback.

ἐὰν μὴ ἔλπηται ἀνέλπιστον, οὐκ ἐξευρήσει, ἀνεξερεύνητον ἐὸν καὶ ἄπορον
Should one not expect the unexpected, one shan’t find it, as it is hard-sought and trackless.
— Heraclitus, Fragment 18

Non-Critical Backend Functionality

There are other cases where we’ll want to catch, log, and suppress an exception. Take for example code that sends out emails in response to data changes (I’ll save for another time why I think that’s a terrible pattern). Again, we’ll look to the questions above:

  • Can we fix the problem? No.
  • Does the problem impact data integrity if we let it go forward? Also no.

So here is a situation where it might make sense to wrap the sending code in a try/catch block, and record email-related exceptions using a logging framework so that system administrators can review and act upon them. Then, we don’t re-raise the exception - we consume it and allow the transaction to continue.

We probably don’t want to block a Case update because some User in the system has a bad email address!
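
A sketch of that catch-log-and-continue approach (Logger here stands in for whatever logging framework you use, and the recipient is illustrative):

Id recipientId = UserInfo.getUserId(); // stand-in recipient for illustration
Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setTargetObjectId(recipientId);
mail.setSaveAsActivity(false);
mail.setSubject('Record updated');
mail.setPlainTextBody('A record you follow was updated.');

try {
	Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
} catch (EmailException e) {
	// Log and consume: a bad email address shouldn't block the transaction.
	Logger.error(e); // hypothetical logging framework
}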

User-Facing Functionality

Now, turn the page to the frontend - Visualforce and Lightning. Here, we’re building in the controller layer, mediating between user input and the database.

We present a button that allows the user to perform some complex operation that can throw multiple species of exceptions. What’s our handling strategy?

Here, it is much more common, and even preferable, to use broader catch clauses that don’t fix the error, but do handle the exception in the sense of returning the application to a safe operating path. They do this by performing an explicit rollback to preserve data integrity and then surfacing a friendly error message to the user - an error message that helps to contextualize what may be a quite low-level exception.

For example, in Visualforce, you might do something like this:

Database.Savepoint sp = Database.setSavepoint();
try {
    doSomeVeryComplexOperation(myInputData);
} catch (Exception e) { // Never otherwise catch the generic `Exception`!
    Database.rollback(sp); // Preserve integrity of the database.
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'An exception happened while trying to execute the operation.'));
}

That’s friendly to the user, applying that lens of understanding how they scope an operation that may fail or succeed: it shows them that the higher-level, semantic operation they attempted failed (and we might want to include the actual failure message too to give them a shot at fixing it, if we do not otherwise log the failure for the system administrators). But it’s also friendly to the database, because we make sure our handling of the exception doesn’t impact data integrity.

Even better would be to be specific about the failure using multiple catch blocks (where applicable):

Database.Savepoint sp = Database.setSavepoint();
try {
    doSomeVeryComplexOperation(myInputData);
} catch (DmlException e) {
    Database.rollback(sp);
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'Unable to save the data. The following error occurred: ' + e.getMessage()));
} catch (CalloutException e) {
    Database.rollback(sp);
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'We could not reach the remote system. Please try again in an hour.'));
}

In Lightning (Aura) controllers, we’d re-throw an AuraHandledException instead of using ApexPages.addMessage(), but the same structural principles apply to how we think about these handlers.
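
In an Aura-enabled controller, the equivalent shape might look like this (the method and operation names are assumptions carried over from the Visualforce example):

@AuraEnabled
public static void performOperation(String inputData) {
    Database.Savepoint sp = Database.setSavepoint();
    try {
        doSomeVeryComplexOperation(inputData);
    } catch (Exception e) {
        Database.rollback(sp); // Preserve integrity of the database.
        // AuraHandledException surfaces a friendly message to the component.
        throw new AuraHandledException('An error occurred: ' + e.getMessage());
    }
}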

The One Thing Not To Do

This:

try {
    // do stuff
} catch (Exception e) {
    System.debug(e);
}

I’ve written previously about what a dangerous and bad practice this is. To put it in the context of this discussion, let’s consider how we’d have to answer the questions above to make this pattern a good response.

We’d want to swallow an exception like this only when:

  • We cannot fix the error.
  • What we’re doing has no impact on data integrity.

and

  • No one is going to notice or care that the functionality isn’t working (since we’re not telling anyone).

If these three statements are all true, I suspect a design fault in the system!