"What's the best translation of...?"

I’ve launched a new app, Bibliothekai, to help answer this question.

I’ve been wanting to build this for many years, stemming from my time as a desk librarian hearing the question “What’s the best translation of X?” all the time, and my delight in collecting unusual translations of Homer.

Bibliothekai is my contribution towards an answer. Bibliothekai currently tracks over 900 translations of more than 275 ancient source texts. Coverage of extant translations is nearly complete for Homer (20th cent. and later), Herodotus, and Thucydides, and growing for authors from Plato to Virgil to Apollonius (of Rhodes and of Perga!). There’s much, much more to do, though.

Bibliothekai makes it easy to sort and filter translations (like those of the Iliad) for specific qualities and resources, to evaluate translations through professional and user reviews, and to compare selected translations with one another and with the original source text. For example, here’s a side-by-side comparison of two prominent translations of the Iliad, those of Richmond Lattimore and Robert Fagles, or one can compare the Pamela Mensch translation of Herodotus with the Greek.

Bibliothekai is built on a Django backend, with a hybrid frontend using both Django templating and open source Lightning Web Components. I’ll be writing more about how the site’s been built in the coming weeks:

  • Adding LWCs to a templated Django site running on Heroku.
  • Creating your own LWC base components and using Bootstrap styling with open source LWCs.
  • Building custom wire adapters to source data from Django Rest Framework and Graphene-Django for GraphQL.
  • GraphQL integrations with Lightning Web Components.

Bibliothekai (which means “bookcases” in Ancient Greek) has been a fantastic opportunity to build something I’ve been passionate about for a long time, while teaching myself new technologies. I’m excited to share more about how it’s built and the new features I’m working on.

Working with JSON in Apex: A Primer

This post is adapted from a community wiki I created for Salesforce Stack Exchange.

Apex provides multiple routes to achieving JSON serialization and deserialization of data structures. This post summarizes use cases and capabilities of untyped deserialization, typed (de)serialization, manual implementations using JSONGenerator and JSONParser, and tools available to help support these uses. The objective is to provide an introduction and overview of routes to working effectively with JSON in Apex, and links to other resources.


Apex can serialize and deserialize JSON to strongly-typed Apex classes and also to generic collections like Map<String, Object> and List<Object>. In most cases, it’s preferable to define Apex classes that represent data structures and utilize typed serialization and deserialization with JSON.serialize()/JSON.deserialize(). However, some use cases require applying untyped deserialization with JSON.deserializeUntyped().

The JSONGenerator and JSONParser classes are available for manual implementations and should be used only where automatic (de)serialization is not practicable, such as when keys in JSON are reserved words in Apex, or when low-level access is required.

The key documentation references are the JSON class in the Apex Developer Guide and the section JSON Support. Other relevant documentation is linked from those pages.

Complex Types in Apex and JSON

JSON offers maps (or objects) and lists as its complex types. JSON lists map to Apex List objects. JSON objects can map to either Apex classes, with keys mapping to instance variables, or Apex Map objects. Apex classes and collections can be intermixed freely to construct the right data structures for any particular JSON objective.

Throughout this post, we’ll use the following JSON as an example:

{
	"errors": [ "Data failed validation rules" ],
	"message": "Please edit and retry",
	"details": {
		"record": "001000000000001",
		"record_type": "Account"
	}
}

This JSON includes two levels of nested objects, as well as a list of primitive values.

Typed Serialization with JSON.serialize() and JSON.deserialize()

The methods JSON.serialize() and JSON.deserialize() convert between JSON and typed Apex values. When using JSON.deserialize(), you must specify the type of value you expect the JSON to yield, and Apex will attempt to deserialize to that type. JSON.serialize() accepts both Apex collections and objects, in any combination that’s convertible to legal JSON.

These methods are particularly useful when converting JSON to and from Apex classes, which is in most circumstances the preferred implementation pattern. The JSON example above can be represented with the following Apex class:

public class Example {
	public List<String> errors;
	public String message;

	public class ExampleDetail {
		public Id record;
		public String record_type;
	}

	public ExampleDetail details;
}

To parse JSON into an Example instance, execute

Example ex = (Example)JSON.deserialize(jsonString, Example.class);

Alternately, to convert an Example instance into JSON, execute

String jsonString = JSON.serialize(ex);

Note that nested JSON objects are modeled with one Apex class per level of structure. It’s not required for these classes to be inner classes, but it is a common implementation pattern. Apex only allows one level of nesting for inner classes, so deeply-nested JSON structures often convert to Apex classes with all levels of structure defined in inner classes at the top level.

JSON.serialize() and JSON.deserialize() can be used with Apex collections and classes in combination to represent complex JSON data structures. For example, JSON that stores Example instances as the values for higher-level keys:

{
	"first": { /* Example instance */ },
	"second": { /* Example instance */ },
	/* ... and so on... */
}

can be serialized from, and deserialized to, a Map<String, Example> value in Apex.
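
For instance, assuming jsonString holds the map-shaped JSON above, a minimal sketch:

Map<String, Example> examples = (Map<String, Example>)JSON.deserialize(
    jsonString,
    Map<String, Example>.class
);
Example first = examples.get('first');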

For more depth on typed serialization and deserialization, review the JSON class documentation. Options are available for:

  • Suppression of null values
  • Pretty-printing generated JSON
  • Strict deserialization, which fails on unexpected attributes
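
As a sketch of those options, using the Example class and ex instance from above:

// Suppress null values during serialization:
String compact = JSON.serialize(ex, true);

// Pretty-print generated JSON:
String pretty = JSON.serializePretty(ex);

// Strict deserialization throws an exception if the JSON
// contains attributes not defined on the Apex class:
Example strict = (Example)JSON.deserializeStrict(jsonString, Example.class);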

Untyped Deserialization with JSON.deserializeUntyped()

In some situations, it’s most beneficial to deserialize JSON into Apex collections of primitive values, rather than into strongly-typed Apex classes. For example, this can be a valuable approach when the structure of the JSON may change in ways that aren’t compatible with typed deserialization, or which would require features that Apex does not offer like algebraic or union types.

Using the JSON.deserializeUntyped() method yields an Object value, because Apex doesn’t know at compile time what type of value the JSON will produce. It’s necessary when using this method to typecast values pervasively.

Take, for example, this JSON, which comes in multiple variants tagged by a "scope" value:

{
	"scope": "Accounts",
	"data": {
		"payable": 100000,
		"receivable": 40000
	}
}

{
	"scope": {
		"division": "Sales",
		"organization": "International"
	},
	"data": {
		"closed": 400000
	}
}
JSON input that varies in this way cannot be handled with strongly-typed Apex classes because its structure is not uniform. The values for the keys scope and data have different types.

This kind of JSON structure can be deserialized using JSON.deserializeUntyped(). That method returns an Object, an untyped value whose actual type at runtime will reflect the structure of the JSON. In this case, that type would be Map<String, Object>, because the top level of our JSON is an object. We could deserialize this JSON via

Map<String, Object> result = (Map<String, Object>)JSON.deserializeUntyped(jsonString);

The untyped nature of the value we get in return cascades throughout the structure, because Apex doesn’t know the compile-time type of any of the values (which may, as seen above, be heterogeneous) in this JSON object.

As a result, to access nested values, we must write defensive code that inspects values and typecasts at each level. The example above will throw a TypeException if the resulting type is not what is expected.

To access the data for the first element in the above JSON, we might do something like this:

Object result = JSON.deserializeUntyped(jsonString);

if (result instanceof Map<String, Object>) {
    Map<String, Object> resultMap = (Map<String, Object>)result;
    if (resultMap.get('scope') == 'Accounts' &&
        resultMap.get('data') instanceof Map<String, Object>) {
        Map<String, Object> data = (Map<String, Object>)resultMap.get('data');
        if (data.get('payable') instanceof Integer) {
            Integer payable = (Integer)data.get('payable');
        } else {
            // handle error
        }
    } else {
        // handle error
    }
} else {
    // handle error
}

While there are other ways of structuring such code, including catching JSONException and TypeException, the need to be defensive is a constant. Code that fails to be defensive while working with untyped values is vulnerable to JSON changes that produce exceptions and failure modes that won’t manifest in many testing practices. Common exceptions include NullPointerException, when carelessly accessing nested values, and TypeException, when casting a value to the wrong type.

Manual Implementation with JSONGenerator and JSONParser

The JSONGenerator and JSONParser classes allow your application to manually construct and parse JSON.

Using these classes entails writing explicit code to handle each element of the JSON. Using JSONGenerator and JSONParser typically yields much more complex (and much longer) code than using the built-in serialization and deserialization tools. However, it may be required in some specific applications. For example, JSON that includes Apex reserved words as keys may be handled using these classes, but cannot be deserialized to native classes because reserved words (like type and class) cannot be used as identifiers.
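
For a sketch of the manual approach, suppose incoming JSON includes a key named type (a reserved word). A JSONParser loop can read its value into a legally-named variable:

JSONParser parser = JSON.createParser(jsonString);
String typeValue;
while (parser.nextToken() != null) {
    if (parser.getCurrentToken() == JSONToken.FIELD_NAME && parser.getText() == 'type') {
        parser.nextToken(); // advance from the field name to its value
        typeValue = parser.getText();
    }
}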

As a general guide, use JSONGenerator and JSONParser only when you have a specific reason for doing so. Otherwise, strive to use native serialization and deserialization, or use external tooling to generate parsing code for you (see below).

Generating Code with JSON2Apex

JSON2Apex is an open source Heroku application. JSON2Apex allows you to paste in JSON and generates corresponding Apex code to parse that JSON. The tool defaults to creating native classes for serialization and deserialization. It automatically detects many situations where explicit parsing is required and generates JSONParser code to deserialize JSON to native Apex objects.

JSON2Apex does not solve every problem related to using JSON, and generated code may require revision and tuning. However, it’s a good place to start an implementation, particularly for users who are just getting started with JSON in Apex.

Automate the App Lifecycle with CumulusCI

Recently, I’ve been delighted to visit the Salesforce Developer Groups in Denver, Colorado, where I live, and in Kitchener, Ontario, to talk about Salesforce.org’s Portable Automation toolchain for continuous integration and automation throughout the application lifecycle.

Thanks to Sudipta Deb, leader of the Kitchener group, I’m able to share video from “Automate the App Lifecycle with CumulusCI” on April 9.

I’m so excited about the tools I get to work on, apply to nonprofit solutions, and share with the community. CumulusCI, MetaCI, Metecho, and MetaDeploy can transform the way development teams work on the Salesforce platform, and they’re all free and open source software! If your Salesforce community group is interested in learning about practicing continuous integration, application lifecycle management, automated testing, and more using the Portable Automation toolchain, please get in touch with me.

Below are some resources for getting into our tooling. I’m especially proud of our Trailhead content, for which I was a contributing writer and editor.

Future Talks

  • I’ll be visiting the Munich, Germany developer group to talk about continuous integration with CumulusCI on May 6. (Link forthcoming).
  • I’ll also be speaking at the inaugural Virtual Dreamin’ conference about CumulusCI and Portable Automation.

Resources for Learning More

Salesforce.org Products and the Open Source Commons

On Writing Good Exception Handlers

This post is adapted from an answer I wrote on Salesforce Stack Exchange.

This post follows a previous discussion about what makes a bad exception handler. I’d like to talk a little bit about what good exception handling patterns look like and where in an application one ought to use them.

What it means to handle an exception is to take an exceptional situation - something bad and out of the ordinary happened - and allow the application to safely move back into an anticipated pathway of operation, preserving

  • The integrity of the data involved.
  • The experience of the user.
  • The outcome of the process, if possible.

Exceptional situations, as befits them, are often very case-specific, which can make it challenging to talk about principles that apply broadly. I like to approach it with two lenses in mind: one is understanding where we’re operating in the application relative to the user and the user’s understanding of what operations are taking place, and the other is situating the error relative to the commitments we must make to data integrity and user trust.

Let’s look at a couple of examples of how this plays out in different layers of a Salesforce implementation.


Critical Backend Functionality

Suppose we’re writing a trigger. The trigger takes data modified by the user, processes it, and makes updates elsewhere. It performs DML and does not use partial-success methods, such as Database.update(records, false). This means that failures will throw a DmlException. (If we did use partial-success methods, the same principles would apply; they’d just play out differently, because errors return to us in Result objects instead of exceptions.)

Here, we have to answer at least two critical questions:

  • Are the failures we encounter amenable to being fixed?
  • What is the nature of the data manipulation we’re doing? If it fails, does that mean that the entire operation (including the change the user made) is invalid? That is, if we allow the user changes to go through without the work we’re doing, have we harmed the integrity of the user data?

These questions determine how we’ll respond to the exception.

If we know a particular exception can be thrown in a way that we can fix, our handler should just fix it and try again. That would be a genuine “handling” of the exception. In Apex, where exceptions aren’t typically used as flow control, this situation is somewhat less common than in a language like Python, but it does occur. For example, if we’re building a Queueable Apex class that’s designed to gain exclusive access to a particular record using a FOR UPDATE clause, we might catch a QueryException (indicating that we weren’t able to obtain the lock) and handle that exception by chaining into another Queueable, allowing us to successfully complete processing once the record becomes available.
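
A minimal sketch of that pattern (class and field names are hypothetical):

public class LockingQueueable implements Queueable {
    private Id recordId;

    public LockingQueueable(Id recordId) {
        this.recordId = recordId;
    }

    public void execute(QueueableContext context) {
        try {
            Account acct = [SELECT Id FROM Account WHERE Id = :recordId FOR UPDATE];
            // ... process the locked record ...
        } catch (QueryException e) {
            // We couldn't obtain the lock; chain into a new Queueable to retry
            // once the record becomes available.
            System.enqueueJob(new LockingQueueable(recordId));
        }
    }
}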

But in most cases, that’s not our situation when building in Apex. It’s the second question, about data integrity, that tends to be determinative of the appropriate implementation pattern, and it’s why I advocate for eschewing exception handlers in many cases.

The most important job our code has is not to damage the integrity of the user’s data. For that reason, in most cases where an exception is related to data manipulation, I advocate for not catching it in backend code at all unless it can be meaningfully handled. Otherwise, it’s best to let higher-level functionality (below) catch it, or allow the exception to remain unhandled and to cause the whole transaction to be rolled back, preserving data integrity.

To make this concrete: suppose we’re building a trigger whose job is to update the dollar value of an Opportunity when the user updates a related Payment. Our Opportunity update might throw a DmlException; what do we do?

Ask the questions: Can we fix the problem in Apex alone? No, we can’t.

If we let the Opportunity update fail while the Payment update succeeds, do we lose data integrity? Yes. The data will be wrong, and invariants that our business users rely upon will be violated.

Let the exception be raised and dealt with at a higher level, or allowed to cause a rollback.

ἐὰν μὴ ἔλπηται ἀνέλπιστον, οὐκ ἐξευρήσει, ἀνεξερεύνητον ἐὸν καὶ ἄπορον
Should one not expect the unexpected, one shan’t find it, as it is hard-sought and trackless.
— Heraclitus, Fragment 18

Non-Critical Backend Functionality

There are other cases where we’ll want to catch, log, and suppress an exception. Take for example code that sends out emails in response to data changes (I’ll save for another time why I think that’s a terrible pattern). Again, we’ll look to the questions above:

  • Can we fix the problem? No.
  • Does the problem impact data integrity if we let it go forward? Also no.

So here is a situation where it might make sense to wrap the sending code in a try/catch block, and record email-related exceptions using a logging framework so that system administrators can review and act upon them. Then, we don’t re-raise the exception - we consume it and allow the transaction to continue.
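
A sketch of that shape, assuming a hypothetical Logger class supplied by your logging framework:

Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
mail.setTargetObjectId(recipientUserId); // hypothetical recipient
mail.setSaveAsActivity(false);
mail.setSubject('Record updated');
mail.setPlainTextBody('A record you follow was updated.');

try {
    Messaging.sendEmail(new List<Messaging.SingleEmailMessage>{ mail });
} catch (EmailException e) {
    // Record the failure for system administrators, then consume the
    // exception so the user's transaction continues.
    Logger.error('Notification email failed: ' + e.getMessage());
}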

We probably don’t want to block a Case update because some User in the system has a bad email address!

User-Facing Functionality

Now, turn the page to the frontend - Visualforce and Lightning. Here, we’re building in the controller layer, mediating between user input and the database.

We present a button that allows the user to perform some complex operation that can throw multiple species of exceptions. What’s our handling strategy?

Here, it is much more common, and even preferable, to use broader catch clauses that don’t fix the error, but do handle the exception in the sense of returning the application to a safe operating path. They do this by performing an explicit rollback to preserve data integrity and then surfacing a friendly error message to the user - an error message that helps to contextualize what may be a quite low-level exception.

For example, in Visualforce, you might do something like this:

Database.Savepoint sp = Database.setSavepoint();
try {
    // ... the complex operation ...
} catch (Exception e) { // Never otherwise catch the generic `Exception`!
    Database.rollback(sp); // Preserve integrity of the database.
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'An exception happened while trying to execute the operation.'));
}

That’s friendly to the user, applying that lens of understanding how they scope an operation that may fail or succeed: it shows them that the higher-level, semantic operation they attempted failed (and we might want to include the actual failure message too to give them a shot at fixing it, if we do not otherwise log the failure for the system administrators). But it’s also friendly to the database, because we make sure our handling of the exception doesn’t impact data integrity.

Even better would be to be specific about the failure using multiple catch blocks (where applicable):

Database.Savepoint sp = Database.setSavepoint();
try {
    // ... the complex operation ...
} catch (DmlException e) {
    Database.rollback(sp);
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'Unable to save the data. The following error occurred: ' + e.getMessage()));
} catch (CalloutException e) {
    Database.rollback(sp);
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'We could not reach the remote system. Please try again in an hour.'));
}

In Lightning (Aura) controllers, we’d re-throw an AuraHandledException instead of using ApexPages.addMessage(), but the same structural principles apply to how we think about these handlers.
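
In that context, the pattern above might be sketched as follows (the method name is hypothetical):

@AuraEnabled
public static void performOperation() {
    Database.Savepoint sp = Database.setSavepoint();
    try {
        // ... the complex operation ...
    } catch (DmlException e) {
        Database.rollback(sp); // Preserve integrity of the database.
        throw new AuraHandledException('Unable to save the data: ' + e.getMessage());
    }
}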

The One Thing Not To Do


try {
    // do stuff
} catch (Exception e) {
}

I’ve written previously about what a dangerous and bad practice this is. To put it in the context of this discussion, let’s consider how we’d have to answer the questions above to make this pattern a good response.

You’d want to swallow an exception like this when:

  • We cannot fix the error.
  • What we’re doing has no impact on data integrity.
  • No one is going to notice or care that the functionality isn’t working (since we’re not telling anyone).

If these three statements are all true, I suspect a design fault in the system!

On Not Writing Bad Exception Handlers

Unlike some languages (such as Python) where exceptions have a greater role in flow control, in Apex, exceptions generally are exceptional: they connote some relatively uncommon bad state in the application that prevents the normal path of execution from continuing without some special handling. An exception is a message from the application to you, and one that you ignore at your, and your users’, peril!

Because exceptions are exceptional in Apex, they’re often frustrating companions early on in the lifecycle of an application, and bugbears that haunt newer developers who haven’t yet internalized the patterns that cause them. It’s easy to react to those situations by writing defensive Apex: code designed less to behave as expected under the myriad of conditions encountered in production and more to make the compiler and the runtime be quiet.

This post aims to diagnose several of these patterns (ranging from the unidiomatic and confusing through to those that cause failure and loss of data integrity), draw out their pathologies, and help developers understand why they’re not quite right, arming them to write better Apex and better confront failures that do occur.

Exception Handling over Logic

In Python, this kind of logic is fairly normal:

try:
    value = my_dict[key]
except KeyError:
    value = None  # handle the missing key

That is, we try something first, catch a (specific) exception if thrown, and either continue execution or perform some other logic to handle the failure.

In Apex, this kind of structure can work, but isn’t very idiomatic. Rather than, for example, doing

try {
    String accountName = someMap.get(c.AccountId).Name; // get() may return null
} catch (NullPointerException e) {
    // do nothing
}
check the state before executing the risky operation - here, accessing a property of a value (the return value of Map#get()) that may be null:

if (someMap.containsKey(c.AccountId)) {
    String accountName = someMap.get(c.AccountId).Name;
}

It’s often easier to write unit tests for Apex code that’s built this way. And, since Apex does not use exceptions as pervasively for flow control as some languages do, logic of this form is more consistent with the remainder of the application’s code.

Unneeded Exception Handlers

It’s easy to get overzealous about handling exceptions in an attempt either to be proactive or simply to get some troublesome code across the finish line. Doing so, however, can actually worsen code. Take these lines, for example:

List<Account> accountList;

try {
    accountList = [SELECT Id, Name FROM Account];
} catch (Exception e) {
    accountList = new List<Account>();
}

At first blush, the handler looks like it could make sense. It’s providing a default value for the following logic if the SOQL query should fail, which is what a good exception handler might in fact do.

But the problem here is that the SOQL query shown cannot throw any catchable exception. It won’t throw a QueryException regardless of data volume, because we’re assigning to a List<Account> rather than an Account sObject, and we’re not using any of the other SOQL clauses (such as FOR UPDATE or WITH SECURITY_ENFORCED) that might cause an exception to be thrown. It can’t throw a NullPointerException. And if a LimitException were thrown, we could not catch it anyway.

Using this pattern worsens our code in several ways:

  1. It increases the complexity of the code while decreasing readability.
  2. It suggests that error handling is taking place, but no error is actually possible.
  3. It reduces unit test code coverage by introducing logic paths that cannot be executed.

(3) is the most pernicious of these effects: the author of this code will never be able to cover the exception handler in a unit test, because the SOQL query cannot be made to throw an exception. As a result, their code coverage will go down, and they’ll leave the impression (per (2)) that there are genuine logic paths not being evaluated by their tests.

The solution, of course, is simple: remove the useless exception handler.

Overbroad Exception Handlers

A similar mistake, but one with a trickier downside, is writing genuine exception handlers that are overbroad in their catch blocks. Take the above example with a twist: we add a couple of reasons why the code could throw a real exception.

List<Account> accountList;

try {
    accountList = [
        SELECT Id, Name, Parent.Name
        FROM Account
        WHERE Name = :someAccountName
        FOR UPDATE
    ];

    if (accountList[0].Parent.Name.contains('ACME')) {
        accountList[0].Description = 'ACME Subsidiary';
    }

    update accountList;
} catch (Exception e) {
    throw new AuraHandledException('Could not acquire a lock on this account');
}

This code can throw two specific exceptions, aside from LimitException: QueryException, if it tries and fails to obtain a lock on the records that are responsive to the query (for FOR UPDATE), and NullPointerException, if the first responsive Account does not have ParentId populated.

Will this exception handler catch those exceptions? Yes, it will. But it comes with some risks to do so, and it’s a better pattern to catch the specific exception you know may be thrown. Catching specific exceptions that you know about prevents your code from silently hiding, or incorrectly handling, extra exceptions you don’t know about. In the above example, the exception handler is written to make a user aware of the QueryException, but it will silently mask the NullPointerException. Other forms of this issue might handle the expected exception but incorrectly handle the other exception thrown, resulting in a difficult-to-debug fault in the application.

There are some contexts where fairly broad exception handlers are desirable. In a controller for a Lightning component (like this example) or Visualforce page, we might wish to catch any species of exception and present a failure message to the user. Even there, however, it’s best to keep exception handlers as specific as possible. For example, an update DML statement should be wrapped in a catch (DmlException e) handler, rather than enclosing a much larger block of code in a fully-generic catch (Exception e) block.

Keeping exception handlers focused also makes it easier to define the relevant failure modes and logical paths, and can facilitate construction of unit tests. Since tests that exercise exception handlers are often tricky to build in the first place, it’s a net gain to write code that’s as testable as possible - even if writing very broad exception handlers can sometimes make it easier to force an exception to be thrown in test context.

… τὰς μὲν ἐλλείπειν τὰς δ᾽ ὑπερβάλλειν τοῦ δέοντος ἔν τε τοῖς πάθεσι καὶ ἐν ταῖς πράξεσι, τὴν δ᾽ ἀρετὴν τὸ μέσον καὶ εὑρίσκειν καὶ αἱρεῖσθαι
Some vices fall short, while others overreach what is needed, in both feelings and in deeds, but virtue both finds and selects the mean.
— Aristotle, Nicomachean Ethics 1107a

Failing to Roll Back Side Effects

Unhandled exceptions cause Salesforce to roll back the entire transaction. This rollback ensures that inconsistent data are not committed to the database. Handling an exception prevents this rollback from occurring - but code that handles the exception is then responsible for maintaining database integrity.

Here’s a pathological example:

public static void commitRecords(List<Account> accounts, List<Opportunity> opportunities) {
    try {
        insert accounts;
        insert opportunities;
    } catch (DmlException e) {
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'Unable to save the records'));
    }
}

This code attempts to do the right thing. It handles a single, specific exception across a narrow scope of operations, and it presents a message to the user (in this case, in a Visualforce context) to indicate what happened.

But there’s a subtle issue here. If the DmlException is thrown by the second DML operation (insert opportunities), the Accounts already inserted will not be rolled back. They’ll remain committed to the database, even though the user was told the operation failed, and they’ll be inserted again if the user retries. Depending on the implementation of the Visualforce page, other exceptions could occur because the stored sObjects are in an unexpected state.

The solution is to use a savepoint/rollback structure to maintain database integrity, since we’re not allowing Salesforce to roll back the entire transaction:

public static void commitRecords(List<Account> accounts, List<Opportunity> opportunities) {
    Database.Savepoint sp = Database.setSavepoint();
    try {
        insert accounts;
        insert opportunities;
    } catch (DmlException e) {
        Database.rollback(sp); // Undo the Account insert if the Opportunity insert fails.
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.FATAL, 'Unable to save the records'));
    }
}

Now, our handler properly maintains database integrity, and does not allow partial results to be committed.

Swallowing Exceptions

This pattern is far too common, and it’s pure poison.

try {
    // Do some complex code here
} catch (Exception e) {
}

Exceptions in Apex, again, connote exceptional circumstances: something went wrong. The state of the application or the local context is no longer in a cogent state, and can’t continue normally. This usually means one of three things, exclusive of LimitException (which we can’t catch anyway):

  1. The logic is incorrect.
  2. The logic fails to handle a potential data state.
  3. The logic fails to guard against an issue in some external component upon which it relies.

When such exceptions occur, your code is no longer in a position to guarantee to the user that their data will be complete, consistent, and correct after it is saved to the database. Suppressing those exceptions by catching them and writing them to the debug log - where no one in production is likely to see them - allows the transaction to complete successfully, even though an unknown failure has occurred and the results can no longer be reasoned about.

That’s really, really bad. Swallowing exceptions can cause loss of data, corruption of data, desynchronization between Salesforce and integrated external systems, violation of expected invariants in your data, and more and more failures downstream as the application and its users continue to interact with the damaged data set.

Swallowing exceptions violates user trust and creates subtle, difficult-to-debug problems that often manifest in real-world usage but evade some unit tests - which may show a false positive because exceptions are suppressed! - and simple user acceptance testing. Don’t use this pattern.

A Brief Word on Writing Good Exception Handlers

We’ve seen four examples of patterns that create poor exception handlers. How can you recognize a good one?

My rubric is pretty straightforward. A good exception handler:

  1. Handles a specific exception that can be thrown by the code in its try block.
  2. Handles an exception that cannot be averted by reasonable logic, or which is the documented failure mode of a specific action.
  3. Handles the exception in such a way as to maintain the integrity of the transaction, using rollbacks as necessary.

Ultimately, it’s key to remember what exception handlers are made to do: they’re not to suppress or hide errors, but to provide a cogent logical path to either recovering from an error or protecting the database from its effects. All else follows from rigorous evaluation of this principle.