Apex Triggers in Salesforce: The Complete Developer Guide (2026)

If you’ve spent any real time building on Salesforce, you know that declarative tools only take you so far. Flows handle the straightforward stuff (Workflow Rules and Process Builder are retired at this point) — but when your business logic gets complex, stateful, or needs to interact with external systems mid-transaction, you end up in Apex. And inside Apex, triggers are where a lot of that work lives.

This guide covers everything you need to write, structure, and maintain Apex triggers at scale. It’s aimed at developers and architects who are past the basics — you’ve written triggers before, but you want a solid reference for patterns, frameworks, and the decisions that actually matter in production.


What are Apex Triggers?

An Apex trigger is a block of code that executes automatically before or after data manipulation language (DML) events on a Salesforce object — things like inserts, updates, deletes, and undeletes. Unlike scheduled jobs or platform events, triggers run synchronously within the same transaction as the originating DML operation.

The short version: when a record changes, your trigger code runs. Every time, automatically, as part of the same database transaction.

Triggers sit inside the Salesforce governor limit boundary, which means they share heap, CPU, and query limits with everything else running in that transaction. That detail shapes almost every architectural decision you’ll make.

[Figure: Salesforce Apex Trigger execution flow]

When to use triggers vs other automation

This question comes up constantly, and the honest answer is that declarative tools have gotten good enough that you should reach for them first. Flows in particular can handle field updates, related record creation, cross-object logic, and even some external callouts — without a deployment and without touching Apex.

That said, triggers are the right choice when you need:

  • Complex conditional logic that would turn a Flow into an unreadable nest of decision elements
  • Bulk processing guarantees — writing Apex specifically designed for sets of records
  • Callouts or platform event publishing that need precise transaction control
  • Access to the full set of Trigger context variables — Trigger.old, Trigger.new, and Trigger.oldMap — where record-triggered Flows only expose $Record and $Record__Prior
  • Performance-critical paths where Flow execution overhead matters

Practical rule: if you can reasonably do it in a Flow and it won’t become a maintenance nightmare, use the Flow. If you’re spending more time working around Flow limitations than building the feature, write the trigger.


Trigger Syntax and Structure

A trigger declaration tells Salesforce which object it applies to, which DML events fire it, and whether it runs before or after the records are saved to the database.

Basic syntax

Apex

trigger TriggerName on ObjectName (trigger_events) {
    // your logic here
}

A concrete example on the Account object:

Apex

trigger AccountTrigger on Account (before insert, before update, after insert, after update, before delete, after delete, after undelete) {
    AccountTriggerHandler.run();
}

Notice the trigger body itself is almost empty — one line. This is intentional, and it’s the handler pattern we’ll cover in detail later.

Trigger events

Salesforce gives you seven events to work with:

| Event | Timing | Common use case |
| --- | --- | --- |
| before insert | Before record saves to DB | Defaulting fields, validation |
| after insert | After record saves to DB | Creating related records |
| before update | Before record updates in DB | Field-level comparisons, validation |
| after update | After record updates in DB | Propagating changes to related records |
| before delete | Before record is deleted | Preventing deletion, cleanup |
| after delete | After record is deleted | Archiving, notifications |
| after undelete | After record is restored | Re-linking related records |

You don’t need to subscribe to all seven events in every trigger — only declare the ones your logic actually uses. Subscribing to events you don’t handle adds unnecessary overhead and makes the trigger harder to reason about.

Trigger context variables

Context variables are what make triggers useful. They give you access to the records involved in the current operation — both the new state and the old state — without requiring additional queries.

  • Trigger.new — a list of the new versions of records (available in insert, update, undelete)
  • Trigger.old — a list of the old versions of records (available in update, delete)
  • Trigger.newMap — a Map<Id, SObject> of new records — much faster for lookups by Id
  • Trigger.oldMap — a Map<Id, SObject> of old records — essential for detecting changes
  • Trigger.isInsert, Trigger.isUpdate, Trigger.isDelete, Trigger.isUndelete — Boolean flags for the current event
  • Trigger.isBefore, Trigger.isAfter — flags for the current phase
  • Trigger.size — count of records in the current batch

The combination of Trigger.newMap and Trigger.oldMap is how you detect whether a specific field actually changed:

Apex

// In an update context (Trigger.oldMap is null on insert)
for (Account acc : Trigger.new) {
    Account oldAcc = Trigger.oldMap.get(acc.Id);
    if (acc.Industry != oldAcc.Industry) {
        // Industry changed — do something
    }
}

See the official Salesforce documentation on trigger context variables for the full list.


Before vs After Triggers

The choice between before and after isn’t arbitrary — it has concrete consequences for what you can do and how expensive the operation is.

Before triggers

Before triggers fire before the record is committed to the database. This means:

  • You can modify Trigger.new directly — field changes take effect without an additional DML call
  • For inserts, records in Trigger.new don’t have an Id yet — they haven’t been saved to the database
  • You can throw a validation error via record.addError() and stop the entire transaction



Modifying Field Values in Before Insert / Before Update Triggers

One of the most practical uses of a before trigger is setting or modifying field values before a record hits the database. Because the trigger fires before the commit, any changes you make to Trigger.new records save automatically — no extra DML needed.

Here’s a real scenario: you want to auto-populate a Description field if the user left it blank, and normalize the Phone field format on every Account save.

Apex

trigger AccountTrigger on Account (before insert, before update) {
    for (Account acc : Trigger.new) {

        // Set a default description if none provided
        if (String.isBlank(acc.Description)) {
            acc.Description = 'No description provided. Please update.';
        }

        // Strip non-numeric characters from Phone before saving
        if (!String.isBlank(acc.Phone)) {
            acc.Phone = acc.Phone.replaceAll('[^0-9]', '');
        }
    }
}

A few things worth noting here. First, you’re modifying Trigger.new records directly — no update DML statement anywhere. That’s the whole point of before triggers. Second, this runs on both insert and update, so new records and edits both get the same treatment. If you only want the default description on new records, scope it with Trigger.isInsert:

Apex

trigger AccountTrigger on Account (before insert, before update) {
    for (Account acc : Trigger.new) {

        // Only default the description on brand new records
        if (Trigger.isInsert && String.isBlank(acc.Description)) {
            acc.Description = 'No description provided. Please update.';
        }

        // Normalize phone on both insert and update
        if (!String.isBlank(acc.Phone)) {
            acc.Phone = acc.Phone.replaceAll('[^0-9]', '');
        }
    }
}

For update triggers specifically, you’ll often want to check whether a field actually changed before acting on it. Reprocessing unchanged data wastes CPU and can cause subtle bugs. Use Trigger.oldMap to compare:

Apex

trigger AccountTrigger on Account (before insert, before update) {
    for (Account acc : Trigger.new) {

        // On update, only reformat Phone if it actually changed
        if (Trigger.isUpdate) {
            Account oldAcc = Trigger.oldMap.get(acc.Id);
            if (acc.Phone != oldAcc.Phone && !String.isBlank(acc.Phone)) {
                acc.Phone = acc.Phone.replaceAll('[^0-9]', '');
            }
        }

        // On insert, always normalize
        if (Trigger.isInsert && !String.isBlank(acc.Phone)) {
            acc.Phone = acc.Phone.replaceAll('[^0-9]', '');
        }
    }
}

Remember: Trigger.oldMap is only available in update and delete contexts. In a before insert trigger it is null, so calling a method on it throws a null pointer exception.


Using addError() to Block Record Insert or Update

addError() is how you stop a transaction in its tracks from inside a trigger. Call it on a record in Trigger.new and Salesforce rolls back the entire operation and surfaces your message to the user — in the UI, in the API response, and in Data Loader error logs.

Here’s a straightforward example: blocking an Account insert if the Industry field is blank, and preventing an update that would set AnnualRevenue to a negative number.

Apex

trigger AccountTrigger on Account (before insert, before update) {
    for (Account acc : Trigger.new) {

        // Block insert if Industry is not set
        if (Trigger.isInsert && String.isBlank(acc.Industry)) {
            acc.addError('Industry is required. Please select an industry before saving.');
        }

        // Block negative revenue on both insert and update
        if (acc.AnnualRevenue != null && acc.AnnualRevenue < 0) {
            acc.addError(
                acc.AnnualRevenue,
                'Annual Revenue cannot be negative. Enter a valid revenue amount.'
            );
        }
    }
}

Notice the second addError() call takes the field as the first argument — acc.AnnualRevenue. This is the field-level variant, and it highlights the specific field in the UI instead of showing a generic page-level error. Use it whenever the error is clearly tied to one field. Use the single-argument version for errors that involve multiple fields or broader logic.

Here’s a slightly more realistic example — preventing a Closed Won opportunity from being deleted, and blocking a stage change back to Prospecting once a deal has moved past Proposal:

Apex

trigger OpportunityTrigger on Opportunity (before update, before delete) {

    // Prevent deletion of Closed Won opportunities
    if (Trigger.isDelete) {
        for (Opportunity opp : Trigger.old) {
            if (opp.StageName == 'Closed Won') {
                opp.addError('Closed Won opportunities cannot be deleted. Archive them instead.');
            }
        }
    }

    // Prevent moving stage backwards past Proposal
    if (Trigger.isUpdate) {
        List<String> advancedStages = new List<String>{
            'Proposal/Price Quote', 'Value Proposition', 'Perception Analysis',
            'Id. Decision Makers', 'Negotiation/Review', 'Closed Won', 'Closed Lost'
        };

        for (Opportunity opp : Trigger.new) {
            Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
            Boolean wasAdvanced = advancedStages.contains(oldOpp.StageName);
            Boolean movingBack  = opp.StageName == 'Prospecting' || opp.StageName == 'Needs Analysis';

            if (wasAdvanced && movingBack) {
                opp.addError(
                    opp.StageName,
                    'You cannot move an opportunity back to ' + opp.StageName + ' once it has passed the Proposal stage.'
                );
            }
        }
    }
}

A couple of things to keep in mind with addError():

Error messages show up in the UI exactly as written, so write them like a human will read them. “Field required” is not helpful. “Industry is required. Please select an industry before saving.” is.

Calling it on any record in the batch rolls back the entire transaction in most contexts. In Salesforce APIs that support partial success (like the REST API with allOrNone=false), only the offending record fails and others can still save.

You can call addError() in after triggers too, and it still rolls back the transaction — but by that point the record has already been saved, so the rollback is messier and harder to reason about. Keep validation in before triggers.

Use before triggers for field defaulting, data normalization, and validation. If you’re setting a field value and that value needs to save with the record, do it in a before trigger. Doing it in an after trigger means a separate DML update, which costs a governor limit operation and fires triggers again.

After triggers

After triggers fire once the record has been saved to the database (the transaction itself can still roll back). Records in Trigger.new now have Ids. This matters because you often need those Ids to create related records or update junction objects.

  • Use after triggers when you need the record’s Id (creating child records, building relationships)
  • Use after triggers for callouts or platform event publishing
  • Use after triggers when your logic needs to query the freshly committed state

One thing to watch: in an after trigger, records in Trigger.new are read-only — you can’t modify them. To change a field you have to issue a separate DML update: build a new list of SObjects populated with just the Ids and the fields you’re changing, then call update. Touching only the fields you mean to change keeps you from inadvertently re-firing cascade logic on everything else.
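That after-trigger write-back can be sketched like this — Last_Synced__c is a hypothetical custom Datetime field used purely for illustration:

Apex

// In an after update handler: write back a tracking field via a separate DML call.
List<Account> accountsToUpdate = new List<Account>();
for (Account acc : Trigger.new) {
    // Build a fresh SObject with only the Id and the fields we intend to change,
    // so we don't re-save unrelated fields.
    accountsToUpdate.add(new Account(
        Id = acc.Id,
        Last_Synced__c = System.now() // hypothetical field
    ));
}
if (!accountsToUpdate.isEmpty()) {
    update accountsToUpdate; // note: this fires the Account trigger again — guard for recursion
}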

Rule of thumb: default to before triggers for record modifications, after triggers for everything else. If you find yourself writing a DML update on the same object type inside an after trigger, make sure you really can’t accomplish the same thing in before.

Performance considerations

Before triggers are cheaper by default for field modifications — no extra DML, no extra trigger execution. The gap matters at scale. If you’re running a data migration of 100,000 records and your logic does an unnecessary after-trigger DML update on each batch of 200, you’re consuming double the DML operations for no reason.


Trigger Best Practices

One trigger per object

This is the most widely cited Apex trigger rule, and it’s worth understanding why it exists rather than just following it.

When multiple triggers exist on the same object, Salesforce doesn’t guarantee their execution order. Two developers independently write a trigger on Account, each making assumptions about the starting field values — and now those assumptions are wrong depending on which trigger ran first. The behavior becomes non-deterministic and nearly impossible to debug.

One trigger per object, delegating to a handler class, gives you a single ordered entry point for all logic. You control the execution sequence. You can add conditions, short-circuits, and bypass flags in one place.

Bulkification

Governor limits exist per transaction, not per record. Salesforce can pass up to 200 records to a trigger in a single batch — and if your trigger makes a SOQL query for each one, you’ll hit the 100-query limit on batch 1.

This pattern will cause problems:

Apex

for (Contact con : Trigger.new) {
    Account acc = [SELECT Id, Name FROM Account WHERE Id = :con.AccountId]; // Query inside loop!
}

This is what bulkified code looks like:

Apex

Set<Id> accountIds = new Set<Id>();
for (Contact con : Trigger.new) {
    if (con.AccountId != null) accountIds.add(con.AccountId);
}

Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Id IN :accountIds]
);

for (Contact con : Trigger.new) {
    Account acc = accountMap.get(con.AccountId);
    // process each record using the pre-fetched map
}

The pattern is: collect Ids from the trigger set, query once into a Map, then loop and look up from the map. No queries inside loops.

Avoiding recursive triggers

When your trigger updates a record, that update can fire the same trigger again. If you’re not careful, you get an infinite loop — or more precisely, you hit the governor limit for recursive trigger depth (16 levels) and get an error.

The standard solution is a static boolean flag in a separate class:

Apex

public class TriggerRunOnce {
    public static Boolean hasRun = false;
}

Apex

// In your trigger handler:
if (!TriggerRunOnce.hasRun) {
    TriggerRunOnce.hasRun = true;
    // your logic
}

Be thoughtful about where you set this flag. If you set it too early, you might accidentally prevent legitimate re-executions. The right place is typically just before the DML operation that would cause re-entry.
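One common refinement on the boolean flag — a sketch, not the only valid approach — is tracking processed record Ids in a static Set instead. A single boolean can wrongly skip later 200-record batches of a large DML operation; an Id set only skips records that were genuinely already handled:

Apex

public class AccountTriggerGuard {
    // Ids already processed in this transaction; static state resets per transaction.
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only records not yet handled, and marks them as processed.
    public static List<Account> filterUnprocessed(List<Account> records) {
        List<Account> fresh = new List<Account>();
        for (Account acc : records) {
            if (!processedIds.contains(acc.Id)) {
                processedIds.add(acc.Id);
                fresh.add(acc);
            }
        }
        return fresh;
    }
}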

Governor limit considerations

A few limits that frequently bite trigger code:

  • 100 SOQL queries per transaction — query outside loops, use Trigger.newMap for lookups
  • 150 DML operations per transaction — accumulate all changes into lists, then do a single insert/update/delete per object type
  • 50,000 rows returned by SOQL — watch for unbounded queries in after triggers where related-record sets can be large
  • 6MB heap limit (12MB in async contexts) — avoid storing full SObject lists when you only need specific fields
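As a sketch of the DML-accumulation point above: gather every record you plan to write into one list per object type, then commit once at the end. (The Task subject here is illustrative.)

Apex

// Accumulate all Task inserts across the loop, then commit with a single DML call.
List<Task> tasksToInsert = new List<Task>();
for (Opportunity opp : Trigger.new) {
    if (opp.StageName == 'Closed Won') {
        tasksToInsert.add(new Task(
            WhatId = opp.Id,
            Subject = 'Kick off onboarding'
        ));
    }
}
// One DML statement regardless of batch size — 1 of your 150, not one per record.
if (!tasksToInsert.isEmpty()) {
    insert tasksToInsert;
}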

See the official Salesforce documentation on governor limits for the full list.


Trigger Handler Framework

Raw triggers — logic-heavy, monolithic blocks in the trigger body itself — are hard to test, harder to extend, and a headache for anyone who has to maintain them six months later.

The handler pattern solves this by keeping the trigger body thin (just an entry point) and putting all actual logic in a separate Apex class. That class is testable in isolation, can be mocked, and gives you a clear place to add new behavior without touching the trigger file itself.

The basic handler pattern

Apex

trigger AccountTrigger on Account (before insert, before update, after insert, after update) {
    AccountTriggerHandler handler = new AccountTriggerHandler();
    if (Trigger.isBefore) {
        if (Trigger.isInsert) handler.beforeInsert(Trigger.new);
        if (Trigger.isUpdate) handler.beforeUpdate(Trigger.new, Trigger.oldMap);
    } else if (Trigger.isAfter) {
        if (Trigger.isInsert) handler.afterInsert(Trigger.new);
        if (Trigger.isUpdate) handler.afterUpdate(Trigger.new, Trigger.oldMap);
    }
}

Apex

public class AccountTriggerHandler {
    public void beforeInsert(List<Account> newAccounts) {
        AccountService.setDefaultRating(newAccounts);
    }
    public void afterInsert(List<Account> newAccounts) {
        AccountService.createDefaultContacts(newAccounts);
    }
    // ... other methods
}

Separation of concerns

Notice the handler doesn’t contain the actual business logic — it delegates further to a service class (AccountService in the example above). This is intentional.

The handler knows about trigger context. The service class knows about the domain logic. Neither knows about the other’s concerns. When you need to call the same logic from a batch class or a REST endpoint, you call AccountService directly — the same tested, working code — without any trigger dependency.
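A minimal sketch of what AccountService.setDefaultRating could look like under this split — the 'Warm' default value is an assumption for illustration:

Apex

public class AccountService {
    // Pure domain logic: no Trigger.* references, so it's callable
    // from batch jobs, REST endpoints, and unit tests alike.
    public static void setDefaultRating(List<Account> accounts) {
        for (Account acc : accounts) {
            if (acc.Rating == null) {
                acc.Rating = 'Warm'; // assumed default, for illustration only
            }
        }
    }
}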

Framework options

If you want more structure, there are established open-source frameworks worth knowing:

  • fflib Apex Commons — provides a base TriggerHandler class with virtual methods for each event, bypass mechanisms, and unit of work patterns for DML accumulation
  • Apex Trigger Actions Framework — metadata-driven framework where trigger actions are configured in custom metadata, allowing non-developers to control execution order and enable/disable actions without deploying code
  • Kevin O’Hara’s Trigger Handler — lightweight base class pattern, widely adopted and easy to understand

For enterprise orgs, a metadata-driven approach is worth the upfront investment. Being able to disable a specific trigger action during a data migration without deploying code is genuinely useful.


Common Trigger Patterns

Field validation

Validation in triggers — rather than validation rules — makes sense when your logic depends on external data you’d otherwise need a SOQL query to retrieve, or when you need to validate relationships between fields across multiple records in the same DML batch.

Apex

for (Opportunity opp : Trigger.new) {
    if (opp.CloseDate < Date.today() && opp.StageName != 'Closed Lost') {
        opp.addError('Past close dates are only allowed for Closed Lost opportunities.');
    }
}

addError() on a before trigger prevents the record from saving and rolls back the transaction. It also works in after triggers and still rolls back — but by then the record has already been saved, so keep validation in before triggers where you can.

Related record creation

After-insert triggers are the standard place to create child records. Always do this in bulk — accumulate the records to insert in a list, then do a single DML insert at the end.

Apex

List<Contact> contactsToInsert = new List<Contact>();
for (Account acc : Trigger.new) {
    if (acc.Type == 'Partner') {
        contactsToInsert.add(new Contact(
            LastName = 'Partner Contact',
            AccountId = acc.Id,
            // Guard against a blank Website so we don't build an invalid email
            Email = String.isBlank(acc.Website) ? null : 'contact@' + acc.Website
        ));
    }
}
if (!contactsToInsert.isEmpty()) insert contactsToInsert;

Rollup calculations without Roll-Up Summary fields

Sometimes you need to recalculate a parent field when a child record changes — the same thing DLRS or a roll-up summary field would do, but with custom logic. The pattern: collect parent Ids from the trigger set, query parent records, compute the aggregate, update.

Keep the DML at the end. One update statement for all parent records, not one per loop iteration.
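Sketching that pattern for an after trigger on Contact — Contact_Count__c is a hypothetical custom number field on Account:

Apex

// after insert / after update / after delete on Contact
Set<Id> parentIds = new Set<Id>();
for (Contact con : (Trigger.isDelete ? Trigger.old : Trigger.new)) {
    if (con.AccountId != null) parentIds.add(con.AccountId);
}

// Zero out every affected parent, then overwrite with actual counts.
Map<Id, Account> parentsToUpdate = new Map<Id, Account>();
for (Id accId : parentIds) {
    parentsToUpdate.put(accId, new Account(Id = accId, Contact_Count__c = 0));
}

// One aggregate query for all affected parents — no queries in loops.
for (AggregateResult ar : [
    SELECT AccountId accId, COUNT(Id) cnt
    FROM Contact
    WHERE AccountId IN :parentIds
    GROUP BY AccountId
]) {
    Id accId = (Id) ar.get('accId');
    parentsToUpdate.get(accId).Contact_Count__c = (Integer) ar.get('cnt');
}

// Single DML for all parent records.
if (!parentsToUpdate.isEmpty()) update parentsToUpdate.values();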

Integration triggers

When a trigger needs to kick off an external callout, you can’t do it synchronously in the trigger body — Salesforce doesn’t allow callouts in the same synchronous transaction as DML. The right pattern is to publish a Platform Event or enqueue a Queueable from the trigger, then handle the callout in the async context.

Apex

System.enqueueJob(new AccountSyncQueueable(accountIds));

This keeps your trigger fast and governor-safe, and lets the async job retry if the external system is unavailable.
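AccountSyncQueueable itself isn't shown in this guide; a minimal sketch of what such a class could look like — the endpoint is a named-credential placeholder, not a real integration:

Apex

public class AccountSyncQueueable implements Queueable, Database.AllowsCallouts {
    private Set<Id> accountIds;

    public AccountSyncQueueable(Set<Id> accountIds) {
        this.accountIds = accountIds;
    }

    public void execute(QueueableContext ctx) {
        // Callouts are allowed here: we're outside the trigger's synchronous transaction.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:External_System/accounts/sync'); // placeholder named credential
        req.setMethod('POST');
        req.setBody(JSON.serialize(accountIds));
        HttpResponse res = new Http().send(req);
        // Inspect res.getStatusCode() and re-enqueue or log on failure as needed.
    }
}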


Testing Triggers

What to test

The trigger itself is just an entry point — you’re really testing the handler and service logic. That said, you should have at least one test that fires the actual trigger (i.e., performs DML in a test context) to confirm the wiring is correct.

Bulk test data

The most important test you can write is a bulk test: insert or update 200 records in a single DML call. This is how Salesforce Data Loader and integrations will actually call your trigger, and it’s how governor limit issues surface. A test with a single record will pass even if your trigger has a SOQL query in a loop — the bulk test won’t.

Apex

@isTest static void testBulkInsert() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        accounts.add(new Account(Name = 'Test Account ' + i, Type = 'Partner'));
    }
    Test.startTest();
    insert accounts;
    Test.stopTest();
    // assert expected outcomes
}

Test.startTest() and Test.stopTest()

Wrapping your DML in Test.startTest() / Test.stopTest() resets governor limits for that block, which more accurately simulates a real transaction. It also forces async operations (queueables, future methods) to complete before your assertions run.

Using @TestSetup

For tests that share common data — like an Account that multiple test methods reference — use @TestSetup to create that data once. The setup method runs once per test class, and before each test method Salesforce rolls the records back to their post-setup state, which speeds things up significantly for classes with many test methods.
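A minimal sketch of the shape:

Apex

@isTest
private class AccountTriggerTest {

    @TestSetup
    static void makeData() {
        // Runs once per test class; every test method sees this data.
        insert new Account(Name = 'Shared Test Account');
    }

    @isTest
    static void testSharedAccountExists() {
        // Records are restored to their post-setup state before each method.
        Account acc = [SELECT Id, Name FROM Account WHERE Name = 'Shared Test Account'];
        System.assertNotEquals(null, acc.Id);
    }
}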


Troubleshooting and Debugging

Debug logs

The Developer Console log viewer is fine for quick checks, but the logs get unwieldy fast in complex transactions. The Apex log level for triggers is set on the Apex Code category — setting it to FINE or FINER gives you method entry/exit and variable values.

One practical habit: add System.debug() calls at the start of each handler method with the trigger size and the event type. You’ll thank yourself when investigating a production issue at 2am.

Apex

System.debug('AccountTriggerHandler.beforeUpdate - size: ' + accounts.size() + ', event: ' + Trigger.operationType);

Identifying recursive trigger loops

If you’re hitting the “Maximum trigger depth exceeded” error, the stack trace will show you the same trigger appearing multiple times. Work backwards through the trace to find which DML operation in your code is causing re-entry, and add a recursion guard at that point.

Checking governor limit consumption

In tests, you can check limits at any point with Limits.getQueries() and Limits.getLimitQueries(). Adding assertions on these after a bulk test is a lightweight way to catch regressions before they become production problems.

Apex

System.assert(Limits.getQueries() < 10, 'Too many queries: ' + Limits.getQueries());

Bypass mechanisms

Sometimes you need to disable a trigger for a specific operation — a data migration, a setup script, or a test that’s testing something else entirely. A static bypass flag in a utility class gives you that control:

Apex

public class TriggerBypass {
    public static Set<String> bypassedTriggers = new Set<String>();
    public static Boolean isBypassed(String triggerName) {
        return bypassedTriggers.contains(triggerName);
    }
}

Apex

// In your trigger handler:
if (TriggerBypass.isBypassed('AccountTrigger')) return;

For production bypass needs — like letting an admin disable a trigger for a data load without deploying code — a Custom Setting or Custom Metadata record is the more robust approach.
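A sketch of the Custom Setting approach, assuming a hypothetical hierarchy Custom Setting named Trigger_Settings__c with a checkbox field Bypass_Account_Trigger__c:

Apex

// In the trigger handler — hierarchy custom settings need no SOQL query.
Trigger_Settings__c settings = Trigger_Settings__c.getInstance();
if (settings != null && settings.Bypass_Account_Trigger__c == true) {
    return; // admin flipped the bypass on, e.g. for a data load
}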


FAQ

Can I have more than one Apex trigger on the same object?

Technically yes, but you shouldn’t. Salesforce doesn’t guarantee execution order between multiple triggers on the same object. Use one trigger per object and delegate to a handler class that manages the execution sequence explicitly.

What’s the difference between Trigger.new and Trigger.newMap?

Trigger.new is a List<SObject> of records in their new state. Trigger.newMap is a Map<Id, SObject> of the same records, keyed by Id. The map is faster when you need to look up a specific record by Id — O(1) versus iterating through the list. Use Trigger.newMap whenever you need to correlate records across relationships.

When should I use a trigger vs a Flow?

Flows are the better default for most automation. Use triggers when you need complex Apex logic that would be difficult to maintain in Flow, when you need to guarantee bulk-safe behavior across large data sets, when you need full control over transaction timing, or when your logic is too performance-sensitive for Flow overhead.

How do I prevent a trigger from running during a data migration?

The cleanest approach is a bypass mechanism using a Custom Setting or Custom Metadata record that an admin can toggle without deploying code. During migration, flip the bypass on. Alternatively, a static boolean flag works if you control the migration code, but it won’t persist across transaction boundaries in bulk loads.

Why is my after trigger causing recursion?

An after trigger that updates the same object type it’s watching will re-fire the trigger. Add a recursion guard — a static boolean class that flips once the trigger has run — and check it at the top of your handler before doing any work. Make sure the guard is set before the DML call that would cause re-entry, not after.

What’s the maximum number of records in Trigger.new?

Salesforce processes records in batches of up to 200 per trigger invocation. If a DML statement affects 400 records, your trigger fires twice — once with 200, once with 200. Design your logic to handle any number from 1 to 200, and always test with the full 200.

Can I make callouts from a trigger?

Not directly from a synchronous trigger — Salesforce prohibits synchronous callouts in transactions that contain uncommitted DML. The standard approach is to enqueue a Queueable or publish a Platform Event from the trigger, then make the callout in the async context where you have full callout permissions.


What to read next

If you’re building out a full trigger architecture, the natural next step is understanding Apex patterns for enterprise orgs — specifically the Unit of Work pattern from fflib, which solves the problem of accumulating DML across multiple service calls without duplicating insert/update lists everywhere.

For the testing side, Apex mocking patterns (using the Stub API or a mocking library like ApexMocks) let you unit-test handler and service classes without hitting the database at all — much faster test runs and more precise failure isolation.

And if you’re evaluating whether to migrate existing trigger logic to Flows as part of a modernization effort, Salesforce’s own documentation on Flow scalability limits and transaction control is worth reading before you start — the boundaries have moved considerably in recent releases.
