Apex Flashcards

1
Q

Apex is

A

Hosted—Apex is saved, compiled, and executed on the server—the Lightning Platform.
Object oriented—Apex supports classes, interfaces, and inheritance.
Strongly typed—Apex validates references to objects at compile time.
Multitenant aware—Because Apex runs in a multitenant platform, it guards closely against runaway code by enforcing limits, which prevent code from monopolizing shared resources.
Integrated with the database—It is straightforward to access and manipulate records. Apex provides direct access to records and their fields, and provides statements and query languages to manipulate those records.
Data focused—Apex provides transactional access to the database, allowing you to roll back operations.
Easy to use—Apex is based on familiar Java idioms.
Easy to test—Apex provides built-in support for unit test creation, execution, and code coverage. Salesforce ensures that all custom Apex code works as expected by executing all unit tests prior to any platform upgrades.
Versioned—Custom Apex code can be saved against different versions of the API.

2
Q

Unlike other object-oriented programming languages, Apex supports

A

Unlike other object-oriented programming languages, Apex supports:

  • Cloud development as Apex is stored, compiled, and executed in the cloud.
  • Triggers, which are similar to triggers in database systems.
  • Database statements that allow you to make direct database calls and query languages to query and search data.
  • Transactions and rollbacks.
  • The global access modifier, which is more permissive than the public modifier and allows access across namespaces and applications.
  • Versioning of custom code.
In addition, Apex is a case-insensitive language.
3
Q

Choose a Salesforce Org for Apex Development

A

You can develop Apex in a sandbox, scratch org, or Developer Edition org, but not directly in a production org. With so many choices, here’s some help to determine which org type is right for you and how to create it.

Sandboxes (Recommended)
A sandbox is a copy of your production org’s metadata in a separate environment, with varying amounts of data depending on the sandbox type. A sandbox provides a safe space for developers and admins to experiment with new features and validate changes before deploying code to production. Developer and Developer Pro sandboxes with source tracking enabled can take advantage of many of the features of our Salesforce DX source-driven development tools, including Salesforce CLI, Code Builder, and DevOps Center.

Scratch Orgs (Recommended)
A scratch org is a source-driven and temporary deployment of Salesforce code and metadata. A scratch org is fully configurable, allowing you to emulate different Salesforce editions with different features and settings. Scratch orgs have a maximum 30-day lifespan, with the default set at 7 days.

Developer Edition (DE) Orgs
A DE org is a free org that provides access to many of the features available in an Enterprise Edition org. Developer Edition orgs can become out-of-date over time and have limited storage. Developer Edition orgs don’t have source tracking enabled and can’t be used as development environments in DevOps Center. Developer Edition orgs expire if they aren’t logged into regularly. You can sign up for as many Developer Edition orgs as you like on the Developer Edition Signup page.

Trial Edition Orgs
Trial editions usually expire after 30 days, so they’re great for evaluating Salesforce functionality but aren’t intended for use as a permanent development environment. Although Apex triggers are available in trial editions, they’re disabled when you convert to any other edition. Deploy your code to another org before conversion to retain your Apex triggers. Salesforce offers several product- and industry-specific free trial orgs.

Production Orgs (Not Supported)
A production org is the final destination for your code and applications, and has live users accessing your data. You can’t develop Apex in your Salesforce production org, and we recommend that you avoid modifying any code or metadata directly in production. Live users accessing the system while you’re developing can destabilize your data or corrupt your application.

4
Q

Developer Console

A

The Developer Console is an integrated development environment (IDE) built into Salesforce. Use it to create, debug, and test Apex classes and triggers.

To open the Developer Console from Lightning Experience: Click the quick access menu (Gear icon in upper right of Salesforce org), then click Developer Console.

To open the Developer Console from Salesforce Classic: Click Your Name | Developer Console.

The Developer Console supports these tasks:

  1. Writing code—You can add code using the source code editor. Also, you can browse packages in your organization.
  2. Compiling code—When you save a trigger or class, the code is automatically compiled. Any compilation errors are reported.
  3. Debugging—You can view debug logs and set checkpoints that aid in debugging.
  4. Testing—You can execute tests of specific test classes or all tests in your organization, and you can view test results. Also, you can inspect code coverage.
  5. Checking performance—You can inspect debug logs to locate performance bottlenecks.
  6. SOQL queries—You can query data in your organization and view the results using the Query Editor.
  7. Color coding and autocomplete—The source code editor uses a color scheme for easier readability of code elements and provides autocompletion for class and method names.
5
Q

Data Types

A

In Apex, all variables and expressions have a data type, such as sObject, primitive, or enum. Data types include:
  • A primitive, such as an Integer, Double, Long, Date, Datetime, String, ID, or Boolean (see Primitive Data Types)
  • An sObject, either as a generic sObject or as a specific sObject, such as an Account, Contact, or MyCustomObject__c (see Working with sObjects in Chapter 4)
  • A collection, including:
      • A list (or array) of primitives, sObjects, user-defined objects, objects created from Apex classes, or collections (see Lists)
      • A set of primitives (see Sets)
      • A map from a primitive to a primitive, sObject, or collection (see Maps)
  • A typed list of values, also known as an enum (see Enums)
  • Objects created from user-defined Apex classes (see Classes, Objects, and Interfaces)
  • Objects created from system-supplied Apex classes
  • Null (for the null constant, which can be assigned to any variable)
Methods can return values of any of the listed types, or return no value and be of type Void.

Type checking is strictly enforced at compile time.
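
The categories above can be sketched in a short class (a minimal sketch; all variable names and field values are illustrative only):

```apex
public class DataTypeExamples {
    // A typed list of values (enum).
    public enum Season { WINTER, SPRING, SUMMER, FALL }

    public static void demo() {
        Integer count = 42;                                  // primitive
        Account acct = new Account(Name='Acme');             // specific sObject
        SObject generic = acct;                              // generic sObject
        List<String> names = new List<String>{'a', 'b'};     // list collection
        Set<Integer> nums = new Set<Integer>{1, 2, 3};       // set collection
        Map<String, Account> byName =
            new Map<String, Account>{'Acme' => acct};        // map of primitive to sObject
        Season current = Season.WINTER;                      // enum value
        Account empty = null;                                // null constant
    }
}
```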

6
Q

sObject Types

A
Account acct = new Account();
MyCustomObject__c co = new MyCustomObject__c();

// A generic sObject variable can hold any record.
SObject s = new Account();
// Cast the generic variable s into a specific Account variable a.
Account a = (Account) s;
// The following generates a runtime error because s holds an Account.
Contact c = (Contact) s;

Custom Labels
Custom labels aren’t standard sObjects. You can’t create a new instance of a custom label. You can only access the value of a custom label using System.Label.label_name. For example:
String errorMsg = System.Label.genericerror;

7
Q

The following example shows how you can use SOSL over a set of records to determine their object types. Once you have converted the generic SObject record into a Contact, Lead, or Account, you can modify its fields accordingly.

A
public class convertToCLA {
    List<Contact> contacts = new List<Contact>();
    List<Lead> leads = new List<Lead>();
    List<Account> accounts = new List<Account>();
 
    public void convertType(String phoneNumber) {
        List<List<SObject>> results = [FIND :phoneNumber 
            IN Phone FIELDS 
            RETURNING Contact(Id, Phone, FirstName, LastName), 
            Lead(Id, Phone, FirstName, LastName), 
            Account(Id, Phone, Name)];
        List<SObject> records = new List<SObject>();
        records.addAll(results[0]); //add Contact results to our results super-set
        records.addAll(results[1]); //add Lead results
        records.addAll(results[2]); //add Account results
 
        if (!records.isEmpty()) { 
            for (Integer i = 0; i < records.size(); i++) { 
                SObject record = records[i];
                if (record.getSObjectType() == Contact.sObjectType) { 
                    contacts.add((Contact) record);
                } else if (record.getSObjectType() == Lead.sObjectType){ 
                    leads.add((Lead) record);
                } else if (record.getSObjectType() == Account.sObjectType) { 
                    accounts.add((Account) record); 
                }
            }
        }
    }
}
8
Q

Using SObject Fields

A

SObject fields can be initially set or not set (unset); unset fields are not the same as null or blank fields. When you perform a DML operation on an SObject, you can change a field that is set; you can’t change unset fields.

Note

To erase the current value of a field, set the field to null.

If an Apex method takes an SObject parameter, you can use the System.isSet() method to identify the set fields. If you want any fields to remain unset so that they retain their existing values, first create an SObject instance, and then apply only the fields that you want included in the DML operation.

An expression with SObject fields of type Boolean evaluates to true only if the SObject field is true. If the field is false or null, the expression evaluates to false.
This example code shows an expression that checks if the IsActive field of a Campaign object is null. Because this expression always evaluates to false, the code in the if statement is never executed.

Campaign cObj = new Campaign();
...
if (cObj.IsActive == null) {
    ... // IsActive evaluates to false, so this code block is never executed.
}
9
Q

Single vs. Bulk DML Operations

A

You can perform DML operations either on a single sObject, or in bulk on a list of sObjects. Performing bulk DML operations is the recommended way because it helps avoid hitting governor limits, such as the DML limit of 150 statements per Apex transaction.
Another DML governor limit is the total number of rows that can be processed by DML operations in a single transaction, which is 10,000. All rows processed by all DML calls in the same transaction count incrementally toward this limit. For example, if you insert 100 contacts and update 50 contacts in the same transaction, your total DML processed rows are 150. You still have 9,850 rows left (10,000 - 150).
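
As a sketch, the difference between row-at-a-time and bulk DML looks like this (field values are illustrative):

```apex
// Anti-pattern: one DML statement per record quickly consumes
// the 150-statement governor limit.
for (Integer i = 0; i < 100; i++) {
    insert new Contact(LastName='Test' + i);  // 100 DML statements
}

// Recommended: collect records in a list and perform one bulk insert.
List<Contact> newContacts = new List<Contact>();
for (Integer i = 0; i < 100; i++) {
    newContacts.add(new Contact(LastName='Test' + i));
}
insert newContacts;  // 1 DML statement, 100 of the 10,000-row limit
```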

10
Q

System Context and Sharing Rules

A

Most DML operations execute in system context, ignoring the current user’s permissions, field-level security, organization-wide defaults, position in the role hierarchy, and sharing rules.

Note

If you execute DML operations within an anonymous block, they execute using the current user’s object and field-level permissions.

11
Q

DML Operations, except create

A

Among the operations you can perform are record updates, deletions, restoring records from the Recycle Bin, merging records, and converting leads. After querying for records, you get sObject instances that you can modify and whose changes you can then persist.

12
Q

DML Statements vs. Database Class Methods

A

Apex offers two ways to perform DML operations: using DML statements or Database class methods. This provides flexibility in how you perform data operations. DML statements are more straightforward to use and result in exceptions that you can handle in your code.
One difference between the two options is that Database class methods let you specify whether to allow partial record processing when errors are encountered, by passing an additional second Boolean parameter. If you pass false and a record fails, the remainder of the DML operation can still succeed. Also, instead of throwing exceptions, these methods return an array of result objects (or one result object if only one sObject was passed in) containing the status of each operation and any errors encountered. By default, this optional parameter is true: if at least one sObject can’t be processed, none of the remaining sObjects are, and an exception is thrown for the record that caused the failure.

The following helps you decide when you want to use DML statements or Database class methods.

Use DML statements if you want any error that occurs during bulk DML processing to be thrown as an Apex exception that immediately interrupts control flow (by using try...catch blocks). This behavior is similar to the way exceptions are handled in most database procedural languages.
Use Database class methods if you want to allow partial success of a bulk DML operation—if a record fails, the remainder of the DML operation can still succeed. Your application can then inspect the rejected records and possibly retry the operation. When using this form, you can write code that never throws DML exception errors. Instead, your code can use the appropriate results array to judge success or failure. Note that Database methods also include a syntax that supports thrown exceptions, similar to DML statements.

Most operations are available in both forms, with a few exceptions:

The convertLead operation is only available as a Database class method, not as a DML statement.
The Database class also provides methods not available as DML statements, such as methods for transaction control and rollback, emptying the Recycle Bin, and methods related to SOQL queries.
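
A minimal sketch of partial processing with Database.insert and the second (allOrNone) parameter set to false:

```apex
List<Account> accts = new List<Account>{
    new Account(Name='Valid Account'),
    new Account()  // Missing the required Name field, so this record fails.
};

// Passing false allows the valid record to be saved even though one fails.
Database.SaveResult[] results = Database.insert(accts, false);

for (Database.SaveResult sr : results) {
    if (sr.isSuccess()) {
        System.debug('Created record: ' + sr.getId());
    } else {
        for (Database.Error err : sr.getErrors()) {
            System.debug('Insert failed: ' + err.getMessage());
        }
    }
}
```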

13
Q

DML Operations As Atomic Transactions

A

DML operations execute within a transaction. All DML operations in a transaction either complete successfully, or if an error occurs in one operation, the entire transaction is rolled back and no data is committed to the database. The boundary of a transaction can be a trigger, a class method, an anonymous block of code, an Apex page, or a custom Web service method.

All operations that occur inside the transaction boundary represent a single unit of operations. This also applies to calls that are made from the transaction boundary to external code, such as classes or triggers that get fired as a result of the code running in the transaction boundary. For example, consider the following chain of operations: a custom Apex Web service method calls a method in a class that performs some DML operations. In this case, all changes are committed to the database only after all operations in the transaction finish executing and don’t cause any errors. If an error occurs in any of the intermediate steps, all database changes are rolled back and the transaction isn’t committed.
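
Within a transaction you can also set an explicit rollback point using the Database class (a sketch; the record values are illustrative):

```apex
// Mark a rollback point before making changes.
Savepoint sp = Database.setSavepoint();
try {
    insert new Account(Name='Temp Account');
    // A later failure in the same transaction...
    insert new Contact();  // Fails: LastName is required.
} catch (DmlException e) {
    // Undo everything back to the savepoint, including the account insert.
    Database.rollback(sp);
}
```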

14
Q

DML: Inserting and Updating Records

A
Account[] accts = new List<Account>();
for(Integer i=0;i<3;i++) {
    Account a = new Account(Name='Acme' + i, 
                            BillingCity='San Francisco');
    accts.add(a);
}
Account accountToUpdate;
try {
    insert accts;        
    
    // Update account Acme2.
    accountToUpdate = 
        [SELECT BillingCity FROM Account 
         WHERE Name='Acme2' AND BillingCity='San Francisco'
         LIMIT 1];
    // Update the billing city.
    accountToUpdate.BillingCity = 'New York';
    // Make the update call.
    update accountToUpdate;
} catch(DmlException e) {
    System.debug('An unexpected error has occurred: ' + e.getMessage());
}

// Verify that the billing city was updated to New York.
Account afterUpdate = 
    [SELECT BillingCity FROM Account WHERE Id=:accountToUpdate.Id];
System.assertEquals('New York', afterUpdate.BillingCity);
15
Q

DML: Inserting Related Records

A

You can insert records related to existing records if a relationship has already been defined between the two objects, such as a lookup or master-detail relationship. A record is associated with a related record through a foreign key ID. For example, when inserting a new contact, you can specify the contact’s related account record by setting the value of the AccountId field.
~~~
try {
    Account acct = new Account(Name='SFDC Account');
    insert acct;

    // Once the account is inserted, the sObject is
    // populated with an ID. Get this ID.
    ID acctID = acct.ID;

    // Add a contact to this account.
    Contact con = new Contact(
        FirstName='Joe',
        LastName='Smith',
        Phone='415.555.1212',
        AccountId=acctID);
    insert con;
} catch (DmlException e) {
    System.debug('An unexpected error has occurred: ' + e.getMessage());
}
~~~
16
Q

DML: Updating Related Records

A

Fields on related records can’t be updated in the same DML call; they require a separate call. For example, when inserting a new contact, you can specify the contact’s related account record by setting the value of the AccountId field, but you can’t change the account’s name without a separate DML call to update the account itself. Similarly, when updating a contact, if you also want to update the contact’s related account, you must make two DML calls. The following example updates a contact and its related account using two update statements.
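
A minimal sketch of that two-statement update (assumes a contact named Mario Ruiz exists; names and values are illustrative):

```apex
try {
    // Query an existing contact together with its related account.
    Contact queriedContact = [SELECT Phone, Account.Industry
                              FROM Contact
                              WHERE FirstName = 'Mario' AND LastName = 'Ruiz'
                              LIMIT 1];
    // Change a field on the contact and one on the related account.
    queriedContact.Phone = '(415) 555-1213';
    queriedContact.Account.Industry = 'Technology';

    // Two separate DML calls: one for the contact, one for the account.
    update queriedContact;
    update queriedContact.Account;
} catch (Exception e) {
    System.debug('An unexpected error has occurred: ' + e.getMessage());
}
```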

17
Q

DML: Relating Records by Using an External ID

A

Add related records by using a custom external ID field on the parent record. Associating records through the external ID field is an alternative to using the record ID. You can add a related record to another record only if a relationship (such as master-detail or lookup) has been defined for the objects involved.
This example relates a new opportunity to an existing account. The Account sObject has a custom field marked as External ID. An opportunity record is associated to the account record through the custom External ID field. The example assumes that:
The Account sObject has a text field named MyExtID__c that is marked as an external ID.
An account record exists where MyExtID__c = 'SAP111111'.
Before the new opportunity is inserted, the account record is added to this opportunity as an sObject through the Opportunity.Account relationship field.
~~~
Opportunity newOpportunity = new Opportunity(
    Name='OpportunityWithAccountInsert',
    StageName='Prospecting',
    CloseDate=Date.today().addDays(7));

// Create the parent record reference.
// An account with external ID = 'SAP111111' already exists.
// This sObject is used only for the foreign key reference
// and doesn't contain any other fields.
Account accountReference = new Account(
    MyExtID__c='SAP111111');

// Add the account sObject to the opportunity.
newOpportunity.Account = accountReference;

// Create the opportunity.
Database.SaveResult results = Database.insert(newOpportunity);
~~~

18
Q

DML: Creating Parent and Child Records in a Single Statement Using Foreign Keys

A

You can use external ID fields as foreign keys to create parent and child records of different sObject types in a single step instead of creating the parent record first, querying its ID, and then creating the child record. To do this:
Create the child sObject and populate its required fields, and optionally other fields.
Create the parent reference sObject used only for setting the parent foreign key reference on the child sObject. This sObject has only the external ID field defined and no other fields set.
Set the foreign key field of the child sObject to the parent reference sObject you just created.
Create another parent sObject to be passed to the insert statement. This sObject must have the required fields (and optionally other fields) set in addition to the external ID field.
Call insert by passing it an array of sObjects to create. The parent sObject must precede the child sObject in the array, that is, the array index of the parent must be lower than the child’s index.
You can create related records that are up to 10 levels deep. Also, the related records created in a single call must have different sObject types. For more information, see Creating Records for Different Object Types in the SOAP API Developer Guide.

The following example shows how to create an opportunity with a parent account using the same insert statement. The example creates an Opportunity sObject and populates some of its fields, then creates two Account objects. The first account is only for the foreign key relationship, and the second is for the account creation and has the account fields set. Both accounts have the external ID field, MyExtID__c, set. Next, the sample calls Database.insert by passing it an array of sObjects. The first element in the array is the parent sObject and the second is the opportunity sObject. The Database.insert statement creates the opportunity with its parent account in a single step. Finally, the sample checks the results and writes the IDs of the created records to the debug log, or the first error if record creation fails. This sample requires an external ID text field on Account called MyExtID__c.

public class ParentChildSample {
    public static void InsertParentChild() {
        Date dt = Date.today();
        dt = dt.addDays(7);
        Opportunity newOpportunity = new Opportunity(
            Name='OpportunityWithAccountInsert',
            StageName='Prospecting',
            CloseDate=dt);

        // Create the parent reference.
        // Used only for the foreign key reference
        // and doesn't contain any other fields.
        Account accountReference = new Account(
            MyExtID__c='SAP111111');
        newOpportunity.Account = accountReference;

        // Create the Account object to insert.
        // Same as above but has the Name field.
        // Used for the insert.
        Account parentAccount = new Account(
            Name='Hallie',
            MyExtID__c='SAP111111');

        // Create the account and the opportunity.
        Database.SaveResult[] results = Database.insert(new SObject[] {
            parentAccount, newOpportunity });

        // Check results.
        for (Integer i = 0; i < results.size(); i++) {
            if (results[i].isSuccess()) {
                System.debug('Successfully created ID: '
                    + results[i].getId());
            } else {
                System.debug('Error: could not create sObject '
                    + 'for array element ' + i + '.');
                System.debug('   The error reported was: '
                    + results[i].getErrors()[0].getMessage() + '\n');
            }
        }
    }
}
19
Q

DML: Upserting Records

A

Using the upsert operation, you can either insert a record or update an existing record in one call. To determine whether a record already exists, the upsert statement or Database method uses the record’s ID, a custom external ID field, or a standard field with the idLookup attribute set to true as the key to match records.
* If the key isn’t matched, then a new object record is created.
* If the key is matched once, then the existing object record is updated.
* If the key is matched multiple times, then an error is generated and the object record is not inserted or updated.
Note
Custom field matching is case-insensitive only if the custom field has the Unique and Treat “ABC” and “abc” as duplicate values (case insensitive) attributes selected as part of the field definition. If this is the case, “ABC123” is matched with “abc123.”

Account[] acctsList = [SELECT Id, Name, BillingCity
                        FROM Account WHERE BillingCity = 'Bombay'];
for (Account a : acctsList) {
    a.BillingCity = 'Mumbai';
}
Account newAcct = new Account(Name = 'Acme', BillingCity = 'San Francisco');
acctsList.add(newAcct);
try {
    upsert acctsList;
} catch (DmlException e) {
    // Process exception here
}

Use of upsert with an external ID can reduce the number of DML statements in your code, and help you to avoid hitting governor limits (see Execution Governors and Limits).

This example uses upsert and an external ID field Line_Item_ID__c on the Asset object to maintain a one-to-one relationship between an asset and an opportunity line item. Before running the sample, create a custom text field on the Asset object named Line_Item_ID__c and mark it as an external ID.
Note

External ID fields used in upsert calls must be unique or the user must have the View All Data permission.

public void upsertExample() {
    Opportunity opp = [SELECT Id, Name, AccountId, 
                              (SELECT Id, PricebookEntry.Product2Id, PricebookEntry.Name 
                               FROM OpportunityLineItems)
                       FROM Opportunity 
                       WHERE HasOpportunityLineItem = true 
                       LIMIT 1]; 

    Asset[] assets = new Asset[]{}; 

    // Create an asset for each line item on the opportunity
    for (OpportunityLineItem lineItem:opp.OpportunityLineItems) {

        //This code populates the line item Id, AccountId, and Product2Id for each asset
        Asset asset = new Asset(Name = lineItem.PricebookEntry.Name,
                                Line_Item_ID__c = lineItem.Id,
                                AccountId = opp.AccountId,
                                Product2Id = lineItem.PricebookEntry.Product2Id);

        assets.add(asset);
    }
 
    try {
        upsert assets Line_Item_ID__c;  // Upserts the assets list, with the
                                        // Line_Item_ID__c field specified as the
                                        // Asset field that should be used for
                                        // matching the records to be upserted.
    } catch (DmlException e) {
        System.debug(e.getMessage());
    }
}
20
Q

DML: Merging Records

A

When you have duplicate lead, contact, case, or account records in the database, cleaning up your data and consolidating the records might be a good idea. You can merge up to three records of the same sObject type. The merge operation merges up to three records into one of the records, deletes the others, and reparents any related records.
Merge Considerations
When merging sObject records, consider the following rules and guidelines:
1. Only leads, contacts, cases, and accounts can be merged. See sObjects That Don’t Support DML Operations.
2. You can pass a master record and up to two additional sObject records to a single merge method.
3. Using the Apex merge operation, field values on the master record always supersede the corresponding field values on the records to be merged. To preserve a merged record field value, simply set this field value on the master sObject before performing the merge.
4. External ID fields can’t be used with merge.
For more information on merging leads, contacts and accounts, see the Salesforce online help.

Example
The following shows how to merge an existing Account record into a master account. The account to merge has a related contact, which is moved to the master account record after the merge operation. Also, after merging, the merged record is deleted and only one record remains in the database. This example starts by creating a list of two accounts and inserts the list. Then it executes queries to get the new account records from the database, and adds a contact to the account to be merged. Next, it merges the two accounts. Finally, it verifies that the contact has been moved to the master account and the second account has been deleted.

// Insert new accounts
List<Account> ls = new List<Account>{
    new Account(name='Acme Inc.'),
    new Account(name='Acme')
};
insert ls;

// Queries to get the inserted accounts 
Account masterAcct = [SELECT Id, Name FROM Account WHERE Name = 'Acme Inc.' LIMIT 1];
Account mergeAcct = [SELECT Id, Name FROM Account WHERE Name = 'Acme' LIMIT 1];

// Add a contact to the account to be merged
Contact c = new Contact(FirstName='Joe',LastName='Merged');
c.AccountId = mergeAcct.Id;
insert c;

try {
    merge masterAcct mergeAcct;
} catch (DmlException e) {
    // Process exception
    System.debug('An unexpected error has occurred: ' + e.getMessage()); 
}

// Once the account is merged with the master account,
// the related contact should be moved to the master record.
masterAcct = [SELECT Id, Name, (SELECT FirstName,LastName From Contacts) 
              FROM Account WHERE Name = 'Acme Inc.' LIMIT 1];
System.assert(masterAcct.getSObjects('Contacts').size() > 0);
System.assertEquals('Joe', masterAcct.getSObjects('Contacts')[0].get('FirstName'));
System.assertEquals('Merged', masterAcct.getSObjects('Contacts')[0].get('LastName'));

// Verify that the merge record got deleted
Account[] result = [SELECT Id, Name FROM Account WHERE Id=:mergeAcct.Id];
System.assertEquals(0, result.size());
21
Q

DML: Deleting Records

A

After you persist records in the database, you can delete those records using the delete operation. Deleted records aren’t deleted permanently from Salesforce, but they are placed in the Recycle Bin for 15 days from where they can be restored.
~~~
Account[] doomedAccts = [SELECT Id, Name FROM Account
                         WHERE Name = 'DotCom'];
try {
    delete doomedAccts;
} catch (DmlException e) {
    // Process exception here
}
~~~

22
Q

DML: Referential Integrity When Deleting and Restoring Records

A

The delete operation supports cascading deletions. If you delete a parent object, you delete its children automatically, as long as each child record can be deleted.
For example, if you delete a case record, Apex automatically deletes any CaseComment, CaseHistory, and CaseSolution records associated with that case. However, if a particular child record is not deletable or is currently being used, then the delete operation on the parent case record fails.

The undelete operation restores the record associations for the following types of relationships:
1. Parent accounts (as specified in the Parent Account field on an account)
2. Indirect account-contact relationships (as specified on the Related Accounts related list on a contact or the Related Contacts related list on an account)
3. Parent cases (as specified in the Parent Case field on a case)
4. Master solutions for translated solutions (as specified in the Master Solution field on a solution)
5. Managers of contacts (as specified in the Reports To field on a contact)
6. Products related to assets (as specified in the Product field on an asset)
7. Opportunities related to quotes (as specified in the Opportunity field on a quote)
8. All custom lookup relationships
9. Relationship group members on accounts and relationship groups, with some exceptions
10. Tags
11. An article’s categories, publication state, and assignments

23
Q

DML: Restoring Deleted Records

A

After you have deleted records, the records are placed in the Recycle Bin for 15 days, after which they are permanently deleted. While the records are still in the Recycle Bin, you can restore them using the undelete operation. If you accidentally deleted some records that you want to keep, restore them from the Recycle Bin.

~~~
Account a = new Account(Name='Universal Containers');
insert(a);
insert(new Contact(LastName='Carter',AccountId=a.Id));
delete a;

Account[] savedAccts = [SELECT Id, Name FROM Account WHERE Name = 'Universal Containers' ALL ROWS];
undelete savedAccts;
~~~

Undelete Considerations
1. You can undelete records that were deleted as the result of a merge. However, the merge reparents the child objects, and that reparenting can’t be undone.
2. To identify deleted records, including records deleted as a result of a merge, use the ALL ROWS keywords in a SOQL query.

24
Q

Database Class: Converting Leads

A

The convertLead DML operation converts a lead into an account and contact, as well as (optionally) an opportunity. convertLead is available only as a method on the Database class; it is not available as a DML statement.

Converting leads involves the following basic steps:
1. Your application determines the IDs of any lead(s) to be converted.
2. Optionally, your application determines the IDs of any account(s) into which to merge the lead. Your application can use SOQL to search for accounts that match the lead name.
3. Optionally, your application determines the IDs of the contact or contacts into which to merge the lead. The application can use SOQL to search for contacts that match the lead contact name.
4. Optionally, the application determines whether opportunities should be created from the leads.
5. The application uses a query (SELECT … FROM LeadStatus WHERE IsConverted=true) to obtain a lead status value that represents a converted lead.
6. The application calls convertLead.
7. The application iterates through the returned result or results and examines each LeadConvertResult object to determine whether conversion succeeded for each lead.
8. Optionally, when converting leads owned by a queue, the owner must be specified. This is because accounts and contacts can’t be owned by a queue. Even if you are specifying an existing account or contact, you must still specify an owner.

Example
This example shows how to use the Database.convertLead method to convert a lead. It inserts a new lead, creates a LeadConvert object, sets its status to converted, and then passes it to the Database.convertLead method. Finally, it verifies that the conversion was successful.

~~~
Lead myLead = new Lead(LastName='Fry', Company='Fry And Sons');
insert myLead;

Database.LeadConvert lc = new Database.LeadConvert();
lc.setLeadId(myLead.Id);

LeadStatus convertStatus = [SELECT Id, ApiName FROM LeadStatus WHERE IsConverted=true LIMIT 1];
lc.setConvertedStatus(convertStatus.ApiName);

Database.LeadConvertResult lcr = Database.convertLead(lc);
System.assert(lcr.isSuccess());
~~~

Convert Leads Considerations
1. Field mappings: The system automatically maps standard lead fields to standard account, contact, and opportunity fields. For custom lead fields, your Salesforce administrator can specify how they map to custom account, contact, and opportunity fields. For more information about field mappings, see Salesforce Help.
2. Merged fields: If data is merged into existing account and contact objects, only empty fields in the target object are overwritten—existing data (including IDs) are not overwritten. The only exception is if you specify setOverwriteLeadSource on the LeadConvert object to true, in which case the LeadSource field in the target contact object is overwritten with the contents of the LeadSource field in the source LeadConvert object.
3. Record types: If the organization uses record types, the default record type of the new owner is assigned to records created during lead conversion. The default record type of the user converting the lead determines the lead source values available during conversion. If the desired lead source values are not available, add the values to the default record type of the user converting the lead. For more information about record types, see Salesforce Help.
4. Picklist values: The system assigns the default picklist values for the account, contact, and opportunity when mapping any standard lead picklist fields that are blank. If your organization uses record types, blank values are replaced with the default picklist values of the new record owner.
5. Automatic feed subscriptions: When you convert a lead into a new account, contact, and opportunity, the lead owner is unsubscribed from the lead record’s Chatter feed. The lead owner, the owner of the generated records, and users that were subscribed to the lead aren’t automatically subscribed to the generated records, unless they have automatic subscriptions enabled in their Chatter feed settings. They must have automatic subscriptions enabled to see changes to the account, contact, and opportunity records in their news feed. To subscribe to records they create, users must enable the Automatically follow records that I create option in their personal settings. A user can subscribe to a record so that changes to the record display in the news feed on the user’s home page. This is a useful way to stay up-to-date with changes to records in Salesforce.
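For the merged-fields rule above, overwriting the target contact’s LeadSource can be sketched like this (the lead and contact IDs are hypothetical placeholders):

~~~
// Sketch: merge a lead into an existing contact and force the
// LeadSource field on the target contact to be overwritten.
Id someLeadId;         // assumed: Id of an existing lead
Id existingContactId;  // assumed: Id of an existing contact
Database.LeadConvert lc = new Database.LeadConvert();
lc.setLeadId(someLeadId);
lc.setContactId(existingContactId);
lc.setOverwriteLeadSource(true); // without this, only empty target fields are filled
LeadStatus convertStatus = [SELECT ApiName FROM LeadStatus WHERE IsConverted=true LIMIT 1];
lc.setConvertedStatus(convertStatus.ApiName);
Database.LeadConvertResult lcr = Database.convertLead(lc);
~~~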

25
Q

Database Class Method Result Objects

A

Database class methods return the results of the data operation. These result objects contain useful information about the data operation for each record, such as whether the operation was successful or not, and any error information. Each type of operation returns a specific result object type, as outlined below.
| Operation | Result Class |
| --- | --- |
| insert, update | SaveResult Class |
| upsert | UpsertResult Class |
| merge | MergeResult Class |
| delete | DeleteResult Class |
| undelete | UndeleteResult Class |
| convertLead | LeadConvertResult Class |
| emptyRecycleBin | EmptyRecycleBinResult Class |

26
Q

Returned Database Errors

A

DML statements always throw an exception when an operation fails for one of the records being processed, and the operation is rolled back for all records. Database class methods can either do the same or allow partial success for record processing. In the partial-success case, Database class methods don’t throw exceptions; instead, they return a list of errors for any records that failed.

The errors provide details about the failures and are contained in the result of the Database class method. For example, a SaveResult object is returned for insert and update operations. Like all returned results, SaveResult contains a method called getErrors that returns a list of Database.Error objects, representing the errors encountered, if any.
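A minimal sketch of inspecting getErrors after a partial-success insert (the second account deliberately omits the required Name field):

~~~
Account[] accts = new Account[]{ new Account(Name='Valid Co'), new Account() };
// Second argument false = allOrNone off, so partial success is allowed
Database.SaveResult[] results = Database.insert(accts, false);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        for (Database.Error err : sr.getErrors()) {
            System.debug(err.getStatusCode() + ': ' + err.getMessage());
        }
    }
}
~~~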

27
Q

Setting DML Options

A

You can specify DML options for insert and update operations by setting the desired options in the Database.DMLOptions object. You can set Database.DMLOptions for the operation by calling the setOptions method on the sObject, or by passing it as a parameter to the Database.insert and Database.update methods.

Using DML options, you can specify:
* The truncation behavior of fields.
* Assignment rule information.
* Duplicate rule information.
* Whether automatic emails are sent.
* The user locale for labels.
* Whether the operation allows for partial success.

The Database.DMLOptions class has the following properties:
* allowFieldTruncation
* assignmentRuleHeader
* duplicateRuleHeader
* emailHeader
* localeOptions
* optAllOrNone
DMLOptions is only available for Apex saved against API versions 15.0 and higher. DMLOptions settings take effect only for record operations performed using Apex DML and not through the Salesforce user interface.

~~~
Database.DMLOptions dmo = new Database.DMLOptions();
dmo.assignmentRuleHeader.useDefaultRule = true;
dmo.allowFieldTruncation = true;

Lead l = new Lead(Company='ABC', LastName='Smith');
l.setOptions(dmo);
insert l;
~~~
28
Q

Setting DML Options: Examples

A

allowFieldTruncation - Specifies the truncation behavior of strings. In Apex saved against API versions prior to 15.0, a String value that is too large for its field is silently truncated. For API version 15.0 and later, an over-long value causes the operation to fail with an error. Setting allowFieldTruncation to true restores the earlier truncation behavior in Apex saved against API version 15.0 and later.
assignmentRuleHeader - Specifies the assignment rule to be used when creating a case or lead (not an account).
duplicateRuleHeader - The duplicateRuleHeader property determines whether a record that’s identified as a duplicate can be saved.
emailHeader - The Salesforce user interface allows you to specify whether or not to send an email when the following events occur:
* Creation of a new case or task
* Conversion of a case email to a contact
* New user email notification
* Lead queue email notification
* Password reset
In Apex saved against API version 15.0 or later, the Database.DMLOptions emailHeader property enables you to specify additional information regarding the email that gets sent when one of the events occurs because of Apex DML code execution.

Using the emailHeader property, you can set these options.
* triggerAutoResponseEmail: Indicates whether to trigger auto-response rules (true) or not (false), for leads and cases. This email can be automatically triggered by a number of events, for example when creating a case or resetting a user password. If this value is set to true, when a case is created, if there is an email address for the contact specified in ContactID, the email is sent to that address. If not, the email is sent to the address specified in SuppliedEmail.
* triggerOtherEmail: Indicates whether to trigger email outside the organization (true) or not (false).
* triggerUserEmail: Indicates whether to trigger email that is sent to users in the organization (true) or not (false).

localeOptions - The localeOptions property specifies the language of any labels that are returned by Apex. The value must be a valid user locale (language and country), such as de_DE or en_GB. The value is a String, 2-5 characters long. The first two characters are always an ISO language code, for example ‘fr’ or ‘en.’ If the value is further qualified by a country, then the string also has an underscore (_) and another ISO country code, for example ‘US’ or ‘UK.’ For example, the string for the United States is ‘en_US’, and the string for French Canadian is ‘fr_CA’.

optAllOrNone - The optAllOrNone property specifies whether the operation allows for partial success. If optAllOrNone is set to true, all changes are rolled back if any record causes errors. The default for this property is false and successfully processed records are committed while records with errors aren’t. This property is available in Apex saved against Salesforce API version 20.0 and later.
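A sketch combining two of these options; the case field values are illustrative only:

~~~
Database.DMLOptions dmo = new Database.DMLOptions();
dmo.emailHeader.triggerAutoResponseEmail = true; // fire case auto-response rules
dmo.optAllOrNone = false;                        // allow partial success
Case c = new Case(SuppliedEmail='customer@example.com', Subject='Help request');
c.setOptions(dmo);
insert c;
~~~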

29
Q

DML: Transaction Control

A

All requests are delimited by the trigger, class method, Web Service, Visualforce page, or anonymous block that executes the Apex code. If the entire request completes successfully, all changes are committed to the database. For example, suppose a Visualforce page called an Apex controller, which in turn called an additional Apex class. Only when all the Apex code has finished running and the Visualforce page has finished running, are the changes committed to the database. If the request doesn’t complete successfully, all database changes are rolled back.
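A minimal illustration (hypothetical class and method names): if the exception below escapes the request, the earlier insert is rolled back along with everything else.

~~~
public class TransferService {
    public static void doWork() {
        insert new Account(Name='Committed only if the whole request succeeds');
        // ... more processing ...
        Integer boom = 1 / 0; // unhandled exception: the insert above is rolled back
    }
}
~~~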

30
Q

DML: Generating Savepoints and Rolling Back Transactions

A

Sometimes during the processing of records, your business rules require that partial work (already executed DML statements) is rolled back so that the processing can continue in another direction. Apex gives you the ability to generate a savepoint, that is, a point in the request that specifies the state of the database at that time. Any DML statement that occurs after the savepoint can be discarded, restoring the database to the condition it was in when you generated the savepoint. All table and row locks acquired since the savepoint are released.

The following limitations apply to generating savepoint variables and rolling back the database:

* If you set more than one savepoint, then roll back to a savepoint that isn’t the last savepoint you generated, the later savepoint variable is also rolled back and becomes invalid. For example, if you generated savepoint SP1 first and savepoint SP2 after that, and then you rolled back to SP1, the variable SP2 is no longer valid. If you try to use savepoint SP2, you receive a runtime error.
* References to savepoints can’t cross trigger invocations, because each trigger invocation is a new trigger context. If you declare a savepoint as a static variable and then try to use it across trigger contexts, you receive a runtime error.
* Each savepoint you set counts against the governor limit for DML statements.
* Static variables aren’t reverted during a rollback. If you try to run the trigger again, the static variables retain the values from the first run.
* Database.rollback(savepoint) and Database.setSavepoint() don’t count against the DML row limit, but they do count toward the DML statement limit. This behavior applies to all API versions.
* The ID on an sObject inserted after setting a savepoint isn’t cleared after a rollback. Attempting to insert the sObject using the variable created before the rollback fails because the sObject variable has an ID. Updating or upserting the sObject using the same variable also fails because the sObject isn’t in the database and, thus, can’t be updated. To perform further DML operations, create a new sObject variable without setting its ID.
The following is an example using the setSavepoint and rollback Database methods.
~~~

Account a = new Account(Name = 'xyz');
insert a;
Assert.isNull([SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);
// Create a savepoint while AccountNumber is null
Savepoint sp = Database.setSavepoint();
// Change the account number
a.AccountNumber = '123';
update a;
Assert.areEqual('123', [SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);
// Roll back to the previous null value
Database.rollback(sp);
Assert.isNull([SELECT AccountNumber FROM Account WHERE Id = :a.Id].AccountNumber);
~~~

31
Q

DML: Releasing Savepoints and Using Callouts

A

To allow callouts, roll back all uncommitted DML by using a savepoint. Then use the Database.releaseSavepoint method to explicitly release savepoints before making the desired callout. When Database.releaseSavepoint() is called, SAVEPOINT_RELEASE is logged.

In this example, the makeACallout() callout succeeds because the uncommitted DML is rolled back and the savepoint is released.

~~~
Savepoint sp = Database.setSavepoint();
try {
    // Try a database operation
    insert new Account(Name='Foo');
    Integer bang = 1 / 0;
} catch (Exception ex) {
    Database.rollback(sp);
    Database.releaseSavepoint(sp);
    makeACallout();
}
~~~

In this example, DML is pending when the callout is made. The CalloutException informs you that you must roll back the transaction before the callout is made or the transaction must be committed.

~~~
Savepoint sp = Database.setSavepoint();
insert new Account(Name='Foo');
Database.releaseSavepoint(sp);
try {
    makeACallout();
} catch (System.CalloutException ex) {
    Assert.isTrue(ex.getMessage().contains('You have uncommitted work pending. Please commit or rollback before calling out.'));
}
~~~

Use these guidelines for using callouts and savepoints.

* If there’s uncommitted work pending when Database.releaseSavepoint() is called, the uncommitted work isn’t rolled back. It’s committed if the transaction succeeds.
* Attempts to roll back to a released savepoint result in a TypeException.
* Attempts to roll back after calling Database.releaseSavepoint() result in a System.InvalidOperationException.
* Calling the Database.releaseSavepoint() method on a savepoint also releases nested savepoints, that is, any subsequent savepoints created after it.

For Apex tests with API version 60.0 or later, all savepoints are released when Test.startTest() and Test.stopTest() are called. If any savepoints are reset, a SAVEPOINT_RESET event is logged.

Before API version 60.0, making a callout after creating savepoints throws a CalloutException regardless of whether there was uncommitted DML or the changes were rolled back to a savepoint. Also, before API version 60.0, both Database.rollback(databaseSavepoint) and Database.setSavepoint() calls incremented the DML row usage limit.

32
Q

sObjects That Can’t Be Used Together in DML Operations

A

DML operations on certain sObjects, sometimes referred to as setup objects, can’t be mixed with DML on non-setup sObjects in the same transaction. This restriction exists because some sObjects affect the user’s access to records in the org. You must insert or update these types of sObjects in a different transaction to prevent operations from happening with incorrect access-level permissions. For example, you can’t update an account and a user role in a single transaction.
Don’t include more than one of these sObjects in the same transaction when performing DML operations or when using the Metadata API. These sObjects also can’t be used with the @IsTest(isParallel=true) annotation. Split such operations into separate transactions.
* AuthSession
* FieldPermissions
* ForecastingShare
* Group - You can only insert and update a group in a transaction with other sObjects. Other DML operations aren’t allowed.
* GroupMember
* ObjectPermissions
* ObjectTerritory2AssignmentRule
* ObjectTerritory2AssignmentRuleItem
* PermissionSet
* PermissionSetAssignment
* QueueSObject
* RuleTerritory2Association
* SetupEntityAccess
* Territory
* Territory2
* Territory2Model
* User

If you’re using a Visualforce page with a custom controller, you can’t mix sObject types with any of these special sObjects within a single request or action. However, you can perform DML operations on these different types of sObjects in subsequent requests. For example, you can create an account with a save button, and then create a user with a non-null role with a submit button.

You can perform DML operations on more than one type of sObject in a single class using the following process:
1. Create a method that performs a DML operation on one type of sObject.
2. Create a second method that uses the future annotation to manipulate a second sObject type.
This example shows how to perform mixed DML operations by using a future method to perform a DML operation on the User object.

public class MixedDMLFuture {
    public static void useFutureMethod() {
        // First DML operation
        Account a = new Account(Name='Acme');
        insert a;
        
        // This next operation (insert a user with a role) 
        // can't be mixed with the previous insert unless 
        // it is within a future method. 
        // Call future method to insert a user with a role.
        Util.insertUserWithRole(
            'mruiz@awcomputing.com', 'mruiz', 
            'mruiz@awcomputing.com', 'Ruiz');        
    }
}

public class Util {
    @future
    public static void insertUserWithRole(
        String uname, String al, String em, String lname) {

        Profile p = [SELECT Id FROM Profile WHERE Name='Standard User'];
        UserRole r = [SELECT Id FROM UserRole WHERE Name='COO'];
        // Create new user with a non-null user role ID 
        User u = new User(alias = al, email=em, 
            emailencodingkey='UTF-8', lastname=lname, 
            languagelocalekey='en_US',
            localesidkey='en_US', profileid = p.Id, userroleid = r.Id,
            timezonesidkey='America/Los_Angeles',
            username=uname);
        insert u;
    }
}
33
Q

Mixed DML Operations in Test Methods

A

Test methods allow for performing mixed Data Manipulation Language (DML) operations that include both setup sObjects and other sObjects if the code that performs the DML operations is enclosed within System.runAs method blocks. You can also perform DML in an asynchronous job that your test method calls. These techniques enable you, for example, to create a user with a role and other sObjects in the same test.
Example: Mixed DML Operations in System.runAs Blocks
~~~
@isTest
private class MixedDML {
static testMethod void mixedDMLExample() {
User u;
Account a;
User thisUser = [SELECT Id FROM User WHERE Id = :UserInfo.getUserId()];
// Insert account as current user
System.runAs (thisUser) {
Profile p = [SELECT Id FROM Profile WHERE Name='Standard User'];
UserRole r = [SELECT Id FROM UserRole WHERE Name='COO'];
u = new User(alias = 'jsmith', email='jsmith@acme.com',
emailencodingkey='UTF-8', lastname='Smith',
languagelocalekey='en_US',
localesidkey='en_US', profileid = p.Id, userroleid = r.Id,
timezonesidkey='America/Los_Angeles',
username='jsmith@acme.com');
insert u;
a = new Account(name='Acme');
insert a;
}
}
}
~~~

34
Q

Use @future to Bypass the Mixed DML Error in a Test Method

A

Mixed DML operations within a single transaction aren’t allowed. You can’t perform DML on a setup sObject and another sObject in the same transaction. However, you can perform one type of DML as part of an asynchronous job and the others in other asynchronous jobs or in the original transaction. This class contains an @future method to be called by the class in the subsequent example.
~~~
public class InsertFutureUser {
@future
public static void insertUser() {
Profile p = [SELECT Id FROM Profile WHERE Name='Standard User'];
UserRole r = [SELECT Id FROM UserRole WHERE Name='COO'];
User futureUser = new User(firstname = 'Future', lastname = 'User',
alias = 'future', defaultgroupnotificationfrequency = 'N',
digestfrequency = 'N', email = 'test@test.org',
emailencodingkey = 'UTF-8', languagelocalekey='en_US',
localesidkey='en_US', profileid = p.Id,
timezonesidkey = 'America/Los_Angeles',
username = 'futureuser@test.org',
userpermissionsmarketinguser = false,
userpermissionsofflineuser = false, userroleid = r.Id);
insert(futureUser);
}
}
@isTest
public class UserAndContactTest {
public testmethod static void testUserAndContact() {
InsertFutureUser.insertUser();
Contact currentContact = new Contact(
firstName = String.valueOf(System.currentTimeMillis()),
lastName = 'Contact');
insert(currentContact);
}
}
~~~

35
Q

sObjects That Don’t Support DML Operations

A

Your organization contains standard objects provided by Salesforce and custom objects that you created. These objects can be accessed in Apex as instances of the sObject data type. You can query these objects and perform DML operations on them. However, some standard objects don’t support DML operations although you can still obtain them in queries. They include the following:

* AccountTerritoryAssignmentRule
* AccountTerritoryAssignmentRuleItem
* ApexComponent
* ApexPage
* BusinessHours
* BusinessProcess
* CategoryNode
* CurrencyType
* DatedConversionRate
* NetworkMember (allows update only)
* ProcessInstance
* Profile
* RecordType
* SelfServiceUser
* StaticResource
* Territory2
* UserAccountTeamMember
* UserPreference
* UserTerritory
* WebLink
If an Account record has a record type of Person Account, the Name field can’t be modified with DML operations.

36
Q

Bulk DML Exception Handling

A

Exceptions that arise from a bulk DML call (including any recursive DML operations in triggers that are fired as a direct result of the call) are handled differently depending on where the original call came from:
1. When errors occur because of a bulk DML call that originates directly from the Apex DML statements, or if the allOrNone parameter of a Database DML method is set to true, the runtime engine follows the “all or nothing” rule: during a single operation, all records must be updated successfully or the entire operation rolls back to the point immediately preceding the DML statement. If the allOrNone parameter of a Database DML method is set to false and a record fails, the remainder of the DML operation can still succeed.
2. You must iterate through the returned results to identify which records succeeded or failed. If the allOrNone parameter of a Database DML method is set to false and a before-trigger assigns an invalid value to a field, the partial set of valid records isn’t inserted.
3. When errors occur because of a bulk DML call that originates from SOAP API with default settings, or if the allOrNone parameter of a Database DML method was specified as false, the runtime engine attempts at least a partial save:
- During the first attempt, the runtime engine processes all records. - Any record that generates an error due to issues such as validation rules or unique index violations is set aside.
- If there were errors during the first attempt, the runtime engine makes a second attempt that includes only those records that didn’t generate errors. All records that didn’t generate an error during the first attempt are processed, and if any record generates an error (perhaps because of race conditions) it’s also set aside.
- If there were additional errors during the second attempt, the runtime engine makes a third and final attempt that includes only those records that didn’t generate errors during the first and second attempts. If any record generates an error, the entire operation fails with the error message, “Too many batch retries in the presence of Apex triggers and partial failures.”

Note

* During the second and third attempts, governor limits are reset to their original state before the first attempt.
* Apex triggers are fired for the first save attempt; if errors are encountered for some records and subsequent attempts are made to save the subset of successful records, triggers are re-fired on this subset of records.
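The contrast between the all-or-nothing rule and a partial save can be sketched as follows (the second account deliberately omits the required Name field):

~~~
Account good = new Account(Name='Valid');
Account bad = new Account(); // missing required Name
// DML statement: all or nothing; throws DmlException, neither record is saved
try {
    insert new Account[]{ good, bad };
} catch (DmlException e) {
    System.debug('Nothing saved: ' + e.getMessage());
}
// Database method with allOrNone=false: partial save; 'good' is committed
Database.SaveResult[] results = Database.insert(new Account[]{ good, bad }, false);
~~~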

37
Q

DML: Non-Null Required Fields Values and Null Fields

A

When inserting new records or updating required fields on existing records, you must supply non-null values for all required fields.
Unlike the SOAP API, Apex allows you to change field values to null without updating the fieldsToNull array on the sObject record. The API requires an update to this array due to the inconsistent handling of null values by many SOAP providers. Because Apex runs solely on the Lightning Platform, this workaround is unnecessary.
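A short sketch of clearing a field by direct null assignment (assumes at least one account exists):

~~~
Account a = [SELECT Id, Description FROM Account LIMIT 1];
a.Description = null; // no fieldsToNull array needed, unlike the SOAP API
update a;             // the Description field is cleared in the database
~~~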

38
Q

DML: String Field Truncation and API Version

A

Apex classes and triggers saved (compiled) using API version 15.0 and higher produce a runtime error if you assign a String value that is too long for the field.
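A sketch of the v15.0+ behavior; assigning an over-long value raises a runtime error (Account.Name is limited to 255 characters):

~~~
Account a = new Account();
try {
    a.Name = 'x'.repeat(500); // too long for the 255-character Name field
    insert a;
} catch (Exception e) {
    System.debug('Rejected: ' + e.getMessage()); // string-too-long error
}
~~~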

39
Q

sObject Properties to Enable DML Operations

A

To be able to insert, update, delete, or undelete an sObject record, the sObject must have the corresponding property (createable, updateable, deletable, or undeletable respectively) set to true.
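These properties can be checked at runtime through describe results, sketched here for Account:

~~~
Schema.DescribeSObjectResult d = Account.sObjectType.getDescribe();
if (d.isCreateable()) {
    insert new Account(Name='Allowed by describe check');
}
System.debug('updateable=' + d.isUpdateable() +
             ', deletable=' + d.isDeletable() +
             ', undeletable=' + d.isUndeletable());
~~~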

40
Q

DML: ID Values

A

The insert statement automatically sets the ID value of all new sObject records. Inserting a record that already has an ID—and therefore already exists in your organization’s data—produces an error.
The insert and update statements check each batch of records for duplicate ID values. If there are duplicates, the first five are processed. For the sixth and all additional duplicate IDs, the SaveResult for those entries is marked with an error similar to the following: Maximum number of duplicate updates in one batch (5 allowed). Attempt to update Id more than once in this API call: number_of_attempts.
The ID of an updated sObject record cannot be modified in an update statement, but related record IDs can.

41
Q

DML: Fields With Unique Constraints

A

For some sObjects that have fields with unique constraints, inserting duplicate sObject records results in an error. For example, inserting CollaborationGroup sObjects with the same names results in an error because CollaborationGroup records must have unique names.
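A sketch of the CollaborationGroup example; the second insert violates the unique-name constraint:

~~~
insert new CollaborationGroup(Name='All Apex Devs', CollaborationType='Public');
try {
    insert new CollaborationGroup(Name='All Apex Devs', CollaborationType='Public');
} catch (DmlException e) {
    System.debug('Duplicate name rejected: ' + e.getMessage());
}
~~~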

42
Q

System Fields Automatically Set

A

When inserting new records, system fields such as CreatedDate, CreatedById, and SystemModstamp are automatically updated. You cannot explicitly specify these values in your Apex. Similarly, when updating records, system fields such as LastModifiedDate, LastModifiedById, and SystemModstamp are automatically updated.

43
Q

DML: Maximum Number of Records Processed by DML Statement

A

You can pass a maximum of 10,000 sObject records to a single insert, update, delete, or undelete method.
Each upsert statement consists of two operations, one for inserting records and one for updating records. Each of these operations is subject to the runtime limits for insert and update, respectively. For example, if you upsert more than 10,000 records and all of them are being updated, you receive an error.

44
Q

DML: Creating Records for Multiple Object Types

A

As with the SOAP API, you can create records in Apex for multiple object types, including custom objects, in one DML call with API version 20.0 and later. For example, you can create a contact and an account in one call. You can create records for up to 10 object types in one call.

Records are saved in the same order that they’re entered in the sObject input array. If you’re entering new records that have a parent-child relationship, the parent record must precede the child record in the array. For example, if you’re creating a contact that references an account that’s also being created in the same call, the account must have a smaller index in the array than the contact does. The contact references the account by using an External ID field.

You can’t add a record that references another record of the same object type in the same call. For example, the Contact object has a Reports To field that’s a reference to another contact. You can’t create two contacts in one call if one contact uses the Reports To field to reference a second contact in the input array. You can create a contact that references another contact that has been previously created.

Records for multiple object types are broken into multiple chunks by Salesforce. A chunk is a subset of the input array, and each chunk contains records of one object type. Data is committed on a chunk-by-chunk basis. Any Apex triggers that are related to the records in a chunk are invoked once per chunk. Consider an sObject input array that contains the following set of records:

account1, account2, contact1, contact2, contact3, case1, account3, account4, contact4
Salesforce splits the records into five chunks:

account1, account2
contact1, contact2, contact3
case1
account3, account4
contact4
Each call can process up to 10 chunks. If the sObject array contains more than 10 chunks, you must process the records in more than one call.

Note

For Apex, the chunking of the input array for an insert or update DML operation has two possible causes: the existence of multiple object types or the default chunk size of 200. If chunking in the input array occurs because of both of these reasons, each chunk is counted toward the limit of 10 chunks. If the input array contains only one type of sObject, you won’t hit this limit. However, if the input array contains at least two sObject types and contains a high number of objects that are chunked into groups of 200, you might hit this limit. For example, if you have an array that contains 1,001 consecutive leads followed by 1,001 consecutive contacts, the array will be chunked into 12 groups: Two groups are due to the different sObject types of Lead and Contact, and the remaining are due to the default chunking size of 200 objects. In this case, the insert or update operation returns an error because you reached the limit of 10 chunks in hybrid arrays. The workaround is to call the DML operation for each object type separately.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

DML and Knowledge Objects

A

To execute DML code on knowledge articles (KnowledgeArticleVersion types such as the custom FAQ_kav article type), the running user must have the Knowledge User feature license. Otherwise, calling a class method that contains DML operations on knowledge articles results in errors. If the running user isn’t a system administrator and doesn’t have the Knowledge User feature license, calling any method in the class returns an error even if the called method doesn’t contain DML code for knowledge articles but another method in the class does.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

Apex: Locking Statements

A

In Apex, you can use FOR UPDATE to lock sObject records while they’re being updated in order to prevent race conditions and other thread safety problems.
While an sObject record is locked, no other client or user is allowed to make updates either through code or the Salesforce user interface. The client locking the records can perform logic on the records and make updates with the guarantee that the locked records won’t be changed by another client during the lock period. The lock gets released when the transaction completes.

To lock a set of sObject records in Apex, append the FOR UPDATE keywords to the end of an inline SOQL statement. For example, the following statement, in addition to querying for two accounts, also locks the accounts that are returned:
Account [] accts = [SELECT Id FROM Account LIMIT 2 FOR UPDATE];
Note
You can’t use the ORDER BY keywords in any SOQL query that uses locking.

Locking Considerations:
1. While the records are locked by a client, the locking client can modify their field values in the database in the same transaction. Other clients have to wait until the transaction completes and the records are no longer locked before being able to update the same records. Other clients can still query the same records while they’re locked.
2. If you attempt to lock a record currently locked by another client, your process waits a maximum of 10 seconds for the lock to be released before acquiring a new lock. If the wait time exceeds 10 seconds, a QueryException is thrown. Similarly, if you attempt to update a record currently locked by another client and the lock isn’t released within a maximum of 10 seconds, a DmlException is thrown.
3. If a client attempts to modify a locked record, the update operation can succeed if the lock gets released within a short amount of time after the update call was made. In this case, it’s possible that the updates overwrite changes made by the locking client if the second client obtained an old copy of the record. To prevent the overwrite from happening, the second client must lock the record first. The locking process returns a fresh copy of the record from the database through the SELECT statement. The second client can use this copy to make new updates.
4. The record locks obtained in Apex via the FOR UPDATE clause are automatically released when making callouts. The release is logged in the debug log, and the logged message includes the most recently locked entity type. For example: FOR_UPDATE_LOCKS_RELEASE FOR UPDATE locks released due to a callout. The most recent lock was Account. Use caution when making callouts in contexts where FOR UPDATE queries could have been previously executed.
5. When you perform a DML operation on one record, related records are locked in addition to the record in question.
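A minimal sketch of acquiring a lock and handling a lock timeout; it assumes an account named Acme exists:

~~~
try {
    // Lock the matching accounts for the duration of the transaction
    Account[] accts = [SELECT Id, Name FROM Account
                       WHERE Name = 'Acme' FOR UPDATE];
    for (Account a : accts) {
        a.Name = a.Name + ' - verified';
    }
    update accts; // Locked records can't have been changed by other clients
} catch (QueryException e) {
    // Thrown if the lock isn't released within about 10 seconds
    System.debug('Could not acquire lock: ' + e.getMessage());
}
~~~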

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

Locking in a SOQL For Loop

A

The FOR UPDATE keywords can also be used within SOQL for loops. For example:
for (Account[] accts : [SELECT Id FROM Account FOR UPDATE]) {
    // Your code
}
As discussed in SOQL For Loops, the example above corresponds internally to calls to the query() and queryMore() methods in the SOAP API.

Note that there is no commit statement. If your Apex trigger completes successfully, any database changes are automatically committed. If your Apex trigger does not complete successfully, any changes made to the database are rolled back.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

Apex: Avoiding Deadlocks

A

Apex has the possibility of deadlocks, as does any other procedural logic language involving updates to multiple database tables or rows. To avoid such deadlocks, the Apex runtime engine:
First locks sObject parent records, then children.
Locks sObject records in order of ID when multiple records of the same type are being edited.
As a developer, use care when locking rows to ensure that you are not introducing deadlocks. Verify that you are using standard deadlock avoidance techniques by accessing tables and rows in the same order from all locations in an application.
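The avoidance technique described above can be sketched as follows, locking the parent account before its child contacts (names and values are illustrative):

~~~
// Always lock in the same order everywhere in the application:
// parent records first, then children.
Account parent = [SELECT Id FROM Account
                  WHERE Name = 'Acme' LIMIT 1 FOR UPDATE];
Contact[] children = [SELECT Id FROM Contact
                      WHERE AccountId = :parent.Id FOR UPDATE];
// ... update parent and children here; the locks are released
// when the transaction completes ...
~~~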

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

Working with SOQL and SOSL Query Results

A

SOQL and SOSL queries only return data for sObject fields that are selected in the original query. If you try to access a field that was not selected in the SOQL or SOSL query (other than ID), you receive a runtime error, even if the field contains a value in the database.

Even if only one sObject field is selected, a SOQL or SOSL query always returns data as complete records. Consequently, you must dereference the field in order to access it. For example, this code retrieves an sObject list from the database with a SOQL query, accesses the first account record in the list, and then dereferences the record’s AnnualRevenue field:

Double rev = [SELECT AnnualRevenue FROM Account
              WHERE Name = 'Acme'][0].AnnualRevenue;
// When only one result is returned by a SOQL query, it is not
// necessary to include the list's index.
Double rev2 = [SELECT AnnualRevenue FROM Account
               WHERE Name = 'Acme' LIMIT 1].AnnualRevenue;

The only situation in which it is not necessary to dereference an sObject field in the result of a SOQL query is when the query returns an Integer as the result of a COUNT operation:
Integer i = [SELECT COUNT() FROM Account];

Fields in records returned by SOSL queries must always be dereferenced.

Also note that sObject fields that contain formulas return the value of the field at the time the SOQL or SOSL query was issued. Any changes to other fields that are used within the formula are not reflected in the formula field value until the record has been saved and re-queried in Apex. Like other read-only sObject fields, the values of the formula fields themselves cannot be changed in Apex.
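A minimal sketch of the runtime error from accessing an unselected field (it assumes at least one account exists):

~~~
Account a = [SELECT Id FROM Account LIMIT 1];
try {
    // Name wasn't selected in the query, so this fails at run time
    // even if the record has a Name value in the database
    String n = a.Name;
} catch (System.SObjectException e) {
    System.debug('Field was not queried: ' + e.getMessage());
}
~~~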

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

Accessing sObject Fields Through Relationships

A

sObject records represent relationships to other records with two fields: an ID and an address that points to a representation of the associated sObject. For example, the Contact sObject has both an AccountId field of type ID, and an Account field of type Account that points to the associated sObject record itself.

The ID field can be used to change the account with which the contact is associated, while the sObject reference field can be used to access data from the account. The reference field is only populated as the result of a SOQL or SOSL query (see note).

For example, the following Apex code shows how an account and a contact can be associated with one another, and then how the contact can be used to modify a field on the account:

Account a = new Account(Name = 'Acme');
insert a;  // Inserting the record automatically assigns a 
           // value to its ID field
Contact c = new Contact(LastName = 'Weissman');
c.AccountId = a.Id;
// The new contact now points at the new account
insert c;

// A SOQL query accesses data for the inserted contact, 
// including a populated c.account field
c = [SELECT Account.Name FROM Contact WHERE Id = :c.Id];

// Now fields in both records can be changed through the contact
c.Account.Name = 'salesforce.com';
c.LastName = 'Roth';

// To update the database, the two types of records must be 
// updated separately
update c;         // This only changes the contact's last name
update c.Account; // This updates the account name

Note

The expression c.Account.Name, and any other expression that traverses a relationship, displays slightly different characteristics when it is read as a value than when it is modified:

When being read as a value, if c.Account is null, then c.Account.Name evaluates to null, but does not yield a NullPointerException. This design allows developers to navigate multiple relationships without the tedium of having to check for null values.
When being modified, if c.Account is null, then c.Account.Name does yield a NullPointerException.
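A short sketch of the read/write difference; it assumes a contact with no related account exists:

~~~
Contact c = [SELECT Id, Account.Name FROM Contact
             WHERE AccountId = null LIMIT 1];

// Read: traversing the null relationship evaluates to null, no exception
String acctName = c.Account.Name;

// Write: assigning through the null relationship throws
try {
    c.Account.Name = 'New Name';
} catch (NullPointerException e) {
    System.debug('Cannot assign through a null relationship');
}
~~~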

In SOSL, you would access data for the inserted contact in a similar way to the SELECT statement used in the previous SOQL example.

List<List<SObject>> searchList = [FIND 'Acme' IN ALL FIELDS
                                  RETURNING Contact(Id, Account.Name)];

In addition, the sObject reference field can be used with insert, update, or upsert to resolve foreign keys by external ID. For example, this code inserts a new contact whose AccountId resolves to the account whose ExternalId__c custom field equals '12345'. If there is no such account, the insert fails.

Account refAcct = new Account(ExternalId__c = '12345');
Contact c = new Contact(Account = refAcct, LastName = 'Kay');
insert c;

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

Understanding Foreign Key and Parent-Child Relationship SOQL Queries

A

The SELECT statement of a SOQL query can be any valid SOQL statement, including foreign key and parent-child record joins. If foreign key joins are included, the resulting sObjects can be referenced using normal field notation. For example:

System.debug([SELECT Account.Name FROM Contact
              WHERE FirstName = 'Caroline'].Account.Name);

Additionally, parent-child relationships in sObjects act as SOQL queries as well. For example:

for (Account a : [SELECT Id, Name, (SELECT LastName FROM Contacts)
                  FROM Account
                  WHERE Name = 'Acme']) {
     Contact[] cons = a.Contacts;
}

//The following example also works because we limit to only 1 contact
for (Account a : [SELECT Id, Name, (SELECT LastName FROM Contacts LIMIT 1)
                  FROM Account
                  WHERE Name = 'testAgg']) {
     Contact c = a.Contacts;
}
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

Working with SOQL Aggregate Functions

A

You can use aggregate functions without using a GROUP BY clause. For example, you could use the AVG() aggregate function to find the average Amount for all your opportunities.
~~~
AggregateResult[] groupedResults =
    [SELECT AVG(Amount) aver FROM Opportunity];
Object avgAmount = groupedResults[0].get('aver');
~~~
Note that any query that includes an aggregate function returns its results in an array of AggregateResult objects. AggregateResult is a read-only sObject and is only used for query results.

Aggregate functions become a more powerful tool to generate reports when you use them with a GROUP BY clause. For example, you could find the average Amount for all your opportunities by campaign.
~~~
AggregateResult[] groupedResults =
    [SELECT CampaignId, AVG(Amount)
     FROM Opportunity
     GROUP BY CampaignId];
for (AggregateResult ar : groupedResults) {
    System.debug('Campaign ID: ' + ar.get('CampaignId'));
    System.debug('Average amount: ' + ar.get('expr0'));
}
~~~
Any aggregated field in a SELECT list that does not have an explicit alias automatically gets an implied alias of the form expri (for example, expr0), where i denotes the order of the aggregated fields with no explicit aliases. The value of i starts at 0 and increments for every aggregated field with no explicit alias.
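For example, mixing aliased and unaliased aggregates shows how the implied aliases are assigned; the alias names here are illustrative:

~~~
// AVG has no alias -> expr0; MAX is aliased; SUM has no alias -> expr1
AggregateResult[] results =
    [SELECT CampaignId, AVG(Amount), MAX(Amount) maxAmt, SUM(Amount)
     FROM Opportunity
     GROUP BY CampaignId];
for (AggregateResult ar : results) {
    System.debug('Average: ' + ar.get('expr0'));
    System.debug('Maximum: ' + ar.get('maxAmt'));
    System.debug('Sum: ' + ar.get('expr1'));
}
~~~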

Note

Queries that include aggregate functions are still subject to the limit on total number of query rows. All aggregate functions other than COUNT() or COUNT(fieldname) include each row used by the aggregation as a query row for the purposes of limit tracking.

For COUNT() or COUNT(fieldname) queries, limits are counted as one query row, unless the query contains a GROUP BY clause, in which case one query row per grouping is consumed.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

Working with Very Large SOQL Queries

A

Your SOQL query sometimes returns so many sObjects that the limit on heap size is exceeded and an error occurs. To resolve this, use a SOQL query for loop instead, since it can process multiple batches of records through internal calls to query and queryMore.

For example, if the results are too large, this syntax causes a runtime exception:
Account[] accts = [SELECT Id FROM Account];

Instead, use a SOQL query for loop as in one of the following examples:
~~~
// Use this format if you are not executing DML statements
// within the for loop
for (Account a : [SELECT Id, Name FROM Account
                  WHERE Name LIKE 'Acme%']) {
    // Your code without DML statements here
}

// Use this format for efficiency if you are executing DML statements
// within the for loop
for (List<Account> accts : [SELECT Id, Name FROM Account
                            WHERE Name LIKE 'Acme%']) {
    for (Account a : accts) {
        // Your code here
    }
    update accts;
}
~~~

The following example demonstrates a SOQL query for loop that’s used to mass update records. Suppose that you want to change the last name of a contact in records for contacts whose first and last names match specified criteria:

public void massUpdate() {
    for (List<Contact> contacts:
      [SELECT FirstName, LastName FROM Contact]) {
        for(Contact c : contacts) {
            if (c.FirstName == 'Barbara' &&
              c.LastName == 'Gordon') {
                c.LastName = 'Wayne';
            }
        }
        update contacts;
    }
}

Instead of using a SOQL query in a for loop, the preferred method of mass updating records is to use batch Apex, which minimizes the risk of hitting governor limits.
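A sketch of the same mass update written as batch Apex; the class name is illustrative:

~~~
public class ContactLastNameBatch implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator lets the platform feed records to execute() in batches
        return Database.getQueryLocator(
            'SELECT FirstName, LastName FROM Contact');
    }
    public void execute(Database.BatchableContext bc, List<Contact> scope) {
        for (Contact c : scope) {
            if (c.FirstName == 'Barbara' && c.LastName == 'Gordon') {
                c.LastName = 'Wayne';
            }
        }
        update scope; // Each execute() invocation gets fresh governor limits
    }
    public void finish(Database.BatchableContext bc) {}
}
// Start it with, for example:
// Database.executeBatch(new ContactLastNameBatch(), 200);
~~~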

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

More Efficient SOQL Queries

A

For best performance, SOQL queries must be selective, particularly for queries inside triggers. To avoid long execution times, the system can terminate nonselective SOQL queries. Developers receive an error message when a non-selective query in a trigger executes against an object that contains more than 1 million records. To avoid this error, ensure that the query is selective.

Selective SOQL Query Criteria
A query is selective when one of the query filters is on an indexed field and the query filter reduces the resulting number of rows below a system-defined threshold. The performance of the SOQL query improves when two or more filters used in the WHERE clause meet the mentioned conditions.
The selectivity threshold is 10% of the first million records and less than 5% of the records after the first million records, up to a maximum of 333,333 records. In some circumstances, for example with a query filter that is an indexed standard field, the threshold can be higher. Also, the selectivity threshold is subject to change.

Custom Index Considerations for Selective SOQL Queries
1. The following fields are indexed by default:
   * Primary keys (Id, Name, and OwnerId fields)
   * Foreign keys (lookup or master-detail relationship fields)
   * Audit dates (CreatedDate and SystemModstamp fields)
   * RecordType fields (indexed for all standard objects that feature them)
   * Custom fields that are marked as External ID or Unique
2. Fields not indexed by default are automatically indexed when the Salesforce optimizer recognizes that an index can improve performance for frequently run queries.
3. Salesforce Support can add custom indexes on request for customers. A custom index can't be created on these types of fields: multi-select picklists, currency fields in a multicurrency organization, long text fields, some formula fields, and binary fields (fields of type blob, file, or encrypted text). New data types, typically complex ones, are periodically added to Salesforce, and fields of these types don't always allow custom indexing.
4. You can't create custom indexes on formula fields that include invocations of the TEXT function on picklist fields.
5. Typically, a custom index isn't used in these cases:
   * The queried values exceed the system-defined threshold.
   * The filter operator is a negative operator such as NOT EQUAL TO (or !=), NOT CONTAINS, and NOT STARTS WITH.
   * The CONTAINS operator is used in the filter, and the number of rows to be scanned exceeds 333,333. The CONTAINS operator requires a full scan of the index. This threshold is subject to change.
   * You're comparing with an empty value (Name != '').

   However, there are other complex scenarios in which custom indexes can't be used.

Examples of Selective SOQL Queries
To better understand whether a query on a large object is selective or not, let’s analyze some queries. For these queries, assume that there are more than 1 million records for the Account sObject. These records include soft-deleted records, that is, deleted records that are still in the Recycle Bin.

SELECT Id FROM Account WHERE Id IN (<list of account IDs>)
(The Id field is indexed by default, and a filter on a small list of specific IDs returns few rows, so the query is selective.)

SELECT Id FROM Account WHERE Name != ''
(Because Account is a large object, this filter returns most of the records even though Name is indexed as a primary key, making the query non-selective.)

SELECT Id FROM Account WHERE Name != '' AND CustomField__c = 'ValueA'
(If the count of records returned by SELECT COUNT() FROM Account WHERE CustomField__c = 'ValueA' is lower than the selectivity threshold, and CustomField__c is indexed, the query is selective.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

Using SOQL Queries That Return One Record

A

SOQL queries can be used to assign a single sObject value when the result list contains only one element.
When the L-value of an expression is a single sObject type, Apex automatically assigns the single sObject record in the query result list to the L-value. A runtime exception results if zero sObjects or more than one sObject is found in the list. For example:

List<Account> accts = [SELECT Id FROM Account];

// These lines of code are only valid if one row is returned from
// the query. Notice that the second line dereferences the field from the
// query without assigning it to an intermediary sObject variable.
Account acct = [SELECT Id FROM Account];
String name = [SELECT Name FROM Account].Name;

This usage is supported with the following Apex types, methods, or operators:

* Database.query method
* Safe navigation operator
* Null coalescing operator
* Map.values method
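For example, a sketch of some of the supported forms; the account names are illustrative, and each line assumes the query returns at most one row:

~~~
// Database.query: cast the single returned row to the sObject type
Account acct = (Account) Database.query('SELECT Id FROM Account LIMIT 1');

// Safe navigation: yields null instead of throwing if no row is returned
String name = [SELECT Name FROM Account WHERE Name = 'Acme' LIMIT 1]?.Name;

// Null coalescing: substitute a default when no row is returned
Account fallback = [SELECT Id FROM Account WHERE Name = 'NoSuchAccount']
                   ?? new Account(Name = 'NoSuchAccount');
~~~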

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

Improve Performance by Avoiding Null Values

A

In your SOQL and SOSL queries, explicitly filtering out null values in the WHERE clause allows Salesforce to improve query performance. In the following example, any records where the Thread__c value is null are eliminated from the search.

public class TagWS {

   // getThreadTags - a quick method to pull tags not in the existing list
   public static webservice List<String>
   getThreadTags(String threadId, List<String> tags) {

      System.debug(LoggingLevel.DEBUG, tags);

      List<String> retVals = new List<String>();
      Set<String> tagSet = new Set<String>();
      Set<String> origTagSet = new Set<String>();
      origTagSet.addAll(tags);

      // Note: the WHERE clause optimizes the search by excluding
      // records where Thread__c is null
      for (CSO_CaseThread_Tag__c t :
         [SELECT Name FROM CSO_CaseThread_Tag__c
          WHERE Thread__c = :threadId AND
          Thread__c != null]) {
         tagSet.add(t.Name);
      }
      for (String x : origTagSet) {
         // Return a minus version of it so the UI knows to clear it
         if (!tagSet.contains(x)) retVals.add('-' + x);
      }
      for (String x : tagSet) {
         // Return a plus version so the UI knows it's new
         if (!origTagSet.contains(x)) retVals.add('+' + x);
      }

      return retVals;
   }
}
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

Working with Polymorphic Relationships in SOQL Queries

A

A polymorphic relationship is a relationship between objects where a referenced object can be one of several different types. For example, the Who relationship field of a Task can be a Contact or a Lead.
The following describes how to use SOQL queries with polymorphic relationships in Apex.

You can use SOQL queries that reference polymorphic fields in Apex to get results that depend on the object type referenced by the polymorphic field. One approach is to filter your results using the Type qualifier. This example queries Events that are related to an Account or Opportunity via the What field.
~~~
List<Event> events = [SELECT Description FROM Event
                      WHERE What.Type IN ('Account', 'Opportunity')];
~~~
Another approach would be to use the TYPEOF clause in the SOQL SELECT statement. This example also queries Events that are related to an Account or Opportunity via the What field.
~~~
List<Event> events = [SELECT TYPEOF What
                          WHEN Account THEN Phone
                          WHEN Opportunity THEN Amount
                      END
                      FROM Event];
~~~
These queries return a list of sObjects where the relationship field references the desired object types.
If you need to access the referenced object in a polymorphic relationship, you can use the instanceof keyword to determine the object type. The following example uses instanceof to determine whether an Account or Opportunity is related to an Event.
~~~
Event myEvent = eventFromQuery;
if (myEvent.What instanceof Account) {
// myEvent.What references an Account, so process accordingly
} else if (myEvent.What instanceof Opportunity) {
// myEvent.What references an Opportunity, so process accordingly
}
~~~

Note that you must assign the referenced sObject that the query returns to a variable of the appropriate type before you can pass it to another method. The following example:
* Queries for User or Group owners of Merchandise__c custom objects using a SOQL query with a TYPEOF clause
* Uses instanceof to determine the owner type
* Assigns the owner objects to User or Group type variables before passing them to utility methods

public class PolymorphismExampleClass {

    // Utility method for a User
    public static void processUser(User theUser) {
        System.debug('Processed User');
    }
    
    // Utility method for a Group
    public static void processGroup(Group theGroup) {
        System.debug('Processed Group');
    }

    public static void processOwnersOfMerchandise() {
        // Select records based on the Owner polymorphic relationship field
        List<Merchandise__c> merchandiseList =
            [SELECT TYPEOF Owner
                 WHEN User THEN LastName
                 WHEN Group THEN Email
             END
             FROM Merchandise__c];
        // We now have a list of Merchandise__c records owned by either a User or a Group
        for (Merchandise__c merch : merchandiseList) {
            // We can use instanceof to check the polymorphic relationship type
            // Note that we have to assign the polymorphic reference to the appropriate
            // sObject type before passing to a method
            if (merch.Owner instanceof User) {
                User userOwner = merch.Owner;
                processUser(userOwner);
            } else if (merch.Owner instanceof Group) {
                Group groupOwner = merch.Owner;
                processGroup(groupOwner);
            }
        }
    }
}
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

Using Apex Variables in SOQL and SOSL Queries

A

SOQL and SOSL statements in Apex can reference Apex code variables and expressions if they’re preceded by a colon (:). This use of a local code variable within a SOQL or SOSL statement is called a bind. The Apex parser first evaluates the local variable in code context before executing the SOQL or SOSL statement. Bind expressions can be used as:

The search string in FIND clauses.
The filter literals in WHERE clauses.
The value of the IN or NOT IN operator in WHERE clauses, allowing filtering on a dynamic set of values. Note that this is of particular use with a list of IDs or Strings, though it works with lists of any type.
The division names in WITH DIVISION clauses.
The numeric value in LIMIT clauses.
The numeric value in OFFSET clauses.
Note
Apex bind variables aren’t supported for the units parameter in the DISTANCE function. This query doesn’t work.
~~~
String units = 'mi';
List<Account> accountList =
    [SELECT Id, Name, BillingLatitude, BillingLongitude
     FROM Account
     WHERE DISTANCE(My_Location_Field__c, GEOLOCATION(10, 10), :units) < 10];
~~~

Account A = new Account(Name='xxx');
insert A;
Account B;

// A simple bind
B = [SELECT Id FROM Account WHERE Id = :A.Id];

// A bind with arithmetic
B = [SELECT Id FROM Account 
     WHERE Name = :('x' + 'xx')];

String s = 'XXX';

// A bind with expressions
B = [SELECT Id FROM Account 
     WHERE Name = :'XXXX'.substring(0,3)];

// A bind with INCLUDES clause
B = [SELECT Id FROM Account
     WHERE :A.TYPE INCLUDES ('Customer - Direct; Customer - Channel')];

// A bind with an expression that is itself a query result
B = [SELECT Id FROM Account
     WHERE Name = :[SELECT Name FROM Account
                    WHERE Id = :A.Id].Name];

Contact C = new Contact(LastName='xxx', AccountId=A.Id);
insert new Contact[]{C, new Contact(LastName='yyy', 
                                    accountId=A.id)};

// Binds in both the parent and aggregate queries
B = [SELECT Id, (SELECT Id FROM Contacts
                 WHERE Id = :C.Id)
     FROM Account
     WHERE Id = :A.Id];

// One contact returned
Contact D = B.Contacts;
// A limit bind
Integer i = 1;
B = [SELECT Id FROM Account LIMIT :i];

// An OFFSET bind
Integer offsetVal = 10;
List<Account> offsetList = [SELECT Id FROM Account OFFSET :offsetVal];

// An IN-bind with an Id list. Note that a list of sObjects
// can also be used--the Ids of the objects are used for 
// the bind
Contact[] cc = [SELECT Id FROM Contact LIMIT 2];
Task[] tt = [SELECT Id FROM Task WHERE WhoId IN :cc];

// An IN-bind with a String list
String[] ss = new String[]{'a', 'b'};
Account[] aa = [SELECT Id FROM Account 
                WHERE AccountNumber IN :ss];

// A SOSL query with binds in all possible clauses

String myString1 = 'aaa';
String myString2 = 'bbb';
Integer myInt3 = 11;
String myString4 = 'ccc';
Integer myInt5 = 22;

List<List<SObject>> searchList = [FIND :myString1 IN ALL FIELDS 
                                  RETURNING 
                                     Account (Id, Name WHERE Name LIKE :myString2
                                              LIMIT :myInt3), 
                                     Contact, 
                                     Opportunity, 
                                     Lead 
                                  WITH DIVISION =:myString4 
                                  LIMIT :myInt5];
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Querying All Records with a SOQL Statement

A

SOQL statements can use the ALL ROWS keywords to query all records in an organization, including deleted records and archived activities.

// Assumes a is an in-scope Account variable
System.assertEquals(2, [SELECT COUNT() FROM Contact WHERE AccountId = :a.Id ALL ROWS]);

You can use ALL ROWS to query records in your organization’s Recycle Bin. You cannot use the ALL ROWS keywords with the FOR UPDATE keywords.
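For example, a sketch of querying soft-deleted records in the Recycle Bin:

~~~
// IsDeleted is true for soft-deleted records; without ALL ROWS
// they would be excluded from the results entirely
List<Contact> binned = [SELECT Id, IsDeleted FROM Contact
                        WHERE IsDeleted = true ALL ROWS];
System.debug(binned.size() + ' contact(s) in the Recycle Bin');
~~~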

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

SOQL For Loops

A

SOQL for loops iterate over all of the sObject records returned by a SOQL query.
The syntax of a SOQL for loop is either:
~~~
for (variable : [soql_query]) {
    code_block
}
~~~
or
~~~
for (variable_list : [soql_query]) {
    code_block
}
~~~
Example:
~~~
String s = 'Acme';
for (Account a : [SELECT Id, Name FROM Account
                  WHERE Name LIKE :(s + '%')]) {
    // Your code
}
~~~

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

SOQL For Loops Versus Standard SOQL Queries

A

SOQL for loops differ from standard SOQL statements because of the method they use to retrieve sObjects. While the standard queries discussed in SOQL and SOSL Queries can retrieve either the count of a query or a number of object records, SOQL for loops retrieve all sObjects, using efficient chunking with calls to the query and queryMore methods of SOAP API. Developers can avoid the limit on heap size by using a SOQL for loop to process query results that return multiple records. However, this approach can result in more CPU cycles being used. See Total heap size.

Queries including an aggregate function don’t support queryMore. A run-time exception occurs if you use a query containing an aggregate function that returns more than 2,000 rows in a for loop.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

SOQL for loops can process records one at a time using a single sObject variable, or in batches of 200 sObjects at a time using an sObject list

A

The single sObject format executes the for loop’s <code_block> one time per sObject record. Consequently, it’s easy to understand and use, but is grossly inefficient if you want to use data manipulation language (DML) statements within the for loop body. Each DML statement ends up processing only one sObject at a time.
The sObject list format executes the for loop's <code_block> one time per list of 200 sObjects. Consequently, it’s a little more difficult to understand and use, but is the optimal choice if you must use DML statements within the for loop body. Each DML statement can bulk process a list of sObjects at a time.

// Create a savepoint because the data should not be committed to the database
Savepoint sp = Database.setSavepoint(); 

insert new Account[]{new Account(Name = 'yyy'), 
                     new Account(Name = 'yyy'), 
                     new Account(Name = 'yyy')};

// The single sObject format executes the for loop once per returned record
Integer i = 0;
for (Account tmp : [SELECT Id FROM Account WHERE Name = 'yyy']) {
    i++;
}
System.assert(i == 3); // Since there were three accounts named 'yyy' in the
                       // database, the loop executed three times

// The sObject list format executes the for loop once per returned batch
// of records
i = 0;
Integer j;
for (Account[] tmp : [SELECT Id FROM Account WHERE Name = 'yyy']) {
    j = tmp.size();
    i++;
}
System.assert(j == 3); // The list should have contained the three accounts
                       // named 'yyy'
System.assert(i == 1); // Since a single batch can hold up to 200 records and,
                       // only three records should have been returned, the 
                       // loop should have executed only once

// Revert the database to the original state
Database.rollback(sp);
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

Adding and Retrieving List Elements

A
List<Account> myList = new List<Account>(); // Define a new list
Account a = new Account(Name='Acme'); // Create the account first
myList.add(a);                    // Add the account sObject
Account a2 = myList.get(0);      // Retrieve the element at index 0
64
Q

Using Array Notation for One-Dimensional Lists of sObjects

A

This example declares lists of accounts using the array notation.
~~~
// Defines an Account list of size one (the element is null)
Account[] accts = new Account[1];

// Defines an Account list with no elements
List<Account> accts2 = new Account[]{};

// Defines an Account list with three elements, the second of which is null
List<Account> accts3 = new Account[]{new Account(), null, new Account()};

// Creates a Contact list that copies the elements of another list
List<Contact> contacts = new List<Contact>(otherList);
~~~

65
Q

Sorting Lists of sObjects

A

Using the List.sort method, you can sort lists of sObjects.
For sObjects, sorting is in ascending order and uses a sequence of comparison steps outlined in the next section. You can create a custom sort order for sObjects by wrapping your sObject in an Apex class that implements the Comparable interface. You can also create a custom sort order by passing a class that implements Comparator as a parameter to the sort method.

66
Q

Default Sort Order of sObjects

A

The List.sort method sorts sObjects in ascending order and compares sObjects using an ordered sequence of steps that specify the labels or fields used. The comparison starts with the first step in the sequence and ends when two sObjects are sorted using the specified labels or fields. The following is the comparison sequence used:

1. The label of the sObject type.
   For example, an Account sObject appears before a Contact.
2. The Name field, if applicable.
   For example, if the list contains two accounts named Alpha and Beta, account Alpha comes before account Beta.
3. Standard fields, starting with the fields that come first in alphabetical order, except for the Id and Name fields.
   For example, if two accounts have the same name, the first standard field used for sorting is AccountNumber.
4. Custom fields, starting with the fields that come first in alphabetical order.
   For example, if two accounts have the same name and identical standard fields, and there are two custom fields, FieldA and FieldB, the value of FieldA is used first for sorting.

For text fields, the sort algorithm uses the Unicode sort order. Also, empty fields precede non-empty fields in the sort order.
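A short sketch of the standard-field tiebreaker and the empty-fields-first rule, assuming three in-memory accounts with the same name:

~~~
List<Account> accts = new List<Account>{
    new Account(Name='Acme', AccountNumber='2'),
    new Account(Name='Acme', AccountNumber='1'),
    new Account(Name='Acme') // empty AccountNumber precedes non-empty values
};
accts.sort();
System.assertEquals(null, accts[0].AccountNumber);
System.assertEquals('1', accts[1].AccountNumber);
System.assertEquals('2', accts[2].AccountNumber);
~~~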

67
Q

Custom Sort Order of sObjects

A

To create a custom sort order for sObjects in lists, implement the Comparator interface and pass it as a parameter to the List.sort method.

Alternatively, create a wrapper class for the sObject and implement the Comparable interface. The wrapper class contains the sObject in question and implements the Comparable.compareTo method in which you specify the sort logic.

This example implements the Comparator interface to compare two opportunities based on the Amount field.
~~~
public class OpportunityComparator implements Comparator<Opportunity> {
    public Integer compare(Opportunity o1, Opportunity o2) {
        // The return value of 0 indicates that both elements are equal.
        Integer returnValue = 0;
        if (o1 == null && o2 == null) {
            returnValue = 0;
        } else if (o1 == null) {
            // nulls-first implementation
            returnValue = -1;
        } else if (o2 == null) {
            // nulls-first implementation
            returnValue = 1;
        } else if ((o1.Amount == null) && (o2.Amount == null)) {
            // both have null Amounts
            returnValue = 0;
        } else if (o1.Amount == null) {
            // nulls-first implementation
            returnValue = -1;
        } else if (o2.Amount == null) {
            // nulls-first implementation
            returnValue = 1;
        } else if (o1.Amount < o2.Amount) {
            // Set return value to a negative value.
            returnValue = -1;
        } else if (o1.Amount > o2.Amount) {
            // Set return value to a positive value.
            returnValue = 1;
        }
        return returnValue;
    }
}
~~~

This test sorts a list of opportunities using OpportunityComparator and verifies that the list elements are sorted by the opportunity amount.
~~~
@isTest
private class OpportunityComparator_Test {

    @isTest
    static void sortViaComparator() {
        // Add the opportunities to a list.
        List<Opportunity> oppyList = new List<Opportunity>();
        Date closeDate = Date.today().addDays(10);
        oppyList.add(new Opportunity(
            Name='Edge Installation',
            CloseDate=closeDate,
            StageName='Prospecting',
            Amount=50000));
        oppyList.add(new Opportunity(
            Name='United Oil Installations',
            CloseDate=closeDate,
            StageName='Needs Analysis',
            Amount=100000));
        oppyList.add(new Opportunity(
            Name='Grand Hotels SLA',
            CloseDate=closeDate,
            StageName='Prospecting',
            Amount=25000));
        oppyList.add(null);

        // Sort the objects using the Comparator implementation
        oppyList.sort(new OpportunityComparator());
        // Verify the sort order: nulls first, then ascending by Amount
        Assert.isNull(oppyList[0]);
        Assert.areEqual('Grand Hotels SLA', oppyList[1].Name);
        Assert.areEqual(25000, oppyList[1].Amount);
        Assert.areEqual('Edge Installation', oppyList[2].Name);
        Assert.areEqual(50000, oppyList[2].Amount);
        Assert.areEqual('United Oil Installations', oppyList[3].Name);
        Assert.areEqual(100000, oppyList[3].Amount);
        // Write the sorted list contents to the debug log.
        System.debug(oppyList);
    }
}
~~~

This example shows how to create a wrapper Comparable class for Opportunity. The implementation of the compareTo method in this class compares two opportunities based on the Amount field—the class member variable contained in this instance, and the opportunity object passed into the method.
~~~
public class OpportunityWrapper implements Comparable {

    public Opportunity oppy;

    // Constructor
    public OpportunityWrapper(Opportunity op) {
        // Guard against wrapping a null
        if (op == null) {
            Exception ex = new NullPointerException();
            ex.setMessage('Opportunity argument cannot be null');
            throw ex;
        }
        oppy = op;
    }

    // Compare opportunities based on the opportunity amount.
    public Integer compareTo(Object compareTo) {
        // Cast argument to OpportunityWrapper
        OpportunityWrapper compareToOppy = (OpportunityWrapper)compareTo;

        // The return value of 0 indicates that both elements are equal.
        Integer returnValue = 0;
        if ((oppy.Amount == null) && (compareToOppy.oppy.Amount == null)) {
            // both wrappers have null Amounts
            returnValue = 0;
        } else if ((oppy.Amount == null) && (compareToOppy.oppy.Amount != null)) {
            // nulls-first implementation
            returnValue = -1;
        } else if ((oppy.Amount != null) && (compareToOppy.oppy.Amount == null)) {
            // nulls-first implementation
            returnValue = 1;
        } else if (oppy.Amount > compareToOppy.oppy.Amount) {
            // Set return value to a positive value.
            returnValue = 1;
        } else if (oppy.Amount < compareToOppy.oppy.Amount) {
            // Set return value to a negative value.
            returnValue = -1;
        }
        return returnValue;
    }
}
~~~
68
Q

Sets of Objects

A

Sets can contain sObjects among other types of elements.
Sets contain unique elements. Uniqueness of sObjects is determined by comparing the objects’ fields. For example, if you try to add two accounts with the same name to a set, with no other fields set, only one sObject is added to the set.
If you add a description to one of the accounts, it is considered unique and both accounts are added to the set.

Warning

If set elements are objects, and these objects change after being added to the collection, they won’t be found anymore when using, for example, the contains or containsAll methods, because of changed field values.
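A minimal sketch of both behaviors, using in-memory accounts:

~~~
Set<Account> accountSet = new Set<Account>();
Account a = new Account(Name='Acme');
accountSet.add(a);
accountSet.add(new Account(Name='Acme')); // same field values: not added again
System.assertEquals(1, accountSet.size());

accountSet.add(new Account(Name='Acme', Description='HQ')); // differs: added
System.assertEquals(2, accountSet.size());

// Mutating an element after adding it breaks lookups
a.Name = 'Acme Corp';
System.assert(!accountSet.contains(a));
~~~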

69
Q

Maps of sObjects

A

Map keys and values can be of any data type, including sObject types, such as Account.
Maps can hold sObjects both in their keys and values. A map key represents a unique value that maps to a map value. For example, a common key would be an ID that maps to an account (a specific sObject type). This example shows how to define a map whose keys are of type ID and whose values are of type Account.
Map<ID, Account> m = new Map<ID, Account>();

As with primitive types, you can populate map key-value pairs when the map is declared by using curly brace ({}) syntax. Within the curly braces, specify the key first, then specify the value for that key using =>. This example creates a map of integers to account lists and adds one entry using the account list created earlier.

Account[] accs = new Account[5]; // Account[] is synonymous with List<Account>
Map<Integer, List<Account>> m4 = new Map<Integer, List<Account>>{1 => accs};

Maps allow sObjects in their keys. Use sObjects as map keys only when the sObject field values won’t change.

70
Q

Auto-Populating Map Entries from a SOQL Query

A

When working with SOQL queries, maps can be populated from the results returned by the SOQL query. The map key must be declared with an ID or String data type, and the map value must be declared as an sObject data type.

This example shows how to populate a new map from a query. In the example, the SOQL query returns a list of accounts with their Id and Name fields. The new operator uses the returned list of accounts to create a map.

// Populate map from SOQL query
Map<ID, Account> m = new Map<ID, Account>([SELECT Id, Name FROM Account LIMIT 10]);
// After populating the map, iterate through the map entries
for (ID idKey : m.keyset()) {
    Account a = m.get(idKey);
    System.debug(a);
}

Account myAcct = new Account();                        //Define a new account
Map<Integer, Account> m = new Map<Integer, Account>(); // Define a new map
m.put(1, myAcct);                  // Insert a new key-value pair in the map
System.assert(!m.containsKey(3));  // Assert that the map doesn't contain key 3
Account a = m.get(1);              // Retrieve a value, given a particular key
Set<Integer> s = m.keySet();       // Return a set that contains all of the keys in the map
71
Q

sObject Map Considerations

A

Be cautious when using sObjects as map keys. Key matching for sObjects is based on the comparison of all sObject field values. If one or more field values change after adding an sObject to the map, attempting to retrieve this sObject from the map returns null. This is because the modified sObject isn’t found in the map due to different field values. This can occur if you explicitly change a field on the sObject, or if the sObject fields are implicitly changed by the system; for example, after inserting an sObject, the sObject variable has the ID field autofilled. Attempting to fetch this sObject from a map to which it was added before the insert operation won’t yield the map entry.

Another scenario where sObject fields are autofilled is in triggers, for example, when using before and after insert triggers for an sObject. If those triggers share a static map defined in a class, and the sObjects in Trigger.New are added to this map in the before trigger, the sObjects in Trigger.New in the after trigger aren’t found in the map because the two sets of sObjects differ by the fields that are autofilled. The sObjects in Trigger.New in the after trigger have system fields populated after insertion, namely: ID, CreatedDate, CreatedById, LastModifiedDate, LastModifiedById, and SystemModStamp.
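A minimal sketch of the pitfall described above, using an in-memory account as a map key:

~~~
Account acct = new Account(Name='Acme');
Map<Account, Integer> counts = new Map<Account, Integer>{acct => 1};
System.assert(counts.containsKey(acct));

// Changing a field (explicitly, or implicitly when insert autofills the Id)
// changes the key comparison, so the entry is no longer found
acct.Name = 'Acme Corp';
System.assertEquals(null, counts.get(acct));
~~~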

72
Q

Dynamic SOQL

A

Dynamic SOQL refers to the creation of a SOQL string at run time with Apex code. Dynamic SOQL enables you to create more flexible applications. For example, you can create a search based on input from an end user or update records with varying field names.

To create a dynamic SOQL query at run time, use the Database.query or Database.queryWithBinds methods, in one of the following ways.

Return a single sObject when the query returns a single record:
sObject s = Database.query(string);

Return a list of sObjects when the query returns more than a single record:
List<sObject> sobjList = Database.query(string);
List<sObject> sobjList = Database.queryWithBinds(string, bindVariablesMap, accessLevel);

With API version 55.0 and later, as part of the User Mode for Database Operations feature, use the accessLevel parameter to run the query operation in user or system mode. The accessLevel parameter specifies whether the method runs in system mode (AccessLevel.SYSTEM_MODE) or user mode (AccessLevel.USER_MODE). In system mode, the object and field-level permissions of the current user are ignored, and the record sharing rules are controlled by the class sharing keywords. In user mode, the object permissions, field-level security, and sharing rules of the current user are enforced. System mode is the default.
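A short sketch combining both features (the bind key name and the account name are illustrative):

~~~
// Resolve the bind variable from the map key, not from an in-scope Apex variable
Map<String, Object> binds = new Map<String, Object>{'acctName' => 'Acme'};
List<Account> accts = Database.queryWithBinds(
    'SELECT Id, Name FROM Account WHERE Name = :acctName',
    binds,
    AccessLevel.USER_MODE); // enforce the running user's permissions
~~~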

73
Q

Dynamic SOQL Considerations

A

You can use simple bind variables in dynamic SOQL query strings when using Database.query. Bind variables in the query must be within the scope of the database operation. The following is allowed:
~~~
String myTestString = 'TestName';
List<sObject> sobjList = Database.query('SELECT Id FROM MyCustomObject__c WHERE Name = :myTestString');
~~~
However, unlike inline SOQL, you can’t use bind variable fields in the query string with Database.query. The following example isn’t supported and results in a Variable does not exist error.
~~~
MyCustomObject__c myVariable = new MyCustomObject__c(field1__c='TestField');
List<sObject> sobjList = Database.query('SELECT Id FROM MyCustomObject__c WHERE field1__c = :myVariable.field1__c');
~~~
(API version 57.0 and later) Another option is to use the Database.queryWithBinds method. With this method, bind variables in the query are resolved from a Map parameter directly with a key, rather than from Apex code variables. This removes the need for the variables to be in scope when the query is executed.
These considerations apply when using the Map parameter in the Database.queryWithBinds method:

- Although map keys of type String are case-sensitive, the queryWithBinds method doesn’t support Map keys that differ only in case. In a queryWithBinds method, comparison of Map keys is case-insensitive. If duplicate Map keys exist, the method throws a runtime QueryException. For example: System.QueryException: The bindMap consists of duplicate case-insensitive keys: [Acctname, acctName].
- Map keys must follow naming standards: they must start with an ASCII letter, can’t start with a number, must not use reserved keywords, and must adhere to variable naming requirements.
- Although currently supported, Salesforce recommends against using dot notation with Map keys.

To prevent SOQL injection, use the escapeSingleQuotes method. This method adds the escape character (\) to all single quotation marks in a string that is passed in from a user. The method ensures that all single quotation marks are treated as enclosing strings, instead of database commands.
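A minimal sketch of escaping untrusted input before building the query string (the variable names are illustrative):

~~~
// userInput is untrusted; escaping neutralizes embedded single quotes
String userInput = 'O\'Brien';
String safeInput = String.escapeSingleQuotes(userInput);
List<Contact> contacts = Database.query(
    'SELECT Id FROM Contact WHERE LastName = \'' + safeInput + '\'');
~~~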

The Dynamic SOQL examples in this topic show how to use the Database.query and Database.queryWithBinds methods. These methods also use Dynamic SOQL:

- Database.countQuery and Database.countQueryWithBinds: Return the number of records that a dynamic SOQL query would return when executed.
- Database.getQueryLocator and Database.getQueryLocatorWithBinds: Create a QueryLocator object used in batch Apex or Visualforce.

74
Q

Dynamic SOSL

A

Dynamic SOSL refers to the creation of a SOSL string at run time with Apex code. Dynamic SOSL enables you to create more flexible applications. For example, you can create a search based on input from an end user, or update records with varying field names.

To create a dynamic SOSL query at run time, use the search query method. For example:

List<List<sObject>> myQuery = Search.query(SOSL_search_string);

String searchQuery = 'FIND \'Edge*\' IN ALL FIELDS RETURNING Account(Id, Name), Contact, Lead';
List<List<SObject>> searchList = Search.query(searchQuery);

75
Q

Use Dynamic SOSL to Return Snippets

A

To provide more context for records in search results, use the SOSL WITH SNIPPET clause. WITH SNIPPET is an optional clause that can be added to a SOSL query for article, case, feed, and idea searches. On the search results page, excerpts below article titles show terms matching the search query highlighted within the context of surrounding text. Snippets make it easier for users to identify the content they’re looking for.

To use the SOSL WITH SNIPPET clause in a dynamic SOSL query at run time, use the Search.find method.
~~~
Search.SearchResults searchResults = Search.find('FIND \'test\' IN ALL FIELDS RETURNING ' +
    'KnowledgeArticleVersion(Id, Title WHERE PublishStatus = \'Online\' AND Language = \'en_US\') WITH SNIPPET (target_length=120)');

List<Search.SearchResult> articleList = searchResults.get('KnowledgeArticleVersion');

for (Search.SearchResult searchResult : articleList) {
    KnowledgeArticleVersion article = (KnowledgeArticleVersion) searchResult.getSObject();
    System.debug(article.Title);
    System.debug(searchResult.getSnippet());
}
~~~

76
Q

Dynamic DML

A

In addition to querying describe information and building SOQL queries at runtime, you can also create sObjects dynamically, and insert them into the database using DML.

To create a new sObject of a given type, use the newSObject method on an sObject token. Note that the token must be cast into a concrete sObject type (such as Account). For example:

// Get a new account
Account a = new Account();
// Get the token for the account
Schema.sObjectType tokenA = a.getSObjectType();
// The following produces an error because the token is a generic sObject, not an Account
// Account b = tokenA.newSObject();
// The following works because the token is cast back into an Account
Account b = (Account)tokenA.newSObject();

Dynamic sObject Creation Example:
This example shows how to obtain the sObject token through the Schema.getGlobalDescribe method and then create a new sObject using the newSObject method on the token.
~~~
public class DynamicSObjectCreation {
    public static sObject createObject(String typeName) {
        Schema.SObjectType targetType = Schema.getGlobalDescribe().get(typeName);
        if (targetType == null) {
            // The requested sObject type doesn't exist in this org
            throw new IllegalArgumentException('Unsupported sObject type: ' + typeName);
        }

        // Instantiate an sObject with the type passed in as an argument
        // at run time.
        return targetType.newSObject();
    }
}
~~~
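A hedged usage sketch of the class above (the field name is illustrative): because the returned sObject is generic, fields can be set dynamically with the put method.

~~~
sObject acct = DynamicSObjectCreation.createObject('Account');
acct.put('Name', 'Acme'); // set fields generically on the dynamic sObject
insert acct;
~~~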
77
Q

Understanding Apex Describe Information

A

You can describe sObjects either by using tokens or the describeSObjects Schema method.

Apex provides two data structures and a method for sObject and field describe information:

Token—a lightweight, serializable reference to an sObject or a field that is validated at compile time. This is used for token describes.
The describeSObjects method—a method in the Schema class that performs describes on one or more sObject types.
Describe result—an object of type Schema.DescribeSObjectResult that contains all the describe properties for the sObject or field. Describe result objects are not serializable, and are validated at runtime. This result object is returned when performing the describe, using either the sObject token or the describeSObjects method.

Describing sObjects Using Tokens
It is easy to move from a token to its describe result, and vice versa. Both sObject and field tokens have the method getDescribe which returns the describe result for that token. On the describe result, the getSObjectType and getSObjectField methods return the tokens for sObject and field, respectively.

Because tokens are lightweight, using them can make your code faster and more efficient. For example, use the token version of an sObject or field when you are determining the type of an sObject or field that your code needs to use. The token can be compared using the equality operator (==) to determine whether an sObject is the Account object, for example, or whether a field is the Name field or a custom calculated field.

The following code provides a general example of how to use tokens and describe results to access information about sObject and field properties:

// Create a new account as the generic type sObject
sObject s = new Account();

// Verify that the generic sObject is an Account sObject
System.assert(s.getsObjectType() == Account.sObjectType);

// Get the sObject describe result for the Account object
Schema.DescribeSObjectResult dsr = Account.sObjectType.getDescribe();

// Get the field describe result for the Name field on the Account object
Schema.DescribeFieldResult dfr = Schema.sObjectType.Account.fields.Name;

// Verify that the field token is the token for the Name field on an Account object
System.assert(dfr.getSObjectField() == Account.Name);

// Get the field describe result from the token
dfr = dfr.getSObjectField().getDescribe();
78
Q

Apex: Enforcing Sharing Rules

A

Apex generally runs in system context; that is, the current user’s permissions and field-level security aren’t taken into account during code execution. Sharing rules, however, aren’t always bypassed: the class must be declared with the with sharing keyword to ensure that sharing rules are enforced.
Enforcing sharing rules by using the with sharing keyword doesn’t enforce the user’s permissions and field-level security. Apex always has access to all fields and objects in an organization, ensuring that code won’t fail to run because of fields or objects that are hidden from a user.
Enforcing the current user’s sharing rules can impact:
SOQL and SOSL queries. A query may return fewer rows than it would operating in system context.
DML operations. An operation may fail because the current user doesn’t have the correct permissions. For example, if the user specifies a foreign key value that exists in the organization, but which the current user doesn’t have access to.
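A minimal sketch of declaring a class with the with sharing keyword (the class name is illustrative):

~~~
// Queries inside this class respect the current user's sharing rules
public with sharing class SharedAccountReader {
    public static List<Account> readAccounts() {
        // May return fewer rows than the same query run in system context
        return [SELECT Id, Name FROM Account LIMIT 10];
    }
}
~~~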

79
Q

Enforcing Object and Field Permissions

A

To enforce field-level security (FLS) and object permissions of the running user, you can specify user-mode access for database operations. See Enforce User Mode for Database Operations. You can also enforce these permissions in your SOQL queries by using WITH SECURITY_ENFORCED. For more information, see Filter SOQL Queries Using WITH SECURITY_ENFORCED.

You can also enforce object-level and field-level permissions in your code by explicitly calling the sObject describe result methods (of Schema.DescribeSObjectResult) and the field describe result methods (of Schema.DescribeFieldResult) that check the current user’s access permission levels. In this way, you can verify whether the current user has the necessary permissions, and perform a specific DML operation or query only if they have sufficient access.

For example, you can call the isAccessible, isCreateable, or isUpdateable methods of Schema.DescribeSObjectResult to verify whether the current user has read, create, or update access to an sObject, respectively. Similarly, Schema.DescribeFieldResult exposes these access control methods that you can call to check the current user’s read, create, or update access for a field. In addition, you can call the isDeletable method provided by Schema.DescribeSObjectResult to check if the current user has permission to delete a specific sObject.

These examples call the access control methods.

To check the field-level update permission of the contact's email field before updating it:
if (Schema.sObjectType.Contact.fields.Email.isUpdateable()) {
   // Update contact
}
To check the field-level create permission of the contact's email field before creating a new contact:
if (Schema.sObjectType.Contact.fields.Email.isCreateable()) {
   // Create new contact
}
To check the field-level read permission of the contact's email field before querying for this field:
if (Schema.sObjectType.Contact.fields.Email.isAccessible()) {
   Contact c = [SELECT Email FROM Contact WHERE Id= :Id];
}
To check the object-level permission for the contact before deleting the contact:
if (Schema.sObjectType.Contact.isDeletable()) {
   // Delete contact
}

Considerations
Orgs with Experience Cloud sites enabled provide various settings to hide a user’s personal information from other users (see Manage Personal User Information Visibility and Share Personal Contact Information Within Experience Cloud Sites). These settings aren’t enforced in Apex, even with security features such as the WITH SECURITY_ENFORCED clause or the stripInaccessible method. To hide specific fields on the User object in Apex, follow the example code outlined in Comply with a User’s Personal Information Visibility Settings.
Automated Process users can’t perform Object and FLS checks in custom code unless appropriate permission sets are explicitly applied to those users.

80
Q

Enforce User Mode for Database Operations

A

Apex code runs in system mode by default, which means that it runs with substantially elevated permissions over the user running the code. To enhance the security context of Apex, you can specify user-mode access for database operations. Field-level security (FLS) and object permissions of the running user are respected in user mode, unlike in system mode. User mode always applies sharing rules, but in system mode they’re controlled by sharing keywords on the class. See Using the with sharing, without sharing, and inherited sharing Keywords.

You can indicate the mode of the operation by using WITH USER_MODE or WITH SYSTEM_MODE in your SOQL or SOSL query. This example specifies user mode in SOQL.

List<Account> acc = [SELECT Id FROM Account WITH USER_MODE];

Salesforce recommends that you enforce field-level security (FLS) by using WITH USER_MODE rather than WITH SECURITY_ENFORCED because of these additional advantages:

- WITH USER_MODE accounts for polymorphic fields like Owner and Task.WhatId.
- WITH USER_MODE processes all clauses in the SOQL SELECT statement, including the WHERE clause.
- WITH USER_MODE finds all FLS errors in your SOQL query, while WITH SECURITY_ENFORCED finds only the first error. Further, in user mode, you can use the getInaccessibleFields() method on QueryException to examine the full set of access errors.
Database operations can specify either user or system mode. This example inserts a new account in user mode.
Account acc = new Account(Name='test');
insert as user acc;
The AccessLevel class represents the two modes in which Apex runs database operations. Use this class to define the execution mode as user mode or system mode. An optional accessLevel parameter in Database and Search methods specifies whether the method runs in system mode (AccessLevel.SYSTEM_MODE) or user mode (AccessLevel.USER_MODE). Use these overloaded methods to perform DML and query operations.

- Database.query method. See Dynamic SOQL.
- Database.getQueryLocator methods
- Database.countQuery method
- Search.query method
- Database DML methods (insert, update, upsert, merge, delete, undelete, and convertLead), including the *Immediate and *Async methods, such as insertImmediate and deleteAsync

These methods require the accessLevel parameter:

- Database.queryWithBinds
- Database.getQueryLocatorWithBinds
- Database.countQueryWithBinds

81
Q

Using Permission Sets to Enforce Security in DML and Search Operations (Developer Preview)

A

In Developer Preview, you can specify a permission set that is used to augment the field-level and object-level security for database and search operations. Run the AccessLevel.withPermissionSetId() method with a specified permission set ID. Specific user mode DML operations that are performed with that AccessLevel respect the permissions in the specified permission set, in addition to the running user’s permissions.

This example runs the AccessLevel.withPermissionSetId() method with the specified permission set and inserts a custom object.

@isTest
public with sharing class ElevateUserModeOperations_Test {
    @isTest
    static void objectCreatePermViaPermissionSet() {
        Profile p = [SELECT Id FROM Profile WHERE Name='Minimum Access - Salesforce'];
        User u = new User(Alias = 'standt', Email='standarduser@testorg.com',
            EmailEncodingKey='UTF-8', LastName='Testing', LanguageLocaleKey='en_US',
            LocaleSidKey='en_US', ProfileId = p.Id,
            TimeZoneSidKey='America/Los_Angeles',
            UserName='standarduser' + DateTime.now().getTime() + '@testorg.com');
        System.runAs(u) {
            try { 
                Database.insert(new Account(name='foo'), AccessLevel.User_mode); 
                Assert.fail(); 
            } catch (SecurityException ex) { 
                Assert.isTrue(ex.getMessage().contains('Account'));
            }
            //Get ID of previously created permission set named 'AllowCreateToAccount'
            Id permissionSetId = [SELECT Id FROM PermissionSet
                WHERE Name = 'AllowCreateToAccount' LIMIT 1].Id;

            Database.insert(new Account(name='foo'),
                AccessLevel.User_mode.withPermissionSetId(permissionSetId));

            // The elevated access level is not persisted to subsequent operations
            try { 
                Database.insert(new Account(name='foo2'), AccessLevel.User_mode); 
                Assert.fail(); 
            } catch (SecurityException ex) { 
                Assert.isTrue(ex.getMessage().contains('Account')); 
            } 
            
        } 
    } 
}
82
Q

Enforce Security with the stripInaccessible Method

A

Use the stripInaccessible method to enforce field-level and object-level data protection. This method can be used to strip the fields and relationship fields from query and subquery results that the user can’t access. The method can also be used to remove inaccessible sObject fields before DML operations to avoid exceptions and to sanitize sObjects that have been deserialized from an untrusted source.
The method can, for example, remove inaccessible fields from query results: a display table for campaign data must always show the BudgetedCost, while the ActualCost must be shown only to users who have permission to read that field.

This example code removes inaccessible fields from sObjects before DML operations. The user who doesn’t have permission to create Rating for an Account can still create an Account. The method ensures that no Rating is set and doesn’t throw an exception.
~~~
List<Account> newAccounts = new List<Account>();
Account a = new Account(Name='Acme Corporation');
Account b = new Account(Name='Blaze Comics', Rating='Warm');
newAccounts.add(a);
newAccounts.add(b);

SObjectAccessDecision securityDecision = Security.stripInaccessible(
    AccessType.CREATABLE, newAccounts);

// No exceptions are thrown and no rating is set
insert securityDecision.getRecords();

System.debug(securityDecision.getRemovedFields().get('Account')); // Prints "Rating"
System.debug(securityDecision.getModifiedIndexes()); // Prints "1"
~~~

The field- and object-level data protection is accessed through the Security and SObjectAccessDecision classes. The access check is based on the field-level permission of the current user in the context of the specified operation—create, read, update, or upsert. The Security.stripInaccessible() method checks the source records for fields that don’t meet the field-level security check for the current user. The method also checks the source records for lookup or master-detail relationship fields to which the current user doesn’t have access. The method creates a return list of sObjects that is identical to the source records, except that the fields that are inaccessible to the current user are removed. The sObjects returned by the getRecords method contain records in the same order as the sObjects in the sourceRecords parameter of the stripInaccessible method.

As a Developer Preview feature, Security.stripInaccessible() takes a permission set ID as a parameter and enforces field-level and object-level access as per the specified permission set, in addition to the running user’s permissions.

To identify inaccessible fields that were removed, you can use the SObject.isSet() method. For example, suppose the return list contains a Contact object and the custom field social_security_number__c is inaccessible to the user. Because this custom field fails the field-level access check, the field isn’t set and isSet returns false.

This example code sanitizes sObjects that have been deserialized from an untrusted source. The user doesn’t have permission to update the AnnualRevenue of an Account.
~~~
String jsonInput =
    '[' +
    '{' +
    '"Name": "InGen",' +
    '"AnnualRevenue": "100"' +
    '},' +
    '{' +
    '"Name": "Octan"' +
    '}' +
    ']';

List<Account> accounts = (List<Account>)JSON.deserializeStrict(jsonInput, List<Account>.class);
SObjectAccessDecision securityDecision = Security.stripInaccessible(
AccessType.UPDATABLE, accounts);</Account></Account></Account>

// Secure update
update securityDecision.getRecords(); // Doesn't update AnnualRevenue field
System.debug(String.join(securityDecision.getRemovedFields().get('Account'), ', ')); // Prints "AnnualRevenue"
System.debug(String.join(securityDecision.getModifiedIndexes(), ', ')); // Prints "0"
~~~

83
Q

Filter SOQL Queries Using WITH SECURITY_ENFORCED

A

~~~
List<Account> act1 = [SELECT Id, (SELECT LastName FROM Contacts)
    FROM Account WHERE Name LIKE 'Acme' WITH SECURITY_ENFORCED];
~~~
Use the WITH SECURITY_ENFORCED clause to enable field- and object-level security permissions checking for SOQL SELECT queries in Apex code, including subqueries and cross-object relationships.
**If any fields or objects referenced in the SOQL SELECT query using WITH SECURITY_ENFORCED are inaccessible to the user, a System.QueryException is thrown, and no data is returned.**
WITH SECURITY_ENFORCED applies field- and object-level security checks only to fields and objects referenced in SELECT or FROM SOQL clauses and not clauses like WHERE or ORDER BY. In other words, security is enforced on what the SOQL SELECT query returns, not on all the elements that go into running the query.

Insert the WITH SECURITY_ENFORCED clause:
After the WHERE clause if one exists, else after the FROM clause.
Before any ORDER BY, LIMIT, OFFSET, or aggregate function clauses.
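The placement rules above can be illustrated with a generic query; the object and filter values here are arbitrary:
~~~
SELECT Id, Name
FROM Account
WHERE Name LIKE 'Acme%'
WITH SECURITY_ENFORCED
ORDER BY Name
LIMIT 10
~~~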

There are some restrictions while querying polymorphic lookup fields using WITH SECURITY_ENFORCED. Polymorphic fields are relationship fields that can point to more than one entity.
Traversing a polymorphic field's relationship is not supported in queries using WITH SECURITY_ENFORCED. For example, you cannot use WITH SECURITY_ENFORCED in this query, which returns the Id and Owner names for User and Calendar entities: SELECT Id, What.Name FROM Event WHERE What.Type IN ('User','Calendar').
Using TYPEOF expressions with an ELSE clause is not supported in queries using WITH SECURITY_ENFORCED. TYPEOF is used in a SELECT query to specify the fields to be returned for a given type of a polymorphic relationship. For example, you cannot use WITH SECURITY_ENFORCED in this query. The query specifies certain fields to be returned for Account and Opportunity objects, and Name and Email fields to be returned for all other objects.
~~~
SELECT
    TYPEOF What
        WHEN Account THEN Phone
        WHEN Opportunity THEN Amount
        ELSE Name, Email
    END
FROM Event
~~~
The Owner, CreatedBy, and LastModifiedBy polymorphic lookup fields are exempt from this restriction, and do allow polymorphic relationship traversal.
For AppExchange Security Review, you must use API version 48.0 or later when using WITH SECURITY_ENFORCED. You cannot use API versions where the feature was in beta or pilot.

84
Q

Salesforce recommends that you enforce Field Level Security (FLS) by using WITH USER_MODE rather than WITH SECURITY_ENFORCED because of these additional advantages.

A

WITH USER_MODE accounts for polymorphic fields like Owner and Task.WhatId.
WITH USER_MODE processes all clauses in the SOQL SELECT statement including the WHERE clause.
WITH USER_MODE finds all FLS errors in your SOQL query, while WITH SECURITY_ENFORCED finds only the first error. Further, in user mode, you can use the getInaccessibleFields() method on QueryException to examine the full set of access errors.

~~~
List<Account> acc = [SELECT Id FROM Account WITH USER_MODE];
~~~

This feature is available in scratch orgs where the ApexUserModeWithPermset feature is enabled. If the feature isn’t enabled, Apex code with this feature can be compiled but not executed.
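A sketch of examining those access errors in user mode; it assumes the running user lacks read access to at least one queried field, so the query throws:
~~~
// Catch the FLS failure surfaced by WITH USER_MODE and inspect every
// inaccessible field, not just the first one.
try {
    List<Account> accts = [SELECT Id, AnnualRevenue FROM Account WITH USER_MODE];
} catch (System.QueryException qe) {
    // Map of sObject type name to the set of inaccessible field names
    Map<String, Set<String>> errs = qe.getInaccessibleFields();
    System.debug(errs);
}
~~~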

85
Q

User Access to specific class: Class Security

A

You can specify which users can execute methods in a particular top-level class based on their user profile or permission sets. You can only set security on Apex classes, not on triggers.

To set Apex class security from the class list page:
From Setup, enter Apex Classes in the Quick Find box, then select Apex Classes.
Next to the name of the class that you want to restrict, click Security.
Select the profiles that you want to enable from the Available Profiles list and click Add, or select the profiles that you want to disable from the Enabled Profiles list and click Remove.
Click Save.

To set Apex class security from the class detail page:
From Setup, enter Apex Classes in the Quick Find box, then select Apex Classes.
Click the name of the class that you want to restrict.
Click Security.
Select the profiles that you want to enable from the Available Profiles list and click Add, or select the profiles that you want to disable from the Enabled Profiles list and click Remove.
Click Save.

To set Apex class security from a permission set:

From Setup, enter Permission Sets in the Quick Find box, then select Permission Sets.
Select a permission set.
Click Apex Class Access.
Click Edit.
Select the Apex classes that you want to enable from the Available Apex Classes list and click Add, or select the Apex classes that you want to disable from the Enabled Apex Classes list and click Remove.
Click Save.

To set Apex class security from a profile:
From Setup, enter Profiles in the Quick Find box, then select Profiles.
Select a profile.
In the Apex Class Access page or related list, click Edit.
Select the Apex classes that you want to enable from the Available Apex Classes list and click Add, or select the Apex classes that you want to disable from the Enabled Apex Classes list and click Remove.
Click Save.

86
Q

Understanding Apex Managed Sharing

A

Sharing is the act of granting a user or group of users permission to perform a set of actions on a record or set of records. Sharing access can be granted using the Salesforce user interface and Lightning Platform, or programmatically using Apex.
Sharing enables record-level access control for all custom objects, as well as many standard objects (such as Account, Contact, Opportunity and Case). Administrators first set an object’s organization-wide default sharing access level, and then grant additional access based on record ownership, the role hierarchy, sharing rules, and manual sharing. Developers can then use Apex managed sharing to grant additional access programmatically with Apex.
Most sharing for a record is maintained in a related sharing object, similar to an access control list (ACL) found in other platforms.

87
Q

Types of record Sharing

A

Salesforce has the following types of sharing:

  1. Managed Sharing
    Managed sharing involves sharing access granted by Lightning Platform based on record ownership, the role hierarchy, and sharing rules:
    1)Record Ownership
    Each record is owned by a user or optionally a queue for custom objects, cases and leads. The record owner is automatically granted Full Access, allowing them to view, edit, transfer, share, and delete the record.
    2)Role Hierarchy
    The role hierarchy enables users above another user in the hierarchy to have the same level of access to records owned by or shared with users below. Consequently, users above a record owner in the role hierarchy are also implicitly granted Full Access to the record, though this behavior can be disabled for specific custom objects. The role hierarchy is not maintained with sharing records. Instead, role hierarchy access is derived at runtime. For more information, see “Controlling Access Using Hierarchies” in the Salesforce online help.
    3)Sharing Rules
    Sharing rules are used by administrators to automatically grant users within a given group or role access to records owned by a specific group of users. Sharing rules cannot be added to a package and cannot be used to support sharing logic for apps installed from AppExchange.
    Sharing rules can be based on record ownership or other criteria. You can’t use Apex to create criteria-based sharing rules. Also, criteria-based sharing cannot be tested using Apex.

Implicit sharing added by Lightning Platform managed sharing cannot be altered directly using the Salesforce user interface, SOAP API, or Apex.

  2. User Managed Sharing, also known as Manual Sharing
    User managed sharing allows the record owner or any user with Full Access to a record to share the record with a user or group of users. This is generally done by an end user, for a single record. Only the record owner and users above the owner in the role hierarchy are granted Full Access to the record. It is not possible to grant other users Full Access. Users with the “Modify All” object-level permission for the given object or the “Modify All Data” permission can also manually share a record. User managed sharing is removed when the record owner changes or when the access granted in the sharing does not grant additional access beyond the object’s organization-wide sharing default access level.
  3. Apex Managed Sharing
    Apex managed sharing provides developers with the ability to support an application’s particular sharing requirements programmatically through Apex or the SOAP API. This type of sharing is similar to managed sharing. Only users with “Modify All Data” permission can add or change Apex managed sharing on a record. Apex managed sharing is maintained across record owner changes.
88
Q

Sharing: Access Levels

A

When determining a user’s access to a record, the most permissive level of access is used. Most share objects support the following access levels:
| Access Level | API Name | Description |
| --- | --- | --- |
| Private | None | Only the record owner and users above the record owner in the role hierarchy can view and edit the record. This access level only applies to the AccountShare object. |
| Read Only | Read | The specified user or group can view the record only. |
| Read/Write | Edit | The specified user or group can view and edit the record. |
| Full Access | All | The specified user or group can view, edit, transfer, share, and delete the record. Note: this access level can only be granted with managed sharing. |
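These access levels can be read back from a share record; a minimal sketch, where the record Id is purely illustrative:
~~~
// Inspect sharing entries for one account. AccountAccessLevel holds the
// API name from the table above (None, Read, Edit, or All).
Id someAccountId = '001000000000001AAA'; // illustrative Id, not a real record
List<AccountShare> shares = [SELECT UserOrGroupId, AccountAccessLevel, RowCause
                             FROM AccountShare
                             WHERE AccountId = :someAccountId];
System.debug(shares);
~~~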

Apex Triggers and User Record Sharing
If a trigger changes the owner of a record, the running user must have read access to the new owner’s user record if the trigger is started through the following:
API
Standard user interface
Standard Visualforce controller
Class defined with the with sharing keyword
If a trigger is started through a class that’s not defined with the with sharing keyword, the trigger runs in system mode. In this case, the trigger doesn’t require the running user to have specific access.

89
Q

Sharing a Record Using Apex

A

To access sharing programmatically, you must use the share object associated with the standard or custom object for which you want to share. For example, AccountShare is the sharing object for the Account object, ContactShare is the sharing object for the Contact object. In addition, all custom object sharing objects are named as follows, where MyCustomObject is the name of the custom object:
MyCustomObject_Share
Objects on the detail side of a master-detail relationship don’t have an associated sharing object. The detail record’s access is determined by the master’s sharing object and the relationship’s sharing setting.
A share object includes records supporting all three types of sharing: managed sharing, user managed sharing, and Apex managed sharing. Sharing that is granted to users implicitly through organization-wide defaults, the role hierarchy, and permissions such as the “View All” and “Modify All” permissions for the given object, “View All Data,” and “Modify All Data” aren’t tracked with this object.
Every share object has the following properties:
objectNameAccessLevel
The level of access that the specified user or group has been granted for a share sObject. The name of the property is AccessLevel appended to the object name. For example, the property name for LeadShare object is LeadAccessLevel. Valid values are:
Edit
Read
All

Note - The All access level is an internal value and can’t be granted.
This field must be set to an access level that’s higher than the organization’s default access level for the parent object.

ParentId The ID of the record being shared. This field can't be updated.
RowCause The reason why the user or group is being granted access. The reason determines the type of sharing, which controls who can alter the sharing record. This field can’t be updated.
UserOrGroupId The user or group IDs to which you’re granting access. A group can be:
A public group or a sharing group associated with a role.
A territory group.
This field can’t be updated.
Note - You can’t grant access to unauthenticated guest users using Apex.

90
Q

Creating User Managed Sharing Using Apex

A

It’s possible to manually share a record to a user or a group using Apex or SOAP API. If the owner of the record changes, the sharing is automatically deleted. The following example class contains a method that shares the job specified by the job ID with the specified user or group ID with read access
~~~
public class JobSharing {

    public static Boolean manualShareRead(Id recordId, Id userOrGroupId) {
        // Create new sharing object for the custom object Job.
        Job__Share jobShr = new Job__Share();

        // Set the ID of record being shared.
        jobShr.ParentId = recordId;

        // Set the ID of user or group being granted access.
        jobShr.UserOrGroupId = userOrGroupId;

        // Set the access level.
        jobShr.AccessLevel = 'Read';

        // Set rowCause to 'manual' for manual sharing.
        // This line can be omitted as 'manual' is the default value for sharing objects.
        jobShr.RowCause = Schema.Job__Share.RowCause.Manual;

        // Insert the sharing record and capture the save result.
        // The false parameter allows for partial processing if multiple records are passed
        // into the operation.
        Database.SaveResult sr = Database.insert(jobShr, false);

        // Process the save results.
        if (sr.isSuccess()) {
            // Indicates success
            return true;
        } else {
            // Get first save result error.
            Database.Error err = sr.getErrors()[0];

            // Check if the error is related to a trivial access level.
            // Access level must be more permissive than the object's default.
            // These sharing records are not required, so an insert exception is acceptable.
            if (err.getStatusCode() == StatusCode.FIELD_FILTER_VALIDATION_EXCEPTION
                    && err.getMessage().contains('AccessLevel')) {
                // Indicates success.
                return true;
            } else {
                // Indicates failure.
                return false;
            }
        }
    }
}

@isTest
private class JobSharingTest {
    // Test for the manualShareRead method
    static testMethod void testManualShareRead() {
        // Select users for the test.
        List<User> users = [SELECT Id FROM User WHERE IsActive = true LIMIT 2];
        Id user1Id = users[0].Id;
        Id user2Id = users[1].Id;

        // Create new job.
        Job__c j = new Job__c();
        j.Name = 'Test Job';
        j.OwnerId = user1Id;
        insert j;

        // Insert manual share for user who is not record owner.
        System.assertEquals(JobSharing.manualShareRead(j.Id, user2Id), true);

        // Query job sharing records.
        List<Job__Share> jShrs = [SELECT Id, UserOrGroupId, AccessLevel,
            RowCause FROM Job__Share WHERE ParentId = :j.Id AND UserOrGroupId = :user2Id];

        // Test for only one manual share on job.
        System.assertEquals(jShrs.size(), 1, 'Set the object\'s sharing model to Private.');

        // Test attributes of manual share.
        System.assertEquals(jShrs[0].AccessLevel, 'Read');
        System.assertEquals(jShrs[0].RowCause, 'Manual');
        System.assertEquals(jShrs[0].UserOrGroupId, user2Id);

        // Test invalid job Id.
        delete j;

        // Insert manual share for deleted job id.
        System.assertEquals(JobSharing.manualShareRead(j.Id, user2Id), false);
    }
}
~~~
91
Q

Creating Apex Managed Sharing

A

Apex managed sharing enables developers to programmatically manipulate sharing to support their application’s behavior through either Apex or SOAP API. This type of sharing is similar to managed sharing. Only users with “Modify All Data” permission can add or change Apex managed sharing on a record. Apex managed sharing is maintained across record owner changes.

Apex managed sharing must use an Apex sharing reason. Apex sharing reasons are a way for developers to track why they shared a record with a user or group of users. Using multiple Apex sharing reasons simplifies the coding required to make updates and deletions of sharing records. They also enable developers to share with the same user or group multiple times using different reasons.

Apex sharing reasons are defined on an object’s detail page. Each Apex sharing reason has a label and a name:
The label displays in the Reason column when viewing the sharing for a record in the user interface. This label allows users and administrators to understand the source of the sharing. The label is also enabled for translation through the Translation Workbench.
The name is used when referencing the reason in the API and Apex.
All Apex sharing reason names have the following format:
~~~
MyReasonName__c
~~~
Apex sharing reasons can be referenced programmatically as follows:
~~~
Schema.CustomObject__Share.rowCause.SharingReason__c
~~~
For example, an Apex sharing reason called Recruiter for an object called Job can be referenced as follows:
~~~
Schema.Job__Share.rowCause.Recruiter__c
~~~
To create an Apex sharing reason:
From the management settings for the custom object, click New in the Apex Sharing Reasons related list.
Enter a label for the Apex sharing reason. The label displays in the Reason column when viewing the sharing for a record in the user interface. The label is also enabled for translation through the Translation Workbench.
Enter a name for the Apex sharing reason. The name is used when referencing the reason in the API and Apex. This name can contain only underscores and alphanumeric characters, and must be unique in your org. It must begin with a letter, not include spaces, not end with an underscore, and not contain two consecutive underscores.
Click Save.
Apex sharing reasons and Apex managed sharing recalculation are only available for custom objects.

92
Q

Apex Managed Sharing Example

A

For this example, suppose that you’re building a recruiting application and have an object called Job. You want to validate that the recruiter and hiring manager listed on the job have access to the record. The following trigger grants the recruiter and hiring manager access when the job record is created. This example requires a custom object called Job, with two lookup fields associated with User records called Hiring_Manager and Recruiter. Also, the Job custom object must have two sharing reasons added called Hiring_Manager and Recruiter.
~~~
trigger JobApexSharing on Job__c (after insert) {

    if (Trigger.isInsert) {
        // Create a new list of sharing objects for Job
        List<Job__Share> jobShrs = new List<Job__Share>();

        // Declare variables for recruiting and hiring manager sharing
        Job__Share recruiterShr;
        Job__Share hmShr;

        for (Job__c job : Trigger.new) {
            // Instantiate the sharing objects
            recruiterShr = new Job__Share();
            hmShr = new Job__Share();

            // Set the ID of record being shared
            recruiterShr.ParentId = job.Id;
            hmShr.ParentId = job.Id;

            // Set the ID of user or group being granted access
            recruiterShr.UserOrGroupId = job.Recruiter__c;
            hmShr.UserOrGroupId = job.Hiring_Manager__c;

            // Set the access level
            recruiterShr.AccessLevel = 'edit';
            hmShr.AccessLevel = 'read';

            // Set the Apex sharing reason for hiring manager and recruiter
            recruiterShr.RowCause = Schema.Job__Share.RowCause.Recruiter__c;
            hmShr.RowCause = Schema.Job__Share.RowCause.Hiring_Manager__c;

            // Add objects to list for insert
            jobShrs.add(recruiterShr);
            jobShrs.add(hmShr);
        }

        // Insert sharing records and capture save result.
        // The false parameter allows for partial processing if multiple records are passed
        // into the operation.
        Database.SaveResult[] lsr = Database.insert(jobShrs, false);

        // Create counter
        Integer i = 0;

        // Process the save results
        for (Database.SaveResult sr : lsr) {
            if (!sr.isSuccess()) {
                // Get the first save result error
                Database.Error err = sr.getErrors()[0];

                // Check if the error is related to a trivial access level.
                // Access levels equal to or less permissive than the object's default
                // access level are not allowed.
                // These sharing records are not required, so an insert exception is
                // acceptable.
                if (!(err.getStatusCode() == StatusCode.FIELD_FILTER_VALIDATION_EXCEPTION
                        && err.getMessage().contains('AccessLevel'))) {
                    // Throw an error when the error is not related to trivial access level.
                    Trigger.newMap.get(jobShrs[i].ParentId).addError(
                        'Unable to grant sharing access due to following exception: '
                        + err.getMessage());
                }
            }
            i++;
        }
    }
}
~~~

93
Q

Recalculating Apex Managed Sharing

A

Salesforce automatically recalculates sharing for all records on an object when its organization-wide sharing default access level changes. The recalculation adds managed sharing when appropriate. In addition, all types of sharing are removed if the access they grant is considered redundant. For example, manual sharing, which grants Read Only access to a user, is deleted when the object’s sharing model changes from Private to Public Read Only.

To recalculate Apex managed sharing, you must write an Apex class that implements a Salesforce-provided interface to do the recalculation. You must then associate the class with the custom object, on the custom object’s detail page, in the Apex Sharing Recalculation related list.
You can execute this class from the custom object detail page where the Apex sharing reason is specified. An administrator might need to recalculate the Apex managed sharing for an object if a locking issue prevented Apex code from granting access to a user as defined by the application’s logic. You can also use the Database.executeBatch method to programmatically invoke an Apex managed sharing recalculation.

94
Q

Anonymous Blocks

A

An anonymous block is Apex code that doesn’t get stored in the metadata, but that can be compiled and executed.
Compile and execute anonymous blocks using one of the following:
Developer Console
Salesforce extensions for Visual Studio Code
The executeAnonymous() SOAP API call: ExecuteAnonymousResult executeAnonymous(String code)
Note the following about the content of an anonymous block (for executeAnonymous(), the code String):
Can include user-defined methods and exceptions.
User-defined methods can’t include the keyword static.
You don’t have to manually commit any database changes.
If your code completes successfully, any database changes are automatically committed. If your code doesn't complete successfully, any changes made to the database are rolled back.
Unlike classes and triggers, anonymous blocks execute as the current user and can fail to compile if the code violates the user’s object- and field-level permissions.
Don’t have a scope other than local. For example, although it’s legal to use the global access modifier, it has no meaning. The scope of the method is limited to the anonymous block.
When you define a class or interface (a custom type) in an anonymous block, it’s considered virtual by default when the anonymous block executes. This fact is true even if your custom type wasn’t defined with the virtual modifier. To avoid this from happening, save your class or interface in Salesforce. (Classes and interfaces defined in an anonymous block aren’t saved in your org.)
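For instance, this anonymous block defines and calls a local method; note the absence of the static keyword, per the rules above:
~~~
// A user-defined method in an anonymous block (static is not allowed here).
Integer triple(Integer x) {
    return 3 * x;
}
System.debug(triple(4)); // 12
~~~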

95
Q

Executing Anonymous Apex through the API and the Author Apex Permission

A

To run any Apex code with the executeAnonymous() API call, including Apex methods saved in the org, users must have the Author Apex permission. For users who don’t have the Author Apex permission, the API allows restricted execution of anonymous Apex. This exception applies only when users execute anonymous Apex through the API, or through a tool that uses the API, but not in the Developer Console. Such users are allowed to run the following in an anonymous block.

Code that they write in the anonymous block
Web service methods (methods declared with the webservice keyword) that are saved in the org
Any built-in Apex methods that are part of the Apex language
Running any other Apex code isn’t allowed when the user doesn’t have the Author Apex permission. For example, calling methods of custom Apex classes that are saved in the org isn’t allowed nor is using custom classes as arguments to built-in methods.

When users without the Author Apex permission run DML statements in an anonymous block, triggers can get fired as a result.

96
Q

Apex Triggers

A

Apex can be invoked by using triggers. Apex triggers enable you to perform custom actions before or after changes to Salesforce records, such as insertions, updates, or deletions.
A trigger is Apex code that executes before or after the following types of operations:
1. insert
2. update
3. delete
4. merge
5. upsert
6. undelete

There are two types of triggers:
1. Before triggers are used to update or validate record values before they’re saved to the database.
2. After triggers are used to access field values that are set by the system (such as a record’s Id or LastModifiedDate field), and to affect changes in other records, such as logging into an audit table or firing asynchronous events with a queue. The records that fire the after trigger are read-only.

Triggers can also modify other records of the same type as the records that initially fired the trigger. For example, if a trigger fires after an update of contact A, the trigger can also modify contacts B, C, and D. Because triggers can cause other records to change, and because these changes can, in turn, fire more triggers, the Apex runtime engine considers all such operations a single unit of work and sets limits on the number of operations that can be performed to prevent infinite recursion.

Total stack depth for any Apex invocation that recursively fires triggers due to insert, update, or delete statements: 16

Apex trigger batch size: 200

Additionally, if you update or delete a record in its before trigger, or delete a record in its after trigger, you will receive a runtime error. This includes both direct and indirect operations. For example, if you update account A, and the before update trigger of account A inserts contact B, and the after insert trigger of contact B queries for account A and updates it using the DML update statement or database method, then you are indirectly updating account A in its before trigger, and you will receive a runtime error.
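One common way to keep a trigger from re-entering itself and exhausting the stack depth limit is a static guard flag in a helper class. This is a general pattern sketch, not something prescribed by the text; the class and variable names are illustrative:
~~~
public class TriggerGuard {
    // Static variables live for a single transaction, so this flag
    // prevents the same trigger logic from running twice in one save cycle.
    public static Boolean hasRun = false;
}

// Inside the trigger body:
// if (!TriggerGuard.hasRun) {
//     TriggerGuard.hasRun = true;
//     // ...perform the DML that could otherwise re-fire this trigger
// }
~~~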

97
Q

Apex Trigger : Implementation Considerations

A
  1. upsert triggers fire both before and after insert or before and after update triggers as appropriate.
  2. merge triggers fire both before and after delete for the losing records, and both before and after update triggers for the winning record.
  3. Triggers that execute after a record has been undeleted only work with specific objects.
  4. Field history is not recorded until the end of a trigger. If you query field history in a trigger, you don’t see any history for the current transaction.
  5. Field history tracking honors the permissions of the current user. If the current user doesn’t have permission to directly edit an object or field, but the user activates a trigger that changes an object or field with history tracking enabled, no history of the change is recorded.
  6. Callouts must be made asynchronously from a trigger so that the trigger process isn’t blocked while waiting for the external service’s response. The asynchronous callout is made in a background process, and the response is received when the external service returns it. To make an asynchronous callout, use asynchronous Apex such as a future method.
  7. In API version 20.0 and earlier, if a Bulk API request causes a trigger to fire, each chunk of 200 records for the trigger to process is split into chunks of 100 records. In Salesforce API version 21.0 and later, no further splits of API chunks occur. If a Bulk API request causes a trigger to fire multiple times for chunks of 200 records, governor limits are reset between these trigger invocations for the same HTTP request.
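The asynchronous-callout consideration above can be sketched with a future method; the class name and endpoint are hypothetical:
~~~
public class CalloutService {
    // callout=true is required to permit HTTP callouts from a future method.
    @future(callout=true)
    public static void notifyExternalSystem(Set<Id> recordIds) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/notify'); // hypothetical endpoint
        req.setMethod('POST');
        req.setBody(JSON.serialize(recordIds));
        HttpResponse res = new Http().send(req);
    }
}
// From a trigger, pass only record Ids, not sObjects:
// CalloutService.notifyExternalSystem(Trigger.newMap.keySet());
~~~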
98
Q

Bulk Triggers

A

All triggers are bulk triggers by default, and can process multiple records at a time. You should always plan on processing more than one record at a time.

Note

An Event object that is defined as recurring is not processed in bulk for insert, delete, or update triggers.

Bulk triggers can handle both single record updates and bulk operations like:
Data import
Lightning Platform Bulk API calls
Mass actions, such as record owner changes and deletes
Recursive Apex methods and triggers that invoke bulk DML statements

99
Q

Trigger Syntax

A
~~~
trigger TriggerName on ObjectName (trigger_events) {
    code_block
}
~~~

~~~
trigger myAccountTrigger on Account (before insert, before update) {
    // Your code here
}
~~~
The code block of a trigger cannot contain the static keyword. Triggers can only contain keywords applicable to an inner class. In addition, you do not have to manually commit any database changes made by a trigger. If your Apex trigger completes successfully, any database changes are automatically committed. If your Apex trigger does not complete successfully, any changes made to the database are rolled back.

100
Q

Trigger Context Variables

A

All triggers define implicit variables that allow developers to access run-time context. These variables are contained in the System.Trigger class.
| Variable | Usage |
| --- | --- |
| isExecuting | Returns true if the current context for the Apex code is a trigger, not a Visualforce page, a Web service, or an executeAnonymous() API call. |
| isInsert | Returns true if this trigger was fired due to an insert operation, from the Salesforce user interface, Apex, or the API. |
| isUpdate | Returns true if this trigger was fired due to an update operation, from the Salesforce user interface, Apex, or the API. |
| isDelete | Returns true if this trigger was fired due to a delete operation, from the Salesforce user interface, Apex, or the API. |
| isBefore | Returns true if this trigger was fired before any record was saved. |
| isAfter | Returns true if this trigger was fired after all records were saved. |
| isUndelete | Returns true if this trigger was fired after a record is recovered from the Recycle Bin. This recovery can occur after an undelete operation from the Salesforce user interface, Apex, or the API. |
| new | Returns a list of the new versions of the sObject records. This sObject list is only available in insert, update, and undelete triggers, and the records can only be modified in before triggers. |
| newMap | A map of IDs to the new versions of the sObject records. This map is only available in before update, after insert, after update, and after undelete triggers. |
| old | Returns a list of the old versions of the sObject records. This sObject list is only available in update and delete triggers. |
| oldMap | A map of IDs to the old versions of the sObject records. This map is only available in update and delete triggers. |
| operationType | Returns an enum of type System.TriggerOperation corresponding to the current operation. Possible values are: BEFORE_INSERT, BEFORE_UPDATE, BEFORE_DELETE, AFTER_INSERT, AFTER_UPDATE, AFTER_DELETE, and AFTER_UNDELETE. If you vary your programming logic based on different trigger types, consider using the switch statement with different permutations of unique trigger execution enum states. |
| size | The total number of records in a trigger invocation, both old and new. |

The record firing a trigger can include an invalid field value, such as a formula that divides by zero. In this case, the field value is set to null in these variables:
new
newMap
old
oldMap
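To make the operationType dispatch concrete, here is a minimal sketch; the trigger name and per-branch comments are illustrative, not from the source:

~~~
trigger accountDispatch on Account (before insert, after insert, before update) {
    // Branch once on the current trigger event instead of
    // nested Trigger.isBefore / Trigger.isInsert checks.
    switch on Trigger.operationType {
        when BEFORE_INSERT {
            // set default field values on Trigger.new here
        }
        when AFTER_INSERT {
            // create related records here
        }
        when BEFORE_UPDATE {
            // run validations here
        }
    }
}
~~~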
~~~
trigger simpleTrigger on Account (after insert) {
    for (Account a : Trigger.new) {
        // Iterate over each sObject
    }

    // This single query finds every contact that is associated with any of the
    // triggering accounts. Note that although Trigger.new is a collection of
    // records, when used as a bind variable in a SOQL query, Apex automatically
    // transforms the list of records into a list of corresponding Ids.
    Contact[] cons = [SELECT LastName FROM Contact
                      WHERE AccountId IN :Trigger.new];
}
~~~
~~~
trigger myAccountTrigger on Account (before delete, before insert, before update,
                                     after delete, after insert, after update) {
    if (Trigger.isBefore) {
        if (Trigger.isDelete) {
            // In a before delete trigger, the trigger accesses the records that
            // will be deleted with the Trigger.old list.
            for (Account a : Trigger.old) {
                if (a.Name != 'okToDelete') {
                    a.addError('You can\'t delete this record!');
                }
            }
        } else {
            // In before insert or before update triggers, the trigger accesses
            // the new records with the Trigger.new list.
            for (Account a : Trigger.new) {
                if (a.Name == 'bad') {
                    a.Name.addError('Bad name');
                }
            }
            if (Trigger.isInsert) {
                for (Account a : Trigger.new) {
                    System.assertEquals('xxx', a.AccountNumber);
                    System.assertEquals('industry', a.Industry);
                    System.assertEquals(100, a.NumberOfEmployees);
                    System.assertEquals(100.0, a.AnnualRevenue);
                    a.AccountNumber = 'yyy';
                }
            }
        }
    // If the trigger is not a before trigger, it must be an after trigger.
    } else {
        if (Trigger.isInsert) {
            List<Contact> contacts = new List<Contact>();
            for (Account a : Trigger.new) {
                if (a.Name == 'makeContact') {
                    contacts.add(new Contact(LastName = a.Name,
                                             AccountId = a.Id));
                }
            }
            insert contacts;
        }
    }
}
~~~
101
Q

Apex Trigger Context Variable Considerations

A

Be aware of the following considerations for trigger context variables:
1. trigger.new and trigger.old cannot be used in Apex DML operations.
2. You can use an object to change its own field values using trigger.new, but only in before triggers. In all after triggers, trigger.new is not saved, so a runtime exception is thrown.
3. trigger.old is always read-only.
4. You cannot delete trigger.new.
These considerations apply to certain actions in different trigger events (can you change fields using trigger.new, update the original object with an update DML operation, or delete the original object with a delete DML operation):

before insert: Change fields via trigger.new: Allowed. Update original via DML: Not applicable. Delete original via DML: Not applicable.
after insert: Change fields: Not allowed; a runtime error is thrown because trigger.new is already saved. Update original: Allowed. Delete original: Allowed, but unnecessary; the object is deleted immediately after being inserted.
before update: Change fields: Allowed. Update original: Not allowed; a runtime error is thrown. Delete original: Not allowed; a runtime error is thrown.
after update: Change fields: Not allowed; a runtime error is thrown because trigger.new is already saved. Update original: Allowed; even though bad code could cause infinite recursion by doing this incorrectly, the error would be caught by governor limits. Delete original: Allowed; the updates are saved before the object is deleted, so if the object is undeleted, the updates become visible.
before delete: Change fields: Not allowed. Update original: Allowed; the updates are saved before the object is deleted, so if the object is undeleted, the updates become visible. Delete original: Not allowed; a runtime error is thrown because the deletion is already in progress.
after delete: Change fields: Not allowed. Update original: Not applicable; the object has already been deleted. Delete original: Not applicable; the object has already been deleted.
after undelete: Change fields: Not allowed; a runtime error is thrown. Update original: Allowed. Delete original: Allowed, but unnecessary; the object is deleted immediately after being undeleted.
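The rule that trigger.new is writable only in before triggers can be sketched like this; the trigger name and field choice are illustrative:

~~~
trigger setDefaultRating on Account (before insert) {
    for (Account a : Trigger.new) {
        if (a.Rating == null) {
            // Allowed here: before triggers can modify Trigger.new in place,
            // and the change is saved without an extra DML statement.
            a.Rating = 'Warm';
        }
    }
    // In an after insert trigger, the same assignment would throw a
    // runtime error because the records are already saved (read-only).
}
~~~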

102
Q

Using Maps and Sets in Bulk Triggers

A

Sets can be used to isolate distinct records, while maps can be used to hold query results organized by record ID.
For example, this bulk trigger first adds each pricebook entry associated with the OpportunityLineItem records in Trigger.new to a set, ensuring that the set contains only distinct elements. It then queries the PricebookEntries for their associated product color, and places the results in a map. Once the map is created, the trigger iterates through the OpportunityLineItems in Trigger.new and uses the map to assign the appropriate color.
~~~
// When a new line item is added to an opportunity, this trigger copies the value of the
// associated product's color to the new record.
trigger oppLineTrigger on OpportunityLineItem (before insert) {

    // For every OpportunityLineItem record, add its associated pricebook entry
    // to a set so there are no duplicates.
    Set<Id> pbeIds = new Set<Id>();
    for (OpportunityLineItem oli : Trigger.new) {
        pbeIds.add(oli.PricebookEntryId);
    }

    // Query the PricebookEntries for their associated product color and place
    // the results in a map.
    Map<Id, PricebookEntry> entries = new Map<Id, PricebookEntry>(
        [SELECT Product2.Color__c FROM PricebookEntry
         WHERE Id IN :pbeIds]);

    // Now use the map to set the appropriate color on every OpportunityLineItem
    // processed by the trigger.
    for (OpportunityLineItem oli : Trigger.new) {
        oli.Color__c = entries.get(oli.PricebookEntryId).Product2.Color__c;
    }
}
~~~
103
Q

Correlating Records with Query Results in Bulk Triggers

A

Use the Trigger.newMap and Trigger.oldMap ID-to-sObject maps to correlate records with query results. For example, this trigger from the sample quoting app uses Trigger.oldMap to create a set of unique IDs (Trigger.oldMap.keySet()). The set is then used as part of a query to create a list of quotes associated with the opportunities being processed by the trigger. For every quote returned by the query, the related opportunity is retrieved from Trigger.oldMap and prevented from being deleted:
~~~
trigger oppTrigger on Opportunity (before delete) {
    for (Quote__c q : [SELECT Opportunity__c FROM Quote__c
                       WHERE Opportunity__c IN :Trigger.oldMap.keySet()]) {
        Trigger.oldMap.get(q.Opportunity__c).addError('Cannot delete opportunity with a quote');
    }
}
~~~

104
Q

Using Triggers to Insert or Update Records with Unique Fields

A

When an insert or upsert event causes a record to duplicate the value of a unique field in another new record in that batch, the error message for the duplicate record includes the ID of the first record. However, it is possible that the error message may not be correct by the time the request is finished.

When there are triggers present, the retry logic in bulk operations causes a rollback/retry cycle to occur. That retry cycle assigns new keys to the new records. For example, if two records are inserted with the same value for a unique field, and you also have an insert event defined for a trigger, the second duplicate record fails, reporting the ID of the first record. However, once the system rolls back the changes and re-inserts the first record by itself, the record receives a new ID. That means the error message reported by the second record is no longer valid.

105
Q

Enforcing Sharing Rules
What should be added to an Apex class in order to enforce sharing rules?

Apex

A

Apex generally runs in system context. In system context, Apex code has access to all objects and fields—object permissions, field-level security, and sharing rules aren’t applied for the current user.
The class must be declared using the with sharing keyword to ensure that sharing rules are enforced.

106
Q

With Sharing

Apex

A

The with sharing keyword lets you specify that the sharing rules for the current user are enforced for the class. You have to explicitly set this keyword for the class because Apex code runs in system context.
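A minimal declaration sketch; the class name and query are illustrative:

~~~
public with sharing class ContactReporter {
    public static List<Contact> getVisibleContacts() {
        // Sharing rules for the current user are enforced, so this query
        // returns only the contacts the user can see.
        return [SELECT Id, LastName FROM Contact LIMIT 100];
    }
}
~~~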

107
Q

Without Sharing

Apex

A

You use the without sharing keyword when declaring a class to ensure that the sharing rules for the current user are not enforced. For example, you can explicitly turn off sharing rule enforcement when a class is called from another class that is declared using with sharing.
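A minimal declaration sketch; the class name is illustrative:

~~~
public without sharing class AuditLogger {
    public static Integer countAllContacts() {
        // Runs without the current user's sharing rules applied,
        // even when invoked from a with sharing class.
        return [SELECT COUNT() FROM Contact];
    }
}
~~~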

108
Q

Inherited Sharing

Apex

A

Using inherited sharing enables you to pass AppExchange Security Review and ensure that your privileged Apex code is not used unexpectedly or insecurely. An Apex class with inherited sharing runs as with sharing when used as:

An Aura component controller
A Visualforce controller
An Apex REST service
Any other entry point to an Apex transaction
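Declaring the class is a one-keyword change. This illustrative sketch runs as with sharing when the class is the entry point (for example, as an Aura or Visualforce controller):

~~~
public inherited sharing class InheritedSharingController {
    public static List<Contact> getContacts() {
        // Enforces sharing when this class is the transaction entry point;
        // otherwise it inherits the caller's sharing mode.
        return [SELECT Id, Name FROM Contact];
    }
}
~~~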

109
Q

Enforcing Object and Field Permissions
User Mode Operations

Apex

A

Data operations (SOQL, DML, and SOSL) in Apex run in system mode by default and generally have full CRUD access to all objects and fields. In the Spring ’23 release, Apex introduced access levels that let developers select the mode in which data operations execute:
User mode
System mode
Executing data operations in user mode ensures that sharing rules, CRUD permissions, and FLS are respected and enforced.

110
Q

Enforcing Object and Field Permissions
Access Records in User Mode

Apex

A

Accessing records in user mode ensures the enforcement of sharing rules, CRUD/FLS, and restriction rules. Use SOQL queries with the USER_MODE keyword, as in this example:
~~~
List<Account> acc = [SELECT Id FROM Account WITH USER_MODE];
~~~
System mode privileges are temporarily dropped, ensuring retrieval of only the records accessible to the user. System mode resumes after the query execution is complete.

111
Q

Enforcing Object and Field Permissions
Insert Records in User Mode

Apex

A

Inserting records in user mode ensures that the insert operation executes only if the user has both Create permission on the object and Edit permission on the fields being set (FLS check). For example, suppose the user intends to create an opportunity with a value of $500 in the Amount field. To ensure that the insert proceeds only if the user can create Opportunity records and edit the Opportunity.Amount field, write the code like this:

~~~
Opportunity o = new Opportunity();
o.Amount = 500;
insert as user o;
~~~

Another way to execute user mode operations:
~~~
Opportunity o = new Opportunity();
o.Amount = 500;
Database.insert(o, AccessLevel.USER_MODE);
~~~

112
Q

Enforcing Object and Field Permissions
Update Records in User Mode

Apex

A

To update records in user mode:
~~~
Account a = [SELECT Id, Name, Website FROM Account WHERE Id = :recordId];
a.Website = 'https://example.com';
update as user a;
~~~

SOSL in user mode:
~~~
String queryString = 'FIND :searchString IN ALL FIELDS RETURNING ';
queryString += 'Lead(Id, Salutation, FirstName, LastName, Name, Email, Company, Phone),';
queryString += 'Contact(Id, Salutation, FirstName, LastName, Name, Email, Phone),';
queryString += 'Account(Id, Name, Phone)';
List<List<SObject>> searchResults = Search.query(queryString, AccessLevel.USER_MODE);
~~~
User mode operations are the recommended way to avoid sharing, CRUD, and FLS violations.

113
Q

SOQL
Using WITH SECURITY_ENFORCED

SOQL

A

You can add the WITH SECURITY_ENFORCED clause to SOQL SELECT queries in Apex code to validate field- and object-level security permissions automatically. This functionality extends to subqueries and cross-object relationships.
Placement:
Insert the clause after the WHERE clause (if present) or after the FROM clause if no WHERE clause exists. Place it before ORDER BY, LIMIT, OFFSET, or aggregate function clauses.
~~~
List<Account> act1 = [SELECT Id, (SELECT LastName FROM Contacts) FROM Account WHERE Name LIKE 'Acme' WITH SECURITY_ENFORCED];
~~~
This returns Id and LastName for the Acme account entry if the user has field access for LastName.

Polymorphic Lookup Fields:
Traversal of a polymorphic field's relationship is not supported, except for the Owner, CreatedBy, and LastModifiedBy fields.
Avoid TYPEOF Expressions:
TYPEOF expressions with an ELSE clause are not supported in queries that use WITH SECURITY_ENFORCED.
API Version Requirement:
Use API version 48.0 or later for AppExchange Security Review when implementing WITH SECURITY_ENFORCED.
If referenced fields or objects are inaccessible to the user, WITH SECURITY_ENFORCED throws a System.QueryException, ensuring data security.
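Because inaccessible fields cause the whole query to fail, a common pattern is to catch the exception. A minimal sketch, with an illustrative field choice:

~~~
try {
    List<Contact> cs = [SELECT Email FROM Contact WITH SECURITY_ENFORCED];
} catch (System.QueryException e) {
    // Thrown when the user lacks access to Contact or to the Email field.
    System.debug('Access check failed: ' + e.getMessage());
}
~~~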

114
Q

Using CRUD/FLS Check Methods

Apex

A

You can also enforce object-level and field-level permissions in your code by explicitly calling the sObject describe result methods (of Schema.DescribeSObjectResult) and the field describe result methods (of Schema.DescribeFieldResult) that check the current user's access permission levels. That way, you can verify whether the current user has the necessary permission, and only if they do, perform a specific DML operation or query.

For example, you can call the isAccessible, isCreateable, or isUpdateable methods of Schema.DescribeSObjectResult to verify whether the current user has read, create, or update access to an sObject, respectively. Similarly, Schema.DescribeFieldResult exposes these access control methods that you can call to check the current user's read, create, or update access for a field. In addition, you can call the isDeletable method provided by Schema.DescribeSObjectResult to check whether the current user has permission to delete a specific sObject.

Let's walk through the DescribeSObjectResult helper functions that you can use to verify a user's level of access:
isCreateable()
isAccessible()
isUpdateable()
isDeletable()

115
Q

Using CRUD/FLS Check Methods
isCreateable()

Apex

A

Before your code inserts a record in the database, you have to check that the logged-in user has both Edit permission on the field and Create permission on the object. You can check both permissions by using the isCreateable() method on the particular object.

~~~
if (!Schema.sObjectType.Opportunity.isCreateable() ||
    !Schema.sObjectType.Opportunity.fields.Amount.isCreateable()) {
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR,
        'Error: Insufficient Access'));
    return null;
}
Opportunity o = new Opportunity();
o.Amount = 500;
Database.insert(o);
~~~
116
Q

Using CRUD/FLS Check Methods
isAccessible()

Apex

A

Before your code retrieves a field from an object, you want to verify that the logged-in user has permission to access the field by using the isAccessible() method on the particular object.
~~~
// Check if the user has read access on the Opportunity.ExpectedRevenue field
if (!Schema.sObjectType.Opportunity.isAccessible() ||
    !Schema.sObjectType.Opportunity.fields.ExpectedRevenue.isAccessible()) {
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR,
        'Error: Insufficient Access'));
    return null;
}
Opportunity[] myList = [SELECT ExpectedRevenue FROM Opportunity LIMIT 1000];
~~~

117
Q

Using CRUD/FLS Check Methods
isUpdateable()

Apex

A

Before your code updates a record, check that the logged-in user has Edit permission on both the field and the object. You can check for both permissions by using the isUpdateable() method on the particular object.
~~~
// Let's assume we have fetched opportunity "o" from a SOQL query
if (!Schema.sObjectType.Opportunity.isUpdateable() ||
    !Schema.sObjectType.Opportunity.fields.StageName.isUpdateable()) {
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR,
        'Error: Insufficient Access'));
    return null;
}
o.StageName = 'Closed Won';
update o;
~~~

118
Q

Using CRUD/FLS Check Methods
isDeletable()

Apex

A

To enforce "delete" access restrictions, use the isDeletable() method before your code performs a delete database operation. Here's how to configure this operation:
~~~
// Let's assume Lead "l" was fetched earlier
if (!Lead.sObjectType.getDescribe().isDeletable()) {
    return null;
}
delete l;
~~~

Notice that unlike update, create, and access, with delete we explicitly perform only a CRUD check, verifying that the user can delete the object. Since you delete entire records in SOQL and don’t delete fields, you need to check only the user’s CRUD access to the object.

119
Q

Using CRUD/FLS Check Methods
Using stripInaccessible()

Apex

A

You use stripInaccessible method to enforce field- and object-level data protection. This method can be used to strip the fields and relationship fields from query and subquery results that the user can’t access. The method can also be used to remove inaccessible sObject fields before DML operations to avoid exceptions and to sanitize sObjects that have been deserialized from an untrusted source.

You access field- and object-level data protection through the Security and SObjectAccessDecision classes. The access check is based on the field-level permission of the current user in the context of the specified operation: create, read, update, or delete. The stripInaccessible method checks the source records for fields that don’t meet field-level security checks for the current user.

The method also checks the source records for lookup or master-detail relationship fields to which the current user doesn't have access. The method creates a return list of sObjects that is identical to the source records, except that the fields inaccessible to the current user are removed. The sObjects returned by the getRecords method contain records in the same order as the sObjects in the sourceRecords parameter of the stripInaccessible method. Fields that aren't queried are null in the return list, without causing an exception.
Note - The ID field is never stripped by the stripInaccessible method to avoid issues when performing DML on the result.

To identify inaccessible fields that were removed, you can use the isSet method. For example, suppose the return list contains a Contact record and the custom field social_security_number__c is inaccessible to the user. Because this custom field fails the field-level access check, the field is not set and isSet returns false. This is how it looks:

~~~
SObjectAccessDecision securityDecision =
    Security.stripInaccessible(AccessType.READABLE, sourceRecords);
Contact c = securityDecision.getRecords()[0];
System.debug(c.isSet('social_security_number__c')); // prints "false"
~~~
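Beyond read checks, stripInaccessible is often used to sanitize records before DML. A minimal sketch, assuming a list variable accountsToInsert (the variable name is hypothetical):

~~~
// Strip fields the current user can't create, then insert what remains.
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.CREATABLE, accountsToInsert);
insert decision.getRecords();
// getRemovedFields() reports which fields were stripped, per sObject type.
System.debug(decision.getRemovedFields());
~~~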

120
Q

Asynchronous Apex

A

Future methods: run in their own thread, and do not start until resources are available.
Queueable Apex: similar to future methods, but provides additional job chaining and allows more complex data types to be used.
Batch Apex: run large jobs that would exceed normal processing limits.
Scheduled Apex: schedule Apex to run at a specified time.

121
Q

Future Methods

A

Future Apex is used to run processes in a separate thread, at a later time when system resources become available.

Note: Technically, you use the @future annotation to identify methods that run asynchronously.

Future methods are typically used for:

  1. Callouts to external Web services. If you are making callouts from a trigger or after performing a DML operation, you must use a future or queueable method. A callout in a trigger would hold the database connection open for the lifetime of the callout and that is a “no-no” in a multitenant environment.
  2. Operations you want to run in their own thread, when time permits such as some sort of resource-intensive calculation or processing of records.
  3. Isolating DML operations on different sObject types to prevent the mixed DML error. This is somewhat of an edge-case but you may occasionally run across this issue.

Future Method Syntax
Future methods must be static methods, and can only return a void type. The specified parameters must be primitive data types or collections of primitive data types. Notably, future methods can’t take standard or custom objects as arguments. A common pattern is to pass the method a List of record IDs that you want to process asynchronously.
Future methods are not guaranteed to execute in the same order as they are called
~~~
public class SomeClass {
@future
public static void someFutureMethod(List<Id> recordIds) {
List<Account> accounts = [Select Id, Name from Account Where Id IN :recordIds];
// process account records to do awesome stuff
}
}
~~~
**Sample Callout Code**
To make a Web service callout to an external service or API, you create an Apex class with a future method that is annotated with @future(callout=true). The class below has methods for making the callout both synchronously and asynchronously (from contexts such as triggers, where callouts are not permitted). We insert a record into a custom log object to track the status of the callout.
~~~
public class SMSUtils {
// Call async from triggers, etc, where callouts are not permitted.
@future(callout=true)
public static void sendSMSAsync(String fromNbr, String toNbr, String m) {
String results = sendSMS(fromNbr, toNbr, m);
System.debug(results);
}
// Call from controllers, etc, for immediate processing
public static String sendSMS(String fromNbr, String toNbr, String m) {
// Calling 'send' will result in a callout
String results = SmsMessage.send(fromNbr, toNbr, m);
insert new SMS_Log__c(to__c=toNbr, from__c=fromNbr, msg__c=results);
return results;
}
}
~~~
**Test Classes**
Testing future methods is a little different than typical Apex testing. To test future methods, enclose your test code between the startTest() and stopTest() test methods. The system collects all asynchronous calls made after the startTest(). When stopTest() is executed, all these collected asynchronous processes are then run synchronously. You can then assert that the asynchronous call operated properly.
Here’s our mock callout class used for testing. The Apex testing framework utilizes this ‘mock’ response instead of making the actual callout to the REST API endpoint.
~~~
@isTest
public class SMSCalloutMock implements HttpCalloutMock {
public HttpResponse respond(HttpRequest req) {
// Create a fake response
HttpResponse res = new HttpResponse();
res.setHeader('Content-Type', 'application/json');
res.setBody('{"status":"success"}');
res.setStatusCode(200);
return res;
}
}
~~~
The test class contains a single test method (testSendSms() in this example), which tests both the asynchronous and synchronous methods as the former calls the latter.
~~~
@IsTest
private class Test_SMSUtils {
@IsTest
private static void testSendSms() {
Test.setMock(HttpCalloutMock.class, new SMSCalloutMock());
Test.startTest();
SMSUtils.sendSMSAsync('111', '222', 'Greetings!');
Test.stopTest();
// runs callout and check results
List<SMS_Log__c> logs = [select msg__c from SMS_Log__c];
System.assertEquals(1, logs.size());
System.assertEquals('success', logs[0].msg__c);
}
}
~~~

122
Q

Future Method: Best Practices

A
  1. Because every future method invocation adds one request to the asynchronous queue, avoid design patterns that add large numbers of future requests over a short period of time. If your design has the potential to add 2000 or more requests at a time, requests could get delayed due to flow control.
  2. Ensure that future methods execute as fast as possible.
    If using Web service callouts, try to bundle all callouts together from the same future method, rather than using a separate future method for each callout.
  3. Conduct thorough testing at scale. Test that a trigger enqueuing the @future calls is able to handle a trigger collection of 200 records. This helps determine if delays may occur given the design at current and future volumes.
  4. Consider using Batch Apex instead of future methods to process large numbers of records asynchronously. This is more efficient than creating a future request for each record.

Things to Remember
1. Methods with the @future annotation must be static methods, and can only return a void type. The specified parameters must be primitive data types or collections of primitive data types; future methods can’t take objects as arguments.
2. Future methods won’t necessarily execute in the same order they are called. In addition, it’s possible that two future methods could run concurrently, which could result in record locking if the two methods were updating the same record.
3. Future methods can’t be used in Visualforce controllers in getMethodName(), setMethodName(), nor in the constructor.
You can’t call a future method from a future method. Nor can you invoke a trigger that calls a future method while running a future method.
4. The getContent() and getContentAsPDF() methods can’t be used in methods with the @future annotation.
5. You’re limited to 50 future calls per Apex invocation, and there’s an additional limit on the number of calls in a 24-hour period. For more information on limits, see the link below.

123
Q

Methods with the future annotation have the following limits

A

Methods with the future annotation have the following limits:
1. No more than 50 method calls per Apex invocation (0 in batch and future contexts; 1 in queueable context). Asynchronous calls, such as @future or executeBatch, called in a startTest, stopTest block, don't count against your limits for the number of queued jobs.
2. The maximum number of future method invocations per 24-hour period is 250,000 or the number of user licenses in your organization multiplied by 200, whichever is greater. This limit is for your entire org and is shared with all asynchronous Apex: Batch Apex, Queueable Apex, scheduled Apex, and future methods.
To check how many asynchronous Apex executions are available, make a request to the REST API limits resource. See List Organization Limits in the REST API Developer Guide.
If the number of asynchronous Apex executions needed by a job exceeds the available number that's calculated using the 24-hour rolling limit, an exception is thrown. For example, if your async job requires 10,000 method executions and the available 24-hour rolling limit is 9,500, you get an AsyncApexExecutions Limit exceeded exception.
The license types that count toward this limit include full Salesforce and Salesforce Platform user licenses, App Subscription user licenses, Chatter Only users, Identity users, and Company Communities users.

124
Q

Queueable Apex

A

Take control of your asynchronous Apex processes by using the Queueable interface. This interface enables you to add jobs to the queue and monitor them. Using the interface is an enhanced way of running your asynchronous Apex code compared to using future methods.
Apex processes that run for a long time, such as extensive database operations or external web service callouts, can be run asynchronously by implementing the Queueable interface and adding a job to the Apex job queue. In this way, your asynchronous Apex job runs in the background in its own thread and doesn’t delay the execution of your main Apex logic. Each queued job runs when system resources become available. A benefit of using the Queueable interface methods is that some governor limits are higher than for synchronous Apex, such as heap size limits.

Queueable jobs are similar to future methods in that they’re both queued for execution, but they provide you with these additional benefits.

  1. Getting an ID for your job: When you submit your job by invoking the System.enqueueJob method, the method returns the ID of the new job. This ID corresponds to the ID of the AsyncApexJob record. Use this ID to identify and monitor your job, either through the Salesforce UI (Apex Jobs page), or programmatically by querying your record from AsyncApexJob.
  2. Using non-primitive types: Your queueable class can contain member variables of non-primitive data types, such as sObjects or custom Apex types. Those objects can be accessed when the job executes.
  3. Chaining jobs: You can chain one job to another job by starting a second job from a running job. Chaining jobs is useful if your process depends on another process to have run first.

You can set a maximum stack depth of chained Queueable jobs, overriding the default limit of five in Developer and Trial Edition organizations.
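Chaining (benefit 3) can be sketched as one job enqueuing the next from its execute method. Both class names are illustrative, and in an org each class would live in its own file:

~~~
public class FirstJob implements Queueable {
    public void execute(QueueableContext context) {
        // ... do the first unit of work ...

        // Chain the next job; it starts only after this one finishes.
        System.enqueueJob(new SecondJob());
    }
}

public class SecondJob implements Queueable {
    public void execute(QueueableContext context) {
        // ... dependent work runs here ...
    }
}
~~~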

125
Q

Adding a Queueable Job to the Asynchronous Execution Queue

A

This example implements the Queueable interface. The execute method in this example inserts a new account. The System.enqueueJob(queueable) method is used to add the job to the queue.
~~~
public class AsyncExecutionExample implements Queueable {
public void execute(QueueableContext context) {
Account a = new Account(Name='Acme', Phone='(415) 555-1212');
insert a;
}
}
~~~
To add this class as a job on the queue, call this method:
ID jobID = System.enqueueJob(new AsyncExecutionExample());

After you submit your queueable class for execution, the job is added to the queue and will be processed when system resources become available. You can monitor the status of your job programmatically by querying AsyncApexJob or through the user interface in Setup by entering Apex Jobs in the Quick Find box, then selecting Apex Jobs.

To query information about your submitted job, perform a SOQL query on AsyncApexJob by filtering on the job ID that the System.enqueueJob method returns. This example uses the jobID variable that was obtained in the previous example.

AsyncApexJob jobInfo = [SELECT Status,NumberOfErrors FROM AsyncApexJob WHERE Id=:jobID];
Similar to future jobs, queueable jobs don’t process batches, and so the number of processed batches and the number of total batches are always zero.

126
Q

Adding a Queueable Job with a Specified Minimum Delay

A

Use the System.enqueueJob(queueable, delay) method to add queueable jobs to the asynchronous execution queue with a specified minimum delay (0–10 minutes). The delay is ignored during Apex testing.

See System.enqueueJob(queueable, delay) in the Apex Reference Guide.
When you set the delay to 0 (zero), the queueable job is run as quickly as possible. With chained queueable jobs, implement a mechanism to slow down or halt the job if necessary. Without such a fail-safe mechanism in place, you can rapidly reach the daily async Apex limit.
In the following cases, it's beneficial to adjust the timing before the queueable job runs:

When the external system is rate-limited and can be overloaded by chained queueable jobs making rapid callouts.
When polling for results, because executing too fast wastes daily async Apex limits.
This example adds a job for delayed asynchronous execution by passing in an instance of your class implementation of the Queueable interface, with a minimum delay of 5 minutes before the job is executed.

Integer delayInMinutes = 5;
ID jobID = System.enqueueJob(new MyQueueableClass(), delayInMinutes);

Admins can define a default org-wide delay (1–600 seconds) in scheduling queueable jobs that were scheduled without a delay parameter. Use the delay setting as a mechanism to slow default queueable job execution. If the setting is omitted, Apex uses the standard queueable timing with no added delay.

Using the System.enqueueJob(queueable, delay) method ignores any org-wide enqueue delay setting.

Define the org-wide delay in one of these ways.

From Setup, in the Quick Find box, enter Apex Settings, and then enter a value (1–600 seconds) for Default minimum enqueue delay (in seconds) for queueable jobs that do not have a delay parameter.
To enable this feature programmatically with Metadata API, see ApexSettings in the Metadata API Developer Guide.

127
Q

Adding a Queueable Job with a Specified Stack Depth

A

Use the System.enqueueJob(queueable, asyncOptions) method where you can specify the maximum stack depth and the minimum queue delay in the asyncOptions parameter.

The System.AsyncInfo class properties contain the current and maximum stack depths and the minimum queueable delay.

The System.AsyncInfo class has methods to help you determine if maximum stack depth is set in your Queueable request and to get the stack depths and queue delay for your queueables that are currently running. Use information about the current queueable execution to make decisions on adjusting delays on subsequent calls.

These are methods in the System.AsyncInfo class.

hasMaxStackDepth()
getCurrentQueueableStackDepth()
getMaximumQueueableStackDepth()
getMinimumQueueableDelayInMinutes()
This example uses stack depth to terminate a chained job and prevent it from reaching the daily maximum number of asynchronous Apex method executions.
~~~
// Fibonacci
public class FibonacciDepthQueueable implements Queueable {

private long nMinus1, nMinus2;
   
public static void calculateFibonacciTo(integer depth) {
    AsyncOptions asyncOptions = new AsyncOptions();
    asyncOptions.MaximumQueueableStackDepth = depth;
    System.enqueueJob(new FibonacciDepthQueueable(null, null), asyncOptions);
}
   
private FibonacciDepthQueueable(long nMinus1param, long nMinus2param) {
    nMinus1 = nMinus1param;
    nMinus2 = nMinus2param;
}
	public void execute(QueueableContext context) {
   
    integer depth = AsyncInfo.getCurrentQueueableStackDepth();
   
    // Calculate step
    long fibonacciSequenceStep;
    switch on (depth) {
        when 1, 2 {
            fibonacciSequenceStep = 1;
        }
        when else {
            fibonacciSequenceStep = nMinus1 + nMinus2;
        }
    }
   
    System.debug('depth: ' + depth + ' fibonacciSequenceStep: ' + fibonacciSequenceStep);
   
    if(System.AsyncInfo.hasMaxStackDepth() &&
       AsyncInfo.getCurrentQueueableStackDepth() >= 
       AsyncInfo.getMaximumQueueableStackDepth()) {
        // Reached maximum stack depth
        Fibonacci__c result = new Fibonacci__c(
            Depth__c = depth,
            Result__c = fibonacciSequenceStep
        );
        insert result;
    } else {
        System.enqueueJob(new FibonacciDepthQueueable(fibonacciSequenceStep, nMinus1));
    }
}
}
~~~
128
Q

Testing Queueable Jobs

A

This example shows how to test the execution of a queueable job in a test method. A queueable job is an asynchronous process. To ensure that this process runs within the test method, the job is submitted to the queue between the Test.startTest and Test.stopTest block. The system executes all asynchronous processes started in a test method synchronously after the Test.stopTest statement. Next, the test method verifies the results of the queueable job by querying the account that the job created.
~~~
@isTest
public class AsyncExecutionExampleTest {
@isTest
static void test1() {
// startTest/stopTest block to force async processes
// to run in the test.
Test.startTest();
System.enqueueJob(new AsyncExecutionExample());
Test.stopTest();

    // Validate that the job has run
    // by verifying that the record was created.
    // This query returns only the account created in test context by the 
    // Queueable class method.
    Account acct = [SELECT Name,Phone FROM Account WHERE Name='Acme' LIMIT 1];
    System.assertNotEquals(null, acct);
    System.assertEquals('(415) 555-1212', acct.Phone);
}
}
~~~
129
Q

Chaining Jobs

A

To run a job after some other processing is done first by another job, you can chain queueable jobs. To chain a job to another job, submit the second job from the execute() method of your queueable class. You can add only one job from an executing job, which means that only one child job can exist for each parent job. For example, if you have a second class called SecondJob that implements the Queueable interface, you can add this class to the queue in the execute() method as follows:
~~~
public class AsyncExecutionExample implements Queueable {
public void execute(QueueableContext context) {
// Your processing logic here

    // Chain this job to next job by submitting the next job
    System.enqueueJob(new SecondJob());
}
}
~~~

Apex allows HTTP and web service callouts from queueable jobs, if they implement the Database.AllowsCallouts marker interface. In queueable jobs that implement this interface, callouts are also allowed in chained queueable jobs.
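As a sketch of this pattern, a queueable job that makes a callout declares the Database.AllowsCallouts marker interface alongside Queueable (the class name and endpoint URL here are hypothetical):
~~~
// Hypothetical example: a queueable job permitted to make HTTP callouts
// because it also implements the Database.AllowsCallouts marker interface.
public class CalloutQueueable implements Queueable, Database.AllowsCallouts {
    public void execute(QueueableContext context) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/status'); // hypothetical endpoint
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        System.debug('Callout status: ' + res.getStatusCode());
    }
}
~~~
Any queueable job this class chains with System.enqueueJob is likewise allowed to make callouts.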

130
Q

Queueable Apex Limits

A
  1. The execution of a queued job counts one time against the shared limit for asynchronous Apex method executions. See Lightning Platform Apex Limits.
  2. You can add up to 50 jobs to the queue with System.enqueueJob in a single transaction. In asynchronous transactions (for example, from a batch Apex job), you can add only one job to the queue with System.enqueueJob. To check how many queueable jobs have been added in one transaction, call Limits.getQueueableJobs().
    Because no limit is enforced on the depth of chained jobs, you can chain one job to another. You can repeat this process with each new child job to link it to a new child job. For Developer Edition and Trial organizations, the maximum stack depth for chained jobs is 5, which means that you can chain jobs four times. The maximum number of jobs in the chain is 5, including the initial parent queueable job.
    When chaining jobs with System.enqueueJob, you can add only one job from an executing job. Only one child job can exist for each parent queueable job. Starting multiple child jobs from the same queueable job isn’t supported.
  3. Detecting Duplicate Queueable Jobs
    Reduce resource contention and race conditions by enqueuing only a single instance of your async Queueable job based on the signature. Attempting to add more than one Queueable job to the processing queue with the same signature results in a DuplicateMessageException when you try to enqueue subsequent jobs.
  4. Transaction Finalizers
    The Transaction Finalizers feature enables you to attach actions, using the System.Finalizer interface, to asynchronous Apex jobs that use the Queueable framework. A specific use case is to design recovery actions when a Queueable job fails.
  5. Transaction Finalizers Error Messages
    Troubleshoot both semantic and run-time issues by analyzing these error messages.
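The duplicate-detection behavior described in item 3 can be sketched with the AsyncOptions.DuplicateSignature property (the job class and signature string below are hypothetical):
~~~
// Hypothetical sketch: enqueue only one instance of a job per signature.
AsyncOptions options = new AsyncOptions();
options.DuplicateSignature = QueueableDuplicateSignature.Builder()
    .addString('MyQueueableJob')  // hypothetical signature component
    .build();
try {
    System.enqueueJob(new MyQueueableJob(), options);  // MyQueueableJob is hypothetical
} catch (DuplicateMessageException ex) {
    // A job with the same signature is already in the queue.
    System.debug('Duplicate job not enqueued: ' + ex.getMessage());
}
~~~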
131
Q

Apex Scheduler

A

To invoke Apex classes to run at specific times, first implement the Schedulable interface for the class, then specify the schedule using either the Schedule Apex page in the Salesforce user interface, or the System.schedule method.
Salesforce schedules the class for execution at the specified time. Actual execution can be delayed based on service availability.

You can only have 100 scheduled Apex jobs at one time. You can evaluate your current count by viewing the Scheduled Jobs page in Salesforce and creating a custom view with a type filter equal to “Scheduled Apex”. You can also programmatically query the CronTrigger and CronJobDetail objects to get the count of Apex scheduled jobs.

Use extreme care if you’re planning to schedule a class from a trigger. You must be able to guarantee that the trigger won’t add more scheduled classes than the limit. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.

If there are one or more active scheduled jobs for an Apex class, you can’t update the class or any classes referenced by this class through the Salesforce user interface. However, you can enable deployments to update the class with active scheduled jobs by using the Metadata API (for example, when using the Salesforce extensions for Visual Studio Code). See “Deployment Connections for Change Sets” in Salesforce Help.

132
Q

Implementing the Schedulable Interface

A

To schedule an Apex class to run at regular intervals, first write an Apex class that implements the Salesforce-provided interface Schedulable.

The scheduler runs as system—all classes are executed, whether the user has permission to execute the class or not.

To monitor or stop the execution of a scheduled Apex job using the Salesforce user interface, from Setup, enter Scheduled Jobs in the Quick Find box, then select Scheduled Jobs.

The Schedulable interface contains one execute method that must be implemented.
global void execute(SchedulableContext sc){}
The implemented method must be declared as global or public.

Use this method to instantiate the class you want to schedule.

The following example implements the Schedulable interface for a class called MergeNumbers:

~~~
global class ScheduledMerge implements Schedulable {
    global void execute(SchedulableContext sc) {
        MergeNumbers m = new MergeNumbers();
    }
}
~~~

To schedule the class, execute this example in the Developer Console.

~~~
ScheduledMerge m = new ScheduledMerge();
String sch = '20 30 8 10 2 ?';
String jobID = System.schedule('Merge Job', sch, m);
~~~

You can also use the Schedulable interface with batch Apex classes. The following example illustrates how to implement the Schedulable interface for a batch Apex class called Batchable:

~~~
global class ScheduledBatchable implements Schedulable {
    global void execute(SchedulableContext sc) {
        Batchable b = new Batchable();
        Database.executeBatch(b);
    }
}
~~~
An easier way to schedule a batch job is to call the System.scheduleBatch method without having to implement the Schedulable interface.

Use the SchedulableContext object to track the scheduled job when it’s scheduled. The SchedulableContext getTriggerID method returns the ID of the CronTrigger object associated with this scheduled job as a string. You can query CronTrigger to track the progress of the scheduled job.

To stop execution of a job that was scheduled, use the System.abortJob method with the ID returned by the getTriggerID method.
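For instance, a scheduled class can cancel its own schedule after a single run by combining getTriggerId with System.abortJob (a minimal sketch; the class name is hypothetical):
~~~
// Hypothetical one-off job that removes its own CronTrigger after running,
// so it no longer counts against the 100 scheduled-job limit.
global class RunOnceJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        // ... do the scheduled work here ...
        System.abortJob(sc.getTriggerId());
    }
}
~~~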

133
Q

Tracking the Progress of a Scheduled Job Using Queries

A

After the Apex job has been scheduled, you can obtain more information about it by running a SOQL query on CronTrigger. You can retrieve the number of times the job has run, and the date and time when the job is scheduled to run again, as shown in this example.
CronTrigger ct =
[SELECT TimesTriggered, NextFireTime
FROM CronTrigger WHERE Id = :jobID];

The previous example assumes you have a jobID variable holding the ID of the job. The System.schedule method returns the job ID. If you’re performing this query inside the execute method of your schedulable class, you can obtain the ID of the current job by calling getTriggerId on the SchedulableContext argument variable. Assuming this variable name is sc, the modified example becomes:

CronTrigger ct =
[SELECT TimesTriggered, NextFireTime
FROM CronTrigger WHERE Id = :sc.getTriggerId()];

You can also get the job’s name and the job’s type from the CronJobDetail record associated with the CronTrigger record. To do so, use the CronJobDetail relationship when performing a query on CronTrigger. This example retrieves the most recent CronTrigger record with the job name and type from CronJobDetail.

CronTrigger job =
[SELECT Id, CronJobDetail.Id, CronJobDetail.Name, CronJobDetail.JobType
FROM CronTrigger ORDER BY CreatedDate DESC LIMIT 1];

Alternatively, you can query CronJobDetail directly to get the job’s name and type. This next example gets the job’s name and type for the CronTrigger record queried in the previous example. The corresponding CronJobDetail record ID is obtained by the CronJobDetail.Id expression on the CronTrigger record.

CronJobDetail ctd =
[SELECT Id, Name, JobType
FROM CronJobDetail WHERE Id = :job.CronJobDetail.Id];
To obtain the total count of all Apex scheduled jobs, excluding all other scheduled job types, perform the following query. Note the value ‘7’ is specified for the job type, which corresponds to the scheduled Apex job type.

SELECT COUNT() FROM CronTrigger WHERE CronJobDetail.JobType = '7'

134
Q

Testing the Apex Scheduler

A

The following is an example of how to test using the Apex scheduler.

The System.schedule method starts an asynchronous process. When you test scheduled Apex, you must ensure that the scheduled job is finished before testing against the results. Use the Test methods startTest and stopTest around the System.schedule method to ensure it finishes before continuing your test. All asynchronous calls made after the startTest method are collected by the system. When stopTest is executed, all asynchronous processes are run synchronously. If you don’t include the System.schedule method within the startTest and stopTest methods, the scheduled job executes at the end of your test method for Apex saved using Salesforce API version 25.0 and later, but not in earlier versions.

This example defines a class to be tested.

global class TestScheduledApexFromTestMethod implements Schedulable {

// The cron expression schedules the job for midnight on September 3, 2042.

   public static String CRON_EXP = '0 0 0 3 9 ? 2042';
   
   global void execute(SchedulableContext ctx) {
      CronTrigger ct = [SELECT Id, CronExpression, TimesTriggered, NextFireTime
                FROM CronTrigger WHERE Id = :ctx.getTriggerId()];

      System.assertEquals(CRON_EXP, ct.CronExpression);
      System.assertEquals(0, ct.TimesTriggered);
      System.assertEquals('2042-09-03 00:00:00', String.valueOf(ct.NextFireTime));

      Account a = [SELECT Id, Name FROM Account WHERE Name = 
                  'testScheduledApexFromTestMethod'];
      a.name = 'testScheduledApexFromTestMethodUpdated';
      update a;
   }   
}

The following tests the class:
~~~
@istest
class TestClass {

static testmethod void test() {
Test.startTest();

  Account a = new Account();
  a.Name = 'testScheduledApexFromTestMethod';
  insert a;

  // Schedule the test job

  String jobId = System.schedule('testBasicScheduledApex',
  TestScheduledApexFromTestMethod.CRON_EXP, 
     new TestScheduledApexFromTestMethod());

  // Get the information from the CronTrigger API object
  CronTrigger ct = [SELECT Id, CronExpression, TimesTriggered, 
     NextFireTime
     FROM CronTrigger WHERE id = :jobId];

  // Verify the expressions are the same
  System.assertEquals(TestScheduledApexFromTestMethod.CRON_EXP, 
     ct.CronExpression);

  // Verify the job has not run
  System.assertEquals(0, ct.TimesTriggered);

  // Verify the next time the job will run
  System.assertEquals('2042-09-03 00:00:00', 
     String.valueOf(ct.NextFireTime));
  System.assertNotEquals('testScheduledApexFromTestMethodUpdated',
     [SELECT id, name FROM account WHERE id = :a.id].name);

Test.stopTest();

System.assertEquals('testScheduledApexFromTestMethodUpdated',
[SELECT Id, Name FROM Account WHERE Id = :a.Id].Name);

}
}
~~~

135
Q

Using the System.schedule Method

A

After you implement a class with the Schedulable interface, use the System.schedule method to execute it. The scheduler runs as system—all classes are executed, whether the user has permission to execute the class or not.
The System.schedule method takes three arguments: a name for the job, an expression used to represent the time and date the job is scheduled to run, and the name of the class. This expression has the following syntax:
Seconds Minutes Hours Day_of_month Month Day_of_week Optional_year
The following are some examples of how to use the expression.
0 0 13 * * ? The class runs every day at 1 PM.
0 5 * * * ? The class runs every hour at 5 minutes past the hour.
0 0 22 ? * 6L The class runs on the last Friday of every month at 10 PM.
0 0 10 ? * MON-FRI The class runs Monday through Friday at 10 AM.
0 0 20 * * ? 2010 The class runs every day at 8 PM during the year 2010.
In the following example, the class Proschedule implements the Schedulable interface. The class is scheduled to run at 8 AM on February 13.
~~~
Proschedule p = new Proschedule();
String sch = '0 0 8 13 2 ?';
System.schedule('One Time Pro', sch, p);
~~~

136
Q

Using the System.scheduleBatch Method for Batch Jobs

A

You can call the System.scheduleBatch method to schedule a batch job to run one time at a specified time in the future. This method is available only for batch classes and doesn’t require the implementation of the Schedulable interface. It’s therefore easy to schedule a batch job for one execution. For more details on how to use the System.scheduleBatch method, see Using the System.scheduleBatch Method.

137
Q

Apex Scheduler Limits

A
  1. You can only have 100 scheduled Apex jobs at one time. You can evaluate your current count by viewing the Scheduled Jobs page in Salesforce and creating a custom view with a type filter equal to “Scheduled Apex”. You can also programmatically query the CronTrigger and CronJobDetail objects to get the count of Apex scheduled jobs.
  2. The maximum number of scheduled Apex executions per a 24-hour period is 250,000 or the number of user licenses in your organization multiplied by 200, whichever is greater. This limit is for your entire org and is shared with all asynchronous Apex: Batch Apex, Queueable Apex, scheduled Apex, and future methods. To check how many asynchronous Apex executions are available, make a request to REST API limits resource. See List Organization Limits in the REST API Developer Guide. If the number of asynchronous Apex executions needed by a job exceeds the available number that’s calculated using the 24-hour rolling limit, an exception is thrown. For example, if your async job requires 10,000 method executions and the available 24-hour rolling limit is 9,500, you get AsyncApexExecutions Limit exceeded exception. The license types that count toward this limit include full Salesforce and Salesforce Platform user licenses, App Subscription user licenses, Chatter Only users, Identity users, and Company Communities users.
138
Q

Apex Scheduler Notes and Best Practices

A
  1. Salesforce schedules the class for execution at the specified time. Actual execution can be delayed based on service availability.
  2. Use extreme care if you’re planning to schedule a class from a trigger. You must be able to guarantee that the trigger won’t add more scheduled classes than the limit. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.
  3. Though it’s possible to do additional processing in the execute method, we recommend that all processing take place in a separate class.
  4. Synchronous Web service callouts aren’t supported from scheduled Apex. To make asynchronous callouts, use Queueable Apex, implementing the Database.AllowsCallouts marker interface. If your scheduled Apex executes a batch job using the Database.AllowsCallouts marker interface, callouts are supported from the batch class. See Using Batch Apex.
  5. Apex jobs scheduled to run during a Salesforce service maintenance downtime will be scheduled to run after the service comes back up, when system resources become available. If a scheduled Apex job was running when downtime occurred, the job is rolled back and scheduled again after the service comes back up. After major service upgrades, there can be longer delays than usual for starting scheduled Apex jobs because of system usage spikes.
    When you refresh a sandbox, scheduled jobs from the source org aren’t copied. You must reschedule any jobs that you need in the refreshed sandbox.
  6. Scheduled job objects, along with their member variables and properties, persist from initialization to subsequent scheduled runs. The object state at the time of invocation of System.schedule() persists in subsequent job executions.
  7. With Batch Apex, it’s possible to force a new serialized state for new jobs by using Database.Stateful. With Scheduled Apex, use the transient keyword so that member variables and properties aren’t persisted. See Using the transient Keyword.
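Point 7 can be sketched as follows (a hypothetical class: the transient member is reset to null on every scheduled run, while the non-transient member keeps the state it had when System.schedule() was called):
~~~
global class StatefulSchedule implements Schedulable {
    // Persisted: holds the value it had at System.schedule() time.
    global String configName = 'default';

    // Not persisted: reset to null at the start of every scheduled run.
    global transient List<Account> workingSet;

    global void execute(SchedulableContext sc) {
        workingSet = [SELECT Id FROM Account LIMIT 10];
        System.debug(configName + ': ' + workingSet.size());
    }
}
~~~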
139
Q

Batch Apex

A

Batch Apex is exposed as an interface that must be implemented by the developer. Batch jobs can be programmatically invoked at runtime using Apex.

You can only have five queued or active batch jobs at one time. You can evaluate your current count by viewing the Scheduled Jobs page in Salesforce or programmatically using SOAP API to query the AsyncApexJob object.
Batch jobs can also be programmatically scheduled to run at specific times using the Apex scheduler, or scheduled using the Schedule Apex page in the Salesforce user interface.
To use batch Apex, write an Apex class that implements the Salesforce-provided interface Database.Batchable and then invoke the class programmatically.

To monitor or stop the execution of the batch Apex job, from Setup, enter Apex Jobs in the Quick Find box, then select Apex Jobs.
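The programmatic count mentioned above can be sketched as a SOQL query on AsyncApexJob (a sketch; the statuses listed are the queued-or-active states of the batch job lifecycle):
~~~
// Count batch jobs that are currently queued or active.
Integer activeBatchJobs = [
    SELECT COUNT()
    FROM AsyncApexJob
    WHERE JobType = 'BatchApex'
    AND Status IN ('Holding', 'Queued', 'Preparing', 'Processing')
];
System.debug('Queued or active batch jobs: ' + activeBatchJobs);
~~~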

140
Q

Implementing the Database.Batchable Interface

A

The Database.Batchable interface contains three methods that must be implemented.
start method
~~~
public (Database.QueryLocator | Iterable<sObject>) start(Database.BatchableContext bc) {}
~~~
The start method is called at the beginning of a batch Apex job. In the start method, you can include code that collects records or objects to pass to the interface method execute. This method returns either a Database.QueryLocator object or an iterable that contains the records or objects passed to the job.

When you’re using a simple query (SELECT) to generate the scope of objects in the batch job, use the Database.QueryLocator object. If you use a QueryLocator object, the governor limit for the total number of records retrieved by SOQL queries is bypassed. For example, a batch Apex job for the Account object can return a QueryLocator for all account records (up to 50 million records) in an org. Another example is a sharing recalculation for the Contact object that returns a QueryLocator for all account records in an org.

Use the iterable to create a complex scope for the batch job. You can also use the iterable to create your own custom process for iterating through the list.
execute method:
public void execute(Database.BatchableContext bc, list<P>){}

The execute method is called for each batch of records that you pass to it and takes these parameters.

A reference to the Database.BatchableContext object.
A list of sObjects, such as List<sObject>, or a list of parameterized types. If you’re using a Database.QueryLocator, use the returned list.
Batches of records tend to execute in the order in which they’re received from the start method. However, the order in which batches of records execute depends on various factors. The order of execution isn’t guaranteed.

finish method:
public void finish(Database.BatchableContext bc){}
The finish method is called after all batches are processed and can be used to send confirmation emails or execute post-processing operations.

Each execution of a batch Apex job is considered a discrete transaction. For example, a batch Apex job that contains 1,000 records and is executed without the optional scope parameter from Database.executeBatch is considered five transactions of 200 records each. The Apex governor limits are reset for each transaction. If the first transaction succeeds but the second fails, the database updates made in the first transaction aren’t rolled back.

141
Q

Using Database.BatchableContext

A

All the methods in the Database.Batchable interface require a reference to a Database.BatchableContext object. Use this object to track the progress of the batch job.

The following is the instance method with the Database.BatchableContext object:
getJobId: Returns the ID of the AsyncApexJob object associated with this batch job as a string. Use this method to track the progress of records in the batch job. You can also use this ID with the System.abortJob method.
~~~
public void finish(Database.BatchableContext bc){
    // Get the ID of the AsyncApexJob representing this batch job
    // from Database.BatchableContext.
    // Query the AsyncApexJob object to retrieve the current job's information.
    AsyncApexJob a = [SELECT Id, Status, NumberOfErrors, JobItemsProcessed,
                      TotalJobItems, CreatedBy.Email
                      FROM AsyncApexJob WHERE Id = :bc.getJobId()];
    // Send an email to the Apex job's submitter notifying of job completion.
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    String[] toAddresses = new String[] {a.CreatedBy.Email};
    mail.setToAddresses(toAddresses);
    mail.setSubject('Apex Sharing Recalculation ' + a.Status);
    mail.setPlainTextBody(
        'The batch Apex job processed ' + a.TotalJobItems +
        ' batches with ' + a.NumberOfErrors + ' failures.');
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}
~~~

142
Q

Using Database.QueryLocator to Define Scope

A

The start method can return either a Database.QueryLocator object that contains the records to use in the batch job or an iterable.

The following example uses a Database.QueryLocator:
~~~
public class SearchAndReplace implements Database.Batchable<sObject>{

public final String Query;
public final String Entity;
public final String Field;
public final String Value;

public SearchAndReplace(String q, String e, String f, String v){

    Query = q; Entity = e; Field = f; Value = v;
}

public Database.QueryLocator start(Database.BatchableContext bc){
return Database.getQueryLocator(query);
}

public void execute(Database.BatchableContext bc, List<sObject> scope){
for(sobject s : scope){
s.put(Field,Value);
}
update scope;
}

public void finish(Database.BatchableContext bc){
}
}

// Invocation: query for 10 accounts
String q = 'SELECT Industry FROM Account LIMIT 10';
String e = 'Account';
String f = 'Industry';
String v = 'Consulting';
Id batchInstanceId = Database.executeBatch(new SearchAndReplace(q,e,f,v), 5);
~~~

143
Q

Using an Iterable in Batch Apex to Define Scope

A

The start method can return either a Database.QueryLocator object that contains the records to use in the batch job or an iterable. Use an iterable to step through the returned items more easily.
~~~
public class batchClass implements Database.Batchable<Account>{
public Iterable<Account> start(Database.BatchableContext info){
return new CustomAccountIterable();
}
public void execute(Database.BatchableContext info, List<Account> scope){
List<Account> accsToUpdate = new List<Account>();
for(Account a : scope){
a.Name = 'true';
a.NumberOfEmployees = 70;
accsToUpdate.add(a);
}
update accsToUpdate;
}
public void finish(Database.BatchableContext info){
}
}
~~~

144
Q

Using the Database.executeBatch Method to Submit Batch Jobs

A

You can use the Database.executeBatch method to programmatically begin a batch job.
When you call Database.executeBatch, Salesforce adds the process to the queue. Actual execution can be delayed based on service availability.
The Database.executeBatch method takes two parameters:
1. An instance of a class that implements the Database.Batchable interface.
2. An optional parameter scope. This parameter specifies the number of records to pass into the execute method. Use this parameter when you have many operations for each record being passed in and are running into governor limits. By limiting the number of records, you’re limiting the operations per transaction. This value must be greater than zero. If the start method of the batch class returns a QueryLocator, the optional scope parameter of Database.executeBatch can have a maximum value of 2,000. If set to a higher value, Salesforce chunks the records returned by the QueryLocator into smaller batches of up to 2,000 records. If the start method of the batch class returns an iterable, the scope parameter value has no upper limit. However, if you use a high number, you can run into other limits. The optimal scope size is a factor of 2000, for example, 100, 200, 400 and so on.
The Database.executeBatch method returns the ID of the AsyncApexJob object, which you can use to track the progress of the job.
~~~
ID batchprocessid = Database.executeBatch(reassign);

AsyncApexJob aaj = [SELECT Id, Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
FROM AsyncApexJob WHERE ID =: batchprocessid ];
~~~
You can also use this ID with the System.abortJob method.

145
Q

Holding Batch Jobs in the Apex Flex Queue

A

With the Apex flex queue, you can submit up to 100 batch jobs.

The outcome of Database.executeBatch is as follows.
The batch job is placed in the Apex flex queue, and its status is set to Holding.
If the Apex flex queue has the maximum number of 100 jobs, Database.executeBatch throws a LimitException and doesn’t add the job to the queue.
If your org doesn’t have Apex flex queue enabled, Database.executeBatch adds the batch job to the batch job queue with the Queued status. If the concurrent limit of queued or active batch jobs has been reached, a LimitException is thrown, and the job isn’t queued.

Reordering Jobs in the Apex Flex Queue

While submitted jobs have a status of Holding, you can reorder them in the Salesforce user interface to control which batch jobs are processed first. To do so, from Setup, enter Apex Flex Queue in the Quick Find box, then select Apex Flex Queue.

Alternatively, you can use Apex methods to reorder batch jobs in the flex queue. To move a job to a new position, call one of the System.FlexQueue methods. Pass the method the job ID and, if applicable, the ID of the job next to the moved job’s new position. For example:

Boolean isSuccess = System.FlexQueue.moveBeforeJob(jobToMoveId, jobInQueueId);

You can reorder jobs in the Apex flex queue to prioritize jobs. For example, you can move a batch job up to the first position in the holding queue to be processed first when resources become available. Otherwise, jobs are processed “first-in, first-out”—in the order in which they’re submitted.
When system resources become available, the system picks up the next job from the top of the Apex flex queue and moves it to the batch job queue. The system can process up to five queued or active jobs simultaneously for each organization. The status of these moved jobs changes from Holding to Queued. Queued jobs get executed when the system is ready to process new jobs. You can monitor queued jobs on the Apex Jobs page.

146
Q

Batch Job Statuses

A

Holding: Job has been submitted and is held in the Apex flex queue until system resources become available to queue the job for processing.
Queued: Job is awaiting execution.
Preparing: The start method of the job has been invoked. This status can last a few minutes depending on the size of the batch of records.
Processing: Job is being processed.
Aborted: Job was aborted by a user.
Completed: Job completed with or without failure.
Failed: Job experienced a system failure.

147
Q

Using the System.scheduleBatch Method

A

You can use the System.scheduleBatch method to schedule a batch job to run once at a future time.

The System.scheduleBatch method takes these parameters.
1. An instance of a class that implements the Database.Batchable interface.
2. The job name.
3. The time interval, in minutes, after which the job starts executing.
4. An optional scope value. This parameter specifies the number of records to pass into the execute method. Use this parameter when you have many operations for each record being passed in and are running into governor limits. By limiting the number of records, you’re limiting the operations per transaction. This value must be greater than zero. If the start method of the batch class returns a QueryLocator, the optional scope parameter can have a maximum value of 2,000. If set to a higher value, Salesforce chunks the records returned by the QueryLocator into smaller batches of up to 2,000 records. If the start method of the batch class returns an iterable, the scope parameter value has no upper limit. However, if you use a high number, you can run into other limits. The optimal scope size is a factor of 2000, for example, 100, 200, 400 and so on.
The System.scheduleBatch method returns the scheduled job ID (CronTrigger ID).

This example schedules a batch job to run 60 minutes from now by calling System.scheduleBatch. The example passes this method an instance of a batch class (the reassign variable), a job name, and a time interval of 60 minutes. The optional scope parameter has been omitted. The method returns the scheduled job ID, which is used to query CronTrigger to get the status of the corresponding scheduled job.
~~~
String cronID = System.scheduleBatch(reassign, 'job example', 60);

CronTrigger ct = [SELECT Id, TimesTriggered, NextFireTime
FROM CronTrigger WHERE Id = :cronID];

// TimesTriggered should be 0 because the job hasn’t started yet.
System.assertEquals(0, ct.TimesTriggered);
System.debug('Next fire time: ' + ct.NextFireTime);
// For example:
// Next fire time: 2013-06-03 13:31:23
~~~
Some things to note about System.scheduleBatch:

  1. When you call System.scheduleBatch, Salesforce schedules the job for execution at the specified time. Actual execution occurs at or after that time, depending on service availability.
  2. The scheduler runs as system—all classes are executed, whether the user has permission to execute the class or not.
  3. When the job’s schedule is triggered, the system queues the batch job for processing. If Apex flex queue is enabled in your org, the batch job is added at the end of the flex queue. For more information, see Holding Batch Jobs in the Apex Flex Queue.
  4. All scheduled Apex limits apply for batch jobs scheduled using System.scheduleBatch. After the batch job is queued (with a status of Holding or Queued), all batch job limits apply and the job no longer counts toward scheduled Apex limits.
  5. After calling this method and before the batch job starts, you can use the returned scheduled job ID to abort the scheduled job using the System.abortJob method.
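
A minimal sketch of scheduling and then aborting a job, reusing the reassign batch instance from the earlier example:
~~~
// Schedule the batch job to run in 60 minutes.
String cronId = System.scheduleBatch(reassign, 'job example', 60);

// Before the job starts, the returned scheduled job ID
// can be used to cancel it.
System.abortJob(cronId);
~~~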
148
Q

Using Callouts in Batch Apex

A

To use a callout in batch Apex, specify Database.AllowsCallouts in the class definition. For example:
~~~
public class SearchAndReplace implements Database.Batchable<sObject>,
    Database.AllowsCallouts {
}
~~~
Callouts include HTTP requests and methods defined with the webservice keyword.
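
As a sketch (the endpoint is hypothetical and would need a Remote Site Setting), an execute method in such a class could make an HTTP callout for each batch of records:
~~~
public class SearchAndReplace implements Database.Batchable<sObject>,
    Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Name FROM Account');
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        // Hypothetical endpoint; each start, execute, and finish
        // invocation can make up to 100 callouts.
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/replace');
        req.setMethod('POST');
        req.setBody(JSON.serialize(scope));
        HttpResponse res = new Http().send(req);
    }

    public void finish(Database.BatchableContext bc) {}
}
~~~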

149
Q

Using State in Batch Apex

A

Each execution of a batch Apex job is considered a discrete transaction. For example, a batch Apex job that contains 1,000 records and is executed without the optional scope parameter is considered five transactions of 200 records each.

If you specify Database.Stateful in the class definition, you can maintain state across these transactions. When using Database.Stateful, only instance member variables retain their values between transactions. Static member variables don’t retain their values and are reset between transactions. Maintaining state is useful for counting or summarizing records as they’re processed. For example, suppose your job processes opportunity records. You can define a method in execute to aggregate the totals of the opportunity amounts as they are processed.

If you don’t specify Database.Stateful, all static and instance member variables are set back to their original values.
The following example summarizes a custom field, total__c, as the records are processed.
~~~
public class SummarizeAccountTotal implements
    Database.Batchable<sObject>, Database.Stateful {

    public final String query;
    public Integer summary;

    public SummarizeAccountTotal(String q) {
        query = q;
        summary = 0;
    }

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(query);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        for (sObject s : scope) {
            summary = Integer.valueOf(s.get('total__c')) + summary;
        }
    }

    public void finish(Database.BatchableContext bc) {
    }
}
~~~
In addition, you can specify a variable to access the initial state of the class. You can use this variable to share the initial state with all instances of the Database.Batchable methods. For example:
~~~
// Implement the interface using a list of Account sObjects
// Note that the initialState variable is declared as final

public class MyBatchable implements Database.Batchable<sObject> {
    private final String initialState;
    String query;

    public MyBatchable(String initialState) {
        this.initialState = initialState;
    }

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Access initialState here
        return Database.getQueryLocator(query);
    }

    public void execute(Database.BatchableContext bc, List<sObject> batch) {
        // Access initialState here
    }

    public void finish(Database.BatchableContext bc) {
        // Access initialState here
    }
}
~~~
The initialState stores only the initial state of the class. You can’t use it to pass information between instances of the class during execution of the batch job. For example, if you change the value of initialState in execute, the second chunk of processed records can’t access the new value. Only the initial value is accessible.

150
Q

Testing Batch Apex

A

When testing your batch Apex, you can test only one execution of the execute method. Use the scope parameter of the executeBatch method to limit the number of records passed into the execute method to ensure that you aren’t running into governor limits.

The executeBatch method starts an asynchronous process. When you test batch Apex, make certain that the asynchronously processed batch job is finished before testing against the results. Use the Test methods startTest and stopTest around the executeBatch method to ensure that it finishes before continuing your test. All asynchronous calls made after the startTest method are collected by the system. When stopTest is executed, all asynchronous processes are run synchronously. If you don’t include the executeBatch method within the startTest and stopTest methods, the batch job executes at the end of your test method. This execution order applies for Apex saved using API version 25.0 and later, but not for earlier versions.

For Apex saved using API version 22.0 and later, exceptions that occur during the execution of a batch Apex job invoked by a test method are passed to the calling test method. As a result, these exceptions cause the test method to fail. If you want to handle exceptions in the test method, enclose the code in try and catch statements. Place the catch block after the stopTest method. However, with Apex saved using Apex version 21.0 and earlier, such exceptions don’t get passed to the test method and don’t cause test methods to fail.
Asynchronous calls, such as @future or executeBatch, called in a startTest, stopTest block, don’t count against your limits for the number of queued jobs.
~~~
public static testMethod void testBatch() {
    User u = [SELECT Id, UserName FROM User
              WHERE Username = 'testuser1@acme.com'];
    User u2 = [SELECT Id, UserName FROM User
               WHERE Username = 'testuser2@acme.com'];

    // Create 200 test accounts - this simulates one execute.
    // Important - the Salesforce test framework only allows you to
    // test one execute.
    List<Account> accns = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        Account a = new Account(Name = 'testAccount' + i,
                                OwnerId = u.Id);
        accns.add(a);
    }

    insert accns;

    Test.startTest();
    OwnerReassignment reassign = new OwnerReassignment();
    reassign.query = 'SELECT Id, Name, OwnerId ' +
                     'FROM Account ' +
                     'WHERE OwnerId=\'' + u.Id + '\'' +
                     ' LIMIT 200';
    reassign.email = 'admin@acme.com';
    reassign.fromUserId = u.Id;
    reassign.toUserId = u2.Id;
    Id batchprocessid = Database.executeBatch(reassign);
    Test.stopTest();

    System.assertEquals(
        Database.countQuery('SELECT COUNT()' +
            ' FROM Account WHERE OwnerId=\'' + u2.Id + '\''),
        200);
}
~~~
Use the System.Test.enqueueBatchJobs and System.Test.getFlexQueueOrder methods to enqueue and reorder no-operation jobs within the context of tests.
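
As a sketch (the class name and assertion are illustrative), a test can enqueue no-operation jobs and verify reordering in the flex queue:
~~~
@isTest
private class FlexQueueOrderTest {
    static testMethod void testReordering() {
        // Enqueue five no-operation jobs; their IDs are returned in queue order.
        List<Id> jobIds = Test.enqueueBatchJobs(5);

        // Move the last enqueued job to the front of the flex queue.
        System.FlexQueue.moveJobToFront(jobIds[4]);

        // The ordering returned by getFlexQueueOrder reflects the move.
        List<Id> order = Test.getFlexQueueOrder();
        System.assertEquals(jobIds[4], order[0]);
    }
}
~~~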

151
Q

Batch Apex Limitations

A

Keep in mind these governor limits and other limitations for batch Apex.
1. Up to 5 batch jobs can be queued or active concurrently.
2. Up to 100 Holding batch jobs can be held in the Apex flex queue.
3. In a running test, you can submit a maximum of 5 batch jobs.
The maximum number of batch Apex method executions per 24-hour period is 250,000, or the number of user licenses in your org multiplied by 200, whichever is greater. Method executions include executions of the start, execute, and finish methods. This limit is for your entire org and is shared with all asynchronous Apex: Batch Apex, Queueable Apex, scheduled Apex, and future methods. To check how many asynchronous Apex executions are available, make a request to the REST API limits resource. See List Organization Limits in the REST API Developer Guide. If the number of asynchronous Apex executions needed by a job exceeds the available number that’s calculated using the 24-hour rolling limit, an exception is thrown. For example, if your async job requires 10,000 method executions and the available 24-hour rolling limit is 9,500, you get an AsyncApexExecutions limit exceeded exception. The license types that count toward this limit include full Salesforce and Salesforce Platform user licenses, App Subscription user licenses, Chatter Only users, Identity users, and Company Communities users.
4. A maximum of 50 million records can be returned in the Database.QueryLocator object. If more than 50 million records are returned, the batch job is immediately terminated and marked as Failed.
5. If the start method of the batch class returns a QueryLocator, the optional scope parameter of Database.executeBatch can have a maximum value of 2,000. If set to a higher value, Salesforce chunks the records returned by the QueryLocator into smaller batches of up to 2,000 records. If the start method of the batch class returns an iterable, the scope parameter value has no upper limit. However, if you use a high number, you can run into other limits. The optimal scope size is a factor of 2000, for example, 100, 200, 400 and so on.
If no size is specified with the optional scope parameter of Database.executeBatch, Salesforce chunks the records returned by the start method into batches of 200 records. The system then passes each batch to the execute method. Apex governor limits are reset for each execution of execute.
6. The start, execute, and finish methods can implement up to 100 callouts each.
7. Only one batch Apex job’s start method can run at a time in an org.
8. Batch jobs that haven’t started yet remain in the queue until they’re started. This limit doesn’t cause any batch job to fail, and execute methods of batch Apex jobs still run in parallel if more than one job is running.
9. Enqueued batch Apex jobs are processed when system resources become available. There’s no guarantee on how long it takes to start, execute, and finish the queued jobs. You can use the Apex flex queue to prioritize jobs.
10. Using FOR UPDATE in SOQL queries to lock records during update isn’t applicable to Batch Apex.
11. Cursors and related query results are available for 2 days, including results in nested queries. For more information, see API Query Cursor Limits.

152
Q

Batch Apex Considerations and Best Practices

A
  1. Use extreme caution if you’re planning to invoke a batch job from a trigger. You must be able to guarantee that the trigger doesn’t add more batch jobs than the limit. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.
  2. When you call Database.executeBatch, Salesforce only places the job in the queue. Actual execution can be delayed based on service availability and flex queue priority.
  3. When testing your batch Apex, you can test only one execution of the execute method. Use the scope parameter of the executeBatch method to limit the number of records passed into the execute method to ensure that you aren’t running into governor limits.
  4. The executeBatch method starts an asynchronous process. When you test batch Apex, make certain that the asynchronously processed batch job is finished before testing against the results. Use the Test methods startTest and stopTest around the executeBatch method to ensure that it finishes before continuing your test.
  5. Use Database.Stateful with the class definition if you want to share instance member variables or data across job transactions. Otherwise, all member variables are reset to their initial state at the start of each transaction.
  6. Methods declared as future aren’t allowed in classes that implement the Database.Batchable interface. Methods declared as future can’t be called from a batch Apex class.
  7. When a batch Apex job is run, email notifications are sent to the user who submitted the batch job. If the code is included in a managed package and the subscribing org is running the batch job, notifications are sent to the recipient listed in the Apex Exception Notification Recipient field.
  8. Each method execution uses the standard governor limits for an anonymous block, Visualforce controller, or WSDL method.
  9. Each batch Apex invocation creates an AsyncApexJob record. To construct a SOQL query to retrieve the job’s status, number of errors, progress, and submitter, use the AsyncApexJob record’s ID. For more information about the AsyncApexJob object, see AsyncApexJob in the Object Reference for Salesforce.
  10. For each 10,000 AsyncApexJob records, Apex creates an AsyncApexJob record of type BatchApexWorker for internal use. When querying for all AsyncApexJob records, we recommend that you filter out records of type BatchApexWorker using the JobType field. Otherwise, the query returns one more record for every 10,000 AsyncApexJob records. For more information about the AsyncApexJob object, see AsyncApexJob in the Object Reference for Salesforce.
  11. All implemented Database.Batchable interface methods must be defined as public or global.
    For a sharing recalculation, we recommend that the execute method delete and then re-create all Apex managed sharing for the records in the batch. This process ensures that sharing is accurate and complete.
  12. Batch jobs queued before a Salesforce service maintenance downtime remain in the queue. After service downtime ends and when system resources become available, the queued batch jobs are executed. If a batch job was running when downtime occurred, the batch execution is rolled back and restarted after the service comes back up.
  13. Minimize the number of batches, if possible. Salesforce uses a queue-based framework to handle asynchronous processes from such sources as future methods and batch Apex. This queue is used to balance request workload across organizations. If more than 2,000 unprocessed requests from a single organization are in the queue, any additional requests from the same organization are delayed while the queue handles requests from other organizations.
  14. Salesforce recommends that you design your asynchronous Apex jobs to handle variations in processing time. For example, to handle potential processing overlaps, consider chaining batch jobs instead of scheduling jobs at fixed intervals.
  15. Ensure that batch jobs execute as fast as possible. To ensure fast execution of batch jobs, minimize Web service callout times and tune the queries used in your batch Apex code. The longer the batch job executes, the more likely other queued jobs are delayed when many jobs are in the queue.
  16. If you use batch Apex with Database.QueryLocator to access external objects via an OData adapter for Salesforce Connect:
    - Enable Request Row Counts on the external data source; each response from the external system must include the total row count of the result set.
    - We recommend enabling Server-Driven Pagination on the external data source and having the external system determine page sizes and batch boundaries for large result sets. Typically, server-driven paging can adjust batch boundaries to accommodate changing datasets more effectively than client-driven paging.
    - When Server-Driven Pagination is disabled on the external data source, the OData adapter controls the paging behavior (client-driven). If external object records are added to the external system while a job runs, other records can be processed twice. If external object records are deleted from the external system while a job runs, other records can be skipped.
    - When Server-Driven Pagination is enabled on the external data source, the batch size at runtime is the smaller of these two sizes:
      - Batch size specified in the scope parameter of Database.executeBatch. The default is 200 records.
      - Page size returned by the external system. We recommend that you set up your external system to return page sizes of 200 or fewer records.
  17. Batch Apex jobs run faster when the start method returns a QueryLocator object that doesn’t include related records via a subquery. Avoiding relationship subqueries in a QueryLocator allows batch jobs to run using a faster, chunked implementation. If the start method returns an iterable or a QueryLocator object with a relationship subquery, the batch job uses a slower, non-chunking, implementation. For example, if this query is used in the QueryLocator, the batch job uses a slower implementation because of the relationship subquery:
    SELECT Id, (SELECT id FROM Contacts) FROM Account
    A better strategy is to perform the subquery separately, from within the execute method, which allows the batch job to run using the faster, chunking implementation.
  18. To implement record locking as part of the batch job, you can requery records inside the execute() method, using FOR UPDATE. Requerying records in this manner ensures that conflicting updates are not overwritten by DML in the batch job. To requery records, simply select the Id field in the batch job’s main query locator.
  19. The Salesforce Platform’s flow control mechanism and fair-usage algorithm can cause a delay in running batch jobs.
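
The record-locking pattern from item 18 can be sketched as an execute method that requeries its scope with FOR UPDATE (the field update shown is illustrative):
~~~
public void execute(Database.BatchableContext bc, List<sObject> scope) {
    // Requery the records in this chunk with FOR UPDATE to lock them
    // for the duration of the transaction.
    List<Account> locked = [SELECT Id, Name FROM Account
                            WHERE Id IN :scope FOR UPDATE];
    for (Account acct : locked) {
        acct.Name = acct.Name.trim(); // illustrative change
    }
    update locked;
}
~~~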
153
Q

Chaining Batch Jobs

A

Starting with API version 26.0, you can start another batch job from an existing batch job to chain jobs together. Chain a batch job to start a job after another one finishes and when your job requires batch processing, such as when processing large data volumes. Otherwise, if batch processing isn’t needed, consider using Queueable Apex.

You can chain a batch job by calling Database.executeBatch or System.scheduleBatch from the finish method of the current batch class. The new batch job will start after the current batch job finishes.

For previous API versions, you can’t call Database.executeBatch or System.scheduleBatch from any batch Apex method. The version that’s used is the version of the running batch class that starts or schedules another batch job. If the finish method in the running batch class calls a method in a helper class to start the batch job, the API version of the helper class doesn’t matter.
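
A minimal sketch of chaining (SecondBatch is a hypothetical batch class):
~~~
public class FirstBatch implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        // Process this chunk of records.
    }

    public void finish(Database.BatchableContext bc) {
        // Chain the next job only from finish; it starts after
        // the current job completes.
        Database.executeBatch(new SecondBatch());
    }
}
~~~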

154
Q

Firing Platform Events from Batch Apex

A

Batch Apex classes can fire platform events when encountering an error or exception. Clients listening on an event can obtain actionable information, such as how often the event failed and which records were in scope at the time of failure. Events are also fired for Salesforce Platform internal errors and other uncatchable Apex exceptions such as LimitExceptions, which are caused by reaching governor limits.
An event message provides more granular error tracking than the Apex Jobs UI. It includes the record IDs being processed, exception type, exception message, and stack trace. You can also incorporate custom handling and retry logic for failures. You can invoke custom Apex logic from any trigger on this type of event, so Apex developers can build functionality like custom logging or automated retry handling.

For information on subscribing to platform events, see Subscribing to Platform Events.

The BatchApexErrorEvent object represents a platform event associated with a batch Apex class. This object is available in API version 44.0 and later. If the start, execute, or finish method of a batch Apex job encounters an unhandled exception, a BatchApexErrorEvent platform event is fired. For more details, see BatchApexErrorEvent in the Platform Events Developer Guide.

To fire a platform event, a batch Apex class declaration must implement the Database.RaisesPlatformEvents interface.
~~~
public with sharing class YourSampleBatchJob implements Database.Batchable<SObject>,
    Database.RaisesPlatformEvents {
    // class implementation
}
~~~
This example creates a trigger to determine which accounts failed in the batch transaction. The custom field Dirty__c indicates that the account was part of a failing batch, and ExceptionType__c indicates the exception that was encountered. JobScope and ExceptionType are fields in the BatchApexErrorEvent object.
~~~
trigger MarkDirtyIfFail on BatchApexErrorEvent (after insert) {
    Set<Id> asyncApexJobIds = new Set<Id>();
    for (BatchApexErrorEvent evt : Trigger.new) {
        asyncApexJobIds.add(evt.AsyncApexJobId);
    }

    Map<Id, AsyncApexJob> jobs = new Map<Id, AsyncApexJob>(
        [SELECT Id, ApexClass.Name FROM AsyncApexJob WHERE Id IN :asyncApexJobIds]
    );

    List<Account> records = new List<Account>();
    for (BatchApexErrorEvent evt : Trigger.new) {
        // only handle events for the job(s) we care about
        if (jobs.get(evt.AsyncApexJobId).ApexClass.Name == 'AccountUpdaterJob') {
            for (String item : evt.JobScope.split(',')) {
                Account a = new Account(
                    Id = (Id)item,
                    ExceptionType__c = evt.ExceptionType,
                    Dirty__c = true
                );
                records.add(a);
            }
        }
    }
    update records;
}
~~~
155
Q

Testing BatchApexErrorEvent Messages Published from Batch Apex Jobs

A

Use the Test.getEventBus().deliver() method to deliver event messages that are published by failed batch Apex jobs. Use the Test.startTest() and Test.stopTest() statement block to execute the batch job.

This snippet shows how to execute a batch Apex job and deliver event messages. It executes the batch job after Test.stopTest(). This batch job publishes a BatchApexErrorEvent message when a failure occurs through the implementation of Database.RaisesPlatformEvents. After Test.stopTest() runs, a separate Test.getEventBus().deliver() statement is added so that it can deliver the BatchApexErrorEvent.
~~~
try {
Test.startTest();
Database.executeBatch(new SampleBatchApex());
Test.stopTest();
// Batch Apex job executes here
} catch(Exception e) {
// Catch any exceptions thrown in the batch job
}

// The batch job fires BatchApexErrorEvent if it fails, so deliver the event.
Test.getEventBus().deliver();
~~~

156
Q

sObject Types

A

The new operator still requires a concrete sObject type, so all instances are specific sObjects. For example:
~~~
sObject s = new Account();
~~~
You can also use casting between the generic sObject type and the specific sObject type. For example:
~~~
// Cast the generic variable s from the example above
// into a specific account and account variable a
Account a = (Account)s;
// The following generates a runtime error
Contact c = (Contact)s;
~~~

DML operations work on variables declared as the generic sObject data type as well as with regular sObjects.

sObject variables are initialized to null, but can be assigned a valid object reference with the new operator. For example:
~~~
Account a = new Account();
~~~

Developers can also specify initial field values with comma-separated name = value pairs when instantiating a new sObject. For example:
~~~
Account a = new Account(Name = 'Acme', BillingCity = 'San Francisco');
~~~

Custom Labels
Custom labels aren’t standard sObjects. You can’t create a new instance of a custom label. You can only access the value of a custom label using System.Label.label_name. For example:
~~~
String errorMsg = System.Label.generic_error;
~~~
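
To avoid the runtime error from an invalid downcast, a sketch using the instanceof operator:
~~~
sObject s = new Account();

if (s instanceof Contact) {
    Contact c = (Contact)s; // safe: cast only when the check passes
} else {
    System.debug('s does not hold a Contact');
}
~~~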

157
Q

Accessing SObject Fields

A

As in Java, SObject fields can be accessed or changed with simple dot notation. For example:
~~~
Account a = new Account();
a.Name = 'Acme'; // Access the account name field and assign it 'Acme'
~~~
If you use the generic SObject type instead of a specific object, such as Account, you can retrieve only the Id field using dot notation. You can set the Id field for Apex code saved using Salesforce API version 27.0 and later. Alternatively, you can use the generic SObject put and get methods.
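
A short sketch of the generic put and get methods:
~~~
sObject s = new Account(Name = 'Acme');

// Read and write fields by name on the generic sObject.
String name = (String)s.get('Name');
s.put('BillingCity', 'San Francisco');

// Id is available with dot notation even on the generic type.
Id recordId = s.Id;
~~~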

If your organization has enabled person accounts, you have two different kinds of accounts: business accounts and person accounts. If your code creates a new account using name, a business account is created. If your code uses LastName, a person account is created.

SObject fields can be initially set or not set (unset); unset fields are not the same as null or blank fields. When you perform a DML operation on an SObject, you can change a field that is set; you can’t change unset fields.
To erase the current value of a field, set the field to null.
If an Apex method takes an SObject parameter, you can use the System.isSet() method to identify the set fields. If you want to unset any fields to retain their values, first create an SObject instance. Then apply only the fields you want to be part of the DML operation.

This example code shows how SObject fields are identified as set or unset.
~~~
Contact nullFirst = new Contact(LastName = 'Codey', FirstName = null);
System.assertEquals(true, nullFirst.isSet('FirstName'), 'FirstName is set to a literal value, so it counts as set');
Contact unsetFirst = new Contact(LastName = 'Astro');
System.assertEquals(false, unsetFirst.isSet('FirstName'), 'FirstName is not set');
~~~
An expression with SObject fields of type Boolean evaluates to true only if the SObject field is true. If the field is false or null, the expression evaluates to false. This example code shows an expression that checks if the IsActive field of a Campaign object is null. Because this expression always evaluates to false, the code in the if statement is never executed.
~~~

Campaign cObj= new Campaign();

if (cObj.IsActive == null) {
… // IsActive is evaluated to false and this code block is not executed.
}
~~~