CTA-Data Flashcards
Bulk API 2.0 - how is it different?
Bulk API 2.0 allows for:
*Easy-to-monitor job status.
*Automatic retry of failed records.
*Support for parallel processing.
*Auto batch management.
*All OAuth flows supported, vs Bulk API 1.0, which requires a SOAP login or a session ID obtained from an OAuth flow.
*CSV only, vs CSV, XML, and JSON supported in Bulk API 1.0.
*150 MB file size limit, vs 10 MB in Bulk API 1.0.
Bulk API 2.0 maximum data load per day: 150 million records, 10,000 jobs.
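The job-creation step of Bulk API 2.0 can be sketched as building the REST request; the instance URL is a placeholder and the API version is illustrative, but the endpoint path and body fields follow the Bulk API 2.0 contract:

```python
import json

API_VERSION = "v58.0"  # illustrative version

def build_ingest_job_request(instance_url, sobject, operation="insert"):
    """Build the URL and JSON body for creating a Bulk API 2.0 ingest job.

    After creating the job, you PUT the CSV data to the job's contentUrl,
    then PATCH the job state to UploadComplete; Salesforce handles
    batching and retries from there.
    """
    url = f"{instance_url}/services/data/{API_VERSION}/jobs/ingest"
    body = {
        "object": sobject,        # e.g. "Account"
        "operation": operation,   # insert | update | upsert | delete
        "contentType": "CSV",     # Bulk API 2.0 supports CSV only
        "lineEnding": "LF",
    }
    return url, json.dumps(body)

# Hypothetical org URL, for illustration only
url, body = build_ingest_job_request("https://example.my.salesforce.com", "Account")
```

Note that unlike Bulk API 1.0, no batch objects are created by the client; the 150 MB limit applies to the uploaded CSV.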
How to import Articles
Before you can import Knowledge articles, you must first create a .csv file and a .properties file, and then package them into a .zip file.
The import can include translated articles too.
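The packaging step can be sketched with the standard library. The file names and the exact `.properties` keys below are illustrative; check the Knowledge import documentation for the keys and article fields your org needs:

```python
import csv, io, zipfile

# articles.csv: one row per article; column headers map to article fields
csv_buf = io.StringIO()
writer = csv.writer(csv_buf)
writer.writerow(["Title", "Summary", "URLName"])
writer.writerow(["Reset password", "How to reset your password", "reset-password"])

# .properties file: declares CSV encoding/format for the importer
# (keys shown are examples, not an exhaustive or guaranteed list)
properties = "\n".join([
    "CSVEncoding=UTF8",
    "CSVSeparator=,",
    "DateFormat=yyyy-MM-dd",
])

zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    zf.writestr("articles.csv", csv_buf.getvalue())
    zf.writestr("import.properties", properties)
# zip_buf now holds the .zip you upload via the article importer
```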
Person Account?
SF Data model to implement B2C relationship.
Person Accounts can't be directly related to other accounts and can't be part of an account hierarchy; the feature must be enabled manually, and once enabled it can't be disabled.
Contact OWD must be Private or Controlled by Parent.
Some AppExchange packages may not support Person Accounts.
Storage: each Person Account is stored as both an Account and a Contact record.
Person Accounts can be merged only with other Person Accounts.
Lead conversion: if the Lead's Company field is populated, it converts to a business account; leave Company blank to convert to a Person Account.
What to know about Asset?
1 Turn on Asset Sharing in Asset Settings to use sharing rules (gives Asset its own OWD instead of Controlled by Parent)
2 Assets don't count against data storage
3 Assets can form a hierarchy (via the parent asset)
4 The Asset Relationship object relates assets to each other (e.g., for upgrades and replacements)
How to select a currency for record?
When multi-currency is enabled, each record has a CurrencyIsoCode field (the Currency picklist) for selecting the record's currency.
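A record insert payload (REST-style JSON, sketched) would set `CurrencyIsoCode` explicitly; the Opportunity field values here are hypothetical:

```python
import json

# Hypothetical Opportunity payload for a multi-currency org.
# CurrencyIsoCode fixes the currency used for all currency fields on the record.
opportunity = {
    "Name": "Big Deal",
    "StageName": "Prospecting",
    "CloseDate": "2024-12-31",
    "Amount": 50000,
    "CurrencyIsoCode": "EUR",   # record-level currency selection
}
payload = json.dumps(opportunity)
```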
The objects that can have ‘Controlled by Parent’ OWD settings are
Order; Contact (Private or Controlled by Parent only); Asset; Activity (Private or Controlled by Parent only); and a few channel program and Contact Point objects.
Related lists on the Opportunity detail page sometimes take a long time to load, and the page freezes until the records are loaded. Mitigations:
Enable separate loading of related lists
Reduce the number of related lists
Reduce the number of records shown in the slow related list
Reduce the fields displayed in the slow related list
Use the Related List - Single component to display those lists in separate tabs
In data migration, how to keep the original created date, modified date history?
Enable "Set Audit Fields upon Record Creation" (historically this required contacting Salesforce). The permission lets you set CreatedDate, CreatedById, LastModifiedDate, and LastModifiedById from source data, but only when the record is inserted.
In data migration, how to accomplish loading the historical auditing information
You cannot insert records into the field history tracking objects directly.
Store the history in a big object, or load it into CRM Analytics (formerly Einstein Analytics) for analytics purposes.
what is skinny table, and pros and cons?
A skinny table is a feature you ask Salesforce to create: it consolidates frequently used fields (standard and custom) of a single object into a dedicated backend table, improving read performance for that LDV object.
Pros:
*Performance of queries, reports, and list views is improved
*Can contain up to 100 columns
*Supports encrypted fields
*Full-copy sandboxes get skinny tables automatically after a refresh
*Kept in sync with the source tables when source data is modified
*Excludes soft-deleted records
Cons:
*Developer-type sandboxes don't get them (contact Salesforce to create them there)
*Any field type change requires contacting Salesforce to recreate the table
*Can't include fields from other objects
*Maintenance overhead
*Reads are faster, but DML is slower because Salesforce must write to two tables
*Supported only on a few standard objects (Account, Contact, Opportunity, Lead, and Case) plus custom objects
Can you create records via Rest API which has duplication rule enabled and trigger it?
Duplicate rules do run on REST API inserts; a blocked save returns a DUPLICATES_DETECTED error. Fine-grained control is available via DuplicateRuleHeader in the SOAP API, and the REST API accepts the equivalent Sforce-Duplicate-Rule-Header request header (e.g., allowSave=true) to handle duplicate records properly.
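A sketch of building a REST create request that controls duplicate-rule behavior (the instance URL and record values are hypothetical; the header name and allowSave option are from the REST API headers documentation):

```python
import json

def build_create_request(instance_url, sobject, record, allow_save=False):
    """Build a REST sObject create request with duplicate-rule control.

    Sforce-Duplicate-Rule-Header: allowSave=true lets the record save even
    when a duplicate rule with a blocking action matches.
    """
    url = f"{instance_url}/services/data/v58.0/sobjects/{sobject}"
    headers = {
        "Content-Type": "application/json",
        "Sforce-Duplicate-Rule-Header": f"allowSave={str(allow_save).lower()}",
    }
    return url, headers, json.dumps(record)

# Hypothetical org and record, for illustration only
url, headers, body = build_create_request(
    "https://example.my.salesforce.com", "Contact",
    {"LastName": "Smith", "Email": "smith@example.com"}, allow_save=True)
```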
Big object catches
1 Big objects support only object and field permissions.
2 Once you’ve deployed a big object, you can’t edit or delete the index. To change the index, start over with a new big object.
3 SOQL relationship queries are based on a lookup field from a big object to a standard or custom object in the select field list (not in filters or subqueries).
4 Big objects support custom Salesforce Lightning and Visualforce components rather than standard UI elements (home pages, detail pages, list views, and so on).
5 You can create up to 100 big objects per org. The limits for big object fields are similar to the limits on custom objects, and depend on your org’s license type.
6 Big objects don't support transactions that mix big objects, standard objects, and custom objects.
7 To support the scale of data in a big object, you can't use triggers, flows, processes, or the Salesforce mobile app.
SOQL vs Async SOQL
Use standard SOQL when:
*You want to display the results in the UI without making the user wait.
*You want results returned immediately for manipulation within a block of Apex code.
*You know the query will return a small amount of data.
Use Async SOQL when:
*You are querying against millions of records.
*You want to ensure that your query completes.
*You don't need to do aggregate queries or filtering outside of the index.
The limit for Async SOQL queries is one concurrent query at a time.
Async SOQL is implemented via the Chatter REST API.
How to Use Async SOQL to Query Big Objects
There are two main ways to use Async SOQL to get a manageable dataset out of a big object. The first is to use filtering. You can use filtering to extract a small subset of your big object data into a custom object. You can then use it in your reports, dashboards, or other nifty analytic tool.
The other way to create a manageable dataset is through coarse aggregations, using the aggregate functions Async SOQL supports: AVG(field), COUNT(field), COUNT_DISTINCT(field), SUM(field), MIN(field), and MAX(field). These give you a coarser, summarized view of the data extracted from the big object.
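An Async SOQL job is submitted as an HTTP POST with a JSON body. A sketch of the filtering pattern above (the big object `Purchases__b`, the target custom object, and all field names are hypothetical):

```python
import json

# Async SOQL job: extract a filtered subset of a big object into a
# custom object. All object and field names here are illustrative.
request_body = {
    "query": ("SELECT Customer__c, Amount__c FROM Purchases__b "
              "WHERE Customer__c = 'C-001'"),
    "operation": "insert",
    "targetObject": "Purchase_Snapshot__c",
    "targetFieldMap": {            # big-object field -> target field
        "Customer__c": "Customer__c",
        "Amount__c": "Amount__c",
    },
}
payload = json.dumps(request_body)
# POST payload to the async-queries REST resource; poll the returned
# job ID for completion (remember: one concurrent query at a time).
```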
Difference between High Volume EO and EO?
1 can’t write to High Volume EO as it doesn’t have record ID generated by Salesforce
Access via Lightning Experience
Access via the Salesforce mobile app
Appearance in Recent Items lists
Record feeds
Reports and dashboards
Writable external objects
Data Migration staging database
The end-to-end solution comprises the source system’s databases, a staging database, and Salesforce. The staging database consists of two layers: the Transformation Layer and Target Layer.
• The Transformation layer is a set of intermediate database structures used for performing transformation and data quality rules. Only transformed and cleansed data will be loaded into the Target Layer.
• The Target Layer has tables structured identically to the Salesforce objects; data types may differ depending on the database platform used.
• Data from the Target Layer is loaded into Salesforce via Informatica Cloud or any other cloud-capable ETL tool of choice.
Raw Schema -> Canonical Schema -> Target Schema
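The two staging layers can be sketched as pure transformation functions; the field names and cleansing rules below are illustrative, not a prescribed schema:

```python
# Transformation layer: apply cleansing/data-quality rules to raw rows
# (raw schema -> canonical schema).
def transform(raw_row):
    email = raw_row.get("EMAIL_ADDR", "").strip().lower()
    return {
        "source_id": raw_row["CUST_ID"],            # keep legacy key for traceability
        "name": raw_row.get("CUST_NAME", "").strip().title(),
        "email": email if "@" in email else None,   # simple data-quality rule
    }

# Target layer: canonical schema -> Salesforce-object-shaped row
# (Legacy_Id__c as an external ID enables idempotent upserts).
def to_target(canonical_row):
    return {
        "Legacy_Id__c": canonical_row["source_id"],
        "Name": canonical_row["name"],
        "Email__c": canonical_row["email"],
    }

raw = {"CUST_ID": "C-1", "CUST_NAME": "  ada LOVELACE ", "EMAIL_ADDR": "Ada@Example.com "}
target = to_target(transform(raw))
```

Only rows that pass the transformation layer's rules reach the target layer, mirroring the "only transformed and cleansed data" principle above.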
Data migration Testing
Testing: Unit and Integration
1. Identify the appropriate load sequence. Consider relationships across all objects.
2. Run sample migration of a small subset of records from each legacy application; extract, transform, and load into SFDC.
3. Debug issues and update scripts as needed.
4. Run sample migration tests until they run clean with no errors.
Testing: Full Load and Performance Testing
1. Run full migration into sandbox. Full migration = extract, transform, and load all records.
2. Prepare reports on records in the source system or extracts, and the records loaded into Salesforce.com. Identify any missing data.
3. Fix issues and repeat until there are no errors.
4. Run full migration in a full sandbox environment.
5. Validate data from a technical and business perspective.
Omni channel supported objects?
Cases
Chats
Contact requests
SOS video calls
Social posts
Orders
Leads
Custom objects that don't have a master object
Federated Search
In Salesforce Setup, search for and open External Data Sources.
Click New External Data Source
Enter a name for the connection. This is the name that appears on the search results tab in Salesforce for customers.
Select Federated Search: OpenSearch for the Type.
When you insert an identical big object record with the same representation multiple times to Big Object what happens?
Only a single record is created, so that writes can be idempotent.
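The behavior resembles an upsert keyed on the big object's index. A toy model of that semantics (not Salesforce code, just an illustration of idempotent writes):

```python
# Toy model of big-object writes: the index fields form the key, so
# inserting the same representation twice leaves exactly one record.
store = {}

def insert_big_object_row(index_fields, row):
    """index_fields: tuple of the row's index values (the big object's key)."""
    store[index_fields] = row   # identical key -> overwrite, not duplicate

insert_big_object_row(("acct-1", "2024-01-01"), {"amount": 100})
insert_big_object_row(("acct-1", "2024-01-01"), {"amount": 100})  # same representation
# store still holds a single record: the write is idempotent
```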
Data Model Key Considerations.
1 The Individual object participates in OWD and has an owner field
2 Product has no owner
3 CPQ Quote and Quote Line use Product and Price Book as normal, with a lookup to Opportunity
4 Account teams and opportunity teams are objects (AccountTeamMember, OpportunityTeamMember)
5 AccountContactRelation (ACR) has no owner field, but the Account Relationship object does, and it has its own OWD
6 Consent-related objects have no ownership but do have OWD settings
7 Contract and Quote records can be created from an external API
8 Include a Payment object if using a payments AppExchange product
9 EntitlementContact is a junction object between Entitlement and Contact
10 For the utility industry: use Account for the Property, related via ACR; a Contract to store the subscription, possibly with a custom Contract Line Item object; Asset to relate the Meter custom object to the Account; and a custom Meter Reading object as a detail of Asset with a lookup to the Meter custom object
External licenses can only read Price Book and Product
Product can be used to model an individual item such as a rental car, scooter, or apartment, with a lookup to Asset if owned by the landlord (questionable)
A profile's Edit object permission can override a sharing set's Read Only access
Asset, Opportunity, and Case don't require an Account association, but Order does
ETL - the data source systems have duplicates
Use a registration-style approach: create a global ID, stamp it back to the source systems, and note that the source system owners must manage de-duplication if required.
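The registration-style cross-reference can be sketched as follows (the key format and source-system names are illustrative):

```python
import uuid

# Registry: (source_system, source_record_id) -> global ID
registry = {}

def register(source_system, source_id):
    """Assign a stable global ID to a source record.

    Re-registering the same source key returns the existing global ID
    instead of minting a duplicate; this ID is what gets stamped back
    onto the source record.
    """
    key = (source_system, source_id)
    if key not in registry:
        registry[key] = str(uuid.uuid4())
    return registry[key]

gid1 = register("ERP", "CUST-001")
gid2 = register("ERP", "CUST-001")   # same source record -> same global ID
```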
LDV - if the object's volume is huge and needs archiving, but the data is still needed for business processing:
Use a big object if processing must stay on platform,
or Salesforce Functions if processing can run off platform (at additional cost).
Einstein Data Detect
A managed package to install.
Create a Data Detect policy and run a scan.
How Heroku Connect works between a Salesforce object and Heroku
Heroku provides a Postgres database.
1 Enable the Heroku Connect add-on.
2 In Heroku Connect setup, connect to the Salesforce instance and the Postgres database.
3 In Heroku Connect, select the objects that need to sync. Heroku Connect can auto-create the related table with the schema, or you can create it manually and map selected fields.
4 Set up sync timing (two modes).
Syncs don't count as API calls; a full sync can't be filtered.