CTA-Data Flashcards

1
Q

Bulk API 2.0 - how is it different?

A

Bulk API 2.0 (compared with Bulk API 1.0) offers:

*Easy-to-monitor job status.
*Automatic retry of failed records.
*Support for parallel processing.
*Automatic batch management (no manual batching).
*All OAuth flows supported, vs. Bulk 1.0 needing a SOAP login or a session ID obtained from an OAuth flow.
*CSV file format only, vs. CSV, XML, and JSON supported in Bulk 1.0.
*150 MB file size limit, vs. 10 MB per batch in Bulk 1.0.

Bulk 2.0 daily limits: 150 million records and 10,000 jobs per 24 hours.
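A minimal sketch of creating a Bulk API 2.0 ingest job through the REST endpoint, written as an Apex callout for illustration (the API version v58.0 and callout permissions are assumptions; in a real migration this is normally done from an ETL tool or script):

// Create a Bulk API 2.0 ingest job; the response contains the job Id and a contentUrl
// to which the CSV data is then uploaded with a PUT request.
HttpRequest req = new HttpRequest();
req.setEndpoint(Url.getOrgDomainUrl().toExternalForm() + '/services/data/v58.0/jobs/ingest');
req.setMethod('POST');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(new Map<String, String>{
    'object' => 'Account',
    'operation' => 'insert',
    'contentType' => 'CSV',
    'lineEnding' => 'LF'
}));
HttpResponse res = new Http().send(req);
System.debug(res.getBody());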

2
Q

How to import Articles

A

Before you can import Knowledge articles, you must first create a .csv file and a .properties file, and then package them into a .zip file.
The import can include translated articles too.

3
Q

Person Account?

A

Salesforce data model for implementing B2C relationships.
Person Accounts can't be directly related to other accounts and can't be part of an account hierarchy.
They must be manually enabled, and once enabled they can't be disabled.
Contact OWD has to be Private or Controlled by Parent.
Some AppExchange packages may not support Person Accounts.
Storage: each Person Account is stored as both a Contact and an Account.
They can be merged only with other Person Accounts.
Lead conversion: if the Lead's Company field is populated, it converts to a Business Account; if it's blank, it converts to a Person Account.

4
Q

What to know about Asset?

A

1 Turn on asset sharing in Asset Settings before you can use sharing rules.
2 Assets don't take up data storage.
3 Assets can be built into a hierarchy (parent asset).
4 The Asset Relationship object relates one asset to another.

5
Q

How to select a currency for a record?

A

With multi-currency enabled, each record has a Currency field (CurrencyIsoCode) that can be set to any active currency in the org.
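A minimal Apex sketch, assuming multi-currency is enabled and EUR is one of the org's active currencies:

// Set the record currency explicitly; otherwise the owner's default currency is used.
Account acme = new Account(Name = 'Acme GmbH', CurrencyIsoCode = 'EUR');
insert acme;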

6
Q

The objects that can have ‘Controlled by Parent’ OWD settings are

A

Order, Contact (Contact offers only Controlled by Parent and Private), Asset, Activity (only Controlled by Parent and Private), and a few Channel Program and Contact Point objects.

7
Q

Related lists on the Opportunity detail page sometimes take a long time to load, and the page freezes until the records are loaded. How do you address this?

A

Enable separate loading of related lists.

Reduce the number of related lists on the page.

Reduce the number of records shown in the slow related list.

Reduce the number of fields displayed in the slow related list.

Use the Related List - Single component to display those lists in separate tabs.

8
Q

In data migration, how to keep the original created date, modified date history?

A

Enable the auditing feature that allows these fields to be set from the source data (historically you had to contact Salesforce; it can now be enabled in Setup), and grant the corresponding user permission:

Set Audit Fields upon Record Creation (these fields can be set only when records are created, not on later updates).

9
Q

In data migration, how to accomplish loading the historical auditing information

A

You can't insert records into the standard history tracking objects.

You can use a big object to store the historical audit data,

or load it into CRM Analytics (Einstein Analytics) for analytics purposes.
10
Q

What is a skinny table, and what are its pros and cons?

A

A skinny table is a table you ask Salesforce to create in the back end that consolidates the frequently used fields (standard and custom) of a single object into one dedicated table, so query performance improves for LDV objects.

Pros:

Performance of queries, reports, and list views is improved.

Can contain up to 100 columns and can include encrypted fields.

Kept in sync with the source tables when the source is modified.

Doesn't include soft-deleted records.

Full sandboxes automatically get copies of skinny tables after a refresh.

Cons:

Developer-type sandboxes don't get them; contact Salesforce to create them there.

Any field type change requires contacting Salesforce to re-create the table.

Can't include fields from other objects.

Maintenance overhead.

Reads are faster, but DML is slower because Salesforce must write to two tables.

Only a few standard objects are supported: Account, Contact, Opportunity, Lead, and Case (plus custom objects).
11
Q

Can you create records via the REST API on an object that has duplicate rules enabled, and control how those rules are applied?

A

No. The DuplicateRuleHeader is available only in the SOAP API, which lets you handle duplicate records properly (e.g., allow a save that a duplicate rule would otherwise block).
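Relatedly, duplicate rule behaviour can be controlled from Apex through Database.DMLOptions; a minimal sketch:

// Allow the save even when an active duplicate rule would normally block it.
Database.DMLOptions dml = new Database.DMLOptions();
dml.DuplicateRuleHeader.allowSave = true;
Account acct = new Account(Name = 'Possible Duplicate Inc');
Database.SaveResult sr = Database.insert(acct, dml);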

12
Q

Big object catches

A

1 Big objects support only object and field permissions.
2 Once you’ve deployed a big object, you can’t edit or delete the index. To change the index, start over with a new big object.
3 SOQL relationship queries are based on a lookup field from a big object to a standard or custom object in the select field list (not in filters or subqueries).
4 Big objects support custom Salesforce Lightning and Visualforce components rather than standard UI elements (home pages, detail pages, list views, and so on).
5 You can create up to 100 big objects per org. The limits for big object fields are similar to the limits on custom objects, and depend on your org’s license type.
6 Big objects don’t support transactions that include big objects, standard objects, and custom objects.
To support the scale of data in a big object, you can’t use triggers, flows, processes, and the Salesforce app.

13
Q

SOQL vs Async SOQL

A

Use standard SOQL when:

You want to display the results in the UI without having the user wait for results.
You want results returned immediately for manipulation within a block of Apex code.
You know that the query will return a small amount of data.

Use Async SOQL when:

You are querying against millions of records.
You want to ensure that your query completes.
You don’t need to do aggregate queries or filtering outside of the index.

The limit for Async SOQL is one concurrent query at a time.
Async SOQL is implemented as a REST API.

14
Q

How to Use Async SOQL to Query Big Objects

A

There are two main ways to use Async SOQL to get a manageable dataset out of a big object. The first is filtering: extract a small subset of your big object data into a custom object, then use it in reports, dashboards, or other analytics tools.

The other way is coarse aggregation. The aggregate functions supported by Async SOQL are AVG(field), COUNT(field), COUNT_DISTINCT(field), SUM(field), MIN(field), and MAX(field). These give you much finer control over what data is extracted from the big object.

15
Q

Difference between high data volume external objects and standard external objects?

A

1 You can't write to high data volume external objects because Salesforce doesn't generate record IDs for them.
2 Features unavailable for high data volume external objects include:
Access via Lightning Experience
Access via the Salesforce mobile app
Appearance in Recent Items lists
Record feeds
Reports and dashboards
Writable external objects

16
Q

Data Migration staging database

A

The end-to-end solution comprises the source system’s databases, a staging database, and Salesforce. The staging database consists of two layers: the Transformation Layer and Target Layer.
• The Transformation layer is a set of intermediate database structures used for performing transformation and data quality rules. Only transformed and cleansed data will be loaded into the Target Layer.
• The Target Layer has tables structured identically to the Salesforce objects; data types may differ depending on the database platform used.
• Data from the Target Layer will be loaded into Salesforce via Informatica cloud or any other ETL Cloud capable tool of choice.

Raw Schema -> Canonical Schema -> Target Schema

17
Q

Data migration Testing

A

Testing: Unit and Integration
1. Identify the appropriate load sequence. Consider relationships across all objects.
2. Run sample migration of a small subset of records from each legacy application; extract, transform, and load into SFDC.
3. Debug issues and update scripts as needed.
4. Run sample migration tests until they run clean with no errors.
Testing: Full Load and Performance Testing
1. Run full migration into sandbox. Full migration = extract, transform, and load all records.
2. Prepare reports on records in the source system or extracts, and the records loaded into Salesforce.com. Identify any missing data.
3. Fix issues and repeat until there are no errors.
4. Run full migration in a full sandbox environment.
5. Validate data from a technical and business perspective.

18
Q

Omni channel supported objects?

A

Cases

Chats

Contact requests

SOS video calls

Social posts

Orders

Leads

Custom objects that don’t have a master object
19
Q

Federated Search

A

In Salesforce Setup, search for and open External Data Sources.

Click New External Data Source.

Enter a name for the connection. This is the name that appears on the search results tab in Salesforce.

Select Federated Search: OpenSearch as the Type.

20
Q

When you insert an identical big object record (same index representation) multiple times, what happens?

A

Only a single record is created, so writes are idempotent.
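A minimal Apex sketch, reusing the hypothetical Customer_Interaction__b big object that appears later in this deck (field values are made up):

// Writing the same index representation twice leaves exactly one stored record.
Customer_Interaction__b first = new Customer_Interaction__b(
    Account__c = '001d000000Ky3xIAB',
    Game_Platform__c = 'PC',
    Play_Date__c = Datetime.newInstance(2024, 1, 15));
Customer_Interaction__b second = first.clone();
Database.insertImmediate(first);
Database.insertImmediate(second); // Same index representation: still only one stored record.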

21
Q

Data Model Key Considerations.

A

1 The Individual object has an OWD setting and has ownership.
2 Product has no owner.
3 CPQ Quote and Quote Line also use Product and Price Book as normal, with a lookup to the Opportunity.
4 AccountTeamMember and OpportunityTeamMember are objects.
5 AccountContactRelation (ACR) doesn't have an owner field, but AccountRelation does, and it has an OWD too.
6 The consent-related objects don't have ownership but do have an OWD.
7 Contracts and Quotes can be created from an external API.
8 Include a Payment object if using a payments AppExchange product.
9 EntitlementContact is a junction object between Entitlement and Contact.
10 For the utility industry: use Account for the Property and ACR to relate people to it; a Contract to store the subscription, possibly with a custom Contract Line Item object; Asset to relate a Meter custom object and the Account; and a custom Meter Reading object as a detail of Asset with a lookup to the Meter custom object.

An External license can only read Price Books and Products.
Product can be used to model an individual item such as a rental car, scooter, or apartment, with a lookup to Asset if owned by the landlord (questionable).
A profile's Edit permission can override Sharing Set read-only access.
Asset, Opportunity, and Case don't need an Account association, but Order does.

22
Q

ETL - data source system has duplicates

A

Use a registry-style approach: create a global ID, stamp it back to the source systems, and note that source system owners need to manage de-duplication on their side if required.
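A minimal sketch of using that global ID as an external ID so loads stay idempotent; Global_ID__c is a hypothetical custom External ID field on Account:

// Upsert on the external ID so re-running the load updates existing records instead of duplicating them.
List<Account> incoming = new List<Account>{
    new Account(Name = 'Acme Ltd', Global_ID__c = 'MDM-000123')
};
Database.upsert(incoming, Account.Fields.Global_ID__c, false);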

23
Q

LDV: the object volume is huge and needs archiving, but the data is still needed for business processing.

A

Use a big object if the processing needs to happen on platform,
or Salesforce Functions if it can run off platform, though that incurs additional cost.

24
Q

Einstein Data Detect

A

A managed package to install.
Create a Data Detect policy and run a scan.

25
Q

How does Heroku Connect work between a Salesforce object and Heroku?

A

Heroku has a Postgres database.

Heroku enables the Heroku Connect add-on.

In the Heroku Connect setup, connect to the Salesforce instance and the Postgres database.

In Heroku Connect, select the objects to sync. Heroku Connect can auto-create the related tables with the schema, or they can be created manually and mapped.

Select the fields.

Set up sync timing in one of two ways.

Syncs don't count as API calls. A full sync can't be filtered.
26
Q

SF Data Archiving Best Practices

A

1 Understand data growth.
2 Establish a data retention policy.
3 Build the archiving solution (schedule it in batches, keep the parent/child structure, bypass automation, avoid triggering sharing recalculation, consider the hard-delete option); see the sketch after this list.
4 Test it.
5 Ensure the archived data can be restored.
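A rough Apex sketch of the batch-archiving idea; the 3-year Case retention rule, the bypassAutomation flag, and the copy-to-archive step are assumptions:

// Hypothetical batch that archives old closed Cases while bypassing automation.
public class CaseArchiveBatch implements Database.Batchable<SObject> {
    // Triggers/flows are assumed to check this flag and skip their logic when it's true.
    public static Boolean bypassAutomation = false;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Assumed retention policy: archive cases closed more than 3 years ago.
        return Database.getQueryLocator(
            'SELECT Id FROM Case WHERE IsClosed = true AND ClosedDate < LAST_N_YEARS:3');
    }
    public void execute(Database.BatchableContext bc, List<Case> scope) {
        bypassAutomation = true;
        // ...copy the records to the archive store (big object / external system) here...
        Database.delete(scope, false);
        Database.emptyRecycleBin(scope); // hard delete so storage is reclaimed
    }
    public void finish(Database.BatchableContext bc) {}
}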

27
Q

How does selectivity work? Give an example.

A

Selectivity means the filters used in the query are selective enough for an index to be used.

A standard index is selective when the filter returns:
- < 30% of the first million records, and
- < 15% of the records beyond the first million, and
- no more than 1 million records in total.

A custom index is selective when the filter returns:
- < 10% of the first million records, and
- < 5% of the records beyond the first million, and
- no more than 333,333 records in total.
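Worked example (hypothetical object with 3 million rows): the standard index threshold is 30% of 1,000,000 plus 15% of the remaining 2,000,000, i.e. 300,000 + 300,000 = 600,000 records, which is under the 1,000,000 cap. The custom index threshold is 10% of 1,000,000 plus 5% of 2,000,000, i.e. 100,000 + 100,000 = 200,000 records, under the 333,333 cap. So a filter is selective if it returns fewer than 600,000 rows (standard index) or fewer than 200,000 rows (custom index).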

28
Q

Can you add custom fields on AccountTeam or Oppty Team or Case Team?

A

Yes: custom fields, validation rules, and triggers can be added to OpportunityTeamMember and AccountTeamMember. Case Teams can't be customised.

29
Q

Does adding a skinny table increase search performance for global search?

A

No. Search and SOSL run against the search index on separate servers, so adding skinny tables or database indexes will not improve search performance.

30
Q

Can Partner/CCP users use Bulk API to load data into Salesforce?

A

No. Regardless of whether the “API Enabled” profile permission is granted, portal users (Customer Portal, Self-Service portal, and Partner Portal) can’t access Bulk API.

31
Q

Large data migration considerations

A

Bulk API 2.0, parallel loading, PK chunking, granular locking, defer sharing calculation, temporarily set OWD to Public Read/Write, create roll-up summary fields after the load, group child records by parent record ID to avoid record locking, sequence the data load, clean the data outside of Salesforce before loading, and switch automation off (triggers, validation rules, workflows).

https://ericshencta.atlassian.net/wiki/spaces/SCP/pages/15009096/Data+Migration
Do dry runs in a partial copy / full copy sandbox before attempting the migration in production.
If development is ongoing, it's better to request another full copy sandbox for the data migration.
Have the code ready to enable a bypass on automations, validation rules, etc.
Use the Bulk API with parallel mode to speed up the load where possible.
Order your load by loading parents first, children next.
Group data by parent record ID to avoid locks.
Enable the 'Set Audit Fields upon Record Creation' permission to retain values in Created Date / Created By.
Enable the 'Update Records with Inactive Owners' setting to allow records with inactive owners.
Be prepared to load users as inactive.

32
Q

Can Lead be a Master object?

A

No

33
Q

How to retrieve records updated in Salesforce during the last 24 hours?

A

SOAP getUpdated() if the volume is low
Data replication API
ETL tool with incremental extracts
Change Data Capture (CDC)

34
Q

SOAP vs Bulk API for data migration

A

1 The SOAP API avoids record locking on parent objects of master-detail relationships.
2 The Bulk API may cause it.
3 The Bulk API allows multiple attachments to be loaded from a single zip file.

35
Q

How Salesforce tackles GDPR compliance

A

Salesforce is the processor; the Salesforce customer is the data controller.
1. Consent management objects, and the Consent Capture packages, which have flows to respect customer choices such as the right to be forgotten; build additional flows and Apex triggers on the Individual object as needed.
1.1 The Consent Event stream receives notifications about changes to consent fields or contact information on core objects.
2. Platform Encryption for data security, Event Monitoring for data breach detection, Field Audit Trail for data retention.
2.1 Privacy Center.
3. Identify Lead and Contact fields and categorise them for GDPR compliance in the field configuration.
4. Data residency - links to the org strategy.
5. Marketing Cloud - delete contacts if they require it (via Marketing Cloud Connect); use CloudPages to capture consent.
6. Salesforce email: unsubscribe footer.

36
Q

BA understands that ensuring acceptable data health score is essential for their master data management strategy. Recommend a comprehensive data quality management plan.

A

BA's data quality management plan should cover profiling, cleansing, enriching, matching & merging, and monitoring.
BA must establish a data profiling cadence depending on the volumes and frequency of refreshes.
A composite data health score for each data source will serve as an anchor for downstream consolidation, dedupe, and curation activities.
Rules can also be created to standardize and cleanse incoming data by mapping it to a normalized target value (e.g., ISO country codes).
Use reference data sources to enrich internal data.
Leverage Salesforce duplicate management capabilities or a third-party AppExchange solution to establish point-of-entry controls for duplicate records.
When matching and merging, leverage third-party batch merge apps to consolidate attributes that meet or exceed an established matching threshold. This reduces the manual 'stare-and-compare' workload for data stewards.

37
Q

Currently system owners serve as data stewards for their domains. BA has been looking into establishing a central Data Governance body, but that initiative is not gaining a lot of traction. BA is evaluating alternative strategies to institute data governance to ensure consistent standards and rules are established. Also appropriate controls need to be in place to ensure adherence to these standards and rules.

A

Use agile data governance approach to assign stewardship function to business process owners.
Establish a decentralized model that is aligned with enterprise architecture to establish global and local standards and policies.
Embed data governance in the development and build teams. Decouple “system” owners as data stewards.
Ensure data standards, policies and procedures are documented and frequently evaluated for currency and relevance.
Implement data quality dashboards to expose data health as mapped to appropriate data owner/steward.

38
Q

Lightning Platform Query Optimizer

A

The Lightning Platform query optimizer helps the database system's optimizer produce effective execution plans for Salesforce queries and is a major factor in providing efficient data access in Salesforce.
The query optimizer generates the most efficient query plan based on:

Statistics
Indexes / Skinny Tables
Sharing

39
Q

Custom Index

A

The platform maintains indexes on the following fields for most objects.

RecordTypeId, Division, CreatedDate, SystemModstamp, Name, Email (for contacts and leads), foreign key relationships (lookups and master-detail), and the Salesforce record ID.

Salesforce also supports custom indexes on custom fields, except for multi-select picklists, text areas (long), text areas (rich), non-deterministic formula fields, and encrypted text fields.

External IDs cause an index to be created on that field. The query optimizer then considers those fields.

You can create External IDs only on the following fields.

Auto Number, Email, Number and Text

What can be custom indexed
Most standard fields and almost all custom fields can be custom indexed.
Simple formula fields can be custom indexed.
Boolean fields can be custom indexed on “True” or “False” value.
Null values can be included in a custom index (you need to explicitly request this when the index is created).

What cannot be custom indexed
Multi-select picklist fields cannot be custom indexed.
Non-deterministic formula fields cannot be custom indexed.
Cross-object spanning formula fields cannot be custom indexed.

40
Q

Overlapping Runs

A

Schedule plenty of time between your recurring jobs’ runs to ensure that they complete without overlapping. If there’s a risk that your jobs might overlap, have each job verify that the previous job finishes executing before it begins processing its data.
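A minimal Apex sketch of the 'check before you run' idea, reusing the hypothetical CaseArchiveBatch from the archiving card above:

// Only start the batch if no earlier run of the same class is still in flight.
Integer running = [
    SELECT COUNT()
    FROM AsyncApexJob
    WHERE JobType = 'BatchApex'
      AND ApexClass.Name = 'CaseArchiveBatch'
      AND Status IN ('Holding', 'Queued', 'Preparing', 'Processing')
];
if (running == 0) {
    Database.executeBatch(new CaseArchiveBatch());
}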

41
Q

Mitigating Lookup Skew

A

Reducing Record Save Time.

Distributing the skew.

Using a picklist field.

Reducing the load.

Set clear value in lookup field

42
Q

Performance Testing Strategy

A

Define the use cases and the strategy early in the process

Best Practices:
1 Full Copy Sandbox
2 Keep the data volume and data profile similar to Production
3 Consider UI, Integration and Data Loading use cases for performance
4 For integration, mimic the number of virtual users to be at least 50% of Production

43
Q

How to manually determine the chunking of a huge data load?

A

At extremely high volumes (hundreds of millions of records), defining these chunks by filtering on field values may not be practical. The number of rows returned may be higher than the selectivity threshold of Salesforce's query optimizer. The result could be a full table scan and slow performance, or even failure. Then you need to employ a different strategy.

1 Create or use an existing auto-number field, or use a number field that holds a unique value.
2 Create a formula field that converts the auto-number field's text value into a numeric value so you can use comparison operators; see the sketch below.
3 Place a custom index on this formula field.

Or use PK chunking.
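A hedged sketch of the idea, assuming a hypothetical custom object Invoice__c with an auto-number field Record_Number__c (format INV-0000000) and a numeric formula field Chunk_Key__c:

// Formula for Chunk_Key__c (Number): VALUE(MID(Record_Number__c, 5, 10))
// With a custom index on Chunk_Key__c, extract the data in fixed-size ranges,
// e.g. as the filter on a Bulk API query:
List<Invoice__c> chunk = [
    SELECT Id, Name
    FROM Invoice__c
    WHERE Chunk_Key__c > 0 AND Chunk_Key__c <= 250000
];
// Repeat with the next range (250,000 to 500,000, and so on) until all rows are extracted.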

44
Q

PK Chunking

A

PK chunking splits bulk queries on very large tables into chunks based on the record IDs of the queried records.

You can use PK Chunking with most standard objects. It’s supported for Account, Campaign, CampaignMember, Case, Contact, Lead, LoginHistory, Opportunity, Task, and User, as well as all custom objects. To enable the feature, specify the header Sforce-Enable-PKChunking on the job request for your Bulk API query.

To choose a chunk size, simply specify it in the header. For example, this header enables PK chunking with a chunk size of 50,000 records: Sforce-Enable-PKChunking: chunkSize=50000.
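A rough Apex illustration of creating a Bulk API (1.0) query job with the PK chunking header; the API version and chunk size are assumptions:

// Create the Bulk API query job with PK chunking enabled.
HttpRequest req = new HttpRequest();
req.setEndpoint(Url.getOrgDomainUrl().toExternalForm() + '/services/async/58.0/job');
req.setMethod('POST');
req.setHeader('X-SFDC-Session', UserInfo.getSessionId());
req.setHeader('Content-Type', 'application/xml');
req.setHeader('Sforce-Enable-PKChunking', 'chunkSize=50000');
req.setBody('<?xml version="1.0" encoding="UTF-8"?>'
    + '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
    + '<operation>query</operation><object>Account</object>'
    + '<contentType>CSV</contentType></jobInfo>');
HttpResponse res = new Http().send(req);
System.debug(res.getBody()); // Salesforce then creates one batch per PK chunk.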

45
Q

Dupe Rule and Matching Rule

A

You can use up to five active duplicate rules per object.

You can include up to three matching rules in each duplicate rule, with one matching rule per object.
46
Q

Use Lightning Data packages

A

Each Lightning Data package includes the following.

A custom object containing data from the data service
An external object used for updating and importing records
A data integration rule that identifies matches between the external object records and your Salesforce records

Set up from the user's 'Lightning Data Assignment' related list.

47
Q

Big object indexing

A

An index must include at least one custom field and can have up to five custom fields total.

All custom fields that are part of the index must be marked as required.

You can’t include Long Text Area and URL fields in the index.

The total number of characters across all text fields in an index can’t exceed 100.

After you’ve created the index, you can’t edit or delete it. To change the index, create another big object with a new index.

Design your index so that you assign the most frequently used field in a query filter to Index Position 1. The order in which you define the fields determines the order that they’re listed in the index.
48
Q

LDV Best Practices

A

https://ericshencta.atlassian.net/wiki/spaces/SCP/pages/15007840/LDV+Best+Practices
Index fields that are searched often or used in filters.
Use filters and smaller date ranges when fetching data.
Watch out for ownership or account data skew.
Use skinny tables for better performance on reports (though there is an overhead).
Use Analytics tools (like Tableau CRM / EA) for reporting on large data sets

Archival
Build an Archival Strategy.
Different objects may have different strategies, e.g. exception logs might be kept for 15 days and then deleted, whereas Cases might be kept for 3 years.
Identify the tools and rules involved.
Determine the frequency of viewing archived data.

49
Q

Division

A

Divisions work best in companies that have adopted a public data model, ideally the org wide sharing is public for most (if not all) objects.

Child records cannot cross Divisions – so in other words, you couldn’t have an Account in the “Global” Division with Opportunities, some of which are in the “U.S.” Division and others in the “U.K.” Division. Division is always inherited down from the parent record.

Divisions is not a Security feature. It is merely about minimizing the day-to-day “noise” across Divisions while still allowing for Org-wide visibility, effective reporting, and collaboration. If you’re concerned about security a more appropriate feature to consider is Territories which is deeply entrenched in Security and far more appropriate for a private data model.

Once Divisions is enabled, it cannot be disabled! Definitely try this in a sandbox (or, if you are on PE, in a Dev org) and test rigorously before pushing it into production.

50
Q

Consideration of archiving strategy?

A

1 Are there any regulatory restrictions that influence the archiving and purging plans?
2 Does the archived data need to be reported on or accessed from Salesforce in the future?
3 Is there an existing data warehouse (DWH)?

51
Q

Best practice using SOSL

A

1 keep searches specific and avoid wildcards
2 Use Find in ‘All fields’ for faster searches
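A short SOSL sketch of a specific, non-wildcard search; the objects and fields returned are arbitrary choices:

// Specific search term, returning only the objects and fields actually needed.
List<List<SObject>> results = [
    FIND 'Acme Ltd' IN ALL FIELDS
    RETURNING Account(Id, Name), Contact(Id, Name, Email)
];
List<Account> accounts = (List<Account>) results[0];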

52
Q

Data monetization?

A

Data monetization is the act of measuring the economic benefit of corporate data

53
Q

Data obfuscation?

A

The process of replacing original data with modified content, for example to protect PII.

Two techniques: pseudonymization and anonymization. The latter means the data can no longer be identified, whilst the former can still identify the source data indirectly.

54
Q

LDV Impact and Risk

A

Slow record CRUD operation
Slow down search
Slow down SOQL and SOSL queries
Slow Down list views, reports and dashboards
Impacts the data integration interfaces, performance of SF APIs
Longer to calculate sharing records
Higher chance of hitting governor limits
Slow down full data sandbox refresh

55
Q

LDV mitigation tools

A

Data consumption analysis - Salesforce calculates database statistics nightly
Query Optimiser
Bulk API
Batch Apex
Deferred Sharing

56
Q

What helps the optimizer make decisions?

A

Entity visibility statistics.
These come in handy when deciding how to join the sharing tables. The less visibility you have into an entity, the higher the chance of leading from the sharing tables; the more visibility you have, the better the chance of joining to the main entity first and eliminating rows before bringing in the sharing join.

Entity-level statistics.
These tell us how much data there is for a given org in a given entity; it's the equivalent of table-level statistics in Oracle (only at the org level). Most of the optimizer's calculations revolve around this number.

Custom index level statistics.
It's very important to get this one right: usage of a custom index depends on this number, and if it's wrong nothing else matters; the optimizer can come up with really bad plans.

Foreign key lookup statistics.
These are statistics on fields that are marked as a lookup to another entity. For example, a custom field Account_Id__c on Contact can be a foreign key lookup to the Account entity. These numbers are then effective when doing join operations between the two entities or when filtering on the lookup field from the Contact entity.
E.g.
SELECT Id FROM Contact WHERE Account_Id__c = :someAccountId

57
Q

SOAP Data Replication API Steps

A

1 Optionally, determine whether the structure of the object has changed since the last replication request by calling describeSObjects().
2 Call getUpdated(), passing in the object and the time span for which to retrieve data.
3 For each ID element in the returned array, call retrieve() to obtain the latest information you want from the associated object.
4 Call getDeleted(), passing in the object and the time span for which to retrieve data. Like getUpdated(), getDeleted() retrieves the IDs for data to which the logged-in user has access; data outside the user's sharing model is not returned.
5 Optionally, save the request time spans for future reference. You can do this with the latestDateCovered value returned by getDeleted() or getUpdated().

58
Q

SOAP Data Replication Consideration

A

Client applications should save the timespan used in previous data replication API calls so that the application knows the last time period for which data replication was successfully completed.
To ensure data integrity on the local copy of the data, a client application needs to capture all of the relevant changes during polling—even if it requires processing data redundantly to ensure that there are no gaps. Your client application can contain business logic to skip processing objects that have already been integrated into your local data.
Gaps can also occur if the client application somehow fails to poll the data as expected (for example, due to a hardware crash or network connection failure). Your client application can contain business logic that determines the last successful replication and polls for the next consecutive timespan.
If for any reason the local data is compromised, your client application might also provide business logic for rebuilding the local data from scratch.

59
Q

How to delete records in Big Object?

A

There's no record ID to delete by, so you query on the index fields and call Database.deleteImmediate():

// Select the rows to remove by filtering on the big object's index fields.
List<Customer_Interaction__b> cBO = new List<Customer_Interaction__b>();
cBO.addAll([SELECT Account__c, Game_Platform__c, Play_Date__c
            FROM Customer_Interaction__b
            WHERE Account__c = '001d000000Ky3xIAB']);

Database.deleteImmediate(cBO);

60
Q

Big Object Index

A

An index must include at least one custom field and can have up to five custom fields total.

All custom fields that are part of the index must be marked as required.

You can’t include Long Text Area and URL fields in the index.

The total number of characters across all text fields in an index can’t exceed 100.

NOTE Email fields are 80 characters. Phone fields are 40 characters. Keep these lengths in mind when designing your index because they count toward the 100 character limit.

After you’ve created the index, you can’t edit or delete it. To change the index, create another big object with a new index.

Design your index so that you assign the most frequently used field in a query filter to Index Position 1. The order in which you define the fields determines the order that they’re listed in the index.
61
Q

Data Feedback from Datto

A

SF Survey: if listed, name more of the objects involved. It's OK to solution GetFeedback.
Don't mention roll-up summary fields in data migration (they're compute-heavy).
Data model: use more business terms to describe the story.
LDV: never archive master data like Account/Contact; only archive transactional data.
Overview: state the business challenges.
Landscape -> existing systems, decommissioned systems (what data is inside; mention we'll talk about how to load it later).
Data migration process: score the data extracted from the source systems; for low-scored data, ask the data steward to decide how to proceed, etc.

62
Q

Feedback from Brett

A

LDV solutions: tie back to requirements when recommending custom indexes or skinny tables (most likely driven by reporting requirements).
Mention the consideration of OwnBackup (or a similar backup tool) as part of the archiving strategy as well.

Data migration: talk about the failover strategy and the impact on source systems if duplicates exist. Just show the considerations; you don't really need to solve how to de-dupe the source.

Data model: align with the requirements.

Data structure for multi-org: talk about data governance, COE, CI/CD, branching, packaging.

Video: never save it in Salesforce.

When making assumptions, think about real-life experience for the customer. Treat the CTA exam as a simulation of a real customer presentation to provide the best recommendations and considerations.

63
Q

Contract ownership

A

Better to align contract ownership with the Account owner.
When the account owner is changed:

Contracts with a status of Draft or In Approval are transferred automatically. The new owner has read-only access to contracts with a status of Activated.
Orders with a status of Draft, with or without a transferring contract, are transferred automatically. The new owner has read-only access to orders with a status of Activated.