80 Questions Flashcards
Why would you create SQL in cal views
To implement custom logic
Which type of join supports a temporal condition in a cal view
Inner join
What can you do with shared hierarchies
Enable SQL SELECT statements to access hierarchies, provide reusable hierarchies for drilldown in a cube without a star join
What options do you have to handle orphan nodes in your hierarchies
Generate additional root nodes, assign them to a level below the root
Which privileges would a user require to view US data when querying the cube cal view?
A SELECT privilege on the cube cal view and an analytic privilege (Country = US) on the dim cal view
What do you use in the definition of a dynamic SQL analytic privilege
A procedure that returns the data access condition as an SQL expression
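A minimal sketch of such a procedure (all object and column names are hypothetical, not from the source; it assumes exactly one matching row per user):
PROCEDURE "get_country_filter" ( OUT out_filter NVARCHAR(256) )
LANGUAGE SQLSCRIPT SQL SECURITY DEFINER READS SQL DATA AS
BEGIN
  -- Return the data access condition as an SQL expression string,
  -- derived here from a hypothetical authorization table keyed by the session user.
  SELECT '"COUNTRY" = ''' || "COUNTRY" || ''''
    INTO out_filter
    FROM "AUTH_COUNTRY"
   WHERE "USER_NAME" = SESSION_USER;
END;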
Which tool generates and executes the SQL for a specific node of your cal view
Debug query mode
You are managing your source files using Git. In which sequence does your file progress towards a commit?
Working directory->staging area->local git repository
You want to create a star schema using a cal view. The measures are based on columns from two transaction tables. Dimension cal views provide the attributes. What is the correct approach
Combine the transaction tables using a join node in a cal view of type Cube with star join. Use a star join node to join the dim to the fact table.
What are some best practices when developing cal views
Include all data flow logic within one cal view, avoid defining joins on cal columns
In your cal view, you want to consume a custom data source defined using SQLScript. In which type of object do you write your code?
Table function
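A minimal table function sketch as such a data source (object and column names are hypothetical):
FUNCTION "sales_by_region" ( )
RETURNS TABLE ( "REGION" NVARCHAR(20), "TOTAL_SALES" DECIMAL(15,2) )
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
  -- Read-only logic that the graphical cal view consumes as a data source
  RETURN SELECT "REGION", SUM("AMOUNT") AS "TOTAL_SALES"
           FROM "SALES"
          GROUP BY "REGION";
END;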
What are some of the typical roles in an SAP HANA Cloud implementation
Data architect, Modeler
In a cal view, the table function node executes a table function that requires input parameters. How can you fill the input parameters of the table function
Define constant values, map columns from lower nodes, and create and map an input parameter.
In a DB module, what is the purpose of the .hdiconfig file
To specify which HDI plug-ins are available
You are deploying a new cal view, A, that uses cal view B. When you preview cal view A, the account number is not masked. What could be the reason
You didn’t define masking in cal view A
What are the limitations of using a full outer join in a star join node
It must appear as the last dimension in the star join node; it is restricted to one dimension in a star join node
You combine two tables in a join node using multiple columns in each table. Why do you enable the dynamic join option
To ensure that join execution only uses the joined columns requested in the query, to allow data analysis at different levels of granularity with the same cal view
You create a table function to remove historic records, sum the current total weekly working hours for each employee, and update the personnel table with the results. The deployment of the table function fails. Which of the following could be a valid reason?
Your function includes a truncate statement
Why would you choose an HDI shared service plan instead of a schema service plan
You want to use BAS, you want to use containers to isolate objects, and You want to create DB objects using source files
You want to ensure that a cal view does not give unexpected results for a query that is based on any combination of columns. What is the recommended approach for verifying the results?
Write and execute a custom SQL query in the SQL console, Select data preview for the cal view
You have configured static cache for your cal view and run a query against it, but the cache results are not being used. What might be the reason for this?
You did not define any columns in the cache setting
At which levels of a project structure can you execute a deploy operation
Entire workspace, sub folder of a database module
You have imported a new cal view in a folder that contains an .hdinamespace file. This cal view consumes one data source, which is a table. When trying to deploy the cal view, the deployment fails with a namespace-related issue. What could be the reason?
The namespace used within the cal view to reference the table is different from the actual namespace in the identifier of this table, The imported cal view and its data source have different namespaces
What is generated when you deploy a cube cal view design time file
Cached results to improve read performance, metadata to enable consumption by external tools
Why would you enable Debug Query mode in a cal view
To identify data sources that are not accessed by a query
You define a hierarchy in a cal view. you want to expose the hierarchy to SQL. Which of the following conditions must be met
The hierarchy must be exposed by a cal view type Cube with star join, the hierarchy must be a shared hierarchy
Why does SAP issue warnings about the use of imperative or procedural SQLScript statements
They introduce potential security risks
Which components are part of SAP HANA Cloud
Data lake, SAP Hana DB
What are some of the restrictions that apply when defining a parallelization block in a calculation view
Only one block can be defined across a stack of cal views, the block must start with a node that defines a table as a data source
What is a restricted measure
A measure that is filtered by one or more attribute values
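Conceptually, a restricted measure behaves like this SQL (table and column names are illustrative only):
SELECT SUM( CASE WHEN "COUNTRY" = 'US' THEN "SALES" END ) AS "SALES_US"
  FROM "SALES_ITEMS";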
Which calendar types can be selected when creating time-based dimensions?
Fiscal and Gregorian
You implement a referential join between Table A and Table B, but when you query the calculation view, Table B is not pruned. What could be the reason?
The join cardinality is set to :1, and the integrity constraint is set to Right
You have imported cal views from SAP HANA on-premise to SAP HANA Cloud. Why should you switch cal column expressions from column engine to SQL?
To benefit from additional SQL optimizations
In a cal view, why would you implement an SQL expression.
To define a filter, to generate a restricted column and to generate a cal column.
You create a user-provided service to access tables in external schemas. In which file type do you assign the user-provided service to your DB module?
yaml
Your calculation view consumes one data source, which includes the following columns: SALES_ORDER_ID, PRODUCT_ID, QUANTITY, and PRICE. In the output, you want to see data summarized by PRODUCT_ID and a cal column PRODUCT_TOTAL with the formula QUANTITY * PRICE. In which type of node do you define the calculation to display the correct result?
Projection
You want to join two tables in a cal view. Why do you use a non-equi join?
The join condition is not represented by matching values
What are possible consequences of improper unfolding
SQL compilation time increases, count distinct results are incorrect
Which of the following are standard options provided to define analytic privileges
SQL expression, dynamic and Attributes
You have generated a calculation view properties file. What does it contain?
Description of all objects defined in a calculation view
In BAS, you rename a Dimension cal view that is used by a cube cal view. You do not use the option to rename the runtime view and adjust the reference. Afterward, you perform the following deploy operations: Deploy the dimension cal view as a single object. Deploy the entire SAP Hana DB module. What is the outcome of the deploy operations
The first deployment is successful. The second deployment is successful.
Which database features are typically not required by analytical applications that run on SAP Hana cloud?
Pre-calculated aggregates, indexes
You combine two customer master data tables with a union node in a cal view. Both master data tables include the same customer name. How do you ensure that each customer name appears only once in the results
Add an intersect node above the union node
Why do you use the hidden columns checkbox in the semantic node of your cal view
To ensure specific columns are not exposed to the reporting tool, To remove a column that is also used as a label column
What is the SQL keyword used to process input parameters defined in a cal view
Placeholder
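For example, passing a value to an input parameter at query time (view and parameter names are hypothetical):
SELECT "PRODUCT_ID", SUM("SALES") AS "SALES"
  FROM "MY_CUBE_VIEW" ( PLACEHOLDER."$$IP_TARGET_CURRENCY$$" => 'USD' )
 GROUP BY "PRODUCT_ID";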
Two calculation views, A and B, are defined. Cal view A: analytic privilege 1 -> Product = P1. Cal view B: analytic privilege 2 -> Country = US or GE, Product = P2; analytic privilege 1 -> Country = US. When you preview cal view A, what data do you see?
US for P1 and GE for P1
In a calculation view, why would you choose the DEPRECATED setting
To ensure it is not exposed to reporting tools for consumption, To warn developers that the cal view is no longer supported
You have defined a pruning configuration table in a cal view. What are you attempting to prune from the query execution
Filters
Which of the following data sources can you include in a graphical calculation view
Table function and row table
Why would you set the ignore multiple outputs for filters property in cal views
To force filters to apply at the lowest node
Why would you use the SQL analyzer
To display the execution time of a cal view, to preview data at the node level of a cal view.
In a cal column, what is the purpose of a variable
To provide a dynamic value in the cal column
A new version of SAP HANA Cloud, SAP HANA DB is available from today. If you do not perform the upgrade manually, how much time do you have before your database will be automatically upgraded to the next version?
3 months
Which project structure object corresponds to a unique HDI container
SRC folder
You created a procedure to be consumed in an analytic privilege of the type DYNAMIC but it is not working as expected. What could be the reason?
No input parameter is defined, you defined more than one output parameter
What can you identify using Performance Analysis mode
joins that are defined on calculated columns, and information about join cardinality
You deleted the design time file of a cal view in your HDB module. What is the recommended way to ensure the corresponding runtime object is also removed from the database?
Deploy the project that contained the deleted design time file
Why would you use parameter mapping in a cal view
To pass variable values to external value help views, to push down filters to the lowest level cal views
Why might you use the keep flag property in an aggregation node
to include columns that are not requested by a query but are essential for the correct result
In SAP Hana Cloud, which tasks are handled by the cloud provider
Tuning the DB to run optimally on the underlying operating system and hardware, backing up the operating system and the DB software, Installing, configuring and upgrading the operating system
Why would you create cal view of data category dimension with type time
To provide additional time related navigation possibilities
Why would you use the transparent filter property in a cal view
To allow filter pushdown in stacked cal view
Which of the following approaches might improve the performance of joins in a cube cal view
Specify the join cardinality, limit the number of joined columns.
What are the key steps to implement currency conversion in a cal view?
Assign semantic types and enable the measure for conversion
Choose client, source, and target currencies
Choose conversion date and rate type
When is the first column store compression executed
When a delta merge is triggered
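A delta merge can also be triggered manually with SQL (table name is hypothetical):
MERGE DELTA OF "SALES_ITEMS";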
Why would an SQL developer work with SQLScript
To pass parameters from cal views, to exploit additional data types, to implement conditional logic
What is the recommended tool for developing cloud foundry applications
SAP Hana Cloud Central
You set the Null Handling property for an attribute but do not set a default value. What is displayed when null values are found in a column of data type NVARCHAR
Empty string
What are the consequences of not executing a delta merge
Read performance decreases, new records are not read.
Why would you partition a table in an SAP Hana Cloud DB
To overcome the 2 billion record limit
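For example, hash partitioning keeps each partition under the 2 billion record limit (object and column names are illustrative):
CREATE COLUMN TABLE "SALES_ITEMS" (
  "ID"     BIGINT,
  "REGION" NVARCHAR(2),
  "AMOUNT" DECIMAL(15,2)
) PARTITION BY HASH ("ID") PARTITIONS 4;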
You created a table and inserted data into it using SQL statements inside the SAP HANA Deployment Infrastructure (HDI) container of your project. You add this table as a data source to a cal view and try to deploy it. What do you observe in the SAP HANA DB container?
The deployment fails and the table is not dropped
What are some of the typical tasks performed by the SAP Hana Cloud modeler role
Create graph workspaces and develop cal views
Which of the following techniques can you use to improve the performance of the cal views
partition large tables and limit the number of stacked cal views
Using the table in the diagram, you need to create a cube cal view. What is the simplest approach to create the output shown in the screenshot?
Table A (Country, Value): France 100; Germany 100; UK 200
Output A (one row, columns France, Germany, UK): 100, 100, 200
Create a restricted column for each country
A cal view consumes the data sources shown in the graphic. You want to identify which companies sold products in both January and February. What is the optimal way to do this?
Sales Prediction Jan: 001 (X), 002 (Y), 003, 005
Sales Prediction February: 001 (X), 002 (Y), 004
use an intersect node
Which solutions form the SAP BTP platform
Analytics and Application Development & Integration
A cal view includes a rank node that uses the source data and settings shown in the graphic. How many rows are in the output of the query?
9
You want to map an input parameter of cal view A to an input parameter of Cal view B using the parameter mapping feature in cal view editor. However, the input parameters of cal view B are not proposed as source parameters. what might be the reason for this?
You already mapped the input parameters in another cal view
What are some best practices for writing SQLScript for use with cal views
Break up large statements by using variables and choose declarative language instead of imperative language
What are the advantages of column store tables compared to row store tables in general terms
Higher data compression rates, improved parallel access, and higher performance for query operations.
What is IoT
Devices become more intelligent by having CPUs and Internet connectivity built in.
What are the opportunities for innovation in the digital world?
IoT
Increased data volumes
Data Science
New Data Types
Increase in mobile devices
The move to cloud platforms
What are the IT challenges on any organisation at the moment?
- Massive increase in data volume
- Legacy applications are difficult to enhance
- Users expect innovative apps with great performance
- IT landscapes have grown too complex
What is the key driver in the intelligent enterprise?
Data
What is the Intelligent Enterprise made of?
- Business Processes
- Applications
- Technology
- Infrastructure
What is SAP BTP comprised of?
- Database & Data Management
- Analytics
- Application Development & Integration
- Intelligent Technologies
What is SAP HANA Cloud
It is a fully managed, in-memory, cloud database as a service (DBaaS)
Can SAP Hana Cloud access live data remotely in real time from any system?
True
What are the key services of SAP Hana cloud?
SAP Hana DB and Data Lake
What are the two most important aspects of SAP Hana Elasticity:
compute (CPU) and data storage
What is a dimension
A grouping of related attributes; analysing the measures is easier if you group attributes together by dimension.
An example of a dimension with associated attributes is
Sales Organisation->Country->Region
A star schema consists of
one fact table that references one or several dimension tables.
A hierarchy is
a structured representation of an organisation, a list of products, the time dimension, and so on, by levels
The term semantics is sometimes used to describe what a piece of data means, or relates to. A piece of numeric data that you report can be of different types, here are some examples:
- A monetary value: the total amount of sales orders
- A number of items: a number of sales orders, or a number of calls to support services
- A weight, volume, distance or a compound of these measures
- A percentage
The Default data category is never exposed to client tools. Are these views meant to be exposed to users?
False
Can the default views be used to build other views?
True
Does the default view appear as blank in DS?
True
In a Cube with star join: where do the private objects come from?
Join of fact tables
In Cube star join: where do the shared objects come from
Join of dimension tables
In how many ways can you determine what type of table an SAP HANA table is
3
What are those ways to determine the SAP HANA table type
- System/View catalog -> check the table icon. Open the table definition
- Within a node in an information view consuming the table
- From the SQL console – system view M_TABLES:
SELECT SCHEMA_NAME, TABLE_NAME, TABLE_TYPE FROM M_TABLES
WHERE SCHEMA_NAME = 'XXXX'
AND TABLE_NAME = 'XXXX'
TUFs are used when there is a need for
“if…then…else” logic or loops (for, while)
What are the SAP Hana Engines
- Join engine
- OLAP engine
- Calculation engine
What are the two main steps to optimise the cal view when it is queried
- The cal engine generates a single SQL statement, then it is passed to the SQL optimiser, and
- The SQL optimiser adds additional optimisations and delegates operations to the best DB execution operator
Standard data preview features In SAP Web IDE, there are two tabs,
Raw data: displays all data
Analysis: Selected attributes and measures in tables or graphs
Setting a filter in SAP Web IDE is performed under
Tools -> Preferences -> Data Preview
When it is on, the data preview is not executed immediately (Deferred Default Query Execution).
When do you use Deferred Default Query Execution?
When the user wants to apply additional criteria
When the user wants to execute a custom query derived from the standard data preview
What is an inner join
The Inner Join is the most basic of the join types. It returns rows when there is at least one match on both sides of the join.
Where is the Full Outer Join supported?
In calculation views only, in the standard Join and Star Join nodes.
What is a Full Outer Join
It combines the behaviours of the Left and Right Outer Joins.
The result set is composed of the following rows:
* Rows from both tables that match on joined columns
* Rows from the left table with no match in the right table
* Rows from the right table with no match in the left table
The type of joins between the fact and dimension tables within the star schema can be defined in the Star Join node. The available joins are
1 Referential Join
2 Inner Join
3 Left Outer Join
4 Right Outer Join
5 Full Outer Join, with some specific restrictions (see above)
6 Text Join
What is the SAP Hana specific joins
- Referential join
- Text join
- Temporal join
- Star join
- Spatial join
How does the full outer join behave?
It combines the behaviours of the Left and Right Outer Joins:
* Rows from both tables that match on joined columns
* Rows from the left table with no match in the right table
* Rows from the right table with no match in the left table
Where is the Full outer join defined
In the standard and star join nodes.
In a star join node, a full outer join can be defined on one dimension cal view, and this view must appear last in what?
The star join node
A referential join is semantically an inner join that assumes that referential integrity is given,
meaning that the left table always has a matching entry in the right table
A referential join is performed only
if at least one field from the right table is requested
If the cardinality is 1..1 or n..1, would the join be executed?
No, the join is not executed
Text joins act as Left Outer joins and can be used with SAP tables only if
The language column (SPRAS field) is present.
Temporal joins are only supported in the star join of calculation views of the type cube with star join. What should the join be defined as?
The join must be defined as inner
Temporal conditions can be defined on columns of the following data types: 3
- Timestamp
- Date
- Integer
The star join in calculation views of the type Cube with star join is a node type, rather than a join type.
True
What is Join cardinality
The cardinality defines how data from two tables are related
When does Multi-Join Priority Affect the Join Node Results
- When all joins are Inner Joins, the result set is generally the same regardless of the join execution order.
- With a mix of Inner and Left Outer Joins, the result set can vary based on the join execution order.
What is non equi join
SAP HANA cloud provides a type of join, called Non-Equi Join, where the join condition is not represented by an = (equal) operator
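An illustrative non-equi join in SQL (tables and columns are hypothetical):
SELECT o."ORDER_ID", d."DISCOUNT_PCT"
  FROM "ORDERS" AS o
 INNER JOIN "DISCOUNTS" AS d
    ON o."ORDER_VALUE" BETWEEN d."MIN_VALUE" AND d."MAX_VALUE";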
Non equi join can be defined for the following types of joins
- Inner
- Left Outer
- Right Outer
- Full Outer
What columns are involved in the Dynamic Join
Only the joined columns requested in the query are brought into context and play a part in the join execution
What happens in a Dynamic Join when none of the joined columns are requested by the client query
You get a query runtime error
Can a union node be used to combine data sets with non-matching structures
Yes
How many union approaches exist
Two: standard, and with constant values
What is standard union:
a standard union is where, for each target column in the union, there is always one source column that is mapped
What is union with constant value
A union with constant values is where you provide a fixed value that is used to fill a target column where the data source cannot provide a mapped column.
What is the purpose of an aggregation node
The purpose of an Aggregation node is to apply aggregate functions to measures based on one or several attributes
What are the aggregated functions used in the graphical calculation views?
SUM (the default function), MIN, MAX, and COUNT.
What are the additional aggregate functions in SAP HANA Cloud that can be applied to the calculation views:
- Average
- Variance
- Standard deviation
- Median
In an Aggregation node, a calculated column is always computed AFTER the aggregate functions.
True
What are node features helping to control the aggregation nodes
- Keep Flag
- Transparent Filter
Setting the Keep Flag property to true for a source column forces
the calculation to be triggered at the relevant level of granularity
Setting the transparent filter is needed when:
- Using stacked views where the lower views have distinct count measures
- Queries executed on the upper calculation view contain filters on columns that are not projected
What is the purpose of rank node
The purpose of the Rank node is to enable the selection, within a data set, of the top or bottom 1, 2, … n values for a defined measure, and to output these measures together with the corresponding attributes and, if needed, other measures.
When there is a need to combine measures from two tables, there is the tendency to create a join. This practice is very expensive. It is more beneficial to use a union node.
A union is not a join.
What are the Calculated columns?
- The calculation can be arithmetic or just character string manipulation
- Cal columns also support non-measure attributes as part of the calculation
- It is possible to nest cal columns so that one cal column can be used in other cal columns
The purpose of time-based dimension cal views
is to ease the manipulation of measures across time.
What are the calendars available for time cal views
- Gregorian: This is made up of years, months, and days.
- Fiscal: this is organised into fiscal years and fiscal periods
What is the name of the table populated for the Gregorian calendar?
M_TIME_DIMENSION
What is the name of the table populated for the fiscal calendar
M_FISCAL_CALENDAR
Which schema holds the tables M_TIME_DIMENSION and M_FISCAL_CALENDAR
_SYS_BI
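For example, the generated Gregorian calendar data can be queried directly (column names may vary by release):
SELECT "DATE_SQL", "YEAR", "QUARTER", "MONTH"
  FROM "_SYS_BI"."M_TIME_DIMENSION"
 WHERE "YEAR" = '2024';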
Can a table be used several times in a cal view
True
An alias name is generated when adding the same table to a view.
True
Can the Alias name be modified
Yes, in Properties -> Alias
Aggregating data in a dimension calculation view would be like a “Select Distinct Statement”
In a cal view, measures can originate from several data sources, such as:
- Several tables
- information views
- Table functions
When a reporting tool requests data from a cal view, the speed of returning data depends on
- Selected attributes and measures
- Aggregate functions applied by the client query
- Ordering defined on one or several columns
Overview of the possible node types:
- Projection: To filter data or obtain a subset of required columns from a data source
- Aggregation: To summarize measures by grouping them together by attribute column values
- Join: To query data from two or more data sources
- Union: To combine the data from two data sources
- Star join: To join attributes in the very last node of a Cube with star join calculation view
- Rank: To order the data for a set of partition columns and select only the top 3/4/…/n elements
Can a dynamic join be used on one attribute only
No, it must be defined on several attributes
Would you receive an error message if the calling query does not request a joined column (the dynamic join is not activated)?
True
The star join is always deployed with an aggregation node on top of it.
Can the star join in cal views support the referential join type?
True
When can the union node be utilised
When the multiple result sets have identical structures
Can the union node be utilised with more than two identical structures
True
Union nodes can be of two types
- A standard union
- A union with constant values
What is a standard union
For each target object, there is one source column that is mapped.
Does a standard union have to provide a source column for all fields
False
What is Union with constant values.
This is similar to multiprovider in BW
What is a union with constant values
This is when the system provides a constant value for a column with no datasource value.
When can the union with constant values be utilised
This will depend on the data and the way the users want to report.
When can Auto Map be used?
When the column names of the data sources are similar
Selecting a data source first before triggering Auto Map by Name is useful when one or several other data sources have a lot of columns that you do not want to include in the output
True
When none of the datasources can provide a value, the constant value is achieved by:
- First creating a column in the target data set
- Assigning a constant value for one datasource and another value for another datasource. These values populate the target column created in the first step. This column can be used for aggregating, filtering, and so on.
How do you set up the constant value
Right-click the target column you created, choose Manage Mappings, and set the constant value.
Empty Union Behaviour: this is the case when a data source returns no rows to the union. To alter the behaviour, this is the process:
- Add a constant to the union output
- Provide a constant value for each datasource with a suitable value that helps you identify each source
- Change the property Empty Union Behaviour to “row with constant”
Is the empty union behaviour set for each data source
True
What are the new Intersect and Minus nodes?
These are set operation nodes: Intersect returns only the rows common to all data sources; Minus returns the rows of the first data source that do not appear in the second.
What is the purpose of the aggregation node
This is needed for applying aggregate functions to measures based on one or several attributes.
To which SQL construct can the aggregation node be compared
GROUP BY clause of SQL
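For comparison, the SQL equivalent of an aggregation node (table and column names are illustrative):
SELECT "PRODUCT_ID", SUM("QUANTITY") AS "TOTAL_QUANTITY"
  FROM "SALES_ITEMS"
 GROUP BY "PRODUCT_ID";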
What functions can the aggregate node support in cal views
SUM, MIN, MAX, and COUNT
What are the other additional functions supported by the aggregation node
- Average
- Variance
- Standard Deviation
When is a cal column processed in an aggregation node
After the aggregate functions are applied
If there is a need to process a cal column before aggregation, where should the cal column be defined?
In a projection node before the aggregation node
What features can control the aggregation node?
Keep flag and Transparent filter.
What is the effect of setting a column's Keep Flag = true
This forces the calculation to be performed at the lowest level of granularity.
Setting the transparent filter = true is important in the following cases:
- When using stacked views where the lower views have distinct count measures
- When queries executed on the upper cal view contain filters on columns that are not projected
What are the steps for generating a rank node in a cal view:
- Partition by column – Country
- Sort direction – order by “measure”, and threshold
- Generate rank column – “label the column to be produced” (aggregation function, target value and result set direction)
- Rank column is generated.
- Each row will be related to an individual value
The source data set can be partitioned by one or several columns
True
If you choose the Dynamic Partition Element, the columns listed in the Partition will be ignored if they are not requested by an upper node or top query
True
What is dynamic partitioning element
Define whether the partition can be adjusted automatically based on the columns that are selected by an upper node or an upper view/query that you execute on top of the current one.
What are the Rank node Aggregation Functions computing the row numbers?
- Row
- Rank
- Dense Rank
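These correspond to the standard SQL window functions (illustrative query; names are hypothetical):
SELECT "COUNTRY", "SALES",
       ROW_NUMBER() OVER (PARTITION BY "COUNTRY" ORDER BY "SALES" DESC) AS "ROW_NUM",
       RANK()       OVER (PARTITION BY "COUNTRY" ORDER BY "SALES" DESC) AS "RANK_NO",
       DENSE_RANK() OVER (PARTITION BY "COUNTRY" ORDER BY "SALES" DESC) AS "DENSE_RANK_NO"
  FROM "SALES_ITEMS";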
Does the sum function operate within the sorted columns
True
What is data lineage for
It is used to track the origin of a column along the calculation scenario, down to the first node where it appears.
The calculation engine pre-optimizes queries before they are worked on by the SQL optimizer.
True
The calculation engine considers(3)
settings in cal views such as dynamic join,
join cardinality and
union node pruning.
What happens during the instantiation process
the query and the original calculation model are combined to build the optimized, execution calculation model.
What can be done to improve cal view performance
- Break down large models into individual calculation views so that they are easier to understand and also allow you to define reusable components, thus avoiding redundancy and duplicated maintenance as a consequence.
- Try to develop calculation views in layers so that each layer consumes the lower layers.
What is unfolding
At run time, your calculation view is analyzed by the SQL processor. A key aim is to convert each calculation view operation into a corresponding SQL operator, so that the complete data flow graph becomes a single SQL statement.
A completely unfolded query accesses only tables
True
If a column view is used as a data source instead of a table, then you know this is not an unfolded query as the result has been materialized as an interim view and not read directly from a table.
True
Switch Calculation View Expressions to SQL.
True
Why is SQL language preferred to column engine
Column Engine language limits query unfolding.
Column engine language is not available in SAP HANA Cloud
True
Try to avoid filters on cal columns
True
Calculate as high as you can in the stack so that you avoid calculating on granular data and instead calculate on aggregated data.
True
There are many reasons for table partitioning, as follows(LbPaPrO)
Load balancing in a distributed system.
Parallelization.
Pruning to improve query performance.
Overcoming the size of column store tables
Cache can only be used for calculation views that do not check for analytic privileges
True
In order to use the calculation view static cache there are some important prerequisites that must be met:
- Enable cache is selected
- Cal view can be unfolded
- No granularity tracking that prevents cache use
What feature can be used with the SQL Analyzer to check if unfolding can occur
Explain plan
The parallelization block always starts with a node that includes a data source that must be a table.
True
You can use a table defined in the local container or a synonym which points to an external table. You cannot use another calculation view as the data source or a table function.
True
What are the prerequisites for join pruning
- No field is requested from the table to be pruned.
- The join type is outer, referential, or text.
- The join cardinality is either "..1" for the table to be pruned, or only measures with count distinct aggregation (or no measures at all) are requested.
When the Optimize Join Columns option is active, pruning of join columns between two data sources, A and B, occurs when all four following conditions are met:
- The join type is Outer, Referential, or Text (actually, the calculation view cannot be built if join type is Inner).
- Only columns from one join partner, A, are requested.
- The join column from A is NOT requested by the query.
- The cardinality on B side (the side of the join partner from which no column is requested) is :1.
There are three approaches to implementing pruning rules in unions:
- Two of these approaches are based on column values and
- One is based on column names.
What is explicit pruning
Explicit pruning helps performance by avoiding access to data sources that are not needed by the query.
What is implicit pruning?
Implicit pruning is implemented by defining one or more pruning rules in a dedicated table called the Pruning Configuration Table.
Where is the Performance Analysis Mode activated
From within the Cal view
What are the artefacts included in the Performance Analysis:
- The type of tables used (row or column)
- Join types used (inner, outer, and so on)
- Join cardinality of data sources selected (n:m and so on)
- Whether tables are partitioned and also how the partitions are defined and how many records each partition contains
The Performance Analysis Mode produces the following warnings:
- The size of tables with clear warnings highlighting the very large tables
- Missing cardinalities or cardinalities that do not align with the current data set
- Joins based on calculated columns.
- Restricted columns based on calculated columns.
What is the primary objective of join pruning
To improve the performance of a calculation view
What are union pruning approaches?
- Column based pruning
- Explicit pruning using constants
- Implicit pruning with a configuration table
It is best practice to avoid stacking calculation views and instead, try to include all logic within one calculation view.
True
What is the benefit of configuring static cache
Improve calculation view performance
How many parallelization blocks can you have in a calculation view
1
To work with calculation view Debug Query Mode, you first need to deploy the calculation view?
True
The following features are available for each node in the cal views(6)
Switching Node Types
Replacing a Data Source
Extract Semantics
Propagate to Semantics
Previewing the Output of Intermediate Nodes
Map Input Parameters between nodes
Two features are available in the Business Application Studio to analyze modeling content within a project. These are
- Data lineage
- Impact analysis
What is the purpose of impact analysis:
The purpose of impact analysis is to show all the chain of calculation views that depend on a given calculation view
The Where-Used feature supports the following objects
- Input Parameters
- Calculated Columns
- Restricted Columns
What is BAS – work space?
A workspace in SAP Business Application Studio is a file structure where you can work on one or several projects. Each user has their own workspace, and can create additional ones if needed
Main Database Artifacts Defined in a HDB Module
- Tables
- Calculation views
- Functions
- Procedures
- Analytic privileges
- Synonyms
What is the structure for naming a calculation view in the HDB module
<Namespace>::<Runtime>
The following 4 rules apply to the namespace:
- The namespace is optional. Some objects can be defined with a namespace in their identifier, and others without.
- A HDB module can have no namespace defined at all, or can specify any number of different namespaces.
- The namespace is always identical for all the design-time objects stored in the same folder.
- The namespace must always be explicitly mentioned inside the design-time files, and must correspond to the namespace defined explicitly in the containing folder, or implicitly cascaded from the parent folder(s).
Key Properties of HDI Containers
- A container is created automatically when you deploy your database module for the first time.
- A container generates a database schema the moment a container is first created.
- Database objects are deployed into that schema.
- Source file definitions must be written in a schema-free way.
- Direct references to objects outside your container are not allowed. You must use synonyms.
A deploy operation can be executed at the following levels of a project structure:
- An entire HDB module
- A sub-folder of the HDB module
- One or several individual objects
Corresponding to the export, you can either import an individual file or a .zip or .tar archive. Three features are available for that:
- Drag and drop from your file system into the Explorer view. Archives such as .zip and .tar files are not extracted. This can be done in command line in a Terminal window.
- Upload Files (from the File menu) and Explorer view context menu. Archives are not extracted.
- Import files (from the Welcome page). Archives will be extracted.
What are the main Rules for a Consistent Management of Modeling Content Files
- In an entire HDB module, the definition of a given runtime object (<namespace>::<object_name>) cannot be provided more than once.
- The namespace defined in the design-time file of a database object must correspond to the namespace setting applied to the folder in which it is located
- A deploy operation always checks the end-to-end dependency between modeling content across all the HDB module, but only deploys the design-time files you have selected for the deploy operation.
- During a deploy operation, the checks apply to all the runtime objects that are already built, and all the objects included in the deploy scope (that is, in case of a partial deploy, the design-time files you have selected).
By default, a container has full access to all the database objects it contains (tables, calculation views, and so on), but has no access at all to other database schemas, whether it is another container’s schema or a classic database schema.
True
Before we describe the .hdbgrants file, we need to describe two types of user that are referenced in the file.
- Object Owner - when a database artifact is deployed to a database container, it is owned by the technical user of the container and not the developer who created the source file. Only the object owner can alter and drop the object. During deployment, the object owner must have privileges to access any external objects that are referenced in a calculation view.
- Application User - Any ‘real’ (not technical) user (for example, a reporting user) who accesses the calculation view must have privileges to also access the external objects that are referenced. These privileges might be different (probably more restricted) than the object owner privileges.
When defining a synonym there are three key parameters
- Name of synonym - you will refer to this name whenever you need to access the target object
- Object - the actual object in the target schema, such as a table name
- Schema - where the target object is found
The SAP HANA Deployment Infrastructure (HDI) relies on a strong isolation of containers and a schema-free definition of all the modeling artifacts.
True
These are the steps for creating User Provided Service
You simply choose the name of the service and a user id and password that has privileges to all external objects that you wish to access through the service.
These privileges in turn will be granted by this service user, to the container’s technical user and application users using a .hdbgrants file.
Privileges will be granted to the user via
hdbgrants file
The Schema Central Master Data contains the table PROSPECTS. What is the process to grant access to objects from another schema?
Database Explorer – SQL console:
CREATE USER User_Provided_Service PASSWORD "Just4fun_135" NO FORCE_FIRST_PASSWORD_CHANGE;
CREATE ROLE "externalaccess_rolefor_OO"; -- for the person creating the cal views (object owner)
CREATE ROLE "externalaccess_rolefor_AP"; -- for the persons accessing the cal views of the object owner
GRANT SELECT ON SCHEMA central_master_data TO "externalaccess_rolefor_OO" WITH GRANT OPTION;
GRANT SELECT ON SCHEMA central_master_data TO "externalaccess_rolefor_AP" WITH GRANT OPTION;
What are the two types of users
- Object Owner - when a database artifact is deployed to a database container, it is owned by the technical user of the container and not the developer who created the source file. Only the object owner can alter and drop the object. During deployment, the object owner must have privileges to access any external objects that are referenced in a calculation view.
- Application User - Any ‘real’ (not technical) user (for example, a reporting user) who accesses the calculation view must have privileges to also access the external objects that are referenced. These privileges might be different (probably more restricted) than the object owner privileges.
The user-provided service’s user has all privileges required to access all external objects. Technically, it would be possible to grant all the database privileges of the service user to the object owner and application user.
True
The final step in the setup of external schema access, is to create synonyms that point to the target objects of the external schema. The synonyms declaration is done in a .hdbsynonym file. This file type can be edited either with the text editor, as in the example below, or a dedicated synonym editor.
True
There are three key parameters When defining a synonym
- Name of synonym - you will refer to this name whenever you need to access the target object
- Object - the actual object in the target schema, such as a table name
- Schema - where the target object is found.
When you need to access data from another HDI container, the setup is relatively similar to what you have just learned for an external (classic) database schema. Let’s point out the main differences:
- There is no need for a user-provided service if the external container service is running in the same space as the one your project is assigned to. You can add the external HDI container service to your project (which automatically adds it to the mta.yaml file).
- You must create roles inside the external container that contain the relevant privileges to all objects that could be accessed by the service.
- The .hdbgrants file does not refer to database object privileges of the technical user assigned to the user-provided service, but to the dedicated roles created inside the external container (see the previous point).
What are the benefits of using GIT - version control system
- Source code backup
- A complete change history
- Branching and merging capabilities
- Traceability (for example, connecting a change to project management software)
What is git
Git is a Distributed Version Control System (D-VCS) used to manage source code in software development.
When you work with files locally in Git, this involves three major “logical” areas.
- The Working Directory
- The Staging Area, also known as INDEX.
- The (local) Git Repository
What is INDEX - GIT context
the staging area is the “virtual” place where you put all the modifications that you want to include in your next commit.
What is a branch
From a conceptual standpoint, a branch is a series of commits. Technically, a branch is just a pointer that references a particular commit of the project history.
What is the name of the default branch
master
What is PAT
Personal Access Token
For each modified file, you have the following possibilities
- Stage the modification so that it will be included in the next commit
- Leave the modification unstaged (it will not be included in the next commit)
- Discard the change
Staging or discarding can also be done for the entire set of modifications.
You must not amend commits that have been shared with other developers, because this would modify a (shared) history on which others might have already based their work.
True
Does the developer need to have a developer role in the target space when deploying an application.
False
What are the steps for importing MTA from Dev to Q/A
- Build the HDB module(s).
- Build the entire Project in order to generate an MTA archive file (.mtar).
- Export the MTA archive.
- Deploy the MTA archive to the target landscape
Why do you set the deprecate flag in a calculation view
To provide a warning to the developer suggesting they should not use the calculation view.
Which types of semantics can you extract using the option Extract Semantics
Column labels and hierarchies
What is the type of file that is generated when you deploy a complete application
MTAR
The name of the schema that corresponds to a container is generated and cannot be changed.
False
Which database artifact do you define to access external schema objects
Synonym
Git is used to automate the deployment of runtime objects in SAP HANA Cloud
False
A runtime object in a container must always have a corresponding source file
True
In a project structure, which of these appears directly beneath the project
Module
What are analytic privileges
Analytic privileges are used to enable data access in calculation views, by filtering the data based on the values of one or more attributes.
How to create and assign an Analytic Privilege
- Create a source file with the extension .hdbanalyticprivilege.
- Assign the calculation view(s) that you want to secure with this analytic privilege.
- Choose the type of restrictions you want to use and define the restrictions.
- Set the secured calculation views to check analytic privileges.
- Deploy the analytic privilege.
- Assign the analytic privilege to a role.
- Assign the role to a user.
Restriction types in analytical privileges(SDA)
- Attribute
- SQL expressions
- Dynamic
Only columns of type Attribute (NOT Measure) can be specified in dimension restrictions.
True
For each calculation view, the following criteria are considered
- Does the user have SELECT privilege on the column view (this is the actual database object generated from a calculation view) in the container schema?
- Does the calculation view check analytic privileges?
- Is the user granted analytic privileges for the view?
SELECT privilege is only required for the top view of a view hierarchy
True
The key rules that govern the access to data are, as follows:
- Object privileges: There is no need to grant SELECT privileges on the underlying views or tables. The end user only needs to be granted SELECT privileges on the top column view of the view hierarchy.
- Analytic privileges: The analytic privileges logic is applied through all the view hierarchy. Whenever the view hierarchy contains at least one view that is checked for analytic privileges but for which the end user has no analytic privilege, no data is retrieved (not authorized).
A privilege is assigned to a role or a user
True
User owns an object.
True
Assigning a privilege directly to a user is not a good practice and creates a lot of maintenance.
True
Some key points regarding security concepts of SAP HANA Cloud:
- All the privileges granted directly or indirectly to a user are combined. Whenever a user tries to access an object, the system performs an authorization check based on the user's roles and directly allocated privileges (if any).
- It is not possible to explicitly deny privileges; all privileges grant access. The system does not need to check all the user's roles. As soon as all the privileges required for a specific operation on a specific object have been found, the system ends the check and allows the operation without checking if the same privileges appear again in another role.
- Several predefined roles exist in the SAP HANA Cloud database. Some of them are templates (and need to be customized), and others can be used as they are.
In the SAP HANA Cloud database, there are two ways to create roles:
- As pure run-time objects (with no source file) that are created using SQL or SAP HANA Cockpit. These are called Catalog Roles. You assign privileges to these roles using SQL grant statements.
- By means of source files that you create in the HDB module of a project. There are called Design-Time Roles and the source file describes the privileges that are immediately granted when the role is deployed.
The design-time files used to create roles must have the extension .hdbrole in order to be recognized as design-time role files.
The design time role(ROSAS - 5) can include
- Role
- Object privilege
- Schema privileges
- Analytic privileges
- System privileges
The .hdbrole file cannot contain references to real schema names, but only logical references to schemas that are resolved in another type of design-time file: the .hdbroleconfig file
True
When creating calculation views, the main authorization you need is a SELECT privilege on the data sources.
True
The <filename>.hdbgrants file is structured into three levels:
- The name of the user-provided service
- The users to whom the privileges are granted. There are two possible "values" for users:
  - object_owner is the technical user that owns all the objects of the container schema
  - application_user represents the users who are bound to the application modules
- The set of privileges granted. The syntax of this third level is very similar to the syntax of what you find in a .hdbrole file.
A mask expression is defined in a calculation view as follows
- In the Semantics node choose the Columns tab.
- Select a column and choose the Data Masking icon in the toolbar
- Define the masking expression using SQL
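An illustrative masking expression that keeps only the last four characters (the column name is hypothetical):
LPAD(RIGHT("ACCOUNT_NUMBER", 4), LENGTH("ACCOUNT_NUMBER"), '*')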
Only columns of certain data types can be masked in a calculation view
- VARCHAR
- NVARCHAR
- CHAR
- SHORTTEXT
Masking is supported for both table types (ROW tables and COLUMN tables).
True
From SAP HANA Cloud QRC 4/2021 onwards, two different mask modes are available:
- Default: Masking is done based on the user calling the calculation view with the masking definition.
- Session User (new): Masking is done based on the session user running the SQL query.
What is a dynamic analytic privilege
A reusable analytic privilege that can be used for several users who need to access different data
What is data masking
Obscuring column values by hiding some or all characters with replacement characters
Which approach is recommended to create roles that grant privileges to access local objects generated in your container
Design-time Roles
Can SAP Hana Cloud support data tiering
True
Can SAP Hana Cloud be connected to on premise system
True
What are the use cases of SAP Hana Cloud(4)
1 To provide the DB for next-generation applications that require super-fast performance
2 To power data warehouses, including SAP Data Warehouse Cloud and custom data warehouses
3 To extend the data storage and processing capacities of on premise applications
4 To extend the functionality of existing SAP applications using cloud based services
What are the main differences between SAP Hana on-premise and SAP Hana Cloud in terms of infrastructure?
- No software installation, patch, back up, tune monitor, start/stop process
- No OS selection
- No hardware to procure or manage
Which tools are commonly used in SAP Hana on-premise and SAP Hana Cloud?
- SAP Hana Cockpit
- SAP Hana Database Explorer
- SAP Hana Web IDE
What is the main tool introduced on SAP Hana Cloud to manage development
BAS
What are some features not available in SAP Hana Cloud yet present in SAP Hana on-premise?
- SDQ
- Text Analysis and mining
- XS- XSA
- Multitenancy
- SAP Data Warehousing foundation
- SAP Streaming Analysis
What are the new SAP Hana Cloud data types
- Spatial: floor plans, coordinates, geographic positions, maps and Engineering diagrams
- Graph: social networks and supply chains, etc
Can SAP HANA Cloud manage data from the following sources:
* Enterprise systems
* Data warehouses
* Archives
* Big data
* File stores
* DBs
* Social networks
True
Which types of data can SAP HANA Cloud store and process
- Spatial
- Text
- Structured
- Graph
Why can’t IT develop next-generation applications using their existing technology
- It is difficult to manage increasing data growth
- Legacy applications have become difficult to extend
- Current landscapes have become too complex
What is SAP HANA Cloud
A fully-managed, cloud data platform which includes an in-memory database and data lake
Which new development approaches are supported by SAP HANA Cloud
- Push-down of data processing to the database
- No need to calculate and store pre-aggregated data
Which recent technology advances present organizations with the opportunity to build innovative applications? 3
- Data science is more accessible
- New types of data available
- Cloud computing
Why might customers choose to implement SAP HANA Cloud versus SAP HANA on-premise
- Reduced administration effort on customer side
- Easy to scale
- Fast deployment
When is compression applied to the SAP Hana Cloud DB
During a delta merge
Who is responsible for deployment and monitoring the SAP Hana Cloud service
SAP Hana Administrator.
What is the main tool used for monitoring the SAP Hana Cloud Db
SAP Hana Cockpit
What are the key roles of an SAP Hana Cloud Implementation
- The administrator role
- Modeler role
- Application Developer role
- Security Architect role
- Data Architect role
Was SAP Hana Cloud developed from scratch
True
What were the cloud design principles considered when designing SAP Hana Cloud
Elasticity and fast deployment
What is dictionary encoding
It is a first level compression technique and is applied to all columns of a column store table
What does Second compression level include:
- run-length encoding,
- cluster encoding, and
- prefix encoding.
Is compression relevant to row store tables
False
What are some benefits of SAP HANA Cloud data compression
- Fit entire enterprise databases into memory and avoid disk access
- Get more data into CPU cache and therefore reduce main memory access
What are advantages of column store tables versus row store tables
- Data footprint is automatically reduced through compression.
- Only the columns required for processing are actually loaded to memory.
- Columns can be partitioned to improve performance.
What is the purpose of the delta-merge in SAP HANA Cloud
To maintain good read performance of column tables following record updates
Which hardware technology improvements does SAP HANA Cloud exploit to maximize performance
- Larger memory sizes
- Multi-core CPUs
What is an entitlement
The service plan bought
What is Quota
The size of memory and service attached to the quota
At what level are the entitlement and quota bought
Global account
Are Sub accounts independent from each other
True
What is provisioning in terms of BTP
Creating an SAP Hana Cloud instance
What must be created first before creating an SAP Hana Cloud instance
Cloud Foundry Space
What are the key areas monitored in SAP Hana Cockpit (SMAWTD)
- Services
- Memory
- Alerts
- Workload
- Table usage
- Database Configuration
What is SAP Hana Cloud Central used for?
Provision and manage instances
What is key tool used by the administrator of SAP HANA Cloud for monitoring
the SAP HANA Cockpit.
What are the key areas of SAP HANA Cloud that can be managed and monitored using the SAP HANA Cockpit:
- Services - database services such as indexserver, nameserver.
- Memory - monitor memory usage and check out-of-memory issues
- Alerts - be warned of critical situation such as disk becoming full
- Workload - organise jobs into workloads for better system utilization
- Table Usage - ensure tables are optimally designed for best performance
- Database Configuration - manage configuration (*.ini) files that determine database behaviour
- Manage users and roles
- Manage SDI
Where can the SAP HANA cockpit be accessed from(3)
SAP HANA Cloud Central,
SAP BTP Cockpit, or
by using the direct URL
Is the SAP HANA Cockpit used to manage only SAP HANA Cloud databases
True
What are the two CLI tools available for use with SAP HANA Cloud
Setup and administrator
What can be created with BTP CLI
- Subaccounts and directories
- Managing entitlements of global accounts and subaccounts
- Managing users and their authorizations in global accounts and subaccounts
- Subscribing to applications
What can be performed by CF CLI:
- Create spaces
- Add organization members
- Add Space
- Create space quota plans
- Assign quota plans to spaces
What is the main use of the SAP HANA Database Explorer?
It is used to query information about the DB and display information about the catalogue objects of the SAP HANA DB
Can the Database Explorer be used with both HDI containers and classic schemas
True
Where can the SAP HANA Cloud Cockpit be accessed from
It is located under Database Administration > HDI Administration.
Where can the SAP HANA Database Explorer be accessed from
SAP HANA Cockpit or BAS
Why might the SAP HANA Cloud database be stopped and restarted
- Adding a data lake
- Scale up the DB to add more RAM
- Maintenance of the application that is running on the DB
- Take the DB offline to prevent unwanted updates
- Cloud provider support team suggests a restart after troubleshooting
What are the functions of the SAP HANA Cloud administrator (WMUSSM)
- Workload management
- Manage data tiers
- Utilize resources to fix issues
- Set up alerts
- Set up administrative tasks
- Manage tables
What are the security tasks of the SAP HANA Cloud administrator (MDAMM)
- Monitor critical security settings
- Data encryption
- Auditing activities
- Manage certificates & keys
- Monitor data anonymization
What are the user administration tasks of the administrator
- Creating users, user groups and assigning roles and privileges
- Investigating authorization or authentication issues
- Deactivating users
What is the setup sequence for managing users, roles and permissions
1 Create privileges (many standard privileges are supplied by SAP)
2 Create roles
3 Assign privileges to roles
4 Create users
5 Create user groups
6 Assign users to user groups
7 Assign users to roles
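At the SQL level (outside of HDI-managed roles) the sequence can look like this minimal sketch; all object names and the password are placeholders:

```sql
-- Steps 2/3: create a role and assign privileges to it
CREATE ROLE reporting_role;
GRANT SELECT ON SCHEMA sales TO reporting_role;

-- Step 5: create a user group to ease administration
CREATE USERGROUP analysts;

-- Steps 4/6: create a user and place it in the user group
CREATE USER report_user PASSWORD "Welcome-2024" SET USERGROUP analysts;

-- Step 7: assign the role to the user
GRANT reporting_role TO report_user;
```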
Why do you use SAP HANA Cloud Central tool?
To start and stop an SAP HANA Cloud database instance.
Before you can create an SAP HANA Cloud database instance, what must you already have done
- Enabled the Cloud Foundry environment for the subaccount
- Assigned quota to the subaccount
- Created a Cloud Foundry space in the subaccount
What are the main types of calculation views
- Dimension
- Cube
- Cube with star join
What data modeling artifacts can be created in Business Application Studio:
- Calculation Views
- Procedures
- Table and Scalar Functions
- Flowgraphs
- Analytic Privileges
- Local and Virtual Tables
- Replication Tasks
In what format are BAS artefacts stored and encoded
JSON or XML
Are table functions read only
True
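Because they are read-only, table functions are a natural way to wrap custom SQLScript logic that a cal view consumes. A minimal sketch, assuming a hypothetical EMPLOYEES table and columns:

```sql
-- Read-only table function returning the rows for one department
CREATE FUNCTION tf_employees_by_dept (in_dept NVARCHAR(10))
RETURNS TABLE (employee_id INTEGER, employee_name NVARCHAR(100), weekly_hours DECIMAL(5,2))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN
  RETURN SELECT employee_id, employee_name, weekly_hours
         FROM   employees
         WHERE  department = :in_dept;
END;
```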
Where can a procedure be called
- SQLScript
- A function
- Another procedure
Procedures used within modeling must be set to read-only
True
What is stateless
A procedure set to read-only
What is stateful
A procedure used for update, insert, and delete operations. Stateful procedures cannot be called from a cal view.
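For contrast, here is a minimal sketch of a stateless (read-only) procedure; the READS SQL DATA clause is what marks it as read-only. Table and column names are placeholders:

```sql
-- Read-only procedure: READS SQL DATA makes it stateless
CREATE PROCEDURE get_department_totals (
  IN  in_dept    NVARCHAR(10),
  OUT out_totals TABLE (department NVARCHAR(10), total_hours DECIMAL(10,2))
)
LANGUAGE SQLSCRIPT
READS SQL DATA AS
BEGIN
  out_totals = SELECT department, SUM(weekly_hours) AS total_hours
               FROM   employees
               WHERE  department = :in_dept
               GROUP BY department;
END;
```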
There are four levels of access to consider when securing cal views
- SQL SELECT privileges on tables
- Analytic privileges restrict access to rows
- Access to specific columns – create views and grant SELECT on those views
- Masking characters in columns such as telephone numbers, salary figures, etc.
Can procedures be used as data sources
False
Can stateful procedures be called from cal views
False
Access to specific rows can be achieved via which security tool
Analytic privilege
Access to specific tables can be restricted via which artefact
A SELECT privilege
How to grant access to specific columns
By creating a view with a subset of columns and granting SELECT only on this view.
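A minimal sketch of that approach; all object names are placeholders:

```sql
-- Expose only the non-sensitive columns through a view, then grant access to the view only
CREATE VIEW employees_public AS
  SELECT employee_id, employee_name, department
  FROM   employees;           -- salary and phone columns are deliberately left out

GRANT SELECT ON employees_public TO reporting_role;
```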
Can data masking blur the entire content of a column
True
What are the data types SAP spatial provides for storing geometric data
Points, lines, and polygons
What spatial query functions are included in SQLScript in SAP HANA Cloud
- Within
- Distance
- Crosses
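These functions are exposed as methods on the spatial data types. A minimal sketch on planar coordinates, with arbitrary sample geometries:

```sql
-- ST_Distance between two points; ST_Within tests containment inside a polygon
SELECT NEW ST_Point(0.0, 0.0).ST_Distance(NEW ST_Point(3.0, 4.0)) AS distance,        -- 5
       NEW ST_Point(1.0, 1.0).ST_Within(
         NEW ST_Polygon('POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))'))  AS is_within         -- 1 (true)
FROM dummy;
```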
SAP HANA Cloud provides two libraries for predictive analysis
- PAL – Predictive Analysis Library
- APL – Automated Predictive Library
Does PAL (Predictive Analysis Library) require special knowledge of algorithms
True
Does APL (Automated Predictive Library) require special knowledge of algorithms
False
What are the data mining pre-processing tasks included in PAL
- Sampling
- Binning
- Partitioning
What are the algorithms for data mining categories included in the PAL (ACCRTN)
1 Association
2 Classification
3 Clustering
4 Regression
5 Time Series
6 Neural Networks
Why do we model in SAP HANA
- To push data-intensive processing away from the application and to the database.
- To develop reusable data processing logic in the database.
Which type of advanced model should you create if you wanted to explore business entities that are highly networked
Graph
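In SAP HANA, such a model is defined as a graph workspace over a vertex table and an edge table. A minimal sketch, assuming hypothetical NODES and EDGES column tables:

```sql
-- Define a graph workspace on top of existing vertex and edge tables (names are placeholders)
CREATE GRAPH WORKSPACE business_network
  EDGE TABLE edges
    SOURCE COLUMN source_id
    TARGET COLUMN target_id
    KEY COLUMN edge_id
  VERTEX TABLE nodes
    KEY COLUMN node_id;
```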
What are the SAP BTP environments offered to the public
- Cloud Foundry
- ABAP
- Kyma
Is the Cloud Foundry environment an open platform as a service (PaaS)
True
What are the benefits of Cloud Foundry(LACO)
- Language independent
- Administration separation
- CLI
- Open source
What is CAPM (TOLALIAP)
It is a framework of tools, languages, libraries, and APIs that combines open-source and SAP-provided tools and technologies
What are the main elements making up SAP CAPM
- SAP Fiori
- Java/Node.js
- SAP HANA
- CDS
What is CDS used for within the CAPM environment
- To create the physical layer (data types and tables)
- To create the virtual layer (views, etc.)
CAP can create native HANA Cloud database artefacts, such as tables, views, and functions, and mix them with CDS artefacts
True
The recommended tool for developing CAP applications is Business Application Studio
True
What are the SAP HANA Cloud service plans for managing database artifacts
- Schema service plan
- HANA Deployment Infrastructure (HDI) Shared service plan
What is GIT used for?
- Backup of source code
- Provide change history of source code
- Provide branching and merging capabilities
- Traceability and audit
When a developer is satisfied that their content is ready, they commit one, but more often multiple, related source files to the Git repository
True
What are the components of the SAP HANA Database Explorer:
- A catalog browser
- An SQL console
- An SQL analyzer
- An SQL debugger
Where can the database explorer be launched
- Business Application Studio
- SAP HANA Cockpit
- SAP HANA Cloud Central
- SAP BTP Cockpit
In SAP BTP, with CAPM, OData services are automatically generated from CDS files that describe the OData model
True
What is the backbone of SAP Cloud Application Programming Model
CDS
What are the UI5 versions produced by SAP
- OpenUI5 - The open-source version, free to use, and released under the Apache 2.0 license. As well as specific SAP UI libraries, OpenUI5 uses many open-source libraries, so SAP has chosen to make this version freely available for anyone to use. OpenUI5 can be freely downloaded from GitHub
- SAPUI5 - Similar to OpenUI5 but includes many more libraries that render the UI for specific SAP features. SAPUI5 is already integrated with SAP HANA Cloud.
What is OData
Open Data Protocol (OData) is a widely accepted protocol that is used to perform database-style create, read, update, and delete operations on resources by using HTTP.
A development space provides all of the tools and resources needed for developing your application.
True
Which environments does SAP BTP provide for developing SAP HANA Cloud applications
- ABAP
- Cloud Foundry
What can you develop using SQLScript
- Database functions
- Stored procedures
What is a must when importing individual files
The command (tar / unzip) must be added.
What are the options for importing files
- Import (Welcome page) or Import Project (right-click on a blank area)
- File → Upload Files
- Drag and drop from your file explorer
In SAP HANA Cloud, which tool is recommended for data modeling
- BAS
In a cube with star join calculation view, the Column tab of the Semantics node separates columns into two categories
- Private, and
- Shared
For shared columns, can the name and label be changed in a cal view
True
What are the dimension types
standard and time
What are valid data sources supported in SAP HANA Cloud calculation views
- Row Table
- Column Table
- Virtual Tables
- Calculation Views
- SQL Views
- Table Functions
A semantic type describes the specific meaning of each column. This can be helpful to any client that consumes the calculation view, enabling it to represent the columns in the appropriate format
True
What are the uses of the Semantics settings
- Assigning a description column to another column - for example, assigning the product id column to a product description column so a user sees a value that is more meaningful.
- Hiding a column - can be used if a column is only used in a calculation, or is an internal value that should not be shown, for example, hiding the unhelpful product id when we have assigned a description column that should be shown in its place.
- Assigning a variable - allowing a user to select a filter value at runtime for the attribute
- Changing the aggregation type
There are two ways to sort columns automatically
- Use the sort direction property (Semantics/Columns)
- Use the sort result set dialog box (Semantics/Columns/icon, then + to add columns and directions)
How do you define null handling for a column
- Select the Semantics node
- Choose the Columns tab
- Select a measure or attribute
- Select the ‘Null Handling’ checkbox
- Optionally, in the Default Value text field, provide a default value
Calculation view properties are organized with four tabs what are they (GASS)
General, Advanced, Static Cache and Snapshots.
What is the role of a dimension calculation view?
To generate a view of master data from one or more tables
Which are supported data source types for calculation view consumption?
- Cal Views
- Virtual tables
- Column tables
- Row tables
Why do you create a time-based dimension calculation view
To automatically generate time-related attributes from a base date
What are the benefits of calculation views?
- They calculate live data on-the-fly
- They adapt automatically to the requesting query
Why do you hide columns in a calculation view?
When you want to hide a column that is used in a calculation but does not need to be displayed in a report.
What is the role of the cube calculation view
To aggregate measures without the need for dimensions
What is the name of the tool that is launched with Data Preview of a calculation view?
SAP HANA Database Explorer
What is a projection node used for:
- To select only the required columns from a data source.
- To define calculated columns.
- To define parameters that request values at run-time, such as user-prompts.
- To apply a filter on the data source