Data analytics Flashcards
Structured query language (SQL) queries retrieve data from what type of database?
Select one:
a.
Flat
b.
Large
c.
Hierarchical
d.
Relational
Relational. SQL queries retrieve data from relational databases, which store data in tables of rows and columns linked by keys.
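A minimal sketch of a SQL query against a relational database, using Python's built-in sqlite3 module; the table and column names here are illustrative, not from any real clinical system.

```python
import sqlite3

# Create an in-memory relational database: a table of rows and columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, diagnosis TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Jane Doe', 'hypertension')")
conn.execute("INSERT INTO patients VALUES (2, 'John Roe', 'diabetes')")

# A SQL SELECT retrieves rows that match a condition.
rows = conn.execute(
    "SELECT name FROM patients WHERE diagnosis = 'hypertension'"
).fetchall()
print(rows)  # [('Jane Doe',)]
conn.close()
```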
A system that collects data to monitor patients with a specific disease is best described as a:
Select one:
a.
Legacy
b.
Registry
c.
Practice management system
d.
Personal health record
Correct. A system that collects data to monitor patients with a specific disease is a registry.
A data item that would be part of the Subjective heading of the problem-oriented medical record includes:
Select one:
a.
Current clinician assessment of high blood pressure diagnosis
b.
Patient prescription for amlodipine to treat high blood pressure
c.
Blood pressure reading taken during the current visit of 140/90
d.
Patient statement that he is having chest pain
Correct. The Subjective portion of the problem-oriented medical record reports what the patient’s symptoms are, in this case chest pain.
A system that collects data for data mining in a large health system is best described as a:
Select one:
a.
Personal health record
b.
Health information exchange
c.
Practice management system
d.
Clinical data warehouse
Clinical data warehouse. A clinical data warehouse aggregates data from many source systems across a health system specifically to support data mining and analytics.
What type of health information exchange best describes the use case of transmitting a discharge summary from a hospital to a skilled nursing facility?
Select one:
a.
Consumer-mediated
b.
Directed
c.
Record banking
d.
Query-based
Directed. Directed exchange is the point-to-point transmission of information from one known provider to another, such as sending a discharge summary from a hospital to a skilled nursing facility.
A record locator system across a community HIE organization is what type of health information exchange?
Select one:
a.
Consumer-mediated
b.
Directed
c.
Query-based
d.
Record banking
Query-based. Query-based exchange lets providers find and request a patient's information from other organizations, which is what a record locator service across a community HIE supports.
An analysis of EHR data to identify patients at risk for death from COVID-19 during hospitalization is best described as:
Select one:
a.
Economic catastrophe
b.
Predictive analytics
c.
Prescriptive analytics
d.
Descriptive analytics
Predictive analytics. Predictive analytics uses existing data to forecast a future outcome, here the risk of death during hospitalization.
Loss of granularity can occur during a data transformation when data in the source system is…
Select one:
a.
broken into different components and stored discretely in the target system
b.
combined into a single field in the target system
c.
rearranged in the target system to achieve a more understandable display
d.
easier to understand than how it is presented in the target system
Loss of granularity can occur during a data transformation when data in the source system is combined into a single field in the target system.
Combining data from multiple fields in the source system to a single field in the target system loses granularity, even if none of the original data is lost. This is because the target system can no longer call on the discrete components of the data for queries or rules. Breaking data from the source into different components is gaining granularity rather than losing it. Whether data is easier to understand in the target is a component of context rather than granularity.
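The point above can be sketched in code. This is a hypothetical example (field names are assumptions): a blood pressure reading stored discretely in the source system versus combined into a single field in the target.

```python
# Source system stores the components discretely.
source = {"systolic": 140, "diastolic": 90}

# Target system combines them into one field. Nothing is lost to the
# eye, but granularity is lost: the components can no longer be
# queried or used in rules directly.
target = {"blood_pressure": f"{source['systolic']}/{source['diastolic']}"}

# Easy in the source system:
is_hypertensive = source["systolic"] >= 140 or source["diastolic"] >= 90

# In the target, the combined string must first be parsed back apart:
sys_str, dia_str = target["blood_pressure"].split("/")
print(is_hypertensive, sys_str, dia_str)  # True 140 90
```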
In the DIKW model, which of the following is true?
Select one:
a.
Knowledge is the key element to making good patient decisions.
b.
Information can be subsequently used to discover patterns and relationships.
c.
Wisdom is when someone puts meaning behind the data and forms a pattern.
d.
Data has a lot of meaning when it is presented in the right way.
Information can be subsequently used to discover patterns and relationships. The DIKW model refers to the continuum which starts with data, then information, then knowledge, and finally wisdom.
Data has no meaning by itself. Information is data + meaning (context), and knowledge is acquired when relationships and patterns between different pieces of information are understood. Wisdom occurs when someone is able to use knowledge appropriately and is the key to making good patient decisions.
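The DIKW continuum can be illustrated with a small sketch; the clinical values and thresholds below are assumptions for illustration only.

```python
# Data: a bare number with no meaning by itself.
datum = 101.3

# Information: data plus context (what it measures, in what units).
information = {"oral_temp_f": datum}

# Knowledge: a pattern/relationship understood across many observations.
def has_fever(info):
    return info["oral_temp_f"] >= 100.4  # assumed fever threshold

# Wisdom: applying knowledge appropriately to make a good decision.
action = "evaluate for infection" if has_fever(information) else "routine care"
print(action)  # evaluate for infection
```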
Information architecture is best described as which of the following?
Select one:
a.
The map of data flow between different information systems in your healthcare organization.
b.
The experience of the information technology analysts who are programming your business intelligence applications.
c.
The way in which databases are organized onto servers so that you know exactly where information is stored in the data center.
d.
The way in which information is organized, represented and found by end-users.
The way in which information is organized, represented and found by end-users.
One way to think about information architecture is that it is the way in which information is architected in a display to end-users. It is not network architecture, server architecture, or interface architecture, and its focus is primarily on end-users of the system.
Human data mapping errors are most likely when which of the following takes place?
Select one:
a.
Mapping is manually performed by an information technology analyst with no medical background
b.
Data is transferred from the source system but cannot be seen in the target system
c.
A data mapping script is programmed incorrectly to send source data to the wrong field in the target system
d.
Multiple elements of the same data type are mapped to the wrong field in the target system
Mapping is manually performed by an information technology analyst with no medical background.
Because the mapping is manually performed by someone without expertise in the subject area being mapped, the risk of a human mapping error is relatively high.
By contrast, use of an incorrectly programmed data mapping script is an example of a systematic error because it is occurring for all elements of a particular data type, even though it was a human who did the programming. The remaining answer options are not necessarily examples of human mapping errors because it was not stated in those choices how the data was mapped.
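The contrast between a systematic scripting error and a human mapping error can be sketched as follows; the field names and mapping table are hypothetical.

```python
# Hypothetical mapping script with one wrong entry. Because the map is
# applied to every record, the error is systematic: all diastolic
# values land in the wrong target field, consistently.
FIELD_MAP = {
    "sys_bp": "systolic_bp",
    "dia_bp": "heart_rate",  # bug: should map to "diastolic_bp"
}

def migrate(record):
    # Route each source field to its mapped target field.
    return {FIELD_MAP[k]: v for k, v in record.items()}

records = [{"sys_bp": 120, "dia_bp": 80}, {"sys_bp": 140, "dia_bp": 90}]
migrated = [migrate(r) for r in records]
print(migrated)  # every diastolic value misfiled the same way
```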
In which of the following points in the data life cycle are you least likely to lose data (either intentionally or unintentionally)?
Select one:
a.
Integration
b.
Reanalysis
c.
Processing
d.
Distribution
Distribution. Distribution is the sharing of data that already exists in the system, so it is the point in the data life cycle where loss is least likely; integration, reanalysis, and processing all transform data and can drop it along the way.
Which of the following would represent data verification?
Select one:
a.
Checking interfaced data between a new laboratory information system and an electronic health record
b.
Testing all functions of upgraded software including verifying that functions and data that were not supposed to be impacted remain the same
c.
Performing a check of all records between the source system and the new application written by the informatics fellow to help end-users decide the risk of thrombosis in a patient
d.
Checking interfaced data between a new FDA-approved sepsis algorithm and the electronic health record
Checking interfaced data between a new FDA-approved sepsis algorithm and the electronic health record. FDA-approved systems do not have to be extensively validated.
Instead, they can be verified with a less extensive check of functions and data because the FDA approval process requires that manufacturers and the FDA have verified the performance of the system. Verification is really to make sure that the system was not damaged during transit or implementation. The remaining answers are all situations in which validation should occur whereby data and functions are extensively tested to ensure that they are functioning as expected.
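The distinction can be sketched in code under assumed data: validation exhaustively compares every record between systems, while verification spot-checks a small sample (as is acceptable for an FDA-approved system).

```python
import random

# Made-up source data and an interfaced copy under test.
source = {i: f"result-{i}" for i in range(1000)}
target = dict(source)

def validate(src, tgt):
    # Extensive: every record must match, and counts must agree.
    return len(src) == len(tgt) and all(tgt.get(k) == v for k, v in src.items())

def verify(src, tgt, n=25, seed=0):
    # Less extensive: sample n records and confirm they transferred intact.
    rng = random.Random(seed)
    keys = rng.sample(sorted(src), n)
    return all(tgt.get(k) == src[k] for k in keys)

print(validate(source, target), verify(source, target))  # True True
```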
Which of the following is true of a data migration?
Select one:
a.
Data reconciliation is used in a data migration to verify the reasons why records are not matching up between the source system and the target system.
b.
Mathematical methods are used to help determine whether the data migration has large scale (gross) errors.
c.
Counting records between a source system and a target system is not useful for data migrations because it doesn't tell you the reason why the data didn't transfer correctly.
d.
Data validation is never used on a data migration because it would take too many people to do it.
Mathematical methods are used to help determine whether the data migration has large scale (gross) errors.
Data reconciliation is comprised of mathematical methods, including counting, to check a data migration at a high level to see if any gross errors are present. Data validation is also used on subsets of migrated data to ensure that data has been transferred accurately. Data reconciliation will tell you that an error occurred, but it usually will not tell you why. This is why data validation is helpful on subsets of data because it can help you determine the reasons for errors.
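A sketch of reconciliation by counting, with made-up records: the count mismatch flags that a gross error occurred, while comparing identifiers on a subset is what reveals which records are missing.

```python
# Hypothetical migration in which 10 records are silently dropped.
source_records = [{"mrn": i, "lab": "A1c"} for i in range(10000)]
target_records = source_records[:9990]

# Reconciliation: a simple mathematical check (counting) detects a
# gross error, but not its cause.
counts_match = len(source_records) == len(target_records)
print(counts_match, len(source_records) - len(target_records))  # False 10

# Validation on the data itself is what identifies *which* records
# failed to transfer, pointing toward the reason.
missing_mrns = ({r["mrn"] for r in source_records}
                - {r["mrn"] for r in target_records})
print(sorted(missing_mrns))
```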