CHIA F Flashcards
Kahneman and Tversky disrupted mainstream economics by demonstrating that decisions are not always optimal. Their ‘prospect theory’ showed that
humans’ willingness to take risks is context-dependent – i.e., it is influenced by the way choices are framed (Samson, 2014). Essentially, we dislike losses more than we like an equivalent gain. The pain of giving something up is greater than the pleasure of receiving it.
According to the dual-system theory of behavioural economics
System 1
• Comprises thinking processes that are intuitive, instinctive, and experience-based.
• Associated with heuristics (cognitive shortcuts), biases (systematic errors), and aversion to change.
System 2
• Comprises thinking processes that are reflective, controlled, deliberative, and analytical.
• Associated with agency, choice, and concentration.
‘Market failure’ refers to
a situation where the market does not deliver an efficient outcome, which generally occurs in cases where private incentives are misaligned with the broader interests of society as a whole
Market power is
exercised when one or more parties can ‘coerce’ others. Examples include:
• Large and powerful suppliers (monopolists or oligopolists) who can extract higher prices from their customers than they could in more competitive markets.
• Large and powerful customers (monopsonists and oligopsonists) who can extract lower prices from their suppliers.
The effects of market power may feed into cost-benefit or effectiveness analysis in the valuation of costs or benefits. However, they may also require regulatory intervention.
‘Public goods’ in health economics are:
goods or services that are ‘non-rivalrous’ (one person consuming the good does not prevent others from also consuming it) and/or ‘non-excludable’ (it is impractical to exclude people from benefiting from the good, once it is made available). A classic example is clean air. Pragmatically, one person breathing clean air does not stop others from doing so, and once clean air is available, it is difficult to prevent anyone from breathing it. Consumers who fail to pay for a public good because they cannot be excluded from its benefits are known as ‘free-riders’.
Health information is often a public good. Population health services such as clean air, food safety and vector control may also be public goods. One person consuming them does not prevent others from doing the same, and once they are provided, it may be difficult to stop anyone from realising the benefits.
What are externalities in health economics?
Externalities occur when the consumption of certain goods and services delivers benefits to or imposes costs upon unrelated third parties. These are positive and negative externalities, respectively. For example:
• Vaccination has the benefit of protecting its direct consumer against illness but may also prevent the spread of disease to others, enabling them to benefit. This is a positive externality.
• Sugar prices typically do not account for the public health costs of excess societal sugar consumption. This constitutes a negative externality.
What are indirect network externalities in health economics?
Network effects are one specific form of externality that health informaticians are likely to encounter.
Indirect network externalities concern complementary goods and services. For example, the value of computer peripherals such as external speakers increases with the range of computers they can operate with. On the other hand, cybersecurity threats are also a complementary ‘service’: they have been rising rapidly in healthcare in recent years, spurred on in part by greater health IT usage. This is an example of a negative, indirect network externality.
Research covering capability requirements for digital transformation across 31 OECD countries and their partner economies suggests
• Despite automation, task-based (non-cognitive, learned on the job) skills remain as important as cognitive (learned through education) skills.
• Digitally-intensive industries reward workers with relatively higher levels of self-organisation, advanced numeracy skills, and communication and socioemotional skills.
• Bundles of synergistic skills are significant in digitally-intensive industries.
The World Economic Forum (WEF) identifies eight specific digital skills domains in which proficiency is likely to be required for people to feel “competent, comfortable, confident and safe in their daily navigation of a digitalised work and life environment”.
• Digital identity (digital citizen, digital co-creator, digital entrepreneur).
• Digital rights (freedom of speech, intellectual property rights, privacy).
• Digital literacy (computational thinking, content creation, critical thinking).
• Digital competencies (online collaboration, online communication, digital footprints).
• Digital emotional intelligence (social and emotional awareness, emotional regulation, empathy).
• Digital security (password protection, internet security, mobile security).
• Digital safety (behavioural risks, content risks, contact risks).
• Digital use (screen time, digital health, community participation).
Irrespective of the methodology used, education and training needs analysis typically involves four stages –
organisational analysis, operational analysis, person analysis, and training requirements analysis. Each of the first three stages aims to identify needs and ensure that the organisation’s needs, operational requirements, and people align. The fourth considers whether education and training are the best options and, if so, consolidates and quality-assures the requirements.
Training needs analysis - Organisational analysis
Analysis of the organisational dimension of training needs aims to clearly articulate what the organisation requires of its people, irrespective of the specific roles they individually play.
Organisational analysis of training needs requires consideration of current performance and intentions (as signalled through strategic planning and other foresight processes).
Techniques for undertaking such organisational analyses include desk research (e.g., the perusal of plans, policies, strategies, performance reports, complaints, etc.), comparative research (e.g., literature searches, competency benchmarking, etc.), staff, consumer, and other stakeholder surveys, interviews, and focus groups. Dialogue, rather than passive data collection, is vital.
Training needs analysis - Operational analysis
Essentially, this analysis examines what the organisation, through its people, needs to do to achieve its strategic objectives.
Operational analysis involves examining the organisation’s activities and how they are performed. In the context of evolution towards digital health, this primarily means examining changes expected to what the organisation does (at an operational level) and how it will do it. However, in terms of current practice, it also means examining current performance, identifying existing strengths (for retention and consolidation) and weaknesses (for improvement), and identifying whether education and training gaps are associated with any of these.
Techniques for undertaking operational analysis include desk research (e.g., the perusal of operational documentation), comparative research (e.g., competency benchmarking), staff, consumer, and other stakeholder surveys, interviews, and focus groups.
Training needs analysis - Person analysis
Knowing the competencies and proficiency levels required enables the assessment of the people involved against these standards. Essentially, this means ascertaining which individuals need education and training – which people perform which roles and undertake which activities? What is their assessed proficiency in terms of the competencies required? What are the gaps?
Techniques for person analysis include desk research (e.g., the perusal of performance assessments and education and training records), direct observation of staff in the role, work samples, and staff interviews.
Training needs analysis - Training requirements analysis
This training requirements analysis step involves working out the optimal strategies for ending up with the right competencies in the right places at the right time. Once these strategies are determined, the aggregate education and training needs will be visible, and prioritisation can occur. At this point, it is worthwhile:
• Conducting a quality assurance exercise to ensure that the competencies required are well enough specified to enable educators and trainers to determine how they can best be delivered.
• Undertaking ‘due diligence’ – i.e., validating that the costs of the education and training proposed are likely to generate sufficient returns to justify them.
barriers to digital health innovation and education (3)
• Lack of content and lack of demand where it does exist.
“Some concern from universities, colleges and accreditation providers about the addition of digital health content in curricula due to ‘curriculum crowding’” and “limited demand for digital health-focused subjects in universities, possibly due to a perception that these are only applicable to health informaticians”.
• Resource constraints.
“Thin margins and, in many areas of the health sector, relatively small business scale (such as small general practices) that impose limitations on investment capacity” and “difficulty accessing training … versions of digital health used in state and territory health systems to provide students with ‘hands-on’ experience”.
• Professional resistance.
“Resistance to innovations that blur existing scope of practice boundaries, or which do not align with [existing] funding models”.
Data design - Data objects, attributes and relationships
data objects – data entities or concepts with common properties that are stored and operated upon during the running of a software program, e.g., actors (such as persons, equipment, etc.), roles (such as citizens, patients, health service providers, etc.), and events (such as consultations, admissions, transfers of care, etc.)
and their attributes – descriptions of the objects’ properties, e.g., first, last, and other names, date of birth, gender, etc. in the case of persons
and their relationships – descriptions of how different data objects may be associated (e.g., a person may be a citizen, a patient, and/or a health service provider) or how data objects may be related to their attributes (e.g., a person may have multiple other names but can have only one date of birth).
This typically includes documentation in the form of an information model.
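As a minimal sketch only – the classes and fields below are hypothetical illustrations, not part of any prescribed information model – these objects, attributes, and relationships could be expressed in code as follows:
```python
# Illustrative sketch only: hypothetical data objects, attributes, and relationships.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Person:                      # data object (an 'actor')
    first_name: str                # simple attributes
    last_name: str
    date_of_birth: date            # a person has exactly one date of birth...
    other_names: List[str] = field(default_factory=list)  # ...but may have several other names

@dataclass
class Patient:                     # data object (a 'role' a person may hold)
    person: Person                 # relationship: a person may be a patient
    medical_record_number: str

@dataclass
class Consultation:                # data object (an 'event')
    patient: Patient               # relationship: a consultation involves a patient...
    provider: Person               # ...and a provider (another role a person may hold)
    occurred_on: date

alice = Person("Alice", "Citizen", date(1980, 5, 17))
visit = Consultation(Patient(alice, "MRN-0001"),
                     Person("Bob", "Clinician", date(1975, 1, 3)),
                     date.today())
```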
In data design, the data objects, attributes, and relationships articulated during the analysis phase of the system life cycle are reconceptualised as: (4)
data types (e.g., alphanumeric – string, text, or formatted text; date/time – date, time, or timestamp; time-series – date/time range, repeat interval, timing/quantity)
data structures (specific ways of organising data in computer programs so that it can be used efficiently and effectively – more on this shortly)
the integrity rules required to ensure the data is what it purports to be, and
the operations that can be applied to the data structures.
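A minimal sketch, using hypothetical names, of how a single attribute might be carried through data design – a defined data type, a data structure to hold it, an integrity rule, and an operation over the structure:
```python
# Illustrative sketch only: one attribute carried through data design.
from datetime import date, datetime

# Data type: date of birth is held as a date, not as free text.
def parse_dob(value: str) -> date:
    return datetime.strptime(value, "%Y-%m-%d").date()   # defined format

# Data structure: a dictionary keyed by a (hypothetical) unique patient identifier.
patients: dict[str, date] = {}

# Integrity rule: a date of birth cannot be in the future.
def add_patient(patient_id: str, dob_text: str) -> None:
    dob = parse_dob(dob_text)
    if dob > date.today():
        raise ValueError("date of birth cannot be in the future")
    patients[patient_id] = dob

# Operation on the structure: find patients born before a given date.
def born_before(cutoff: date) -> list[str]:
    return [pid for pid, dob in patients.items() if dob < cutoff]

add_patient("MRN-0001", "1980-05-17")
add_patient("MRN-0002", "2001-12-03")
print(born_before(date(1990, 1, 1)))   # ['MRN-0001']
```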
characteristics of information that are associated with fitness for purpose include (9)
Provenance
The institutional environment
Relevance
Completeness and validity
Timeliness
Accuracy and precision
Coherence
Interpretability
Accessibility
Analysis of data needs:
consideration of context (regulation, community expectations, applicable data principles, policies, and strategies) and capability (possession of or access to the competencies and resources required to design, develop, manage, and maintain data throughout its life cycle) as well as data functionality (how can it be appropriately used?).
Analysis of data usage:
concerns how, where, when, and in what forms various users can access the data and the access rights they have – e.g., to modify or delete.
New data design and development processes begin when existing data does not meet the identified needs. In brief (4)
• Data items are specified. They are named and defined in meaningful ways, and their attributes are articulated and documented as metadata (information about the data that helps users understand and accurately interpret it). This should consider relevant standards that facilitate safe and effective use, reuse, and interoperability.
• Data capture and quality assurance instruments and processes are developed or otherwise actioned (e.g., some data may be purchased), and data processing (e.g., cleansing, transformation, manipulation, etc.), storage and retrieval mechanisms are developed, tested, and actioned.
• Data presentation formats and delivery channels are developed, tested, and actioned.
• Data usage and utility (value derived) are then monitored and assessed throughout the data lifecycle, with modifications as required.
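A toy sketch of the capture, cleansing/transformation, and presentation steps described above; the field names and coding rules are hypothetical:
```python
# Illustrative sketch only: capture -> cleanse/transform -> present.
raw_records = [
    {"sex": "F", "postcode": "4000 "},      # trailing space to be cleansed
    {"sex": "female", "postcode": "4006"},  # non-standard code to be transformed
]

def cleanse(record: dict) -> dict:
    sex_map = {"f": "F", "female": "F", "m": "M", "male": "M"}
    return {
        "sex": sex_map.get(record["sex"].strip().lower(), "U"),  # 'U' = unknown
        "postcode": record["postcode"].strip(),
    }

clean_records = [cleanse(r) for r in raw_records]

# Presentation: a simple aggregate for a report.
females = sum(1 for r in clean_records if r["sex"] == "F")
print(f"{females} of {len(clean_records)} records are coded F")
```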
Essential steps to appraise the structure and design of health information
- Confirming and validating the different use contexts and ensuring these are documented appropriately.
- Identifying the data, information, knowledge, and wisdom required to inform these uses and the characteristics that would make these fit for purpose
- Assessing design characteristics – is the metadata readily available? Is it well constructed? Does it comply with regulatory requirements? Does it conform to relevant standards? Do the data types permit and facilitate the processing required (e.g., can arithmetic operations be performed if needed)? Do the data structures allow and facilitate the processing necessary (e.g., do they enable ‘fuzzy logic’ to be applied)?
- Assessing usage – does the information satisfy the needs of all its existing and potential users?
Data attributes can be
• Simple – attributes that cannot be split into other attributes (e.g., first name).
• Composite – groupings of other attributes (e.g., name comprising first, last, and other names).
• Derived – attributes that are calculated or determined from other attributes, such as age calculated from date of birth.
• Single-value – attributes only captured once (e.g., first name, with alternatives being aliases).
• Multi-value – attributes that can be captured more than once for an entity (e.g., multiple mobile phone numbers).
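A brief sketch illustrating these kinds of attribute with hypothetical values (the derived age calculation mirrors the example above):
```python
# Illustrative sketch only: kinds of attribute, with hypothetical values.
from datetime import date

person = {
    # simple, single-value attributes
    "first_name": "Alice",
    "date_of_birth": date(1980, 5, 17),
    # composite attribute: a grouping of other attributes
    "name": {"first": "Alice", "last": "Citizen", "other": []},
    # multi-value attribute: may be captured more than once
    "mobile_numbers": ["0400 000 001", "0400 000 002"],
}

# Derived attribute: age calculated from date of birth rather than stored directly.
def age(dob: date) -> int:
    today = date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

person["age"] = age(person["date_of_birth"])
print(person["age"])
```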
Some principles to guide the nature and extent of attribute elaboration include: (5)
• Compliance with relevant regulations and policies, including privacy.
• Restricting the attributes to those reasonably necessary for, or directly related to, the organisation’s purpose and functions.
• Recognition that some attributes might involve sensitive information.
• Representing attributes in meaningful ways (relevant, complete, valid, interpretable) that can be captured with high quality (accurate, precise, coherent).
• Documenting metadata appropriately (such that others can unambiguously and sufficiently understand the data).
Metadata relevant to the specification of attributes includes
• Name – A meaningful title for the attribute.
• Description – An informative description of the attribute.
• Format – A defined format in which the attribute will be expressed.
• Value domain – The fully specified set of permissible values, or
Classification/Terminology/Vocabulary – The fully-specified, external value domain drawn upon (e.g., SNOMED CT-AU - Common v1.5).
• Value domain or Classification/Terminology/Vocabulary owner – The agency responsible for maintaining the value domain or classification/terminology/vocabulary.
• Derivation – The fully specified means of calculating the attribute if it is derived from other data.
• Source – The origin of the attribute values. Bear in mind here that:
o A data object may have attributes captured from multiple sources.
o A multi-value attribute may have values captured from different sources.
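Purely for illustration – the attribute, value domain, and sources shown are hypothetical – such metadata might be recorded as a structured entry like this:
```python
# Illustrative sketch only: metadata for a single (hypothetical) attribute.
attribute_metadata = {
    "name": "Sex at birth",
    "description": "The sex recorded for the person at birth.",
    "format": "Single character code",
    "value_domain": {"F": "Female", "M": "Male", "U": "Unknown"},  # permissible values
    # If an external classification/terminology were used instead, it would be
    # referenced here with its owner, e.g.:
    # "terminology": "SNOMED CT-AU - Common v1.5", "terminology_owner": "<maintaining agency>"
    "derivation": None,            # not derived from other data
    "sources": ["patient registration form", "hospital admission system"],  # may be multiple
}

def is_valid(value: str) -> bool:
    """Check a captured value against the attribute's value domain."""
    return value in attribute_metadata["value_domain"]

print(is_valid("F"), is_valid("X"))   # True False
```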
Data and metadata standards facilitate
• The effective use of data. They typically provide good documentation of data entities, concepts, and their attributes, enabling effective interpretation and highlighting limitations.
• Efficiency in data development and collection – they shortcut the development of data because the specification has already been done. They also shortcut data collection because others have already implemented them and can point to good practice.
• Data quality – the ‘bugs’ have typically been discovered and corrected by the time a data specification becomes a standard, and many different perspectives are usually incorporated in standards development.
• Data sharing and reuse – standardised data entities, concepts, and attributes can be safely exchanged and assimilated across different systems that use them appropriately.
In general, appraisal questions relating to the potential of a new data source or emerging technology explore (5)
• Appropriateness – Will the new data source or emerging technology be suitable for and compatible with the intended purpose and context?
• Efficiency – Can the new data source or emerging technology be generated and/or applied within acceptable resource usage limits?
• Effectiveness – Will using the new data source or emerging technology achieve the desired purpose (how likely is it to generate the intended outputs and outcomes)?
• Cost-effectiveness – More precisely, can the value or benefit of using the new data source or emerging technology exceed the cost of producing them? To what extent – i.e., are there alternate uses of the resources that could generate higher value?
• Implementation – Can the new data source or emerging technology be implemented, in practice, as required to achieve the above? Are the underlying assumptions (and there are always underlying assumptions) valid – e.g., does the organisation have the requisite capabilities? How likely is it that the data suppliers will behave as expected?
appraisal of relevance requires
• A fit for purpose definition of relevance, preferably articulating some characteristics associated with it.
• An understanding of purpose and context. The example in F.8.1.3 above illustrates how relevance is purpose dependent. Context is similar. For example, a data collection on a tropical disease or an emerging vector control technology for insects found in tropical zones may be highly relevant in Northern Queensland but irrelevant in Southern Tasmania.
• A frame of reference.
• Evidence to support claims of relevance.
• Methods for evaluating the evidence to inform a decision on relevance. This is addressed in chapter A.5 (evaluating evidence to inform decisions).
the ‘5 Vs’ model
• Volume – The volume of data associated with these four sources alone is enormous. Research firm IDC predicts that the ‘global datasphere’ will grow from less than 20 zettabytes in 2016 to 175 zettabytes (175 trillion gigabytes) by 2025. Furthermore, IDC predicts health to be the fastest-growing contributor to the datasphere over its forecast period, with a 36% compound annual growth rate in data holdings (Reinsel, Gantz & Rydning, 2018).
• Velocity – IDC also predicts that, across all industries, the proportion of data that is captured in real-time will double between 2017 and 2025, from 15% to nearly 30% (Reinsel, Gantz & Rydning, 2018). Data captured via the IoHT and consumer tech are examples of data that is captured in real-time.
• Variety – It should be evident that the array of data coming from these new, high-volume sources is vast. In the past, health services have primarily controlled the data they have captured. However, IoHT and consumer tech data are generated outside the health sector, and genomic and unstructured data contain extensive variety.
• Veracity – Again, IoHT and consumer tech data are generated outside the health sector, and their veracity may be inconsistent. This is why data provenance (knowing the data’s pedigree) is now important. But unstructured data may also contain all sorts of abbreviations, icons, local terms, etc., and emerging genomic data standards are not universally adhered to. So, veracity cannot be taken for granted for any of these new sources.
• Value – Nonetheless, these data are potentially of high value if we accept that health and wellbeing are essentially (e.g., 80%, per CSIRO) determined outside of clinical care settings.
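As a rough illustration of what a compound annual growth rate implies, using only the figures cited above (treating the forecast window as the nine years from 2016 to 2025 is an assumption):
```python
# Illustrative arithmetic only: compound annual growth rate (CAGR) implied by the
# cited global datasphere figures (~20 ZB in 2016 growing to 175 ZB by 2025).
start, end, years = 20, 175, 2025 - 2016

cagr = (end / start) ** (1 / years) - 1
print(f"Implied global CAGR: {cagr:.0%}")   # roughly 27% per year

# Health data is forecast to grow faster still, at ~36% per year; at that rate a
# data holding multiplies by about 1.36 ** years over the same window.
print(f"Growth multiple at 36% p.a. over {years} years: {1.36 ** years:.0f}x")
```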
Data governance is
a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models which describe who can take what actions, with what information, and when, under what circumstances, using what methods
Effective data governance ensures that
• Data management meets the needs of relevant stakeholders, who are meaningfully engaged to determine objectives and overall direction of data/information activities.
• A clear plan is made for data/information management, with effective prioritisation and decision making.
• Data/information resources are regularly monitored and evaluated according to the overall direction and objectives.
Effective data governance is:
• Accountable. It ensures that decisions taken in respect of data and information are taken by those with the responsibility for them, and those responsible are answerable to the organisation and interested parties for their decisions. It ensures that accountabilities, obligations, and the capabilities required are all aligned.
• Compliant. It ensures that decisions and actions taken are consistent with regulatory requirements. It includes respecting data sovereignty – the jurisdictional control or legal authority that can be asserted over data because its sourcing or physical location is within jurisdictional boundaries.
• Coherent. It ensures that data decision making, capabilities, resource allocations, etc., are well aligned with other dimensions of enterprise governance and that enterprise data architecture is well aligned with the business, applications, and technology architectures.
• Open and transparent. Openness may also be described as inclusiveness – the practice of encouraging and facilitating involvement from all interested parties if they wish to be involved. Transparency means making decisions and information about data available to interested parties.
• Responsive and equitable. It ensures that data management recognises and serves the needs of all interested parties, that trade-offs between competing interests are principled, and that it responds to changes in context, circumstances, or directions.
• Ethical. It ensures that the ethical implications of data-related decisions and actions are recognised, understood, and considered, and that appropriate ethical standards are followed.
• Risk managed. It ensures that relevant risks are recognised, understood, and mitigated appropriately. It also ensures that risks are balanced (e.g., security and access risks may conflict), and the enterprise’s risk appetite and tolerances are respected.
• Cultural. It fosters a positive organisational climate that understands, internalises, and acts at all times to enact the enterprise’s data ethos, responsibilities, and context.
• Resourced. It ensures that the organisation’s capabilities and access to resources are consistent with its aspirations, strategies, and responsibilities.
• Agile. It ensures that data governance ethos, characteristics, requirements, structures, processes, etc. – comprising the data governance framework – are adaptable to contextual changes within appropriate time frames.
• Assessed and evaluated. It ensures that data governance and management performance are regularly monitored and adjusted as required, and periodic evaluation takes place to ensure data governance is adding appropriate value to the enterprise.
The elements of data governance – the features that comprise a data governance framework or system – include the following
• Strategy and planning. Data and information are assets, and the need for them, their acquisition, lifecycle management and eventual disposal should be strategised and planned for just as for any other asset class. Aims include data coherence (ensuring enterprise-wide congruence of purpose, design, and effectiveness), optimal returns on investment, road mapping and prioritisation.
• Data governance principles. No governance framework can cater for every possible circumstance, so the role of principles is to guide decision-makers as to ‘what’s right’. They describe the enterprise’s values and beliefs with respect to data and information.
• Roles, responsibilities, and accountabilities. People’s decisions and actions dictate whether and how data and information are captured and used. Articulating roles, obligations, and accountabilities provides the basis for controlling behaviours.
People undertake roles. Accountability means being both responsible and answerable (liable) for something happening, whereas being responsible means being expected to ensure the thing happens. Responsibility can be delegated or outsourced, but accountability cannot.
• Capabilities. The enterprise needs a workforce with the requisite knowledge, skills and experience, appropriate tools and technology, and sufficient funding to undertake effective data governance.
Common data governance roles include:
o Enterprise data sponsor – accountable and responsible for ensuring effective data governance and management frameworks, approving strategies, policies, protocols, and guidelines in relation to data assets, providing appropriate resources, compliance, and the filling of other data governance roles. May delegate some or all these responsibilities (but remains accountable).
o Data governance committee – responsible for advising the enterprise data sponsor on the data governance framework and its usage.
o Data management committee (may be incorporated within data governance) – responsible to the enterprise data sponsor for oversight and coordination of data management activities across the enterprise.
o Data sponsors/owners – responsible and accountable for approving strategies, policies, protocols, and guidelines in relation to a subset of data assets, providing appropriate resources, compliance, and the filling of subordinate data governance roles.
o Data stewards – responsible for data content, context, and associated business rules. This typically includes data requirement management, metadata definition and management, data quality framework, and data acquisition (including associated contract management where the data is externally procured).
o Data custodians – responsible for the storage and transport of, and access to, the data and applying business rules. This typically includes security, availability and access management, the application of technical standards and policies, and master data management.
o Data users – responsible for safe, authorised, appropriate and effective use of the data. Critical aspects of the user role include maintaining privacy and security, reporting quality issues and data breaches, and complying with enterprise constraints on the use of the data.
Typical data management functions include
• Data architecture development and maintenance – definition of the information flows and storage in ways that optimise their interactions at the enterprise level (i.e., ensuring the whole is greater than the sum of the parts), and the controls applied to them to ensure optimisation. This includes assuring the integrability and interoperability of data.
Data architecture also involves ensuring coherence between business, data, applications, and technology architectures and ensuring data is readily accessible as appropriate.
• Data modelling, design and development and maintenance – the analysis of requirements, data design and validation of design with stakeholders, development or acquisition of the data capture, storage, processing and dissemination capabilities, tools/technologies and processes, the testing of these, and maintenance of these over the entire data lifecycle.
• Metadata management – collecting, categorising, maintaining, integrating, controlling, managing, and ensuring the availability of metadata. Metadata includes information about enterprise data such as its description, lineage, usage, relationships, ownership, and status.
• Assuring data sovereignty – establishing the jurisdictional control or legal authorities that can be asserted over data, ensuring these conform to requirements, and establishing processes to assure sovereignty requirements are adhered to.
• Data storage and operations – ensuring data storage environments are secure, appropriate, and enable information continuity, sharing and re-use, commensurate with the enterprise’s needs and context.
• Assuring data quality – establishing, implementing, and monitoring the standards and procedures via which data quality is made to conform to requirements.
• Managing data security – establishing, implementing, and monitoring standards, policies, infrastructure, and procedures to protect privacy and confidentiality and assure business continuity at all data lifecycle stages.
• Managing reference and master data. Reference data is data that elaborates other data, such as classification and terminology systems and value sets. Master data provides the ‘source of truth’ drawn upon by other systems.
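A minimal sketch, with hypothetical codes and records, of the distinction between reference data and master data described in the last bullet:
```python
# Illustrative sketch only: reference data vs. master data.

# Reference data: a value set that elaborates codes used elsewhere.
sex_codes = {"F": "Female", "M": "Male", "U": "Unknown"}

# Master data: the 'source of truth' record other systems draw upon.
master_patient_index = {
    "MRN-0001": {"name": "Alice Citizen", "sex": "F"},
}

# Another system holds only the code and the identifier...
lab_result = {"patient_id": "MRN-0001", "sex": "F", "test": "HbA1c"}

# ...and uses the reference and master data to validate and enrich its record.
assert lab_result["sex"] in sex_codes
patient = master_patient_index[lab_result["patient_id"]]
print(patient["name"], sex_codes[lab_result["sex"]])   # Alice Citizen Female
```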
Indigenous data sovereignty can be defined as
“the right of Indigenous Peoples to own, control, access and possess data that derive from them, and which pertain to their members, knowledge systems, customs or territories”
The Maiam nayri Wingara Indigenous Data Sovereignty Collective developed an Australian set of Indigenous Data Governance protocols and principles (5).
These five principles assert the right of Aboriginal and Torres Strait Islander people to:
• Exercise control of the data ecosystem, including creation, development, stewardship, analysis, dissemination, and infrastructure.
• Data that is contextual and disaggregated.
• Data that is relevant and empowers sustainable self-determination and effective self-governance.
• Data structures that are accountable to Indigenous peoples and First Nations.
• Data that is protective and respects our individual and collective interests.
The NHMRC Guidelines reflect six core values important to all Aboriginal and Torres Strait Islander peoples
spirit and integrity, cultural continuity, equity, reciprocity, respect, and responsibility
6 components of an information system
hardware, software, and networks.
data, people, and processes
Computer software is often categorised as programming, system, or application software, malware, or middleware. Explain the differences.
• Programming software comprises tools that assist programmers in writing computer programs. These tools include text editors, debuggers, compilers, and interpreters:
o Compilers translate source code written in a programming language into the language the computer can deal with (often in binary form).
o Interpreters execute source code or precompiled code or translate source code into an intermediate language before execution.
• System Software refers to the computer programs used to start and run computer systems and networks. It includes operating systems, device drivers and utilities.
• Application software refers to computer programs that perform tasks for users. Examples include task-oriented programs such as web browsers, word processors, spreadsheets, and function-oriented ones such as practice management software and rostering programs.
• Malware – shorthand for ‘malicious software’. Malware includes computer viruses, worms, trojan horses, scareware, ransomware, and spyware.
• Middleware – software that connects or mediates between software components in a distributed computing environment.
Network topologies
Common network topologies include bus, ring, star, mesh, tree (hierarchical), and hybrid arrangements, describing how the nodes of a network are connected to one another.
Database structure - Hierarchical data model
This is one of the earliest and simplest data models: data is organised as a tree of parent-child (one-to-many) relationships. It has drawbacks, however. It is rigid – if another node or relationship needs to be added, the whole model may need to be reconfigured. It is best suited to one-to-one and one-to-many relationships; many-to-many relationships are much more challenging to depict.
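A toy sketch of a hierarchical record using hypothetical hospital data; note how a many-to-many relationship (one clinician working across two wards) forces duplication under each parent:
```python
# Illustrative sketch only: a hierarchical model as nested parent-child records.
hospital = {
    "wards": [
        {
            "name": "Ward A",
            "patients": [                       # one-to-many: ward -> patients
                {"mrn": "MRN-0001", "name": "Alice Citizen"},
            ],
            "clinicians": [{"id": "C-01", "name": "Dr Bob"}],
        },
        {
            "name": "Ward B",
            "patients": [],
            # Dr Bob also works here, so the same record must be duplicated:
            "clinicians": [{"id": "C-01", "name": "Dr Bob"}],
        },
    ]
}

# Navigation follows the hierarchy from the root down.
for ward in hospital["wards"]:
    for clinician in ward["clinicians"]:
        print(ward["name"], clinician["name"])
```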
Database structure - Relational data model
The relational data model arranges data in linked (related) two-dimensional tables. Each table row holds a record with a unique identifier (its ‘key’), while the columns contain fields (representing data attributes).
Relational databases are highly efficient, minimising redundancy and maximising maintainability and flexibility.
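Re-expressing the same hypothetical example relationally – each table is a set of rows with keys, and the many-to-many relationship is resolved through a linking table instead of duplicated records:
```python
# Illustrative sketch only: two-dimensional tables linked by keys.
clinicians = [
    {"clinician_id": "C-01", "name": "Dr Bob"},
]
wards = [
    {"ward_id": "W-A", "name": "Ward A"},
    {"ward_id": "W-B", "name": "Ward B"},
]
# Linking table: resolves the many-to-many relationship between clinicians and wards.
ward_assignments = [
    {"clinician_id": "C-01", "ward_id": "W-A"},
    {"clinician_id": "C-01", "ward_id": "W-B"},
]

# A simple 'join': list each clinician with the wards they work in.
for c in clinicians:
    ward_ids = [a["ward_id"] for a in ward_assignments if a["clinician_id"] == c["clinician_id"]]
    names = [w["name"] for w in wards if w["ward_id"] in ward_ids]
    print(c["name"], "->", ", ".join(names))
```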
Database structure - Database schema
A schema is effectively a blueprint for a particular database, describing how the database should be implemented – for example, with specific constraints (rules), using specific data types, etc.
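One way to make the ‘blueprint’ idea concrete is a relational schema declaring data types and constraints; the tables and rules below are hypothetical and use Python’s built-in sqlite3 module purely for illustration:
```python
# Illustrative sketch only: a schema expressed as data types plus constraints.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforce referential integrity in SQLite
conn.executescript("""
CREATE TABLE patient (
    mrn           TEXT PRIMARY KEY,                      -- unique identifier (key)
    family_name   TEXT NOT NULL,                         -- constraint: must be present
    sex           TEXT CHECK (sex IN ('F', 'M', 'U')),   -- constraint: value domain
    date_of_birth TEXT                                   -- ISO-8601 date held as text
);
CREATE TABLE consultation (
    consultation_id INTEGER PRIMARY KEY,
    mrn             TEXT NOT NULL REFERENCES patient(mrn),  -- relationship to patient
    occurred_on     TEXT NOT NULL
);
""")

conn.execute("INSERT INTO patient VALUES ('MRN-0001', 'Citizen', 'F', '1980-05-17')")
conn.execute("INSERT INTO consultation (mrn, occurred_on) VALUES ('MRN-0001', '2024-01-15')")
print(conn.execute("SELECT COUNT(*) FROM consultation").fetchone()[0])   # 1
```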