CHIA F Flashcards

1
Q

Kahneman and Tversky disrupted mainstream economics by demonstrating that decisions are not always optimal. Their ‘prospect theory’ showed that

A

humans’ willingness to take risks is context-dependent – i.e., it is influenced by the way choices are framed (Samson, 2014). Essentially, we dislike losses more than we like an equivalent gain. The pain of giving something up is greater than the pleasure of receiving it.

2
Q

According to the dual-system theory of behavioural economics

A

System 1
• Comprises thinking processes that are intuitive, instinctive, and experience-based.
• Associated with heuristics (cognitive shortcuts), biases (systematic errors), and aversion to change.
System 2
• Comprises thinking processes that are reflective, controlled, deliberative, and analytical.
• Associated with agency, choice, and concentration.

3
Q

‘Market failure’ refers to

A

a situation where the market does not deliver an efficient outcome, which generally occurs in cases where private incentives are misaligned with the broader interests of society as a whole

4
Q

Market power is

A

exercised when one or more parties can ‘coerce’ others. Examples include:
• Large and powerful suppliers (monopolists or oligopolists) who can extract higher prices from their customers than they could in more competitive markets.
• Large and powerful customers (monopsonists or oligopsonists) who can extract lower prices from their suppliers.
The effects of market power may feed into cost-benefit or cost-effectiveness analysis in the valuation of costs or benefits. However, they may also require regulatory intervention.

5
Q

‘Public goods’ in health economics are:

A

goods or services that are ‘non-rivalrous’ (one person consuming the good does not prevent others from also consuming it) and/or ‘non-excludable’ (it is impractical to exclude people from benefiting from the good, once it is made available). A classic example is clean air. Pragmatically, one person breathing clean air does not stop others from doing so, and once clean air is available, it is difficult to prevent anyone from breathing it. Consumers who fail to pay for a public good because they cannot be excluded from its benefits are known as ‘free-riders’.

Health information is often a public good. Population health services such as clean air, food safety and vector control may also be public goods. One person consuming them does not prevent others from doing the same, and once they are provided, it may be difficult to stop anyone from realising the benefits.

6
Q

What are externalities in health economics

A

Externalities occur when the consumption of certain goods and services deliver benefits to or impose costs upon unrelated third parties. These are positive and negative externalities, respectively. For example:
• Vaccination has the benefit of protecting its direct consumer against illness but may also prevent the spread of disease to others, enabling them to benefit. This is a positive externality.
• Sugar prices typically do not account for the public health costs of excess societal sugar consumption. This constitutes a negative externality.

7
Q

What are indirect network externalities in health economics

A

Network effects are one specific form of externality that health informaticians are likely to encounter.

Indirect network externalities concern complementary goods and services. For example, the value of a computer peripheral such as external speakers increases with the range of computers they can operate with. On the other hand, cybersecurity threats are also complementary services. Cybersecurity threats have been rising rapidly in healthcare in recent years, spurred on in part by greater health IT usage. This is an example of a negative, indirect network externality.

8
Q

Research covering capability requirements for digital transformation across 31 OECD countries and their partner economies suggests

A

• Despite automation, task-based (non-cognitive, learned on the job) skills remain as important as cognitive (learned through education) skills.
• Digitally-intensive industries reward workers with relatively higher levels of self-organisation, advanced numeracy skills, and communication and socioemotional skills.
• Bundles of synergistic skills are significant in digitally-intensive industries.

9
Q

The World Economic Forum (WEF) identifies eight specific digital skills domains in which proficiency is likely to be required for people to feel “competent, comfortable, confident and safe in their daily navigation of a digitalised work and life environment”.

A

• Digital identity (digital citizen, digital co-creator, digital entrepreneur).
• Digital rights (freedom of speech, intellectual property rights, privacy).
• Digital literacy (computational thinking, content creation, critical thinking).
• Digital competencies (online collaboration, online communication, digital footprints).
• Digital emotional intelligence (social and emotional awareness, emotional regulation, empathy).
• Digital security (password protection, internet security, mobile security).
• Digital safety (behavioural risks, content risks, contact risks).
• Digital use (screen time, digital health, community participation).

10
Q

Irrespective of the methodology used, education and training needs analysis typically involve four stages –

A

organisational analysis, operational analysis, person analysis, and training requirements analysis. Each of the first three stages aims to identify needs and ensure that the organisation’s needs, operational requirements, and people align. The fourth considers whether education and training are the best options and, if so, consolidates and quality-assures the requirements.

11
Q

Training needs analysis - Organisational analysis

A

Analysis of the organisational dimension of training needs aims to clearly articulate what the organisation requires of its people, irrespective of the specific roles they individually play.

Organisational analysis of training needs requires consideration of current performance and intentions (as signalled through strategic planning and other foresight processes).

Techniques for undertaking such organisational analyses include desk research (e.g., the perusal of plans, policies, strategies, performance reports, complaints, etc.), comparative research (e.g., literature searches, competency benchmarking, etc.), staff, consumer, and other stakeholder surveys, interviews, and focus groups. Dialogue, rather than passive data collection, is vital.

12
Q

Training needs analysis - Operational analysis

A

Essentially, this analysis examines what the organisation, through its people, needs to do to achieve its strategic objectives.

Operational analysis involves examining the organisation’s activities and how they are performed. In the context of evolution towards digital health, this primarily means examining changes expected to what the organisation does (at an operational level) and how it will do it. However, in terms of current practice, it also means examining current performance, identifying existing strengths (for retention and consolidation) and weaknesses (for improvement), and identifying whether education and training gaps are associated with any of these.

Techniques for undertaking operational analysis include desk research (e.g., the perusal of operational documentation), comparative research (e.g., competency benchmarking), staff, consumer, and other stakeholder surveys, interviews, and focus groups.

13
Q

Training needs analysis - Person analysis

A

Knowing the competencies and proficiency levels required enables the assessment of the people involved against these standards. Essentially, this means ascertaining which individuals need education and training – which people perform which roles and undertake which activities? What is their assessed proficiency in terms of the competencies required? What are the gaps?

Techniques for person analysis include desk research (e.g., the perusal of performance assessments and education and training records), direct observation of staff in the role, work samples, and staff interviews.

14
Q

Training needs analysis - Training requirements analysis

A

This training requirements analysis step involves working out the optimal strategies for ending up with the right competencies in the right places at the right time. Once these strategies are determined, the aggregate education and training needs will be visible, and prioritisation can occur. At this point, it is worthwhile:
• Conducting a quality assurance exercise to ensure that the competencies required are well enough specified to enable educators and trainers to determine how they can best be delivered.
• Undertaking ‘due diligence’ – i.e., validating that the costs of the education and training proposed are likely to generate sufficient returns to justify them.

15
Q

barriers to digital health innovation and education (3)

A

• Lack of content, and lack of demand where content does exist.
“Some concern from universities, colleges and accreditation providers about the addition of digital health content in curricula due to ‘curriculum crowding’” and “limited demand for digital health-focused subjects in universities, possibly due to a perception that these are only applicable to health informaticians”.
• Resource constraints.
“Thin margins and, in many areas of the health sector, relatively small business scale (such as small general practices) that impose limitations on investment capacity” and “difficulty accessing training … versions of digital health used in state and territory health systems to provide students with ‘hands-on’ experience”.
• Professional resistance.
“Resistance to innovations that blur existing scope of practice boundaries, or which do not align with [existing] funding models”.

16
Q

Data design - Data objects, attributes and relationships

A

data objects (data entities or concepts with common properties which are stored and operated upon during the running of a software program), e.g., actors (such as persons, equipment, etc.), roles (such as citizens, patients, health service providers, etc.) and events (such as consultations, admissions, transfers of care, etc.);
their attributes (descriptions of the objects’ properties), e.g., first, last, and other names, date of birth, gender, etc. in the case of persons;
and their relationships (descriptions of how different data objects may be associated, e.g., a person may be a citizen, a patient and/or a health service provider, or of how data objects relate to their attributes, e.g., a person may have multiple other names but can have only one date of birth).

This typically includes documentation in the form of an information model.
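
For illustration only (not from the source), a minimal Python sketch of how such an information model might be expressed; the Person and Consultation classes and their fields are hypothetical:

from dataclasses import dataclass
from datetime import date

@dataclass
class Person:                      # a data object (an actor)
    family_name: str               # attribute
    given_names: list[str]         # attribute (several given names are allowed)
    date_of_birth: date            # attribute (only one permitted)

@dataclass
class Consultation:                # a data object (an event)
    patient: Person                # relationship: a person acting in the patient role
    provider: Person               # relationship: a person acting in the provider role
    held_on: date                  # attribute of the event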

17
Q

In data design, the data objects, attributes, and relationships articulated during the analysis phase of the system life cycle are reconceptualised as: (4)

A

data types (e.g., alphanumeric – string, text, or formatted text; date/time – date, time, or timestamp; time-series – date/time range, repeat interval, timing/quantity)
data structures (specific ways of organising data in computer programs so that it can be used efficiently and effectively – more on this shortly)
the integrity rules required to ensure the data is what it purports to be, and
the operations that can be applied to the data structures.
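
A hedged illustration (not from the source) of the same four ideas in a few lines of Python – a data type, an integrity rule, a data structure, and operations applied to it; the field names are invented:

from datetime import date, datetime

dob_text = "1980-02-29"                      # data type: a date captured as an ISO 8601 string

# Integrity rule: the value must parse as a real calendar date and cannot lie in the future.
dob = datetime.strptime(dob_text, "%Y-%m-%d").date()
assert dob <= date.today(), "date of birth cannot be in the future"

admissions = []                                       # data structure: a list of (patient_id, admission date) tuples
admissions.append(("MRN-0001", date(2023, 6, 1)))     # operation: insert
admissions.sort(key=lambda row: row[1])               # operation: sort by admission date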

18
Q

characteristics of information that are associated with fitness for purpose include (9)

A

Provenance
The institutional environment
Relevance
Completeness and validity
Timeliness
Accuracy and precision
Coherence
Interpretability
Accessibility

19
Q

Analysis of data needs:

A

consideration of context (regulation, community expectations, applicable data principles, policies, and strategies) and capability (possession of or access to the competencies and resources required to design, develop, manage, and maintain data throughout its life cycle) as well as data functionality (how can it be appropriately used?).

20
Q

Analysis of data usage:

A

concerns how, where, when, and in what forms various users can access the data and the access rights they have – e.g., to modify or delete.

21
Q

New data design and development processes begin when existing data does not meet the identified needs. In brief (4)

A

• Data items are specified. They are named and defined in meaningful ways, and their attributes are articulated and documented as metadata (information about the data that helps users understand and accurately interpret it). This should consider relevant standards that facilitate safe and effective use, reuse, and interoperability.
• Data capture and quality assurance instruments and processes are developed or otherwise actioned (e.g., some data may be purchased), and data processing (e.g., cleansing, transformation, manipulation, etc.), storage and retrieval mechanisms are developed, tested, and actioned.
• Data presentation formats and delivery channels are developed, tested, and actioned.
• Data usage and utility (value derived) are then monitored and assessed throughout the data lifecycle, with modifications as required.
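
A brief, illustrative sketch of the capture and quality-assurance step described in the second bullet (the field names and cleansing rules below are hypothetical, not from the source):

raw_records = [
    {"mrn": " 0001 ", "sex": "F", "sbp": "135"},
    {"mrn": "0002", "sex": "female", "sbp": "-5"},   # implausible value to be rejected
]

def cleanse(record):
    """Trim identifiers, standardise codes, and flag implausible measurements."""
    cleaned = {
        "mrn": record["mrn"].strip(),
        "sex": {"f": "F", "female": "F", "m": "M", "male": "M"}.get(record["sex"].lower()),
        "sbp": int(record["sbp"]),
    }
    if not 40 <= cleaned["sbp"] <= 300:              # simple plausibility (quality) rule
        cleaned["sbp"] = None
    return cleaned

clean_records = [cleanse(r) for r in raw_records]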

22
Q

Essential steps to appraise the structure and design of health information

A
  1. Confirming and validating the different use contexts and ensuring these are documented appropriately.
  2. Identifying the data, information, knowledge, and wisdom required to inform these uses and the characteristics that would make these fit for purpose
  3. Assessing design characteristics – is the metadata readily available? Is it well constructed? Does it comply with regulatory requirements? Does it conform to relevant standards? Do the data types permit and facilitate the processing required (e.g., can arithmetic operations be performed if needed)? Do the data structures allow and facilitate the processing necessary (e.g., do they enable ‘fuzzy logic’ to be applied)?
  4. Assessing usage – does the information satisfy the needs of all its existing and potential users?
23
Q

Data attributes can be

A

• Simple – attributes that cannot be split into other attributes (e.g., first name).
• Composite – groupings of other attributes (e.g., name comprising first, last, and other names).
• Derived – attributes that are calculated or determined from other attributes, such as age calculated from date of birth.
• Single-value – attributes only captured once (e.g., first name, with alternatives being aliases).
• Multi-Value – attributes that can be captured more than once for an entity (e.g., multiple mobile phone numbers).
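
To make these distinctions concrete, a small illustrative Python sketch (the class and field names are hypothetical, not from the source):

from dataclasses import dataclass, field
from datetime import date

@dataclass
class PersonRecord:
    first_name: str                                           # simple, single-value attribute
    last_name: str                                            # simple, single-value attribute
    date_of_birth: date                                       # simple, single-value attribute
    mobile_numbers: list[str] = field(default_factory=list)   # multi-value attribute

    @property
    def full_name(self) -> str:                               # composite attribute (grouping of others)
        return f"{self.first_name} {self.last_name}"

    @property
    def age(self) -> int:                                      # derived attribute (calculated from date of birth)
        today = date.today()
        return today.year - self.date_of_birth.year - (
            (today.month, today.day) < (self.date_of_birth.month, self.date_of_birth.day)
        )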

24
Q

Some principles to guide the nature and extent of attribute elaboration include: (5)

A

• Compliance with relevant regulations and policies, including privacy.
• Restricting the attributes to those reasonably necessary for, or directly related to, the organisation’s purpose and functions.
• Recognition that some attributes might involve sensitive information.
• Representing attributes in meaningful ways (relevant, complete, valid, interpretable) that can be captured with high quality (accurate, precise, coherent).
• Documenting metadata appropriately (such that others can unambiguously and sufficiently understand the data).

25
Q

Metadata relevant to the specification of attributes includes

A

• Name – A meaningful title for the attribute.
• Description – An informative description of the attribute.
• Format – A defined format in which the attribute will be expressed.
• Value domain – The fully specified set of permissible values, or
Classification/Terminology/Vocabulary – The fully-specified, external value domain drawn upon (e.g., SNOMED CT-AU - Common v1.5).
• Value domain or Classification/Terminology/Vocabulary owner – The agency responsible for maintaining the value domain or classification/terminology/vocabulary.
• Derivation – The fully specified means of calculating the attribute if it is derived from other data.
• Source – The origin of the attribute values. Bear in mind here that:
o A data object may have attributes captured from multiple sources.
o A multi-value attribute may have values captured from different sources.
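
For illustration (the values below are invented), such a specification could be recorded as a simple metadata dictionary in Python:

# Hypothetical metadata record for a 'date of birth' attribute.
date_of_birth_metadata = {
    "name": "Date of birth",
    "description": "The date on which the person was born.",
    "format": "YYYY-MM-DD (ISO 8601 date)",
    "value_domain": "Any valid calendar date not later than the date of capture",
    "value_domain_owner": "Example Health Service (illustrative only)",
    "derivation": None,                        # captured directly, not derived from other data
    "source": "Patient registration form",
}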

26
Q

Data and metadata standards facilitate

A

• The effective use of data. They typically provide good documentation of data entities, concepts, and their attributes, enabling effective interpretation and highlighting limitations.
• Efficiency in data development and collection – they shortcut the development of data because the specification has already been done. They also shortcut data collection because others have already implemented them and can point to good practice.
• Data quality – the ‘bugs’ have typically been discovered and corrected by the time a data specification becomes a standard, and many different perspectives are usually incorporated in standards development.
• Data sharing and reuse – standardised data entities, concepts, and attributes can be safely exchanged and assimilated across different systems that use them appropriately.

27
Q

In general, appraisal questions relating to the potential of a new data source or emerging technology explore (5)

A

• Appropriateness – Will the new data source or emerging technology be suitable for and compatible with the intended purpose and context?
• Efficiency – Can the new data sources or emerging technology be generated and/or applied within acceptable resource usage limits?
• Effectiveness – Will using the new data source or emerging technology achieve the desired purpose (how likely is it to generate the intended outputs and outcomes)?
• Cost-effectiveness – More precisely, can the value or benefit of using the new data source or emerging technology exceed the cost of producing them? To what extent – i.e., are there alternate uses of the resources that could generate higher value?
• Implementation – Can the new data source or emerging technology be implemented, in practice, as required to achieve the above? Are the underlying assumptions (and there are always underlying assumptions) valid – e.g., does the organisation have the requisite capabilities? How likely is it that the data suppliers will behave as expected?

28
Q

appraisal of relevance requires

A

• A fit for purpose definition of relevance, preferably articulating some characteristics associated with it.
• An understanding of purpose and context. The example in F.8.1.3 above illustrates how relevance is purpose dependent. Context is similar. For example, a data collection on a tropical disease or an emerging vector control technology for insects found in tropical zones may be highly relevant in Northern Queensland but irrelevant in Southern Tasmania.
• A frame of reference.
• Evidence to support claims of relevance.
• Methods for evaluating the evidence to inform a decision on relevance. This is addressed in chapter A.5 (evaluating evidence to inform decisions).

29
Q

The ‘5 Vs’ model

A

• Volume – The volume of data associated with these four sources alone is enormous. Research firm IDC predicts that the ‘global datasphere’ will grow from less than 20 zettabytes in 2016 to 175 zettabytes (175 trillion gigabytes) by 2025. Furthermore, IDC predicts health to be the fastest-growing contributor to the datasphere over its forecast period, with a 36% compound annual growth rate in data holdings (Reinsel, Gantz & Rydning, 2018).
• Velocity – IDC also predicts that, across all industries, the proportion of data that is captured in real-time will double between 2017 and 2025, from 15% to nearly 30% (Reinsel, Gantz & Rydning, 2018). Data captured via the IoHT and consumer tech are examples of data that is captured in real-time.
• Variety – It should be evident that the array of data coming from these new, high-volume sources is vast. In the past, health services have primarily controlled the data they have captured. However, IoHT and consumer tech data are generated outside the health sector, and genomic and unstructured data contain extensive variety.
• Veracity – Again, IoHT and consumer tech data are generated outside the health sector, and their veracity may be inconsistent. This is why data provenance (knowing the data’s pedigree) is now important. But unstructured data may also contain all sorts of abbreviations, icons, local terms, etc., and emerging genomic data standards are not universally adhered to. So, veracity cannot be taken for granted for any of these new sources.
• Value – Nonetheless, these data are potentially of high value if we accept that health and wellbeing are essentially (e.g., 80%, per CSIRO) determined outside of clinical care settings.

30
Q

Data governance is

A

a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models which describe who can take what actions, with what information, and when, under what circumstances, using what methods

31
Q

Effective data governance ensures that

A

• Data management meets the needs of relevant stakeholders, who are meaningfully engaged to determine objectives and overall direction of data/information activities.
• A clear plan is made for data/information management, with effective prioritisation and decision making.
• Data/information resources are regularly monitored and evaluated according to the overall direction and objectives.

32
Q

Effective data governance is:

A

• Accountable. It ensures that decisions taken in respect of data and information are taken by those with the responsibility for them, and those responsible are answerable to the organisation and interested parties for their decisions. It ensures that accountabilities, obligations, and the capabilities required are all aligned.
• Compliant. It ensures that decisions and actions taken are consistent with regulatory requirements. It includes respecting data sovereignty – the jurisdictional control or legal authority that can be asserted over data because its sourcing or physical location is within jurisdictional boundaries.
• Coherent. It ensures that data decision making, capabilities, resource allocations, etc., are well aligned with other dimensions of enterprise governance and that enterprise data architecture is well aligned with the business, applications, and technology architectures.
• Open and transparent. Openness may also be described as inclusiveness – the practice of encouraging and facilitating involvement from all interested parties if they wish to be involved. Transparency means making decisions and information about data available to interested parties.
• Responsive and equitable. It ensures that data management recognises and serves the needs of all interested parties, that trade-offs between competing interests are principled, and that it responds to changes in context, circumstances, or directions.
• Ethical. It ensures that the ethical implications of data-related decisions and actions are recognised, understood, and considered, and that appropriate ethical standards are followed.
• Risk managed. It ensures that relevant risks are recognised, understood, and mitigated appropriately. It also ensures that risks are balanced (e.g., security and access risks may conflict), and the enterprise’s risk appetite and tolerances are respected.
• Cultural. It fosters a positive organisational climate that understands, internalises, and acts at all times to enact the enterprise’s data ethos, responsibilities, and context.
• Resourced. It ensures that the organisation’s capabilities and access to resources are consistent with its aspirations, strategies, and responsibilities.
• Agile. It ensures that data governance ethos, characteristics, requirements, structures, processes, etc. – comprising the data governance framework – are adaptable to contextual changes within appropriate time frames.
• Assessed and evaluated. It ensures that data governance and management performance are regularly monitored and adjusted as required, and periodic evaluation takes place to ensure data governance is adding appropriate value to the enterprise.

33
Q

The elements of data governance – the features that comprise a data governance framework or system – include the following

A

• Strategy and planning. Data and information are assets, and the need for them, their acquisition, lifecycle management and eventually disposal should be strategised and planned for just as for any other asset class. Aims include data coherence (ensuring enterprise-wide congruence of purpose, design, and effectiveness), optimal returns on investment, road mapping and prioritisation.
• Data governance principles. No governance framework can cater for every possible circumstance, so the role of principles is to guide decision-makers as to ‘what’s right’. They describe the enterprise’s values and beliefs with respect to data and information.
• Roles, responsibilities, and accountabilities. People’s decisions and actions dictate whether and how data and information are captured and used. Articulating roles, obligations, and accountabilities provides the basis for controlling behaviours.
People undertake roles. Accountability means being both responsible and answerable (liable) for something happening, whereas being responsible means being expected to ensure the thing happens. Responsibility can be delegated or outsourced, but accountability cannot.
• Capabilities. The enterprise needs a workforce with the requisite knowledge, skills and experience, appropriate tools and technology, and sufficient funding to undertake effective data governance.

34
Q

Common data governance roles include:

A

o Enterprise data sponsor – accountable and responsible for ensuring effective data governance and management frameworks, approving strategies, policies, protocols, and guidelines in relation to data assets, providing appropriate resources, compliance, and the filling of other data governance roles. May delegate some or all these responsibilities (but remains accountable).
o Data governance committee – responsible for advising the enterprise data sponsor on the data governance framework and its usage.
o Data management committee (may be incorporated within data governance) – responsible to the enterprise data sponsor for oversight and coordination of data management activities across the enterprise.
o Data sponsors/owners – responsible and accountable for approving strategies, policies, protocols, and guidelines in relation to a subset of data assets, providing appropriate resources, compliance, and the filling of subordinate data governance roles.
o Data stewards – responsible for data content, context, and associated business rules. This typically includes data requirement management, metadata definition and management, data quality framework, and data acquisition (including associated contract management where the data is externally procured).
o Data custodians – responsible for the storage and transport of, and access to, the data and applying business rules. This typically includes security, availability and access management, the application of technical standards and policies, and master data management.
o Data users – responsible for safe, authorised, appropriate and effective use of the data. Critical aspects of the user role include maintaining privacy and security, reporting quality issues and data breaches, and complying with enterprise constraints on the use of the data.

35
Q

Typical data management functions include

A

• Data architecture development and maintenance – definition of the information flows and storage in ways that optimise their interactions at the enterprise level (i.e., ensuring the whole is greater than the sum of the parts), and the controls applied to them to ensure optimisation. This includes assuring the integrability and interoperability of data.
Data architecture also involves ensuring coherence between business, data, applications, and technology architectures and ensuring data is readily accessible as appropriate.
• Data modelling, design and development and maintenance – the analysis of requirements, data design and validation of design with stakeholders, development or acquisition of the data capture, storage, processing and dissemination capabilities, tools/technologies and processes, the testing of these, and maintenance of these over the entire data lifecycle.
• Metadata management – collecting, categorising, maintaining, integrating, controlling, managing, and ensuring the availability of metadata. Metadata includes information about enterprise data such as its description, lineage, usage, relationships, ownership, and status.
• Assuring data sovereignty – establishing the jurisdictional control or legal authorities that can be asserted over data, ensuring these conform to requirements, and establishing processes to assure sovereignty requirements are adhered to.
• Data storage and operations – ensuring data storage environments are secure, appropriate, and enable information continuity, sharing and re-use, commensurate with the enterprise’s needs and context.
• Assuring data quality – establishing, implementing, and monitoring the standards and procedures via which data quality is made to conform to requirements.
• Managing data security – establishing, implementing, and monitoring standards, policies, infrastructure, and procedures to protect privacy and confidentiality and assure business continuity at all data lifecycle stages.
• Managing reference and master data. Reference data is data that elaborates other data, such as classification and terminology systems and value sets. Master data provides the ‘source of truth’ drawn upon by other systems

36
Q

Indigenous data sovereignty can be defined as

A

“the right of Indigenous Peoples to own, control, access and possess data that derive from them, and which pertain to their members, knowledge systems, customs or territories”

37
Q

The Maiam nayri Wingara Indigenous Data Sovereignty Collective developed an Australian set of Indigenous Data Governance protocols and principles. What are the 5 principles?

A

These five principles assert the right of Aboriginal and Torres Strait Islander people to:
• Exercise control of the data ecosystem, including creation, development, stewardship, analysis, dissemination, and infrastructure.
• Data that is contextual and disaggregated.
• Data that is relevant and empowers sustainable self-determination and effective self-governance.
• Data structures that are accountable to Indigenous peoples and First Nations.
• Data that is protective and respects our individual and collective interests.

38
Q

The NHMRC Guidelines reflect six core values important to all Aboriginal and Torres Strait Islander peoples

A

spirit and integrity, cultural continuity, equity, reciprocity, respect, and responsibility

39
Q

6 components of an information system

A

hardware, software, networks, data, people, and processes.

40
Q

Computer software is often categorised as programming, system, or application software, malware or middleware. Explain the differences

A

• Programming software comprises tools that assist programmers in writing computer programs. These tools include text editors, debuggers, compilers, and interpreters:
o Compilers translate source code written in a programming language into the language the computer can deal with (often in binary form).
o Interpreters execute source code or precompiled code, or translate source code into an intermediate language before execution.
• System software refers to the computer programs used to start and run computer systems and networks. It includes operating systems, device drivers and utilities.
• Application software refers to computer programs that perform tasks for users. Examples include task-oriented programs such as web browsers, word processors, and spreadsheets, and function-oriented ones such as practice management software and rostering programs.
• Malware – shorthand for ‘malicious software’ – includes computer viruses, worms, trojan horses, scareware, ransomware, and spyware.
• Middleware connects or mediates between software components in a distributed computing environment.

41
Q

Network topologies

A

Network topologies describe the physical or logical arrangement of nodes and links in a network. Common topologies include bus, star, ring, mesh, tree (hierarchical), and hybrid combinations of these.

42
Q

Database structure - Hierarchical data model

A

This is one of the earliest and simplest data models, arranging data in a tree of parent-child records. It has drawbacks, however. It is rigid – if another node or relationship needs to be added, the whole model may need to be reconfigured. It is best suited to one-to-one and one-to-many relationships; many-to-many relationships are much more challenging to depict.
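
As a rough illustration (not from the source), a hierarchical model maps naturally onto nested structures such as the Python dictionary below, in which each parent owns its children; a record shared by two parents has to be duplicated, which is why many-to-many relationships are awkward:

# One-to-many parent-child relationships are easy to express...
hospital = {
    "name": "Example Hospital",
    "wards": [
        {"name": "Ward A", "patients": [{"mrn": "0001"}, {"mrn": "0002"}]},
        {"name": "Ward B", "patients": [{"mrn": "0003"}]},
    ],
}
# ...but a clinician who works across both wards has no single parent node,
# so their record must be repeated under each ward (or the whole tree reshaped).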

43
Q

Database structure - relational data model

A

which arranges data in linked (related) two-dimensional tables. Each table row holds a record with a unique identifier (its ‘key’), while the columns contain fields (representing data attributes).
Relational databases are highly efficient, minimising redundancy and maximising maintainability and flexibility.
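
A minimal sketch using Python's standard sqlite3 module (table and column names are illustrative only), showing linked tables, keys, and fields – in effect a tiny relational schema of the kind the next card describes:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,      -- each row's unique key
        family_name TEXT NOT NULL,           -- columns hold the attribute fields
        date_of_birth TEXT
    );
    CREATE TABLE admission (
        admission_id INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patient(patient_id),  -- link between the tables
        admitted TEXT
    );
""")
con.execute("INSERT INTO patient VALUES (1, 'Citizen', '1980-02-29')")
con.execute("INSERT INTO admission VALUES (10, 1, '2023-06-01')")

# Join the related tables to retrieve each admission with the patient's name.
rows = con.execute("""
    SELECT p.family_name, a.admitted
    FROM admission a JOIN patient p ON p.patient_id = a.patient_id
""").fetchall()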

44
Q

Database structure - Database schema

A

A schema is effectively a blueprint for a particular database, describing how the database should be implemented – for example, with specific constraints (rules), using specific data types, etc.

45
Q

Database structure - Database management system

A

the software via which users create, modify, and manage databases and/or define, store, manipulate, and retrieve data held in databases

46
Q

Enterprise architecture

A

is a blueprint describing how an enterprise’s data, applications and technology infrastructures align with business goals and patterns.

47
Q

Information system architecture

A

refers to an information system’s fundamental concepts or properties in its environment, as embodied in its elements and relationships, and in its design and evolution

48
Q

Hardware, software, and network architectures

A

all refer to the fundamental concepts or properties of those components of an information system in its environment.

49
Q

4 principles of IS architecture

A

• an information system can be described from different perspectives to varying levels of abstraction and in terms of various components
• IS architecture models system boundaries, inputs, processes, and outputs. Information systems perform functions. The inputs and outputs associated with those functions can be precisely architected. Whether or not desired outcomes are achieved using the system typically involves other factors and is beyond the control of the IS. Accordingly, it is critical to define the information system boundaries clearly.
[Enterprise architecture, on the other hand, considers the business layer and more comprehensively explores how information systems (applications), data and technologies support the business.]
• Information systems can be disaggregated into subsystems, be linked to other information systems (via interfaces) and be considered in terms of their interactions with other systems.
• Information systems can be considered throughout their entire lifecycles.

50
Q

An architectural pattern is

A

an abstract description of a recommended architectural approach that has been tested and proven successful in different IS and environments

51
Q

characteristics via which to judge IS architecture (10)

A

• Functional suitability – the degree to which the system provides functions that meet agreed requirements when used under specified conditions.
• Performance – the degree to which the system meets these requirements in terms of time, resource usage, and capacity.
• Usability – the degree to which specified users can use the system to achieve specified goals with effectiveness, efficiency, and satisfaction within a specified context of use.
• Compatibility – the degree to which the system (or a system component) can exchange data with other systems (or components) and/or perform effectively while sharing its hardware or software environment.
• Reliability – the degree to which the system continues to operate effectively under specified conditions over a given period.
• Security – the degree to which the system protects data and information consistent with specified data access parameters.
• Maintainability – the degree of ease with which the intended maintainers can modify the system.
• Portability – the degree of ease with which a system (or component) can be transferred from one hardware, software, or other operational or usage environment to another.
• Reusability – the degree to which the system (or a system component) can be deployed in systems with little or no change.
• Scalability – the degree to which the system can accommodate load increases without decreasing performance or the capacity to be rapidly expanded.

52
Q

The Organisation for Economic Co-operation and Development (OECD) defines the ‘digital divide’ as

A

“the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard to both their opportunities to access information and communication technologies (ICTs) and to their use of the Internet for a wide variety of activities” (OECD, 2002).

53
Q

Recommendations for implementing digital health initiatives in ways that are at least cognisant of the digital divide, but preferably oriented to closing it, include

A

• Recognising factors such as digital access, affordability, and ability as critical to inclusion and not taking digital inclusion for granted. In particular, this means recognising that digitally excluded people are likely to overlap with those with lower health status.
• Co-designing digital health services and experiences with members of the target population.
• Ensuring sufficient time is built into project plans to identify and build trust relationships with digitally excluded people.
• Incorporating the building of digital awareness and skills for targeted patients/clients, their carers, and service providers into project planning, design, and development activities.
• Recognising that some people with low health outcomes will elect to remain outside the ‘digisphere’, and that alternate strategies will be required to reach them.

54
Q

What do health consumers want

A

• Australians want better access to mobile digital health services for the whole community – not just experienced users of new technology. They want their health information to be confidential, secure and protected.
• “Most (77%) Australians would like their doctor to suggest health information websites, and 73% have already used the internet to research a health issue. However, only a small proportion of the population (6%) manage to find an online health source that they trust” (p.10).
• “Health consumers and carers expressed a strong desire to be increasingly empowered – to take control of decisions regarding their health and to be provided with access to their personal health information that supports them in this.
After witnessing the impact of digital technologies on other industries, health consumers and carers have growing expectations of how digital technologies will facilitate improved access to healthcare services, delivering services in ways that are convenient for them.
Health consumers and carers see healthcare services as including high-quality personal and health information, not just face-to-face appointments” (p.16).
• Australians are tired of repeating their medical histories when they meet with healthcare providers and believe that digital technology can and should facilitate this information being captured once and shared among all their providers.
• “Health consumers and carers have an expectation that innovative digital technologies will continue to improve their experience with the health system, as they have in many other industries” (p.17). The majority want access to their personal health information on mobile apps, laptops, or desktop computers.

55
Q

A patient portal is a

A

secure website that gives patients convenient, 24-hour, online access to their personal health information. Authenticated patients can typically view health information such as records of health interventions, health status reports, clinical measurements, and educational material.

• Common barriers include negative patient attitudes, sub-optimal user interfaces (Zhao et al., 2017), and privacy and security concerns (Powell, 2017).
• Frequently reported facilitators include perceptions of benefit, provider encouragement, perceptions of control over personal health information, and training (Powell, 2017; Zhao et al., 2017). The use of portals to communicate with providers is also associated with positive consumer sentiment (Powell, 2017, Dendere et al., 2019).
• “Patients value the convenience and immediate access to their health information and report feelings of empowerment and increased engagement when this information is readily available. Even more noteworthy were patient perceptions of the portal as a tool for improving confidence in self-management activities” (Powell, 2017).
• Portal users are highly heterogeneous in their use patterns, though portal users seem more likely to have complex care needs than non-users (Powell, 2017).

56
Q

Substantive barriers to the use of virtual care services include

A

• Lack of incentives. Disincentives to virtual care in Australia have typically included a lack of funding for primary care consultations (Covid-19 arrangements remain temporary at this stage) and the up-front and ongoing costs of equipment, broadband and training.
• Clinician and patient resistance. Clinician resistance to the use of telehealth is well documented (e.g., Kumar et al., 2020; Medical Indemnity Protection Society, 2020; Infosys, 2019: Kruse et al., 2018; Marshall & Bidmead, 2018). This is typically associated with perceptions of limited benefits to clinicians, concerns about changing work roles and workflows, reservations about the impacts on therapeutic relationships, concerns about technology usage, and the need to invest in training.

57
Q

Well-documented enablers to the use of virtual care services include

A

• Clear purpose and effective targeting.
• The engagement of key stakeholders, communication of the benefits, and the alignment of incentives.
• Sustainable business and funding models.
• A sustainable workforce. Under-estimation of staffing requirements has been identified as a common cause of failure.
• Co-design to optimise clinician acceptance and patient experiences.
• Strong leadership and dedicated coordination.

58
Q

ISO defines a personal health record as:

A

“A representation of information regarding, or relevant to, the health, including wellness, development and welfare of that individual, which may be stand-alone or may integrate health information from multiple sources, and for which the individual, or the representative to whom the individual delegated his or her rights, manages the PHR content and grants permissions for access by, and/or sharing with, other parties”

59
Q

Systematic reviews suggest that Personal Health Records can

A

• Contribute to improved consumer health outcomes through the self-management of health and wellness and enhanced quality of care.
• Generate value for providers and funders by enhancing operational efficiencies.
• Build societal value in the form of improved public health awareness and outcomes.
However, it must be noted that the use of PHRs also carries risk. Typical risks include privacy and security threats, inconsistent data quality, and lack of comprehensive interoperability with other systems, leading to data inconsistencies between systems.

60
Q

clinical safety risks associated with health IT, including

A

• Usability issues, such as poor information display, complicated screen sequences and navigation.
• Mismatches between user workflow in the EHR and clinical workflow, which can contribute to medical error by causing interruptions and distractions.
• Data entry errors created using copy-forward and copy-and-paste.
• Lack of clarity regarding sources and date of information presented.
• Alert fatigue.
• Altered communication patterns.
• Patient misidentification.

61
Q

unintended consequences arising from the use of health ICT

A

More work for clinicians
Unfavourable workflow changes
Never-ending demands for system changes
Conflicts between electronic and paper-based systems
Changes in communication patterns and practices
Negative user emotions
Generation of new kinds of errors
Changes in institutional power structures
Overdependence on technology

62
Q

the review and investigation of Health IT-related patient safety incidents found that “the requirements for HIT safety systems

A

• Are similar to those that apply to existing patient safety systems.
• Should include the ability to identify hazards ahead of time and permit review of incidents after the event.
• Should also provide information about the prevalence of incident reporting and management systems.
• Should allow the opportunity to classify and report on incidents to ensure a continuous open loop of feedback and improvement”.

63
Q

The AIDH guidelines describe six specific requirements (requirements 9-14) for assessing safety risks

A

• Establish risk management.
• Use appropriate methodology for risk assessment.
• Obtain broad stakeholder input to risk assessment.
• Mitigate identified patient safety risks.
• Maintain risk register(s).
• Disclose residual risks.

64
Q

The Healthcare Information and Management Systems Society (HIMSS) defines interoperability as:

A

“The ability of different information systems, devices and applications (systems) to access, exchange, integrate and cooperatively use data in a coordinated manner, within and across organisational, regional and national boundaries, to provide timely and seamless portability of information”

65
Q

The current National Digital Health Strategy describes seven strategic priorities, the first three of which have interoperability at their core.

A
  1. Health Information is available whenever and wherever it is needed.
    The availability of health information – its “timely and seamless portability” – forms part of the HIMSS definition of interoperability above. Information availability requires “different information systems, devices and applications (systems) to access, exchange, integrate and cooperatively use data in a coordinated manner, within and across organisational, regional and national boundaries” (HIMSS, n.d.).
  2. High-quality data with a commonly understood meaning that can be used with confidence.
    Again, this is definitional. Semantic interoperability means that “shared understanding exists of the content of the information” (IEEE, 2017).
  3. Health information that can be exchanged securely.
    This is embodied in the HIMSS description of organisational interoperability – the “governance, policy, social, legal and organisational considerations to facilitate the secure, seamless and timely communication and use of data” (HIMSS, n.d.).
66
Q

Open Systems Interconnection (OSI) model: Abstraction layers

A
  1. Physical
    Concerns the transmission and reception of unstructured raw bit streams over physical media. Describes the electrical/optical, mechanical, and functional interfaces to the physical medium and carries the signals for all the higher layers. Defines things like:
    • Data encoding - e.g., what signal state represents a binary 1?
    • Physical medium attachment - e.g., how many pins do the connectors have, and what is each pin used for?
    • Transmission technique - e.g., whether the encoded bits will be transmitted by digital or analogue signalling.
    • Physical medium transmission - e.g., how many volts/dB should be used to represent a given signal state using a given physical medium?
  2. Data Link
    Provides error-free transfer of data frames from one node to another over the physical layer by detecting and possibly correcting errors in the physical layer. The data link layer is divided into:
    • Media Access Control (MAC) - controlling how computers in the network gain access to data and permission to transmit it.
    • Logical Link Control (LLC) layer - controlling error checking and packet synchronisation.
  3. Network
    Controls the operation of the subnet, deciding which physical path the data should take based on network conditions, the priority of service, and other factors. In some cases, the network may (or may not) implement message delivery by splitting the message into several fragments, delivering each piece by a separate route, reassembling the fragments, reporting delivery errors, etc.
    Includes accounting functions to keep track of frames forwarded
  4. Transport
    Ensures that messages are delivered error-free, in sequence, and with no losses or duplications, relieving the higher layer protocols from any concern with the transfer of data between them and their peers. Provides:
    • Message segmentation - accepts messages from the next layer up (session layer), splits them into smaller units if required and passes them down to the network layer. Reassembles the message at its destination.
    • Message acknowledgments.
    • Traffic control.
    • Multiplexing: multiplexes several message streams or sessions onto one logical link and keeps track of which messages belong to which sessions.
  5. Session
    Controls the dialogues between systems. Establishes, manages, and terminates the connections between the source and destination applications. Provides:
    • Session establishment, maintenance and termination: allows two application processes on different machines to establish, use and terminate a connection, called a session.
    • Session support: performs the functions that allow these processes to communicate over the network, performing security, name recognition, logging, etc.
  6. Presentation
    Formats the data to be presented to the application layer, transforming it into the form that the application accepts. Provides:
    • Character code translation – e.g., ASCII to EBCDIC.
    • Data conversion – e.g., integer to floating-point.
    • Data compression.
    • Data encryption.
  7. Application
    Serves as the window for users and application processes to access network services. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronising communication. The application layer [Wikipedia 2014-2]:
    • Determines the identity and availability of communication partners for an application with data to transmit.
    • Determines whether sufficient network resources exist for the requested communication.
    • Manages communications between applications.
    Provides common functions such as:
    • Resource sharing and device redirection
    • Remote file and printer access
    • Inter-process communication
    • Directory services
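
As an illustrative aside (not part of the source), familiar protocols are often mapped to these layers; the short Python snippet below records one commonly cited mapping:

# Commonly cited examples of protocols and technologies at each OSI layer.
osi_examples = {
    1: ("Physical", ["cabling", "fibre optics", "radio"]),
    2: ("Data Link", ["Ethernet framing/MAC", "Wi-Fi (802.11) framing"]),
    3: ("Network", ["IP"]),
    4: ("Transport", ["TCP", "UDP"]),
    5: ("Session", ["dialogue control between communicating applications"]),
    6: ("Presentation", ["character encoding", "encryption", "compression"]),
    7: ("Application", ["HTTP", "DNS", "SMTP"]),
}

for number, (name, examples) in sorted(osi_examples.items()):
    print(f"Layer {number} ({name}): {', '.join(examples)}")
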
67
Q

The standards required for interoperability can be categorised into four sub-domains:

A

• Data content standards describe which data are required for various use cases, their associated metadata, and their relationships.
• Concept representation standards describe how content will be expressed in ways unambiguously understood by disparate parties (human or machine), and meaning is preserved over time, space, context, and reuse.
• Data exchange standards – describing the data structures and formats via which data can be accessed within an information system (e.g., via an API) or otherwise exchanged.
• Data integrity standards – describing the rules that implement privacy, security, and identity management.
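
For example – a hedged sketch, not drawn from the source – a data exchange standard such as HL7 FHIR represents a patient as a structured resource that different systems can exchange; a minimal Python rendering of such a payload might look like this (all field values are invented):

import json

# Minimal, illustrative payload in the general shape of a FHIR Patient resource.
# A real exchange would follow the full FHIR specification and any applicable national profiles.
patient_resource = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:mrn", "value": "0001"}],
    "name": [{"family": "Citizen", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-02-29",
}

payload = json.dumps(patient_resource)   # the serialised form sent between systems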

68
Q

LOINC codes distinguish six dimensions (“Parts”) for any given observation

A

• Component (analyte) - the substance or entity being measured or observed.
• Property - the characteristic or attribute of the analyte.
• Time - the interval of time over which an observation was made.
• System (specimen) - the specimen or thing upon which the observation was made.
• Scale - how the observation value is quantified/expressed (quantitative, ordinal, nominal).
• Method (optional) - a high-level classification of how the observation was made. This is only needed when the technique affects the clinical interpretation of the results.
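
As an illustration (the specific example is indicative only), a LOINC fully specified name concatenates these parts with colons, so it can be split programmatically:

# Indicative fully specified name in the Component:Property:Time:System:Scale:Method pattern.
fully_specified_name = "Glucose:MCnc:Pt:Ser/Plas:Qn"   # the optional Method part is omitted here

part_names = ["Component", "Property", "Time", "System", "Scale", "Method"]
observation = dict(zip(part_names, fully_specified_name.split(":")))
# -> {'Component': 'Glucose', 'Property': 'MCnc', 'Time': 'Pt', 'System': 'Ser/Plas', 'Scale': 'Qn'}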

69
Q

Knowledge to Action Framework situates

A

knowledge producers and users within a responsive, adaptive, and unpredictable knowledge system. Accordingly, moving evidence into action is iterative, dynamic, and complex.

70
Q

The KTA Framework comprises two interrelated components – knowledge creation and an action cycle. Explain the steps in each (3 & 7)

A

The knowledge creation component comprises three steps:
1. Knowledge inquiry – this refers to primary research by many players.
2. Knowledge synthesis refers to distilling knowledge from various studies into an evidence base, for example, via systematic reviews, scoping reviews, and meta-analyses.
3. Knowledge tools and products – this refers to the creation of more readily consumable forms of knowledge, such as clinical practice guidelines and clinical decision support tools

The seven steps in the Action Cycle are:
1. Identifying the problem, determining the ‘know-do gap’, and identifying/reviewing/selecting knowledge – as always, this step should be undertaken rigorously.
2. Adapting knowledge to the local context. There is a range of reasons for which evidence may require localisation. Tensions are inherent in the localisation of evidence. On the one hand, adaptation processes may promote local ownership and uptake. On the other hand, there is a risk that the evidence base could be weakened if adaptation unduly reflects non-acceptance or unwillingness to change. Accordingly, localisation processes must be strongly, openly, and transparently governed.
3. Assessing barriers to and facilitators for knowledge use.
4. Selecting, tailoring, and implementing interventions. Interventions should preferably be evidence-based, but effectiveness in one context does not assure effectiveness in others.
5. Monitoring the use of knowledge enables iterative adjustment of interventions.
6. Evaluating – This is one of the most overlooked activities in many domains, not just knowledge translation (e.g., Prihodova et al. found this step the ‘least prevalent’ in a systematic review of knowledge translation initiatives). Yet, it is crucial to organisational learning, ongoing improvement, and the final step of the action cycle.
7. Sustaining the use of knowledge. Once the excitement and impetus of a project have dissipated, sustaining the use of knowledge (i.e., making it the new ‘business as usual’ is critical. The status quo is remarkably resilient.

71
Q

The Evidence-based Model for the Transfer & Exchange of Research Knowledge (EMTReK) highlights six primary components of knowledge transfer:

A
  1. The message – involves articulating a credible and actionable message (what needs to change, how, and why).
  2. Identifying and understanding the various stakeholders.
  3. Identifying multiple processes via which knowledge can be translated.
  4. Articulating the local context (e.g., the organisational context).
  5. Articulating the broader social, cultural, and economic context – factors that could impact decision making.
  6. Evaluating the model – see earlier comments.
72
Q

‘PRISM’ stands for the Practical, Robust Implementation and Sustainability Model for integrating research findings into practice.

A

As with the KTA Framework, the model is highly iterative. However, the logical flow starts with thoroughly analysing the intervention required (the problem), the recipients of the intervention, and the contextual factors involved (external environment and relevant infrastructure). Each of these dimensions must be examined from a range of perspectives, and reach and effectiveness depend on ensuring that consideration of all such perspectives flows through into adoption, implementation, and maintenance.

73
Q

Some issues of concern for the scientific literature include

A

• Publication bias. Studies submitted or selected for publication are more likely to contain positive results than those that aren’t. This is known as ‘publication bias’. It can be conscious or unconscious, but its existence is well documented (e.g., Scherer et al., 2018).
• Absence of evidence is not evidence of absence. It is difficult to prove a negative – that a phenomenon or outcome does not exist.
• Expert advice isn’t always right or based on the best evidence. After all, many different experts (people with a high level of knowledge or skill in a field) have many different views. It is essential to ask about the evidence. Chapter A.5 provides information about appraising evidence.

74
Q

Systems of systems, and specific challenges

A

“set of systems or system elements that interact to provide a unique capability that none of the constituent systems can accomplish on its own”

• Optimisation. This involves considering what control systems ensure optimal performance in meeting agreed aims. Distributed / decentralised control is typical, but critical challenges at the system of systems level include building consensus on purposes and means, ensuring access to the correct information at the right time and place, and dealing with conflicts of interest, particularly in governance.
• Fluidity. Systems of systems are constantly evolving, with significant implications for feedback requirements. Challenges include timely identification of changes, discernment of changes that have substantive impacts at the system of systems-level, and modelling potential responses across multiple independent systems.

75
Q

Determination of the ‘economic impact of digital health’ can mean several things, all of which are likely to be relevant to health informaticians at some stage. Instances could include:

A

• Economic appraisal of a proposed digital health initiative (i.e., prospective analysis).
• Economic appraisal of a particular digital technology (i.e., prospective health technology assessment).
• Economic appraisal of alternative proposed projects (i.e., prospective analysis).
• Economic analysis of the impact of a digital health project that has been undertaken (i.e., retrospective analysis).
• Socio-economic analysis of any of the above (i.e., examination of social factors in addition to economic ones).

76
Q

economic analysis concerns more than just money. It uses…

A

the concepts of ‘economic cost’ – the ‘opportunity cost’ of not doing something else when a resource is consumed – and economic value – which may include things that do not readily lend themselves to monetisation, such as improvements in quality of life.

77
Q

Net present value

A

Net Present Value is the present value of benefits minus the present value of costs. All other things being equal, a project should only be undertaken if its NPV is positive, and the projects with the highest positive NPVs should be preferred.
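
A minimal worked expression, assuming discrete annual periods t = 0 … T and a constant discount rate r (the figures in the example below are invented):

\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^t}

where B_t and C_t are the benefits and costs falling in year t. For example, a cost of 250 today followed by benefits of 100 in each of years 1 to 3, discounted at r = 0.05, gives NPV = -250 + 100/1.05 + 100/1.05^2 + 100/1.05^3 ≈ 22.3, so the project would pass the NPV criterion.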

78
Q

Internal rate of return (IRR)

A

The internal (economic) rate of return is the discount rate that makes the NPV equal to zero – i.e., the rate at which the investment breaks even in NPV terms.
IRR is an indicator of the yield of the investment. A project:
• Could be accepted if IRR exceeds the cost of capital.
• Should be rejected if IRR is lower than the cost of capital.

IRR should not be used to rank projects, since distortions can arise with certain cash flow patterns (for example, flows that change sign more than once can produce multiple IRRs). Use NPV in preference to IRR when deciding between investments. However, IRR is a valuable measure of a single project’s yield.
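
A minimal sketch of the relationship between IRR and NPV (the cash flows are invented, and bisection is just one convenient way to locate the root; it assumes NPV changes sign exactly once over the search interval):

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t (t = 0 is today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the discount rate at which NPV crosses zero.
    Assumes NPV is positive at lo and negative at hi."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid   # still profitable at this rate, so the IRR is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Invented example: outlay of 250 today, benefits of 100 in each of years 1-3.
flows = [-250, 100, 100, 100]
print(round(irr(flows), 4))  # roughly 0.097, i.e. about a 9.7% yield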

79
Q

Payback period

A

Payback period is a gross measure and is demonstrated here only to illustrate its flaws. Not only does it fail to account for the time value of money, but it also ignores costs and, in particular, benefits that occur after the payback period. Therefore, NPV should be used in preference to payback period.
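
A minimal sketch (invented figures) of why payback period can mislead: two projects with identical payback periods can have very different NPVs, because payback ignores discounting and everything that happens after the payback point.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """Year in which cumulative (undiscounted) cash flow first becomes non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back

# Invented example: both projects cost 200 and pay back in year 2,
# but B keeps delivering benefits well beyond the payback point.
project_a = [-200, 100, 100, 0, 0]
project_b = [-200, 100, 100, 150, 150]

for name, flows in [("A", project_a), ("B", project_b)]:
    print(name, payback_years(flows), round(npv(0.05, flows), 1))
# Same payback (2 years), but B's NPV is far higher.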

80
Q

Cost-benefit analysis

A

All relevant costs and benefits are compiled for the base (do nothing) case, together with the articulation of other parameters relating to the base case, e.g., relevant benchmarks, throughputs, etc. Costs may include:
• Capital costs for new assets, asset replacements or major maintenance/refurbishment.
• Recurrent costs, e.g., labour, supplies, other fees, and ongoing maintenance.
• Ancillary costs, e.g., costs of disruption, which may not be part of the project itself but are necessary to enable the project to proceed.
Common benefit categories include:
• Savings (reduced or avoided costs) to stakeholders, e.g., travel costs and/or loss of work time avoided by a patient able to receive a telehealth consultation at home rather than travel from a remote location to the nearest city.
• Benefits associated with better health and/or functional status, e.g., increased ongoing income correlating with increased ability to work or reduced reliance on other services.
• Benefits associated with a more attractive workplace, e.g., lower staff turnover.
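
A minimal sketch (invented categories and figures, in $'000 for a single year) of compiling cost and benefit categories for a base case and a project case and then looking at the incremental position; a full appraisal would discount a multi-year stream of such net benefits.

# Invented annual figures for one year of operation (all in $'000).
base_case = {
    "capital": 0, "recurrent": 500, "ancillary": 0,      # costs
    "savings": 0, "health_benefits": 0, "workforce": 0,  # benefits
}
project_case = {
    "capital": 150, "recurrent": 420, "ancillary": 20,
    "savings": 90, "health_benefits": 60, "workforce": 15,
}

cost_keys = ("capital", "recurrent", "ancillary")
benefit_keys = ("savings", "health_benefits", "workforce")

def net_benefit(case):
    return sum(case[k] for k in benefit_keys) - sum(case[k] for k in cost_keys)

# Incremental net benefit of the project over doing nothing (base case).
incremental = net_benefit(project_case) - net_benefit(base_case)
print(incremental)  # one year's figure only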

81
Q

The ‘quadruple aim’ is a commonly used organisational framework in healthcare. It posits that health services should aim to achieve four outcomes simultaneously, namely

A
  1. Improved patient experiences.
  2. Better health outcomes (improved population health).
  3. Lower costs.
  4. Improved provider experiences (care team well-being).
82
Q

Sensitivity analysis

A

Sensitivity testing should always be informed by the critical risks identified and how these affect the costs and benefits of the proposal. For example, what does the NPV for a project look like if the discount rate changes?
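
A minimal sketch (invented cash flows) of one-way sensitivity testing on the discount rate, holding everything else constant; the same pattern applies to any critical assumption, such as benefit uptake, costs, or timing.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-250, 100, 100, 100]  # invented project cash flows

# One-way sensitivity: vary only the discount rate and recompute the NPV.
for rate in (0.03, 0.05, 0.07, 0.10):
    print(f"discount rate {rate:.0%}: NPV = {npv(rate, flows):.1f}")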

83
Q

approaches to assigning monetary value to human lives for cost-benefit analysis. For example

A

• The human capital approach equates the value of life with the productivity of the individual, as measured by a discounted stream of higher future earnings.
• The required compensation approach imputes a value of life from the wage premium workers may require in compensation for jobs involving a higher-than-normal death probability.
• The stated preference approach questions individuals about how much they are willing to pay to reduce the risk of death.
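
As a minimal expression of the human capital approach (assuming annual earnings E_t over a remaining working life of T years and a constant discount rate r; this is a sketch of the general idea rather than any jurisdiction's prescribed method):

V = \sum_{t=1}^{T} \frac{E_t}{(1 + r)^t}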

84
Q

Some limitations of cost-benefit analysis are:

A

• The analysis is only as good as the assumptions and the valuations that support it.
• Not all costs and benefits are easily monetised.
• Conventional cost-benefit analysis may ignore distributional effects – the reality that different stakeholders will usually experience differential costs and benefits.

85
Q

Cost-effectiveness analysis

A

Cost-effectiveness analysis differs from cost-benefit analysis in that while the costs are monetised, not all benefits are expressed in money units (though some may be). Benefits may be expressed in other relevant units such as Quality Adjusted Life Years (QALYs). There must be consistency among the units used for various options.

Cost-effectiveness analysis provides no absolute criterion for approval or rejection.
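
The standard summary measure, not named above but widely used, is the incremental cost-effectiveness ratio comparing an intervention (1) with its comparator (0):

\mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0}

expressed, for example, as cost per QALY gained. Consistent with the point above, whether a given ICER represents acceptable value remains a judgement for the decision maker, for instance against a willingness-to-pay threshold.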

86
Q

In 2012, the American Medical Informatics Association (AMIA) adopted the following definition for biomedical informatics:

A

“The interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem-solving and decision making, motivated by efforts to improve human health” (Kulikowski et al., 2012).

87
Q

International Medical Informatics Association (IMIA),

A

IMIA is the world body for health and biomedical informatics. It is an “association of associations”, primarily comprising Member Societies, Institutional (Academic and Corporate) and Affiliate Members and Honorary Fellows

IMIA’s objectives include:
• Promoting informatics in health care and research in health, bio- and medical informatics.
• Advancing international cooperation.
• Moving informatics from theory into practice in a full range of health delivery settings.
• Furthering the dissemination and exchange of knowledge, information, and technology.
• Promoting education and responsible behaviour.
• Representing the medical and health informatics field within the World Health Organization and other international professional and governmental organisations.

88
Q

Australasian Institute of Digital Health (AIDH)

A

The Health Information Management Association of Australia Limited (HIMAA) was established in its current form in 1996.
The Australasian College of Health Informatics (ACHI) was formed in 2002. Its functions included setting standards for education and professional practice in health informatics and supporting health informatics initiatives.
The Australasian Institute of Digital Health (AIDH) launched in February 2020 following a vote by Members and Fellows to merge the Health Informatics Society of Australia (HISA) and ACHI. AIDH’s vision is ‘healthier lives, digitally enabled’, again reflecting a pivot from the medical model and provider-centricity towards patient-centric health and welfare.

89
Q

the focus of health and medical informatics has shifted over time. Saheb & Saheb (2019) found that:

A

• From 1974 to 2018, the three major themes in health informatics literature were “the utilisation of computer science in healthcare, the impact of health informatics on patient safety and the quality of healthcare, and decision support systems” (p. 61).
• Since around 2016, the dominant focus has shifted to predictive, preventative, personalised, and participatory healthcare systems.
• Future themes may be “patient-generated health data, deep learning algorithms, quantified self and self-tracking tools, and Internet of Things based decision support systems” (p. 61).

90
Q

Informatics specialisation - Aged care informatics

A

Areas receiving specific attention included assistive technology and home modifications, universal adoption by the aged care sector of digital technology and My Health Record, ICT architecture, and investment in technology and infrastructure.

91
Q

Informatics specialisation - Biomedical imaging and signal processing

A

“Biomedical imaging plays a vital role in patient care, spanning the scale from microscopic and molecular to whole body visualisation, and encompassing many areas of medicine, such as radiology, pathology, dermatology, and ophthalmology. Biomedical imaging informatics is a discipline that focuses on improving patient outcomes through the effective use of images and imaging-derived information in research and clinical care. Developments in the field have implications in diagnosing disease, optimising treatment, tracking disease response, and predicting outcomes”

Contemporary issues in biomedical imaging informatics include machine learning and AI, big data mining, and precision medicine.

92
Q

Informatics specialisation - Clinical informatics

A

“Clinical Informatics is the application of informatics and information technology to deliver healthcare services. It is also referred to as applied clinical informatics and operational informatics.
Clinical informatics includes a wide range of topics ranging from clinical decision support to visual images (e.g., radiological, pathological, dermatological, ophthalmological, etc.); from clinical documentation to provider order entry systems; and from system design to system implementation and adoption issues”

93
Q

Informatics specialisation - Clinical research informatics

A

“Clinical Research Informatics involves the use of informatics in the discovery and management of new knowledge relating to health and disease. It includes management of information related to clinical trials and also involves informatics related to secondary research use of clinical data. Clinical research informatics and translational bioinformatics are the primary domains related to informatics activities to support translational research”

94
Q

Informatics specialisation - Health data science/analytics

A

Health’s reputation as an industry drowning in data but starved of usable information and insights is changing. Advances in big data mining, including text mining and ML/AI, have enabled the industry to leverage the wealth of data captured digitally and in real or near-real time, driving investment. The global healthcare analytics market is estimated to be growing at a compound annual growth rate approaching 30%.

95
Q

Informatics specialisation - Health information management

A

“Health information management is the practice of acquiring, analysing, and protecting digital and traditional medical information vital to providing quality patient care. It is a combination of business, science, and information technology”

96
Q

Informatics specialisation - Nursing informatics

A

the speciality that integrates nursing science with multiple information and analytical sciences to identify, define, manage, and communicate data, information, knowledge, and wisdom in nursing practice. Nursing informatics supports nurses, consumers, patients, the inter-professional healthcare team, and other stakeholders in their decision making in all roles and settings to achieve desired outcomes

97
Q

Informatics specialisation - Public health informatics

A

“Public Health Informatics is the application of informatics in areas of public health, including surveillance, prevention, preparedness, and health promotion. Public health informatics and the related population informatics work on information and technology issues from the perspective of groups of individuals. Public health is extremely broad and can even touch on the environment, work and living places and more”

98
Q

Informatics specialisation - Translational bioinformatics

A

Translational Bioinformatics is the development of storage, analytic, and interpretive methods to optimise the transformation of increasingly voluminous biomedical data and genomic data, into proactive, predictive, preventive, and participatory health.

99
Q

Theory vs framework

A

A theory is “a plausible or scientifically acceptable general principle or body of principles offered to explain phenomena”

A framework is “a basic conceptional structure”. Frameworks are abstractions and tools, not reality and answers; their utility may alter with changes in the environment or with enhancements in knowledge.

100
Q

Advantages of using frameworks

A

• Many scenarios follow similar and consistent patterns irrespective of the specifics. For example, health service consultations involve actors (participants, such as providers and subjects of care), places, acts (e.g., diagnosis, treatment, referral), even though the specifics might be different (e.g., the provider might be a doctor, nurse or allied health practitioner, the place might be a hospital or a home).
• Conceptual frameworks form the basis of standards that can be applied across settings and organisations, enabling a wide variety of actors to access, safely and consistently interpret and use data, information, knowledge, and processes.
• The ability to leverage work done by others.
• The explicit specification of assumptions that might otherwise go unstated and affect usage and interpretation.

101
Q

Cambridge Dictionary of Philosophy defines systems theory as

A

the trans-disciplinary study of the abstract organisation of phenomena, independent of their substance, type, or spatial or temporal scale of existence. It investigates both the principles common to all complex entities, and the (usually mathematical) models which can be used to describe them

102
Q

The concept of ‘hard’ and ‘soft’ systems

A

• Characteristics of “hard” systems include well-defined problems, optimisable solutions, a tendency for technical factors to predominate, and being amenable to scientific approaches to problem-solving.
• “Soft” systems can be viewed more as a set of mental constructs to aid understanding where the issues are less scientifically definable and are intrinsically linked with perspective/ ideology. Soft systems methodologies are oriented to dealing with complex situations, where the stakeholders involved lack agreement on what constitutes the problem and why it has arisen. Many systems, of course, involve both hard and soft elements, and the value of this distinction is that it requires us to think about systems in more than one way.

103
Q

Define system inputs, processes and outputs in relation to system theory

A

• Inputs are things put into a system or expended in its operation to generate output or a result.
• Processes are actions taken within a system to achieve specific results. They transform inputs into different forms.
• Outputs are things generated by a system for defined purposes, via processes.

104
Q

Define outcomes in relation to system theory, and discuss how these differ from outputs

A

Outcomes are the defined purposes of a system to which outputs contribute. While inputs, processes, and outputs are controlled within a system, outcomes may be subject only to influence since they are often impacted by factors outside the system, including other systems.
Many sources put outcomes outside the system’s boundary, which can be argued either way. It may be convenient (e.g., for administrative purposes) to draw a system boundary around the things that can be directly controlled, but ultimately:
• Business cases for systems are rarely justified based on inputs, processes, and outputs alone. Instead, a system’s value is generally in terms of the outcomes it contributes to, and business cases should directly address the nature and strength of the relationships between outputs and outcomes. We argue that subsequently placing the responsibility for outcomes outside the system’s boundaries is tantamount to divorcing a system from its accountabilities (as expressed in the business case).
• System successes, failures and problems are generally perceived through the lens of outcomes, not outputs. So, for example, a transport system may be performing optimally at a point in time. Still, if commuters perceive escalating problems (e.g., delays), then that is the trigger for reviewing the system, not the system’s performance defined in terms of inputs, processes, and outputs.
• Operators are referred to generically here. They include the people and technologies that provide inputs, undertake processes, use outputs, influence outcomes, and govern the system.
• Feedback is essentially communication between elements of a system that focuses on their performance to control their future functioning - to achieve the desired outcomes. Feedback can be designed to encourage or discourage behaviour - positive feedback reinforces behaviours while negative feedback aims to change them.
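
A toy sketch of feedback in the control sense (a thermostat-like loop with invented numbers): the output is measured against the desired outcome, and the deviation is fed back to adjust future behaviour.

target = 37.0   # desired outcome (a setpoint)
state = 35.0    # current system output
gain = 0.5      # how strongly feedback corrects any deviation

for step in range(5):
    error = target - state   # feedback: measured difference from the goal
    state += gain * error    # corrective adjustment reduces the deviation
    print(f"step {step}: state = {state:.2f}")
# The state converges towards the target; amplifying the deviation instead would drive it away.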

105
Q

System multiplicity

A

A feature of any system environment is the presence of other systems - competing, complementary, and neutral. This is particularly important when considering the outcomes of a system.

106
Q

Sub systems

A

Systems comprise sub-systems. Important concepts concerning sub-systems include:
• Decomposition is the process of dividing a system into sub-systems, which enables manageable size and optimal utility (including multiple usage/reuse).
• Simplification. Sub-systems cannot work in isolation, so the relationships and interfaces between them are crucial. Simplification can be defined as managing complexity by reducing the number of interfaces and/or simplifying their nature.
• Coupling refers to the strength with which sub-systems are connected. A non-computing example of tight coupling is just-in-time inventory management, where input materials are put directly into production when they arrive – the input and production sub-systems are tightly coupled. In contrast, holding input inventory allows the input and production sub-systems to work more independently, i.e., more loosely coupled.
• Cohesion refers to the extent to which system elements are associated in terms of function or content. For example, an information sub-system could have high cohesion if the data therein is highly related and/or its functions are common.
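
A minimal sketch (invented functions) of the coupling idea: in the tightly coupled version, production consumes supply directly, whereas adding an inventory buffer lets the two sub-systems operate more independently.

from collections import deque

# Tight coupling: supply feeds production directly; both must run in lockstep.
def produce_tight(material):
    return f"product({material})"

# Loose coupling: an inventory buffer decouples the supply and production sub-systems.
inventory = deque()

def receive(material):
    inventory.append(material)   # the supplier adds stock when convenient

def produce_loose():
    if inventory:
        return f"product({inventory.popleft()})"   # production draws stock when ready
    return None

receive("steel")
receive("glass")
print(produce_tight("steel"))
print(produce_loose(), produce_loose(), produce_loose())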

107
Q

System boundaries

A

System boundaries, depicted by the solid line in Figure F.2, define the scope of the system itself. Everything within the boundary is part of the system, and everything outside the boundary either forms part of the system context or is irrelevant.
In the real world, boundaries are often not clear cut and sometimes shift. Ultimately, system (and context) boundaries are often settled via consensus rather than science. Relevant considerations include:
• Open vs closed systems. From systems theory, open systems allow interactions between their elements and their environment, while closed systems are isolated from their environment. In practice, particularly in social spheres such as health, the great majority of systems would qualify as open under a rigorous interpretation of this definition, raising questions about the usefulness of the distinction.
• The purpose of the boundary (or boundaries). It may be appropriate to draw multiple boundaries reflecting different perspectives such as system administration by those who can control its operation but not influence outcomes and system governance by those whose accountabilities include such influence.
• The extent of permeability of the boundaries – i.e., the extent to which predictable contextual changes will require a change to the system.

108
Q

Important aspects of the behaviours of systems include (3)

A

• Deterministic versus probabilistic behaviours. Deterministic systems operate predictably, and their performance and impacts can be reliably determined. Probabilistic systems are less predictable, and their impacts can be modelled but not reliably determined. For example, a manufacturing production system can use feedback control systems to determine production accurately. In contrast, a health system can be oriented towards specific objectives but operates in a far less certain world. Its behaviour is subject to the collective results of much autonomous behaviour.
• Entropy. All systems are likely to deteriorate if not maintained. Entropy is the measure of this deterioration/depreciation; unless the system is maintained, it is headed for termination.
• Emergent properties. Especially when outcomes are included within the system boundaries, the operation of systems impacts their environment and vice versa. This produces emergent properties, some of which can be anticipated and some not. A contemporary health example is e-safety – managing e-health to avoid unintended negative consequences. The introduction of health information and technology systems has often inadvertently had adverse effects, and the e-safety agenda in health informatics (see chapter F.15 below) is an example of an emergent property.

109
Q

Complex adaptive systems - Characteristics and properties

A

Characteristics of complex adaptive systems include (The Health Foundation, 2010):
• Many elements act autonomously and interact dynamically. Elements in the system are not aware of the system’s behaviour as a whole and respond only to what is available or known locally.
• Each element in the system is affected by and affects several other systems.
• Significance of history - the past helps to shape present behaviour.
• Non-linear interactions, such that small changes can have significant effects.
• Openness - it may be challenging to define system boundaries.
• Entropy - a constant flow of energy is required to maintain the system’s organisation.

Typical properties of complex adaptive systems include:
• Self-organisation. Complex adaptive systems do not have a single or unified hierarchy of command. Instead, they constantly reorganise themselves to find the best fit with the environment.
• Emergence. Although the agents in the system act autonomously, patterns emerge from their actions and interactions that ultimately guide and change the agents’ behaviour and the system itself.
• Co-evolution. As the environment changes, the system changes to ensure the best fit. This creates constancy of change as the system re-adapts to the environment and vice-versa.
• Connectivity. The relationships between the agents are usually of more concern than the agents themselves in influencing change.
• Importance of diversity. Just as biodiversity boosts ecosystem productivity, the greater the variety within a complex adaptive system, the stronger it is and the more likely it can generate new possibilities and co-evolve.
• Unpredictability. Because interactions are nonlinear, elements are changeable, and behaviours are creative, complex adaptive system futures cannot be deduced (as for predicting the behaviour of a machine). They can be modelled, but the predictions generated are bound to sets of assumptions that may be inherently unstable.

110
Q

High-reliability systems - 5 key concepts

A

• Preoccupation with failure. This includes explicit recognition that the system deals with inherently high risk. Thus, when near-misses occur, they are viewed as indicators of systems that should be improved rather than proof that the existing safeguards are working.
• Sensitivity to operations. This involves maintaining situational awareness - constant awareness of the state of the systems and processes, which is key to noticing risks and managing them.
• Reluctance to simplify. This is the opposite of reductionism and involves explicitly recognising that the system is complex. Avoiding overly simple explanations of failure (e.g., unqualified staff, inadequate training) is essential to understanding the real reasons for failure or risk.
• Deference to expertise. This involves avoiding rigid hierarchies and allowing the greatest expertise to make decisions irrespective of their administrative standing.
• Commitment to resilience. This involves building trust and the capability to improvise, ensuring the system is trained and prepared to respond when failures occur.