Mark Scheme Answers Flashcards

1
Q

Data Transmission

A

Multiplexing - combines multiple signals into one so that multiple messages can be transmitted along the same channel at the same time

Parallel Transmission - sending each bit of a byte at the same time along separate channels. Examples include communication between components on the motherboard or video streaming (HDMI)
Faster data transmission than serial.

Serial Transmission - sending a series of bits one after another along a single channel. Examples include network communications. Operates reliably over long distances.
Requires only two wires. Can travel longer distances. Simpler interface.
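
To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not real hardware I/O): parallel transmission would place all eight bits of a byte on separate lines at once, whereas serial transmission clocks them out one at a time on a single line.

    def to_bits(byte_value):
        # Most significant bit first, as a list of 0s and 1s
        return [(byte_value >> i) & 1 for i in range(7, -1, -1)]

    byte = 0b01001101
    print("Parallel: all bits on separate lines at once ->", to_bits(byte))
    for bit in to_bits(byte):        # serial: one bit after another on one channel
        print("Serial line sends:", bit)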

Data Collision:
- Bus networks are bi-directional.
- Messages might be transmitted simultaneously from two computers and
come into collision.
- Computers will detect that a collision has occurred due to the interference pattern
produced.
- All computers stop transmitting, then wait for a random time interval before attempting
to retransmit.
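
The recovery behaviour above can be illustrated with a short hedged sketch (a toy simulation with an invented channel_busy check, not a real network stack): after each detected collision the sender waits a random interval before retrying.

    import random
    import time

    def send_with_backoff(message, channel_busy, max_attempts=5):
        """Toy collision recovery: wait a random interval after each collision, then retry."""
        for attempt in range(max_attempts):
            if not channel_busy():                          # no collision this time
                print("Sent:", message)
                return True
            wait = random.uniform(0, 0.05 * (attempt + 1))  # random back-off interval
            print(f"Collision detected, waiting {wait:.3f}s before retrying")
            time.sleep(wait)
        return False

    # Example: a channel that reports collisions on the first two attempts
    attempts = iter([True, True, False])
    send_with_backoff("Hello", lambda: next(attempts))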

Ring Network Collisions:
- Ring networks operate in a single direction.
- Token ring networks carry a single circulating token, to which a message must
be attached for transmission. Only one message can be transmitted at a time.
- Larger ring networks are divided into sectors, separated by nodes. Only one
message may be present in each sector. Messages will not be transmitted onwards
until the next sector is clear.

Switch:
A switch is used to connect computers in a local area network.
- The switch is programmed with / maintains a table of the machine (MAC) addresses of connected devices, so it can send data to the required device.
- When a packet of data is received by the switch, it is checked to determine the destination address.

WAP (Wireless Access Point)
- Transmits and receives radio waves to/from devices’ wireless network interface cards.
- Might be connected to a router, but in a large building, several of these might connect to a switch, which would in turn be connected to a router.

Router:
- A router is used to forward data packets between networks.
- Routers control traffic on wide area networks such as the Internet.
- The router determines the destination of a data packet from the IP address in the packet protocol,
then selects an appropriate route for onwards
transmission.
- Routers may hold information about current transmission speeds to adjacent nodes, so that the
fastest path for onward transmission can be selected.

Multiplexor:
- A multiplexor allows multiple messages to be combined, so that they can be sent over a data
link simultaneously, then separated again at the end of the link.
- Time division multiplexing allocates small time slices alternately for data from each of the input
message streams.
- Frequency division multiplexing sends the different messages simultaneously, but using
different transmission frequencies.
- On a mainframe (multi-user) computer, a multiplexor allows input to the system from
different terminals, then routes system output to the correct terminal.
- On a wide area network (e.g. Internet), multiplexing may be used to combine messages
for transmission over the very fast high-capacity backbone of the network.
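
A minimal sketch of time-division multiplexing (illustrative only; the function names are invented): the multiplexor takes one slice from each input stream in turn, and the demultiplexor separates them again at the far end of the link.

    from itertools import zip_longest

    def tdm_multiplex(*streams):
        """Interleave one item (time slice) from each input stream in turn."""
        combined = []
        for time_slice in zip_longest(*streams, fillvalue=None):
            combined.extend(item for item in time_slice if item is not None)
        return combined

    def tdm_demultiplex(combined, n_streams):
        """Separate the interleaved link back into its streams (assumes equal-length streams)."""
        return [combined[i::n_streams] for i in range(n_streams)]

    link = tdm_multiplex(["A1", "A2", "A3"], ["B1", "B2", "B3"])
    print(link)                        # ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']
    print(tdm_demultiplex(link, 2))    # [['A1', 'A2', 'A3'], ['B1', 'B2', 'B3']]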

Half Duplex - Data can be sent in either direction, but only in one direction at a time.
Full Duplex - Data can be sent in either direction at the same time

Network Protocol - Necessary to specify data formats to enable devices to communicate with each other, such as connecting a printer to a computer or using HTTP or other protocols to transfer data between devices
- An agreed-upon set of rules which allows two devices to communicate and transfer data

Parallel Processing - Simultaneous use of several cores or processors to perform a single
task. Used when extremely large and complex calculations are being carried out

Circuit Switching:
- Path is set up between sender and receiver
- All data follows the same path, in order
- Path cannot be used by any other data

Packet Switching:
- Data is split into packets
- Each packet has a destination address
- Packets are analysed by each node
- Packets are sent down the most appropriate path
- Each node maintains a routing table
- Each packet may be transmitted over different routes
- packets may arrive out of order and are reassembled
- Better security as it is difficult to intercept
- Makes more efficient use of data lines as there is no waiting during gaps

Contents of packets:
- Actual data
- Destination address
- Source address
- Packet id / Order number
- Checksum
- Length
- Protocol
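
A hedged sketch of how those fields might be represented (the field names and layout are invented for illustration, not a real protocol header):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source: str          # source address
        destination: str     # destination address
        order: int           # packet id / order number, used for reassembly
        protocol: str        # e.g. "TCP"
        data: bytes          # the actual data (payload)
        length: int = 0      # payload length in bytes
        checksum: int = 0    # simple error-detection value

        def seal(self):
            """Fill in the length and a simple checksum over the payload."""
            self.length = len(self.data)
            self.checksum = sum(self.data) % 256
            return self

    p = Packet("192.168.0.2", "10.0.0.5", order=1, protocol="TCP", data=b"Hello").seal()
    print(p)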

File Transfer Protocol (FTP):
- Allows the transfer of large files over a network
- Has built in error checking and re-transmission requests if necessary

Hypertext Transfer Protocol (HTTP)
- Allows the transfer of multimedia webpages over the internet
- Allows multiple different web browsers to display and format web pages as
the original author intended

Simple Mail Transfer Protocol (SMTP)
- Allows emails to be sent over a network (server to server)
- Provides a standard way of transferring emails between two different servers

Internet Message Access Protocol (IMAP)
- allows emails to be transferred between computer systems
- provides standard way of transferring emails between two different servers
- messages are retrieved from server to computer

DHCP (Dynamic Host Configuration Protocol) - Ensures each device that connects to a network has a unique IP address
- Assigns IP addresses to devices on a network
- Ensures unique and dynamic IP addresses are allocated (automatically)
- allows addresses no longer in use to be automatically returned to the pool of IP
addresses available for reallocation

User Datagram Protocol (UDP)
- Sends data across network/internet with very few error recovery services
- important for video and audio streaming as protocols are designed to handle
occasional lost packets and need to receive new packets rather than retransmission
to prevent buffering.

TCP/IP (Transmission Control Protocol / Internet Protocol)
- Allows networked computers to communicate with each other
- Specifies how signals are routed and transported around a network and reduces the need
for gateways to convert signals into different protocols.
- a suite of protocols that control exactly how data is broken down for transmission from sender to receiver, across a network

VoIP
Voice Over Internet Protocol. This protocol allows the Internet to be used
as a phone network; users with microphones and headphones can talk to one
another this way.

Handshaking - When a computer system establishes a device's readiness to communicate

1
Q

The need for different types of software systems

A

Computer Aided Design (CAD)
- To run specialised graphics software able to carry out the geometric calculations
necessary to produce accurate 2D and 3D screen representations / models; that
can be viewed and manipulated from all angles.
- To improve the efficiency / productivity of the design process; by enabling early
visualisation of design proposals, improve record keeping through better
documentation and version control and promote team working through better
communications.

Computer Generated Animations
- A medical animation - a short educational film, usually based around a physiological
or surgical topic, rendered using 3D computer graphics and most commonly
used as an instructional tool for medical professionals or their patients.
- Education and training. A popular tool in classroom teaching and learning and in
work related training. Use of animation can increase interest & motivation in
learning.
- Forensic animation - The use of computer animation, stills, and other audio visual
aids to recreate incidents to aid investigators and help solve cases.

Expert System
- An expert system uses an inference engine, knowledge base of facts and rules for decision
making.
- Facts and rules should be produced by a specialist with relevant expertise, using the best
available information.
- The user is asked a series of questions. Subsequent questions may vary according to
the answers given.
- Question sequences should be designed so as to gather the necessary information needed for
decision making for all valid sets of input values.
- The user interface should be user-friendly, with adequate help and error trapping during data
entry.
- The system should generate results on screen or on paper in a format which is clearly
understandable to the user.
- The system should list its results in order of suitability, or indicate a relative value or score
for each.
- The system should explain its reasoning in reaching its decisions, so that the accuracy of
the results can be evaluated.
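
A toy sketch of the facts/rules/inference idea (the knowledge base here is invented and far smaller than a real expert system shell): rules are held separately from the facts gathered from the user's answers, and the inference engine fires every rule whose conditions are satisfied.

    # Each rule maps a set of required facts to a conclusion.
    rules = [
        ({"enjoys_maths", "enjoys_programming"}, "Consider Computer Science"),
        ({"enjoys_biology", "enjoys_chemistry"}, "Consider Medicine"),
    ]

    def infer(facts):
        """Very small inference engine: return the conclusion of every rule whose conditions hold."""
        return [conclusion for conditions, conclusion in rules if conditions <= facts]

    # Facts would normally be gathered by asking the user a series of questions.
    user_facts = {"enjoys_maths", "enjoys_programming"}
    print(infer(user_facts))           # ['Consider Computer Science']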

A-Level System
- The expert system allows more students to receive advice than is possible with interviews
alone.
- Students may investigate career/course options using the expert system as preparation before
an interview, to make best use of interview time.
- The expert system can be regularly updated with the latest career/course information.
- Students may feel more comfortable using an expert system rather than consulting a careers
advisor in person, e.g. if they were very uncertain about choice of career and didn’t feel
ready to discuss this yet.
- The expert system can be made available at any time of the day, and from any location by
internet.
- Staff costs will be lower than if additional careers advisors are employed.

Robots in Manufacturing:
- Accurate assembly, e.g. circuit boards.
- Carrying out unhealthy or dangerous activities, e.g. car body welding or spray painting.
- Repetitive operations, e.g. packing food items in boxes.
- Warehouse functions, e.g. collecting selected items from shelves.
- Lower prices due to reduced manufacturing costs.
- Consistent quality due to accurate manufacturing.
- Faster delivery times.
- Quicker innovation for new products.
- Need for retraining of the workforce to operate new technology
- Cost of specialist technicians and programmers
- Risk of breakdown of a complex system affecting production
- Cost of adapting factory premises for automation / Initial setup costs
- Risk of malicious damage by hackers

Input/Output Control Systems
- Monitoring speed, then applying power or braking as necessary.
- Monitoring geographical location, and applying brakes on entering a station.
- Monitoring the track ahead, and applying brakes if an obstruction is detected.
- Monitoring the state of the carriage doors, and not moving from the platform
if doors are open.
- Monitoring fire warning systems and taking emergency action if a fire is detected.

Safety Risks of Data Transfer
- A safety critical system is one in which a malfunction or failure of computer hardware or
software could potentially put persons at risk of injury.
- Exhaustive testing of systems must be carried out before they are brought into
service.
- Systems should have redundancy where possible (e.g. a backup computer can be
brought into use immediately if the main computer fails).
- Systems should be designed to be fail-safe (e.g. a train will safely come to a halt if a
malfunction is detected).
- Regular maintenance and testing should be carried out (e.g. of outdoor cabling which
might be affected by rainwater, or control equipment on a train which might be affected
by vibration)
- High levels of security must be maintained, to guard against malicious attacks.

Driverless trains:
- No possibility of human error (for instance passing a signal at red)
- Train’s control system could apply the brakes at/before a red signal
- Obstruction / train ahead detection could be included
- Could govern the maximum speed
- Could prevent starting with any doors open
- No driver so save salaries, plus no sickness, lateness etc.

Doctor expert systems
- expert system is a software system / type of artificial intelligence
- it is based on facts (Knowledge base)
- it is based on rules (Inference engine)
- can replace the human agent
- would help the doctor reach a diagnosis
- might help them to diagnose unusual conditions / more reliable / up-to-date
- final decision remains with the doctor
- might cause doctors to lose their jobs / doctor does not have to be present
- might save doctors time
* may be legal or ethical issues
- de-skilling

Forecasting Weather
- Inputs from thousands of weather stations e.g. satellites, balloons, ships etc / from huge
geographical area / whole world
- Requires the processing of a huge amount of data
- Requires comparison with huge amounts of historical data
- Requires very complex calculations
- Will require large, complex programs
- Processing has to be done very quickly as weather forecasts are no use if out-of-date
- Weather is often extremely unstable / chaotic / hard to predict
- May require very good graphics for visual representation

Selling Apps on internet
- Can receive feedback on the app
- Can provide updates as soon as they become available
- Can reach potential massive market so app can be sold at cheap price
- App is downloaded so no postage or package costs
- No physical media required so app can be cheaper (more profit)
- Can generate revenue from advertising on free apps
- Can target niche market by exploiting global marketplace
- Environmental benefits improve green credentials for the company
- Can sell directly to customers so no commission to third party app store
- Payment is received immediately
- Programmer could receive recognition of successful app and be ‘head hunted’ and find a very good job
- Can download and use immediately
- Can download anytime (24/7)
- Save time and/or money travelling to shop to buy
- Can read other customers' reviews before buying
- Can download again if app is lost/corrupted or new device
- Can access updates as soon as they become available (not twice)
- Potential massive market so app can be bought at cheap price (not twice)
- App is downloaded so no postage or package costs (not twice)

Open Source Software
- Free licence / General public licence
- Relaxed / non-existent copyright restrictions
- Built using community co-operation
- code is available for all to view, debug, rewrite
- free from commercial pressures
- frequent integration with other software packages
- several versions
- high modularisation

Virtual Learning Environment
- Software system designed to help teachers and pupils in the management and use of learning
resources
- Could contain details about homework / coursework / assignments
- school newsletter / achievements made by student
- feedback from teachers
- additional / background teaching materials

Intranet
- can only be accessed by members of organisation

2
Q

System Analysis
(Component 1)

A

User documentation
- User documentation should be straightforward and targeted specifically at the end user
- It must contain instructions on how to undertake any task using the system
- It must not contain overly technical information and should be user friendly
- It should act as an accompanying manual once the end user has undertaken appropriate training on how to use the system
- Tutorials and step by step instructions on how to perform tasks
- Referencing manual and glossary
- Trouble shooting guide, common errors and problems
- frequently asked questions

Maintenance documentation
- Maintenance documentation is technical documentation aimed at the person who manages and configures the software or system
- This could include IT technicians who understand all aspects of the system, including the software and hardware used
- Pseudocode and annotated listings
- Diagrams such as UML
- Data structure documents
- Algorithm designs including flowcharts
- variable lists
- data dictionaries
- design documents
- installation and configuration instructions and support
- Hardware and software requirements

Waterfall
- Developers draft the design of a system up front and it does not change.
- Once the analysis and design stages are complete, developers cannot go back to make any changes.
- If the analysis or design of the project is inaccurate or incorrect in any way, the project will fail due to the rigidity of the waterfall methodology.
- Requires less communication between the client and the developer.
- Client input is only required during analysis and at times the design stage.
- Sequential process

Agile
- Incremental approach to development
- Developers start with a simple project design and requirements.
- Iterative approach as analysis and design relies on each other.
- Analysis informs design and the design informs further analysis to be undertaken.
- Changes can be made after each phase of development, analysis can be revisited, and designs changed.
- Strong communication between the client and the developer should be regular
- Clients are involved during all stages of development.

Documents produced during analysis
- Questionnaires. These should be undertaken by a variety of stakeholders to support the analysis of the existing system. These questionnaires should measure the effectiveness of the current systems from the viewpoints of various stakeholders.
- Observations. Formal observations should be undertaken by the analysis team. These observers should monitor the interactions stakeholders have with the current systems, making relevant notes.
- Requirements. After the analysis has been completed, a formal set of requirements should be produced for any proposed changes to an existing system or implementation of a new system.

Documents produced during maintenance
- Annotated code listings. To ensure effective maintenance of the source code by any developer, a complete listing of the annotated source code is required to resolve issues or extend the system.
- Algorithm designs. A complete collection of all algorithm designs in pseudocode or flowchart format should be provided. These can aid a future developer in following the logic of a program for maintenance purposes.
- Data dictionaries. A data dictionary is a document that contains the structures of all databases, data types and the relationships between them. This is useful for maintaining, debugging and extending the data within the system
- Variable list
- Data dictionary
- Class diagram
- List of sub routines
- Entity Relationship Diagram

Factors considered when proposing a new system
- The factors that need to be considered when proposing a new system solution include cost, time scale and budget.
- A proposed system should be cost effective, in terms of human resources, finances, technology and time.
- A proposed system needs to be effective in terms of human resource costs. The proposed system must not over or under utilise developers. When developing a new system, each developer should be allocated roles and development activities.
- These activities should be overseen by a lead developer to ensure that human resources are being fully utilised and cost effective.
- A proposed project should be financially cost effective. Developers should research and source the most financially cost-effective methods/resources/technologies when proposing a new system.
- Technology sources, including hardware and software, should be cost effective.
- The system should have a specific time scale for development from inception to evaluation.
- The proposed system should follow a suitable development methodology with appropriate and realistic deadlines.
- These deadlines should follow a suitable plan to ensure an effective time scale for the project.
- The system needs to have a controlled budget. This budget should be managed accordingly to ensure the success and economic viability of the project.

Methods of changeover
- When implementing a new solution there are various methods of changeover that can be employed including direct, pilot, phased and parallel.
- Direct changeover is the simplest but most risky method of changeover. This method should only be employed where there is not an existing system already in place.
- New systems always come with a variety of problems including bugs and compatibility issues, and directly changing to a new system could have a significant impact on business and productivity if these issues occur.
- Pilot changeover is usually employed when a business has the required amount of resources to effectively test a new system by deploying it into one area for example, a new stock management system in one of a company’s many warehouses.
- This method allows bugs and other issues to be confined to just one area and when fixed the system can be rolled out on a much larger scale.
- Phased changeover is used when a system can be deployed in units or modules. This works well when parts of a new system are being developed independently and upgrading an existing system.
- When each module is implemented into an existing system many compatibility issues can occur between the new systems modules and the existing system.
- Parallel changeover is used when there is an opportunity for a system to fail. Parallel changeover implements a new system alongside an existing system, and if one fails the other takes over.
- System tasks are run concurrently on both the new system and the existing system, causing a duplication of tasks. These tasks can be used to ensure consistency between the new and existing systems.
- Parallel changeover is employed for critical systems such as those in hospitals and banks where data access and integrity is critical.

Testing
- Alpha testing is conducted in-house by developers and occurs before the customer agrees to accept the final program.
- Alpha builds are not shared with either the end user or with the customer.
- Alpha builds are not the final piece of software and often include limited functionality and many bugs.
- Beta testing is conducted after alpha testing and later on in the software development life cycle.
- Beta builds are shared with a limited number of end users to beta test the system with live data.
- Beta builds contain all the main functionality but will still include some bugs.
- Bugs reported by the beta testers are corrected by the development team.
- Acceptance testing is the final phase of testing during the software development life cycle.
- Acceptance testing is undertaken by the actual end users of the system with real data.
- The purpose of acceptance testing is to ensure the system has met the original requirements and specifications of the customer.

Feasibility
- Technical practicality
- Cost effectiveness
- Time scale
- Budget
- To provide information required to support a decision to proceed.

Fact finding techniques
- Observation of a sample of operators as they use the current system.
- Document inspection, including business documents, user manuals and maintenance records.

Stages in program production
Analysis, descriptions of;
- Abstraction / reduce problem to essential features
- Decomposition / top down approach
- DFD’s / illustration of data flows
Design of,
- Data structures / data types / variables and constants
- Algorithms / pseudo code / flowcharts of processes
- Sub routines
- HCI / inputs / outputs.
- Test data - typical, extreme and erroneous.
- Prototyping
Implementation; consideration of
- Type and level of language and IDE
- Translation method and writing / de-bugging of code
Documentation
- Description of an ongoing process
- User instructions, maintenance manuals
Testing; when and by whom
- Alpha
- Beta

Maintenance stages
- Perfective maintenance – to improve a system in use, making improvements that are not major enough to justify a new system.
- Adaptive maintenance – to change a system in use. Making changes to suit revised working requirements / OS versions / new hardware

Types of languages
- Procedural languages are suitable for both the Agile and Waterfall approach
- Scripting Languages are suitable for both the Agile and Waterfall approach
- Non-Procedural languages would be suitable for the Waterfall approach but some might not work as well with the Agile approach
- Non-procedural programming languages require programmers to specify rules and facts which is more suitable for Waterfall
- Object Orientated languages are suitable for both the Agile and Waterfall approach
- Visual languages would be more suitable for Agile
- 4th Generation languages are suitable for both the Agile and Waterfall approach

3
Q

System Design
(Component 1)

A

Criteria used to evaluate computer based solutions
- Requirements - evaluate the solution against the original requirements. All requirements should be met for a solution to be successful
- Cost - evaluate the solution against costs, which include financial costs, human costs and resource costs. A solution must not exceed any negotiated costs to be successful
- Robustness - evaluate the solution against its test results. A solution should use error trapping and validation methods to be successfully robust and reduce the chance of system errors and failures
- Usability - evaluate the solution against the ease of use for the end user. A solution should use an intuitive user interface suitable for the end user to be successful
- Performance - evaluate the performance of the solution; it should be fully optimised to reduce memory usage. A solution should complete specific tasks within a given time frame to be successful
- Functionality – the system must produce correct results for a given set of inputs.

Natural language interface
- A natural language interface is one where speech and natural spoken language are used to interact with and control a software application.
- One potential use for a natural language interface would be in translation software. Natural language could be processed in real time to allow for a seamless translation service.
- Colloquialisms and words can be interpreted differently regionally.
- Accents could make it difficult for a natural language interface to identify the words being spoken.
- Ambiguity in spoken language where a word may have more than one interpretation.
- Background noise could cause problems.
- Illness such as sore throat
- Two words that sound the same (two, to) – homophones
- Dialect / accents
- Use of proper nouns
- Words from other languages in common use
- Voice patterns

Human computer interface / interactions
- A natural user interface relies on intuitive actions related to natural, everyday human behaviour.
- Touch screens, where users touch or tap graphic icons.
- Gesture recognition systems which track and translate user movements into instructions.
- Speech recognition systems that identify spoken words and phrases and convert them into instructions.
- An immersive interface places one or more of the user's senses into a computer generated virtual environment.
- Virtual reality headsets or HMDs (head mounted displays) which receive video from a computer, possibly with head tracking (up and down movement).
- Binaural or 3D earphones to filter out natural sound and replace it with selected audio.
- Force feedback and touch controls provide sensation of using hands within a virtual environment.

Natural language in high level language
- Ambiguity is an uncertainty of meaning in which different interpretations are possible.
- High level programming languages must be unambiguous so that there is only one way to interpret each program statement
- and therefore enable accurate translation into machine code.

Voice input interface
- Speech is a very natural way to interact, and it is not necessary to use a keyboard or work with a remote control
- No training required for users
- Voice is hands-free making it suitable for use in a variety of environments e.g. driving
- Suitable for the disabled (qualified)
- Can be used to drive several apps in a sequence e.g. Find John Smith and give me directions to him.
- Faster than typing on a keyboard (must be qualified not just faster).
- Even the best speech recognition systems sometimes make errors e.g. homophones
- If there is noise or some other sound in the room (e.g. the television or a kettle boiling), the number of errors will increase
- Regional accents can affect the outcome
- Requires data connection to interpret speech and return results
- Delivering sensitive information e.g. credit card details could be a security risk.
- Only understands certain foreign languages

Touch Screens
- No need for another pointing device such as a stylus
- Can pinch and expand to scale images/text
- Screen can be used for input as well as output so device can be small
- Intuitive, so easy for beginners to learn to use
- Limits number of peripherals needed

Design review
- Checking the correspondence between the actual design and its specification / user requirements / objectives / safety issues
- Confirming that the most appropriate techniques have been used
- Confirming the HCI is appropriate for the application

Design validation
- check for correspondence between the designed system and the specification
- confirm that the most appropriate techniques have been used
- confirm that the user interface is appropriate

Types of interface
GUI
- GUI system is usually easy to learn for a novice user
- GUI system is usually more intuitive to use e.g. icons relevant to the application
- may be similar to other packages with which users are familiar
- can show images/videos etc to promote the clothing / make it appeal to customers
- can have an on-screen / soft keyboard
Touch screen
- generally more robust than e.g. mouse or keyboard
- easy to use with little computer knowledge / customer may be familiar with touch screens
- can be designed to replicate common mobile phones / tablets (swiping etc)
- takes up less space than a keyboard and mouse
- will be attractive to customers
- can have an on-screen / soft keyboard [not twice]
Forms dialogue
- customers can choose items from a list
- may have in-built validation
Text-based
- time consuming
- not attractive to most customers / not likely to have images
- not easy to learn or use in a crowded environment
Speech recognition interface
- not easy to use in a crowded environment - probably too much background noise
- may be ineffective until computer “learns” customer’s speech style: impractical
- may have problems with different accents / different voices, homophones etc
Voice synthesis
- not suitable in noisy environment (particularly if several computers nearby)
Handwriting recognition
- text input may not be appropriate for this application
- not very reliable
- may not be easy to use in a crowded shop
Mouse
- not easy for complete novice users
- easily damaged [not twice]
- could be stolen
Hardware Keyboard
- text input not appropriate for this application
- easily damaged [not twice]
- quite large [but not if used as a benefit of e.g. touchscreen elsewhere in answer]

Forms Dialogue
- Cursor may move automatically to next input field
- Intuitive to fill in - echoes familiar paper form / good for surveys etc
- Allows change to be made while screen still visible
- May include validation – only some entries allowed

Touch pad
- A touchpad can more easily be fitted into a small device like a laptop computer or PDA / does not require extended flat space to move the mouse over / allows multiple gestures, hand swipes

4
Q

Data security and integrity

A

Cyber attacks
Implications on individuals.
- Email accounts, social media sites, and other personal information have been compromised.
- Cyber-attacks affect personal data and privacy, as well as political, economic, and social systems.
- The implications for the average person range from identity theft to financial losses to reputational damage resulting from non-consensual data broadcasts.
- Many cyber-attacks are extremely personal. Hacked emails, social accounts, webcams, and mobile phones provide some of the numerous attack vectors for domestic assailants.
- Due in large part to the proliferation and accessibility of digital weapons, spyware and other surveillance tools can be implanted, and have been used in cases ranging from cyber bullying to domestic abuse.
Implications on society.
- On domestic and international politics, integrating cyber-attacks with data that contain a mixture of real and false information to influence public opinion / election results.
- Theft of business patents and processes cause detrimental impacts on businesses, with measurable financial and productivity implications.
- Usernames, passwords, credit card data, health records – malicious use of this data by criminals can result in the organisations the data was stolen from suffering from loss of reputation, fines, falling sales, and legal proceedings.
- Hackers gaining access to everyday public and utility services in order to cause havoc and disrupt society constitutes a genuine threat to UK national and infrastructure security.
- Hackers gaining access to national security and military systems to cause havoc and hold nations to ransom.
Security measures
- Turn on firewalls, use anti-virus / antimalware software
- Password rules. Strong passwords are one of the first lines of defence. Make regular password updates mandatory and use strong passwords.
- Update regularly. Any connection to the Internet is vulnerable. Keep every connection, operating system, and application up to date with patches and enhancements.
- Implement VPNs for all connections. Networks that are protected only by generic security measures are more vulnerable to attack. Implement virtual private network (VPN) connections and make their use easy and mandatory when using public Wi-Fi services.
- Retire unused services. When systems are no longer needed, delete the applications, logins, and user credentials associated with them. Turn off unused software features such as a video chat function to limit potential for unauthorised access.

Symmetric (single key) and asymmetric (double-key) encryption methods
- Single key encryption can be faster in use. Double key encryption takes longer to encrypt a document, and longer to decrypt, due to the large amount of calculations involved.
- It can be faster to set up a single key encryption system than a double key system, as the programming involved may be simpler.
- Single key encryption may not be secure if the key value has to be transferred over the internet and is intercepted by an unauthorised person.
- Double key encryption avoids the security risk by only revealing the public encryption key to the sender. The private decryption key is held securely by the receiver and not revealed.
- Single key encryption suitable for personal use in encrypting files on a single computer. No transfer of the key value to another user needed.
- Single key encryption is suitable for use within an office or work group, where the key value can be transferred during personal meetings or over a secure local area network.
- Double key encryption is more suitable for transfer of confidential data over the internet (such as credit card details), e.g. on-line hotel/airline bookings or shop purchases.
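
To make the single-key idea concrete, here is a deliberately simplified sketch (an XOR toy cipher, not a secure algorithm): the same key both encrypts and decrypts, which is exactly why the key itself must be transferred securely.

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        """Toy symmetric cipher: XOR each byte with the (repeating) key.
        Applying the same function twice with the same key restores the data."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"secret"
    ciphertext = xor_cipher(b"Room 101 booking", key)    # encrypt
    plaintext = xor_cipher(ciphertext, key)              # decrypt with the SAME key
    print(plaintext)                                     # b'Room 101 booking'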

Protecting the security and integrity of data and computer system
- The hotel should consider physical security. Keep the computer in a locked area when staff are not present. CCTV might be used to monitor the reception area.
- Individual members of staff should have user names and passwords.
- Access to the network by different users should be recorded in a log file.
- Staff should receive training and sign a code of conduct regarding computer use and confidentiality of data.
- Staff may be given different levels of access to the computer system, according to their job roles.
- Some staff may have read-only access to booking data.
- Sensitive data such as bank account details should be held on the computer system in encrypted format.
- All client data transmitted by e-mail should be sent in an encrypted format.
- If the hotel accepts bookings and payments from its web site, then customers should be able to submit their data through a secure encrypted system.
- The hotel should introduce an efficient backup system, with copies of data made daily. Backup data should be stored off-site, either on a portable storage device or using the ‘cloud’.
- A transaction file should be kept, to help in restoring data in the event of loss.
- The hotel must implement improved security in order to conform with the Data Protection Act.

Types of malicious software
- Viruses. Viruses are programs that can replicate themselves and be spread from one system to another by attaching themselves to host files. They are used to modify or corrupt information on a targeted computer system.
- Worms. Worms are self-replicating programs that identify vulnerabilities in operating systems and enable remote control of the infected computer.
- Spyware. Installed by opening attachments or downloading infected software. Spyware can be used to collect stored data without the user’s knowledge.
- Trojans. A Trojan is a program that appears to perform a useful function, but also provides a ‘backdoor’ that enables data to be stolen.
- Virus and spyware checking software should be installed and kept up to date.
- A firewall can protect against unauthorised access to the computer system.
- E-mail attachments should not be opened unless from a trusted source.
- Users should be cautious of fraudulent e-mails asking for passwords, or fraudulent telephone callers asking for particular web pages to be loaded.
- Password hierarchy
- Access levels
- User policies

Risks during transfer of data
- Data sent over the internet may be intercepted. Sensitive data should be encrypted.
- Data is particularly at risk if sent or received at public Wi-Fi locations. Password protection should be used.
- Data sent by post (e.g. on a DVD or on a USB memory stick) may be intercepted.
- The storage medium should be password protected. Sensitive data should be encrypted.

Biometric data
- Biometric data refers to measurement and recording of some physical characteristic of a person,
- which can be used to uniquely identify that person.
- Facial recognition data. Measurements of the distances between key points on the face, e.g. eyes, nose, ears.
- Fingerprint data. Patterns of whirls and loops in the fingerprint pattern.
- Iris scan data. Colour pattern of the iris at the front of the eye.
- Hand Geometry – identifies users by the shape of their hand.
- Palm vein – patterns of the blood vessels in their palms.
- Signature recognition – characteristic writing style.
- Voice pattern recognition – characteristic frequencies of spoken sounds.
- Human gait – identification by the way a person walks.
- Ear canal.
- Body Odour identification.
- Data capture (e.g. by photography or scanning)
- The data would be digitised and stored on a database.
- During access, data would again be captured and compared to the reference record stored in the database.
- A decision made, based upon the comparison.

Voice recognition and biometric data
- The voice print of each employee will initially be recorded when they join the company.
- This is stored in a secure format (encryption).
- On attempted entry to the building, the original voiceprint record is compared with the current voice print of the employee.
- If they match, entry is permitted.
- A number of attempts are permitted.
- More secure as it is difficult to replicate the data / unique voice print.
- Can’t be lost, stolen or forgotten.
- Can’t be phished or tricked out of someone.
- Can speed up queues at the entrance / exit.
- Not always reliable under some circumstances, e.g. background noise.
- People's voices change over time.
- Privacy concerns.
- Expensive to set-up.

Disaster planning
- To aid a rapid recovery from disaster, periodic / regular backups should be made, with files archived offsite and/or in a fire-proof environment. An alternative system (computer-based or manual) should be available, as should a back-up power supply.

5
Q

Economic, moral and ethical
(Component 1)

A

Code of conduct
- Sets out the professional standards required by the Institute as a condition of membership.
- a code of conduct includes standards for professional competence and integrity

Code of professional competence
- Only undertake to do work or provide a service that is within your competence.
- NOT claim any level of competence as an ICT Technician that you do not possess.
- Develop your professional knowledge, skills and competence on a continuing basis, maintaining awareness of technological developments, procedures, and standards that are relevant to school ICT systems
- Ensure that you have the knowledge and understanding of Legislation and that you comply with such Legislation, in carrying out your professional responsibilities within the school.

Code of integrity
- Respect and value alternative viewpoints and seek, accept and offer honest criticisms of work by teachers and management
- Avoid injuring others, their property, reputation, or employment by false or malicious or negligent action or inaction.
- Reject and will not make any offer of bribery or unethical inducement in relation to exams or coursework
- Confidentiality, respect confidentiality of pupils, exams, and staff

General Data Protection Regulation 2018
- A set of rules to protect the privacy of all European Union citizens.
- GDPR aims to simplify the data, privacy and consent legislation across the EU in the digital age.
- All private data must be collected lawfully and with consent.
- All data collected and stored must be protected from misuse and exploitation.
- The types of data considered personal under the existing legislation include name, address, and photos.
- GDPR extends the definition of personal data so that something like an IP address can be personal data.
- It also includes sensitive personal data such as genetic data, and biometric data which could be processed to uniquely identify an individual.
- Under the GDPR it is a legal requirement that data breaches such as hacking are reported to the relevant authorities within 72 hours, and the consumer has a right to know when a breach occurs.
- Businesses also need to make it easier for consumers to access their data and be very clear on how their data is being processed and used.
- GDPR also acknowledges the right to be forgotten, whereby a business should delete data held on a consumer if they have no grounds to retain it.
- Parental consent is required for the processing of data of under 16-year olds.
- Data processors can be directly liable for the security of personal data.

Data Protection Act 1998
- Personal data shall be processed fairly and lawfully.
- Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.
- Personal data shall be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.
- Personal data shall be accurate and, where necessary, kept up to date.
- Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.
- Personal data shall be processed in accordance with the rights of data subjects under this DPA.
- Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.
- Personal data shall not be transferred to a country or territory outside the EU unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.

The Regulation of Investigatory Powers Act 2000 and the Investigatory Powers Act 2016
- Internet and communications companies such as internet service providers and mobile telecommunications providers retain customer browsing history for up to one year. This data can be accessed by a range of public bodies including British security services and the police, upon issue of a warrant.
- Allows the GCHQ, MI6 and MI5 to collect bulk personal datasets including NHS Health Records. When information is bulk collected, it will not only contain information on persons of interest but will also contain information on innocent members of the public.
- Allows the GCHQ, MI6 and MI5 to carry out equipment interference also known as ‘hacking’ personal digital devices upon issue of a warrant. These devices include personal computers and mobile phones. If there is encryption on the devices the service provider will have to comply in bypassing the device security to access any personal data.

Human Rights Act 1998 Article 8
- Right to a private and family life.

6
Q

File organisation

A

The supermarket’s system uses real time transaction processing.
Explain how the stock control system would operate.
- The master file/stock file would be updated as purchased items are processed/scanned at the checkouts.
- Deliveries of goods received from suppliers/central warehouse would be added to the master file/stock file in real time/as they are received.
- Quantities in stock would be compared with a specified minimum quantity each time a purchase is recorded, and items below this stock level would be listed for reordering.
- Items below the stock level will be added to the re-order file which will be updated as transactions take place.

Only one terminal/checkout can update a stock record at a time. Identify the potential problem if customers purchase items with the same stock code at different checkouts at the same time and describe a possible solution to this problem.
- The potential for data integrity to be compromised.
- When one terminal is updating a record, the record will be locked to prevent any other device making changes.
- Further changes to a record will be held in a queue/buffer and processed when the previous update is completed.
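
A hedged sketch of the record-locking idea (an in-memory lock standing in for a real stock database): only one checkout can update the stock record at a time, and any other update waits until the lock is released.

    import threading

    stock = {"0001": 50}            # stock level for one product code (illustrative)
    record_lock = threading.Lock()  # lock shared by all checkouts for this record

    def sell(product_code, quantity):
        with record_lock:           # other checkouts queue here until the lock is free
            stock[product_code] -= quantity

    checkouts = [threading.Thread(target=sell, args=("0001", 1)) for _ in range(10)]
    for t in checkouts: t.start()
    for t in checkouts: t.join()
    print(stock["0001"])            # always 40: no update is lost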

Data processing
Inputs:
- Updated meter reading
- New customer / amended customer details
Processes:
- Sorting of transaction file in key field order to match master file.
- Merging of data to update master file
- Calculation of electricity used and cost
Outputs:
- Updated master file
- Customer bills
- Error log

Explain why this hashing algorithm is unsuitable
- Only 10 out of a 1000 memory locations will be accessed
- Product codes will hash to only a few locations so many collisions will be expected.
- Collisions will result in overflow; locating a record in overflow will involve slow serial access.
- Will only work up to year 99 before duplicating the year.

Describe a more suitable hashing algorithm
- Delete the last 2 digits ‘20’ and carry out MOD 1000 on the remaining 5 digits.
- Change the hash function to: key field DIV 10 000 so the location is determined by only the first three digits of the product code.
- Modify the hash function so that 20 is not a factor of the modulus, e.g. key field MOD 999
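
A hedged sketch of two of the suggested fixes (the product codes shown are invented examples): both ensure that the repeated '20' no longer dominates the calculation, so records spread across the available locations instead of clustering.

    def hash_v1(key_field: int) -> int:
        """First suggestion: delete the last 2 digits ('20'), then MOD 1000."""
        return (key_field // 100) % 1000

    def hash_v2(key_field: int) -> int:
        """Third suggestion: use a modulus that 20 does not divide, e.g. 999."""
        return key_field % 999

    for code in (1234520, 1234620, 9876520):   # illustrative product codes only
        print(code, hash_v1(code), hash_v2(code))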

Fixed length and variable length fields
- If records have a fixed length, the position of any record in the file can be calculated by multiplying the record length in bytes by the record sequence number. (There is no fast way of locating a variable length record with a particular sequence number.)
- A fast binary search can be used to locate a fixed length record in a sequential file.
- Variable length records can only be found using a slower linear search method.
- Fixed length records can be quickly updated without affecting other records in the file. There should be empty space present in the record to allow for any increase in the size of the data (e.g. changing an address in a customer record).
- If a variable length record is updated, the size of the record will change. The file will need to be rebuilt and the updated record inserted at the correct point in the sequence.
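
A short sketch of the direct-access calculation for fixed-length records (the record length and file name are invented): the position of record n is simply record length multiplied by the sequence number, so any record can be reached with a single seek.

    RECORD_LENGTH = 32    # bytes per fixed-length record (illustrative)

    def read_record(f, sequence_number):
        f.seek(RECORD_LENGTH * sequence_number)   # jump straight to the record
        return f.read(RECORD_LENGTH)

    # Build a small file of fixed-length records, then jump straight to record 2.
    with open("customers.dat", "wb") as f:
        for name in ("Alice", "Bob", "Carol", "Dave"):
            f.write(name.encode().ljust(RECORD_LENGTH))   # pad each record to a fixed size
    with open("customers.dat", "rb") as f:
        print(read_record(f, 2).strip())                  # b'Carol'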

Variable length records
- Variable length records are preferred when the records in a file are of very different lengths.
- So as to avoid wasting memory / storage / disk space.
- Variable length records are suitable for situations where no searching or updating is necessary e.g. transaction files which will be used later to update a master file.

Random Access
- Explanation of strategy to save records, using hashing to convert a key field value into a memory location/address.
- Explanation of strategy to handle collisions.
- Explanation of strategy to access records.
- If the main file is large and there are few collisions, most records will be found immediately.
- Separate overflow area where records are stored in the next available memory location
- Overflow area can be serial file or linked list
- Searching the separate overflow area is done linearly and may be slow if the area is large
- Progressive overflow within the main file may be used, so records are found very close to their home locations even if overflow has occurred.
- Could be a problem with the random access file if the overflow area becomes too large – main file may need to be restructured with a different hash function.
- Could be faster than obtaining records from the sequential indexed file, where several index blocks must be searched and the required record found amongst other records in a data block.

Indexed sequential
- Explanation of strategy to save records in data blocks on disk, with records sorted into sequential order within each data block.
- Explanation of locating records through use of index block pointers.
- Explanation of multiple levels of index.
- Fast searching using indexes will find the data block containing the required record.
- Could be faster than searching the overflow area of the random access file.
- Easy to add any amount of further records by adding extra data blocks, then setting index pointers.

Overflow
- the overflow area is a separate file.
- the overflow area uses serial storage.
- Records are likely to be stored at or close to the calculated location, so access will be fast.
- If a record is not in the file, this will be known as soon as the first empty location is reached.
- Less total storage space needed
- The main file has a fixed maximum capacity, so storage of further records may be prevented.

A suitable hashing algorithm will map component numbers onto a smaller range of addresses, by generating fewer digit address references.

Use progressive overflow: if the location is occupied, use the next available location; if the end of the file is reached, wrap around and start searching from the beginning again (a sketch follows below).
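
A minimal sketch of progressive overflow under those assumptions (a fixed-size file represented by a Python list, with None marking an empty location):

    TABLE_SIZE = 11
    table = [None] * TABLE_SIZE

    def home_location(key):
        return key % TABLE_SIZE

    def insert(key, record):
        """If the home location is occupied, try the next one, wrapping around to the start."""
        loc = home_location(key)
        for _ in range(TABLE_SIZE):
            if table[loc] is None:
                table[loc] = (key, record)
                return loc
            loc = (loc + 1) % TABLE_SIZE       # progressive overflow with wrap-around
        raise RuntimeError("File full")

    def find(key):
        loc = home_location(key)
        for _ in range(TABLE_SIZE):
            if table[loc] is None:             # first empty location reached: key not present
                return None
            if table[loc][0] == key:
                return table[loc][1]
            loc = (loc + 1) % TABLE_SIZE
        return None

    insert(14, "widget")    # home location 3
    insert(25, "gadget")    # also hashes to 3, so stored at 4 by progressive overflow
    print(find(25))         # gadget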

Master Files and Transaction files
Master file
- Holds the main descriptive data to be processed, and the resultant data after processing is complete, i.e. long-term data records which contain data that does not change or that is only periodically updated
- Data is held sequentially, in key field order.
- Example: Customer details for electricity company
Transaction file
- Contains the transactions i.e. changes that are supposed to be made to the data in the master file
- Data is held serially in temporal order i.e. in the order it was collected
- Example: Customer meter readings for electricity company
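
A hedged sketch of the update process described above (customer ids, readings and the unit cost are all invented): the transaction file is sorted into key-field order, then merged with the master file to produce the updated master file and the customer bills.

    master = {1001: 5500, 1002: 7200, 1003: 3100}   # customer id -> last meter reading
    transactions = [(1003, 3350), (1001, 5720)]     # new readings, in the order collected

    UNIT_COST = 0.15                                # illustrative price per unit

    # Sort the transaction file into key field order to match the master file, then merge.
    for customer_id, new_reading in sorted(transactions):
        units_used = new_reading - master[customer_id]        # calculate electricity used
        print(f"Customer {customer_id}: {units_used} units, bill {units_used * UNIT_COST:.2f}")
        master[customer_id] = new_reading                     # update the master record

    print(master)                                   # updated master file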

Multilevel index
- An index is used to improve (read) access times to records
- There is a main index that contains the location of the next index
- This process may extend to several levels.
- The last index contains the physical address of the record.

7
Q

Principles of Programming
(Component 1)

A

Procedural and object orientated programming paradigms
- Procedural programming supports a logical step-by-step process such as implementing an algorithm
- Procedural programming follows a top down approach (breaking bigger problems into lots of little sub problems)
- Allows programmer to define precisely each step when performing a task
- Provides close control over the underlying operation of the hardware
- Programs are divided into functions
- Procedural programming relies on the use of iteration, sequence and selection
- Examples of PP include Pascal and C
- Object orientated programs are divided into objects, classes and methods
- Can use inheritance to reduce code duplication and increase flexibility
- Allows data to be encapsulated making data more secure
- Easier to expand programs and multiple developers can work on one project without affecting others code.
- Examples of OOP are C++ and Java

Functional Programming and Logic programming paradigms
- Functional programming uses a series of function definitions which are evaluated as mathematical expressions to solve a problem
- Functional programming is a declarative language which works by programmers coding what problem they want to solve rather than how they are going to solve a specific problem
- Functional programming is used in research and testing
- An example of a functional programming language is Haskell
- Logic programming is used to solve programming problems using a specific knowledge base
- Logic programming takes a problem or question and will produce a solution based on this knowledge base
- Logic programming is used in expert systems, machine learning and artificial intelligence
- An example of a logic programming language is Prolog or Mercury
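
A tiny functional-style sketch (written in Python only for continuity with the other examples here; a language such as Haskell would normally be used): the problem is expressed as function definitions evaluated as expressions, with no step-by-step changes of state.

    def square(x):
        return x * x                              # pure function: result depends only on the input

    def sum_of_squares(numbers):
        return sum(map(square, numbers))          # evaluate expressions rather than loop and mutate

    print(sum_of_squares([1, 2, 3, 4]))           # 30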

Need for standardisation of computer languages
- When new hardware or software is developed it needs to be compatible with existing hardware and software
- Products developed by different companies need to meet standards to ensure compatibility across platforms.
- There must be interoperability (exchange and make use of information) between new products and with existing products to exchange and use data
- Using standards ensures products can work as part of larger system or network
- Examples of standardised computer languages are HTML5, CSS3 and JavaScript which are maintained by W3C (World Wide Web Consortium)

Difficulties involved in agreeing these standards
- All standards must be very detailed to ensure consistency in their implementation which has high costs in terms of time and money
- companies and businesses will only agree to standards when they are in their best interests
- Many companies have different targets and goals, and this can cause difficulties when agreeing on unified standards (need to meet requirements)
- Standards need to be broad enough to ensure they meet the demands of a wide range of complex problems but specific enough that they are implemented correctly

Advantages and disadvantages of using an object-oriented paradigm (OOP)
- Improved productivity when developing software due to the flexible and extendable nature of OOP.
- Software is easier to maintain as OOP is modular and reusable.
- Development is faster due to the reusable code and libraries.
- Development is cheaper.
- Software can be tested more easily, making it higher quality.
- Software is easier to design as objects model the real world.
- OOP is difficult and not as 'logical' to some developers; it is complex to create applications in.
- Software can become larger – more code - than procedural programs.
- OOP programs can run slower than PP as there is more code to execute.
- OOP cannot be used for all types of software application such as machine learning and AI.
- OOP can be difficult to debug.

Class and an object
- A class is a template or blueprint for a specific object. It defines an object’s instance variables (attributes/properties) and behaviour (methods). An object is an instance of a class.

Relationship between object and method
- A method is a programmed behaviour/subroutine that is included in an object of a class. A method can only access data within its own object (encapsulation).
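
A short sketch of these definitions (the BankAccount class is invented purely for illustration): the class is the template, the object is one instance of it, and the method is behaviour belonging to that object, with its data kept private (encapsulation).

    class BankAccount:                       # the class: a template / blueprint
        def __init__(self, owner):
            self.owner = owner               # instance variable (attribute)
            self.__balance = 0               # encapsulated: hidden from code outside the object

        def deposit(self, amount):           # a method: behaviour included in the object
            self.__balance += amount

        def balance(self):
            return self.__balance            # data accessed only via the object's own methods

    account = BankAccount("Sam")             # an object: an instance of the class
    account.deposit(100)
    print(account.balance())                 # 100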

High level and low level languages
- High level languages are closer to the semantics of spoken language.
- Each line of high level language translates into multiple lines of machine code.
- Low level languages such as assembly language use mnemonics.
- Each line of low level language is translated into one machine code instruction.
- Identifiers can be long and meaningful
- They allow use of more powerful commands that perform quite complex tasks
- Allows the creation of modules that can be re-used and accessed by other parts of the program.

Use of low level language
- Device drivers - low level language must be used to directly access memory addresses to fully control hardware.
- Embedded software – software that runs on simple devices using simple microprocessors such as washing machines and microwaves will need direct access to the hardware
- Real-time software – simulators or fly-by-wire systems that require precise processing, timings or accuracy could potentially benefit from using a low-level language.
- Assembly language can produce more compact code which can be important when placing on a chip.

Programming Paradigm
- A programming paradigm describes an approach or style of programming; different paradigms are suited to solving different types of problem more effectively.

Procedural and event-driven paradigms
- Procedural languages are those that solve a problem in a linear fashion through a sequence of step-by-step instructions and involve the use of selection, iteration and callable procedures.
- Procedural languages are more suited to problems that require a linear algorithm solution.
- Event-driven programming is used to solve problems that require heavy user interaction through a graphical interface. Listeners are attached to objects (e.g. buttons), which in turn execute a subroutine based on the type of event triggered (e.g. a single click).
- Event-driven programming languages are more suited to problems that require rapid application development and a graphical user interface (see the sketch below).
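A minimal event-driven sketch using Python's standard tkinter library (the widget names and text are illustrative): a listener subroutine is attached to a button object and is executed only when the click event is triggered.

import tkinter as tk

def on_click():                                  # listener / event handler
    label.config(text="Button was clicked")

root = tk.Tk()
label = tk.Label(root, text="Waiting for an event...")
label.pack()
button = tk.Button(root, text="Click me", command=on_click)   # attach the listener
button.pack()
root.mainloop()                                  # event loop waits for user interaction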

Uses of different paradigms
- Object-oriented programming could be used in the development of a large distributed software application that requires a large team of developers.
- Event-driven programming could be used in the development of a graphical user interface software application.
- Logic programming could be used in the development of artificial intelligence software.
- Functional programming could be used in the development of software applications requiring complex mathematical transformations.
- Procedural programming could be used in the development of command line interface software applications.

Advantages of using procedural
- Algorithms/programs can be broken down into smaller parts.
- These are named reusable pieces of code that can be called any number of times within an algorithm/program to perform a specific task.
- Procedures are used to avoid the duplication of code.
- Procedures are used to make an algorithm/program more efficient and secure.
- Each procedure can be individually tested / debugged
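A short Python sketch (hypothetical receipt example) of a named, reusable procedure that is defined once and called several times, avoiding duplicated code and allowing it to be tested in isolation:

def print_receipt_line(item, price):       # named, reusable procedure
    print(f"{item:<20} £{price:.2f}")

print_receipt_line("Bread", 1.20)          # the same procedure is called
print_receipt_line("Milk", 0.95)           # several times with different
print_receipt_line("Eggs", 2.10)           # arguments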

Inheritance
- Inheritance enables new objects to take on the properties of existing objects.
- A superclass is used as the basis for inheritance. A class that inherits from a superclass is called a subclass.
- Inheritance defines relationships between classes and organises classes into groups.
- Inheritance enables classes that are similar to existing classes to be created by indicating only the differences (rather than starting again), thereby allowing code to be organised and re-used effectively
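A minimal Python inheritance sketch (hypothetical Animal/Dog classes): the subclass inherits the superclass's attributes and methods, and only the differences are written.

class Animal:                          # superclass
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):                     # subclass: inherits name and speak()
    def speak(self):                   # only the difference is defined (override)
        return "Woof"

rex = Dog("Rex")
print(rex.name, rex.speak())           # inherited attribute, overridden method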

Concepts of OOP
- Abstraction
- Encapsulation
- Polymorphism
- Inheritance (object hierarchy)

Procedural languages
- Procedural languages are used in traditional programming based on algorithms or a logical step-by-step process for solving a problem
- They obey (ordered) instructions
- They carry out actions / calculations etc.
- A procedural programming language provides the programmer a way to define precisely each step when performing a task
- Allows tight control over the underlying operation of the hardware
- Used in (large, complicated) programs where similar operations may be carried out at varying stages of the program execution

Non-Procedural languages
- Non-procedural programming languages allow programmers to specify the results they want without specifying how to solve the problem
- Non-procedural languages are to do with rules / making queries / facts
- Used in database interrogation, where retrieving answers is more important than the exact steps required to calculate the result
- Artificial intelligence, grammar checking and language translation applications are often written in a non-procedural language

Object, Class and Method
- The term ‘class’ refers to the written code which is used to define a template for an object
- An object is an instance of a class (an actual thing created using the template)
- Methods are actions (behaviours) that an object can perform or can be performed on the object

Standardisation of computer languages
- Standardisation allows changes and enhancements to be
incorporated in a controlled manner. Programming languages
are subject to continuous development resulting in multiple
versions that are often not fully compatible with each other.
Standardisation aims to avoid these incompatibilities and
provide advantages in design and programming, such as:

Portability of programs. There is a high possibility that
applications written for a particular hardware platform may be
used on different platforms if the applications were developed
in a standardised language because compilers/interpreters for
standardised languages exist for diverse hardware platforms.

Portability of programmers. A programming language is an
interface between the programmer and the computing system
or a hardware platform. If the different platforms support a
standard programming interface, then the skills of the
programmer are portable across these platforms.

Easier to maintain the software. Most software requires
continuous maintenance and enhancements after the original
release. Most of the time, different programmers work on such
maintenance tasks. A standardised language ensures that
there will be sufficient skilled programmers available to carry
out maintenance tasks.

Acceptability. Most business organisations would not
consider using a programming language that is not
standardised. A non-standardised language is a big risk for
business-critical software development.

Faster development. Standardisation promotes standard
ways of working and therefore speeds up team working in
development.

Standard library. In addition to the particular programming
language, a common set of library functions for that language
may be standardised, to support “generic programming”. This
provides a language abstraction a level above the language
itself, promoting re-use and faster programming. Libraries have
been written by experts and thoroughly tested.

Standard algorithms. Reference to standard algorithms such as binary search and quicksort, with benefits arising in design time and accuracy.

8
Q

Software Engineering
(Component 1)

A

Software for Analysis and Planning
- Used in producing designs
- Planning a system through flow charts or UML software
- Allow developers to produce planning and design documents for audiences such as end users or developers
- Used in requirements engineering and management; used to record and monitor requirements, use cases and test cases.
- An example of a CASE tool is Rational Rose

Software for Software development
- Integrated development environments (IDEs) are software used in development
- provide a wide range of tools, including debugging features such as automatic error checking and breakpoints
- allow developers to produce test cases for their software as they develop (write code, then debug it while it runs).
- can be used to support multiple developers in the development of a single project.

Software for version management
- used as a repository for different stages of code development
- versions can be submitted to version management software to track and record the changes in the project
- useful when multiple developers are working on a single project, as it ensures that a developer does not overwrite someone else’s code.
- can be used to roll back software if a program becomes corrupt during development.
- examples include GitHub.

Software used for system design
- Designing a system structure can be completed using flow chart or UML software
- UX and UI designers use wireframing and mock-up tools for user interfaces and experience
- collaborative code editors could be used to produce pseudocode for review by developers
- examples include Rational Rose

Software used for system testing
- Control software to test that a solution conforms to internal and external standards
- Test environments can be used to test the portability of software on different platforms such as Linux and Windows
- Version control repository can be used to report, monitor and analyse code errors, defects and bugs.
- Built-in automated testing features within IDEs, such as breakpoints, can generate unit and system performance tests.

Difference between translation and execution errors
Translation errors - usually identified by a compiler where the instructions given cannot be translated to machine code due to errors:
Syntax error - e.g. IF without ENDIF or punctuation error or spelling error
Linking error - e.g. calling a function where the correct library has not been linked to the program
Semantic error - e.g. variable declared illegally (identifier starting with a number, containing a space or special characters)
Execution/Runtime errors - even though a program will compile and execute, it could unexpectedly crash or produce incorrect results.
Logical error - e.g. division by 0 or use of incorrect logical/comparative operator
File handling - e.g. when an attempt is made to write to a file that does not exist
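A small Python illustration of the difference (values are illustrative): a syntax error is caught at translation time, while a logical error such as division by zero only appears when the program runs.

# A syntax error such as  print("total" total)  would stop translation entirely.
items = 0                            # translates without any problem...
try:
    average = 100 / items            # ...but fails at run time: division by zero
except ZeroDivisionError as err:
    print("Execution error:", err)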

Compilers and Interpreters (translators)
- Translators are pieces of software used to convert one type of programming language to another
- Compilers convert high-level programming language source code into object and machine code, producing a single executable file.
- The compilation process can throw multiple errors at a time which can make debugging more difficult than using an interpreter
- Languages such as C++ and VB.net are compiled and produce a single executable targeted to one platform or operating system
- Once an application is compiled it is difficult to review the source code making intellectual property easier to protect
- Interpreters convert high-level programming language source code line-by-line unlike compilers.
- Interpreters translate a single line of code into machine code, then execute it before moving onto the next
- an interpreted application does not produce an executable file, meaning source code must be interpreted each time the application is run.
- to execute interpreted source code the code needs to be freely available, making intellectual property harder to protect
- an example of an interpreted language is Python

Program version management in software engineering
- used to track and save source code throughout the software development process
- Program version management tools are commonly integrated into IDEs such as Visual Studio
- examples of program version management tools include GitHub and Mercurial.
- version management tools create different versions of source code to track changes and development
- versions can be stored on a local machine which is known as local version control
- versions can be stored on a local server which is known as centralised version control
- each version can include comments on what has been developed in that particular version and how
- can be used to roll-back to a previous version if a program becomes corrupt or a bug is found during the development process
- allows code to be reviewed and checked before it is committed to a master version
- Cloud-based repositories can be used to provide distributed version control (e.g. GitHub or Bitbucket)
- cloud-based version control will ensure developers always have access to the most up-to-date versions of the source code.
- Distributed version control is useful when a software development team is working on different aspects of a single project
- version control is essential in maintaining quality control and assurance in software development
- version control is essential for tracking bugs and issues in source code.

Compilers, interpreters and assemblers
- Compilers, interpreters and assemblers are all examples of translators. Translators are pieces of software used to convert one type of programming language to another.
- Compilers convert high-level programming language source code into object and machine code, producing a single executable file.
- The compilation process can throw multiple errors at a time, which can make debugging more difficult.
- Once software is compiled it does not need to be recompiled unless changes are made to the original source code.
- The single executable file produced by compilation can be executed many times.
- Many languages such as C++ and VB.Net produce a single executable targeted at one platform or operating system, e.g. an EXE file for a Windows platform.
- If a program needs to run on a different platform it will need to be recompiled and targeted at the required platform, e.g. a Mach-O file for macOS.
- However, some programming languages such as Java are compiled into bytecode and executed cross-platform within an installed Java Virtual Machine (JVM).
- Once an application is compiled it is difficult to review the source code, making intellectual property easier to protect.
- Unlike compilers, interpreters convert high-level programming language source code line by line.
- An interpreter translates a single line of code into machine code, then executes it before moving onto the next.
- An interpreted application does not produce an executable file, meaning source code must be interpreted each time the application is run.
- To execute interpreted source code there needs to be a relevant interpreter installed on the running platform.
- The same high-level source code can be interpreted on many different platforms, making the application highly portable.
- Interpreted code could potentially be easier to debug as it will throw an exception at the current line being translated.
- Interpreted applications need the source code to run, making intellectual property harder to protect.
- An assembler is used to translate low-level assembly language mnemonics into machine code to directly program the CPU.
- Each assembly language instruction has a one-to-one relationship with a machine code instruction, unlike high-level languages where one instruction is translated into multiple machine code instructions.
- This means that assembly is faster than compiling and interpreting and allows greater control over memory usage.
- Writing code directly in binary machine code would be prone to errors and highly time consuming, hence the use of an assembly language and an assembler.

Code editor tool in IDE
- Auto completion or code completion - suggests or completes the function being typed, including variables and arguments
- Bracket matching - useful when coding in a language that uses blocks of code contained within brackets, for detecting missing brackets.
- Syntax checks - recognises and highlights errors in syntax during code input.
- Formatting - e.g. indentation or colour coding of variables

Purpose of code translation
- Converting the source code written by the programmer into machine code / executable code.

Translation and execution errors
- Errors in code syntax / syntax errors will prevent translation.
- e.g. spelling mistakes in command words / incorrect punctuation.
- Logical errors / semantic errors / runtime errors.
- e.g. using the wrong operator, such as 2 * 2 when 2 + 2 was intended; any error in logic.
- divide by 0, infinite loops, referencing missing files.

Compiling vs Interpreting
Advantages of using a language that requires compiling compared with a language that requires interpreting are:
- Once compiled the program will run quickly
- the object code will be efficient because the compiler will translate directly to the native code of the specific machine /optimise the code for the target hardware.
- Protection of intellectual property
Two advantages for a program developer of using a language that requires interpreting compared with a language that requires compiling are:
- Debugging can be easier as interpreter will stop translation at the point where the error occurred and highlight the error for the programmer to deal with.
- Code is more portable as it is not machine dependent and will run on different hardware or in a browser (JavaScript)
- For security when downloading code from the Internet so it can be checked before interpreting on the local machine.

Assembler
- The purpose of an assembler is to translate assembly language into machine (executable) code
- An assembler’s source code is low level code, whereas compilers translate high level source code.
- Each assembly instruction translates to one machine code instruction, whereas a single line of high level code compiles to many machine code instructions.

Stepping, Breaking and Variable watch
Stepping. Execution of code one line at a time. Allows the programmer to examine each line of code in isolation to check that it is behaving as intended.
Break points. A special marker that pauses execution of code at a preset position. Whilst paused, the programmer inspects the test environment (registers, memory, files etc.) to check that the program is functioning correctly.
Variable watch. Used to view values in global and local variables as the code is executed in debug mode. Can be set to continually inspect variables which will be updated as the code is stepped through.
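A hedged sketch using Python's built-in debugger (the function and values are illustrative): breakpoint() pauses execution at a preset position, 'n' steps one line at a time, and 'p total' inspects (watches) the variable as the code runs.

def total_price(prices):
    total = 0
    for p in prices:
        breakpoint()              # pauses execution here (break point)
        total += p                # in the debugger, 'n' steps to the next line
    return total                  # and 'p total' watches the variable's value

print(total_price([1.20, 0.95, 2.10]))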

9
Q

Data Structures
(Component 1)

A

Characteristics of a linked list:
- This linked list is a dynamic data structure as it can grow and shrink in size after declaration.
- Each element in this linked list is known as a node, the first element is the head node.
- Each node consists of the data itself and the address/reference of the next node.
- The last node (95 in this example) references null.
- This linked list uses more memory than an array as it needs to store the address/reference of the next node in addition to the data.
- A node in this linked list cannot be directly accessed and each node needs to be traversed until the correct node is accessed / sequential access.
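A minimal Python sketch of the linked list described above (node values are illustrative): each node stores the data and a reference to the next node, and access is sequential from the head.

class Node:
    def __init__(self, data):
        self.data = data              # the data itself
        self.next = None              # reference to the next node

head = Node(10)                       # head node
head.next = Node(52)
head.next.next = Node(95)             # last node: its next reference is None (null)

node = head                           # sequential access: traverse from the head
while node is not None:
    print(node.data)
    node = node.next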

Most efficient way to traverse an unbalanced binary tree when searching
- The most suitable way to traverse the tree is in order.
- In order traversal starts with the left subtree nodes being visited first.
- Then visit the root node and finally the right subtree nodes.
- In order allows every node to be visited in sorted order.

Storing a playlist (songs) other than an array
- A queue would be the most suitable data structure to store each playlist.
- A queue follows the first in first out (FIFO/LILO) principle.
- Data is added (enqueuing) at the rear end of the structure.
- Data is accessed and removed (dequeuing) from the front of the structure which is suitable for storing a sequential playlist.
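A minimal Python sketch of a playlist queue using collections.deque (track names are illustrative): tracks are enqueued at the rear and dequeued from the front in the order they were added.

from collections import deque

playlist = deque()                                # FIFO queue
playlist.append("Track 1")                        # enqueue at the rear
playlist.append("Track 2")
playlist.append("Track 3")

while playlist:
    print("Now playing:", playlist.popleft())     # dequeue from the front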

In-order:
- In-order traversal is applied by visiting the left subtree first, then the root and finally the right subtree. This method could be used when searching for a file in the file system (see the combined traversal sketch below).
- Sort/search a binary tree, traversing alphabetically

Post-order:
- Post-order traversal is applied by visiting the left subtree first, then right subtree and finally the root. This method could be used to delete all files in the file system.

Pre-order:
- Pre-order traversal is applied by visiting the root first, then the left subtree and finally the right subtree. This method could be used to create a copy of the files in the file system.
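A combined Python sketch of the three traversal orders above on a small binary tree (node values are illustrative); note that only the in-order traversal returns the values in sorted order.

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def in_order(node):        # left, root, right
    return in_order(node.left) + [node.value] + in_order(node.right) if node else []

def pre_order(node):       # root, left, right
    return [node.value] + pre_order(node.left) + pre_order(node.right) if node else []

def post_order(node):      # left, right, root
    return post_order(node.left) + post_order(node.right) + [node.value] if node else []

root = TreeNode(8, TreeNode(3, TreeNode(1), TreeNode(6)), TreeNode(10))
print(in_order(root))      # [1, 3, 6, 8, 10]  - sorted order
print(pre_order(root))     # [8, 3, 1, 6, 10]
print(post_order(root))    # [1, 6, 3, 10, 8]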

Queue data structure:
- A queue data structure operates on the first in first out principle (FIFO) or the last in last out (LILO) principle.
- Data items are added at the end of the queue and removed from the front.

Stack and Queue:
- A stack uses the last in first out (LIFO) principle.
- In a stack the last or most recent item of data to be added to the stack is removed first.
- Adding data to a stack is known as pushing, whilst removing data from a stack is known as popping.
- A queue uses the first in first out (FIFO) principle. In a queue the last or most recent item of data added to a queue is the last to be removed.

Stack data structure:
- A stack is a container of objects that are inserted and removed according to the last-in first-out (LIFO) / first-in last-out (FILO) principle.
- It is a limited access data structure - elements can be added and removed from the stack only at the top
- push adds an item to the top of the stack, pop removes the item from the top.
- A stack can be used as a recursive data structure.
- A stack is either empty or it consists of a top and the rest which is a stack
- Underflow occurs when an attempt is made to pop an empty stack / overflow occurs when an attempt is made to add to a full stack
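A minimal Python sketch of a stack using a list: append pushes onto the top, pop removes from the top (LIFO), and an empty check guards against underflow.

stack = []                       # Python list used as a stack
stack.append("first")            # push onto the top
stack.append("second")
print(stack.pop())               # pop -> "second" (last in, first out)
print(stack.pop())               # pop -> "first"; the stack is now empty

if not stack:                    # popping now would cause underflow
    print("Stack is empty - cannot pop")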

Ordered vs Unordered list:
- When searching an ordered list the search can be terminated when an item greater than the search value (or less than) is reached
- When searching an unordered list the search cannot be terminated until the last item has been reached.
- For an ordered list a binary search can be used.
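A minimal Python sketch of a binary search on an ordered list (values are illustrative): each comparison discards half of the remaining items, which is only possible because the list is sorted.

def binary_search(ordered, target):
    low, high = 0, len(ordered) - 1
    while low <= high:
        mid = (low + high) // 2
        if ordered[mid] == target:
            return mid                     # found: return its position
        if ordered[mid] < target:
            low = mid + 1                  # discard the lower half
        else:
            high = mid - 1                 # discard the upper half
    return -1                              # not present

print(binary_search([2, 5, 8, 12, 16, 23], 12))    # 3
print(binary_search([2, 5, 8, 12, 16, 23], 7))     # -1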

10
Q

Databases (Redo)

A

Advantages of structuring data in third normal form
- Removing data duplication - third normal form removes duplicated data, reducing the size of the stored file.
- Protecting data integrity - once redundant data is removed, it is easy to change the data since it is present in only one place.
- Reduction of duplicated data decreases the risk of updating some rather than all instances of an item of data.

DBMS
- A database management system stores data accessed by multiple types of user.
- Different users may have different levels of access to different data sets.
- Different users may have different access rights (e.g. read only, read/write).
- Access is password protected
- Different classes of password give different amounts of access.
Tasks carried out by IT staff may include:
- Setting up the database tables, queries and reports for different classes of user.
- Database maintenance and performance management.
- Allocating user names and managing passwords.
- Making regular backups of the data, and restoring data in the event of loss.
- Monitoring use of the network through access logs, and identifying unacceptable or unauthorised use.
- Maintaining security through installing virus checking software and a firewall.
- Providing encryption of confidential data.
- Updating hardware and software as necessary to maintain the system.
- Providing help desk facilities
- Providing training facilities for users.
