Information Technology Flashcards
Explain Microchips
- A microchip or integrated circuit is a tiny electronic circuit on a semiconductor wafer.
- Jack Kilby at Texas Instruments built the first microchip in 1958; today microchips are used in virtually all common electronic devices, including computers, mobile phones, and GPS systems.
- Digital computers perform calculations using transistors that can switch between two states, on and off, representing the binary digits 0 and 1.
- Microchips miniaturize the electronic circuitry required. They are cheap to make because the circuitry is “printed” onto semiconductor wafers by photolithography, rather than being constructed one transistor at a time.
A photoresist coating is applied to the wafer, ultraviolet light shone through a patterned mask transfers the circuit design onto it, and chemical etching then cuts the pattern into the underlying layers. Metal is later deposited and shaped into conducting paths.
- Modern integrated circuits that are just 0.2 in (5 mm) square host millions of transistors, each much tinier than the width of a human hair. They can switch on and off billions of times a second.
Explain analog and digital computing
- Analog computers are the older type: they work with continuously variable quantities, such as the strength of an electric current or the rotation of a mechanical dial.
- Modern computers are based on digital technology. Information is represented as bits and bytes, sequences of binary 1s and 0s.
Fundamental to the technology is the idea of on/off, true/false.
- Analog computers date back to ancient times; the oldest known example, the Greek Antikythera mechanism, was built between 150 and 100 BC to calculate astronomical positions.
- In the mid-1900s, scientists developed analog computers with electrical circuits that could perform calculations. These computers were still in use in the 1960s and performed many of the calculations needed to plan NASA’s Apollo spacecraft missions to the Moon.
- Early digital computers used bulky “thermionic valves” (vacuum tubes), and later transistors, to switch currents and perform calculations.
- Microchips revolutionized computer technology, paving the way for small, powerful desktop computers.
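A minimal Python sketch (an added illustration, not part of the original card) of the “bits and bytes” idea above: each character of a short text string is shown as the 8-bit binary pattern a digital computer would store.

```python
# Show the 8-bit binary pattern of each byte in a short text string.
text = "Hi"
for ch in text:
    code = ord(ch)                 # character -> integer code ('H' -> 72)
    bits = format(code, "08b")     # integer -> eight binary digits
    print(f"{ch!r} -> {code:3d} -> {bits}")
# 'H' ->  72 -> 01001000
# 'i' -> 105 -> 01101001
```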
What is a computer algorithm
- A computer algorithm is a sequence of instructions designed to solve a problem. It might specify the way a computer should calculate monthly payments for employees, for example, and how it should display the results.
- Real computer algorithms are normally very complicated, but this simple example outlines the steps to turn a daylight-sensing streetlamp on at night (a runnable sketch follows at the end of this card):
(1) Is it dark? If yes, go to (2), if no, go to (3).
(2) Turn on the light. Go to (3).
(3) End.
- Genetic algorithms are ones that evolve in a process that mimics natural selection.
An algorithm designed to perform a certain task is tested and rated for its success, then allowed to “breed” with other algorithms by mixing up their attributes.
The most successful “offspring” algorithms then breed and the process repeats until the computer “evolves” the best algorithm for the job.
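A runnable version of the streetlamp steps listed above, sketched in Python (the darkness reading is passed in as a simple true/false value, a stand-in for a real light sensor):

```python
def control_streetlamp(is_dark: bool) -> bool:
    """Return True when the lamp should be switched on."""
    if is_dark:        # step (1): is it dark?
        return True    # step (2): turn on the light
    return False       # step (3): end (lamp stays off in daylight)

print(control_streetlamp(True))   # True  -> lamp on at night
print(control_streetlamp(False))  # False -> lamp off by day
```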
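A toy genetic algorithm in the same vein (an illustrative sketch with made-up parameters, not a production implementation): candidate bit strings are rated against a target pattern, the most successful ones “breed” by mixing their attributes with occasional mutation, and the process repeats.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # the pattern we want to "evolve"
POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 100, 0.05

def fitness(individual):
    # Score = number of bits that match the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def breed(parent_a, parent_b):
    # Mix attributes: take each bit from a random parent, then occasionally mutate.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)       # rate every candidate
    if fitness(population[0]) == len(TARGET):
        break                                        # perfect match found
    survivors = population[: POP_SIZE // 2]          # the most successful breed
    offspring = [breed(*random.sample(survivors, 2))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("generation:", generation, "best:", population[0])
```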
Explain neural networks in computer science
- In computer science, a neural network is an information-processing concept inspired by the way biological nervous systems process information.
- Many processing elements are connected like a network of biological neurons, and they work in unison to solve specific problems.
- Conventional computers use algorithms to solve a problem, but this restricts their capabilities to problems we already know how to solve.
- Neural networks, by contrast, learn from the data they are given, becoming “experts” in the information they analyze, and are good at finding patterns in a large jumble of data.
- A neural network could compare the features of thousands of Hollywood movies in a database with their box-office takings, for instance, and pinpoint the factors that distinguish the hits from the flops.
Another application of neural networks is in face-recognition software. Computers can be trained to recognize a face by analyzing images and comparing positions of features such as eye corners, but neural networks can learn which features are most useful for matching a face to images in a database.
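A minimal sketch of the underlying idea, using a single artificial “neuron” (a perceptron) that learns the logical AND pattern from examples. This is an added illustration, far simpler than the movie-database or face-recognition networks described above.

```python
# A single "neuron": weighted sum of inputs, then a threshold.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # inputs -> desired output (AND)
weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Training: nudge the weights whenever the neuron gets an example wrong.
for epoch in range(20):
    for x, target in samples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in samples])   # expected: [0, 0, 0, 1]
```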
What is quantum computing
- A quantum computer is one that would use the physics of quantum mechanics to increase its computational power beyond that of a conventional computer. Such instruments are still in a very early research phase.
- Conventional computers store data as a binary series of 0s and 1s. A quantum computer, by contrast, would store each unit of information as 0, 1, or a quantum superposition of the two.
These “quantum bits,” or qubits, would allow much faster calculations. While three conventional bits can hold only one number from 0 to 7 at a time, three qubits could represent all eight numbers simultaneously.
This means a quantum computer could tackle many calculations simultaneously and solve problems that would keep today’s supercomputers busy for millions of years.
- Experimental quantum computers have used a few qubits to perform simple calculations, such as factoring the number 15 into 3 and 5. It’s not clear whether they will become a practical option, because they rely on complicated and delicate procedures such as quantum entanglement to couple the qubits together.
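A small NumPy illustration (simulated on a classical computer, so it gains no quantum speedup) of the counting argument above: three qubits are described by eight amplitudes, one for each of the numbers 0 to 7, and an equal superposition gives all eight the same measurement probability.

```python
import numpy as np

# Equal superposition of the eight basis states |000>, |001>, ..., |111>.
amplitudes = np.ones(8) / np.sqrt(8)

# Born rule: probability of measuring each value = |amplitude| squared.
probabilities = np.abs(amplitudes) ** 2
for value, p in enumerate(probabilities):
    print(f"|{value:03b}>  probability {p:.3f}")   # each is 0.125
```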
What is the Turing test
- The Turing test is a measure of a machine’s ability to demonstrate intelligence.
- British mathematician, computer pioneer and Second World War code breaker Alan Turing proposed the test in the 1950s.
- In essence, it holds that a computer demonstrates human-like intelligence if its responses are indistinguishable from a person’s.
- Turing proposed an experiment in which a volunteer sits with an experiment manager behind a screen. On the other side, out of sight, a second volunteer asks questions. The first volunteer and a computer both answer with text messages, and the manager decides at random which of the two responses the questioner will receive. If the questioner cannot reliably distinguish the human responses from the computer’s, the computer has passed the test.
- Turing predicted that machines would eventually pass the test. Various commercial text and e-mail programs regularly trick people into thinking they’ve communicated with a person, but no computer has yet passed a rigorous Turing test.
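A hypothetical sketch of a single round of the setup described above: the “manager” randomly forwards either the human’s reply or the machine’s reply, and the questioner must guess the source. Both answer functions here are simple stand-ins invented for the illustration.

```python
import random

def human_reply(question):
    return "I dreamt I was late for an exam."      # stand-in for a real volunteer

def machine_reply(question):
    return "I dreamt I was late for an exam."      # stand-in for a chat program

def run_round(question):
    # The manager picks one of the two responders at random.
    source = random.choice(["human", "machine"])
    reply = human_reply(question) if source == "human" else machine_reply(question)
    return source, reply

source, reply = run_round("What did you dream about last night?")
print(f"reply shown to the questioner: {reply!r} (actually from the {source})")
```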
What is a hard drive
- Hard disks in computers and servers store changing digital information in a fairly permanent form, giving computers the ability to “remember” data even when they’re switched off.
- They consist of several solid disks or “platters” on which data are stored magnetically, and a read/write head to record and retrieve information.
- The technology was invented in the 1950s and later took the name “hard disks” to distinguish them from floppy disks, which stored data on flexible plastic film.
- The platters in a hard disk are usually made of aluminum or glass with a coating of magnetic recording material, which can be easily erased and rewritten and preserves information for many years.
- When the drive is operating, the platters typically spin 7,200 times a minute. The arm holding the read/write heads can often move between the central hub and the edge of the disk and back up to fifty times per second.
Some desktop computers now have hard disks with more than 1.5 terabytes (1.5 million million bytes) of storage.
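A back-of-envelope sketch of what the 7,200 revolutions per minute quoted above means in practice: how long one revolution takes, and the average wait for a given sector to come around under the read/write head (the latency figure is a derived illustration, not from the card).

```python
rpm = 7_200
revolutions_per_second = rpm / 60                     # 120 revolutions per second
ms_per_revolution = 1_000 / revolutions_per_second    # about 8.33 ms per turn
average_wait_ms = ms_per_revolution / 2               # on average, half a turn

print(f"one revolution: {ms_per_revolution:.2f} ms")
print(f"average rotational wait: {average_wait_ms:.2f} ms")
```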
Explain flash memory
- Like hard disks, flash memory stores digital information and “remembers” it even when the power is switched off.
- Unlike hard disks, flash memory has no moving parts. It can survive a good hard knock, large temperature swings, and sometimes even immersion in water, which makes it ideal for portable devices.
- Flash memory works by switching transistors on and off to represent sequences of 0s and 1s. Unlike conventional transistors, which “forget” information when the power is off, the transistors in flash memory have an extra “gate” that can trap electric charge, registering a 1, until another electric field is applied to drain the charge and return the bit to 0.
- Flash memory is used in mobile phones, MP3 players, digital cameras, and memory sticks, which are often used to back up files or transfer files between computers.
Some memory sticks have storage capacities of thirty-two gigabytes, enough to store around twenty hours of video.
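A quick arithmetic check of the “thirty-two gigabytes holds around twenty hours of video” figure above: the average video bit rate that storage budget implies (assuming decimal gigabytes).

```python
capacity_bits = 32 * 10**9 * 8           # 32 GB expressed in bits
hours_of_video = 20
seconds = hours_of_video * 3600

bit_rate = capacity_bits / seconds       # bits per second
print(f"implied average bit rate: {bit_rate / 10**6:.1f} Mbit/s")   # roughly 3.6 Mbit/s
```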
Explain optical storage
- Optical storage refers to types of memory, such as CDs and DVDs, that are read by a laser. Today, desktop computers have drives that both read and write these media.
- Both CDs and DVDs store their data along a single spiral track up to around 7.5 miles (12 km) in length. Mass-produced CDs and DVDs have little bumps along the track that encode digital data as a series of 0s and 1s.
- To read the data, a red laser bounces light off the bumps and a sensor detects height changes by measuring the reflected light.
- CD burners are now standard in personal computers. Write-once CDs are coated with a layer of see-through dye, and a laser burns data onto the disk by turning this dye opaque.
Rewritable CDs use a more complicated chemical trick that allows data to be erased again by laser heating.
Blu-ray disks can store even more information than DVDs because they are read with a blue-violet laser that has a shorter wavelength than a red laser, making it possible to focus the laser spot with much greater precision.
What is holographic memory
- Holographic memory might one day revolutionize high-capacity data storage. Today, magnetic storage and optical storage are the usual ways of storing large amounts of data, recording individual “bits” on a surface and reading them one bit at a time.
- The holographic technique would record information in a 3D volume and read out millions of bits simultaneously, speeding up data transfer enormously.
- To record holographic data, a laser beam is split into two, with one ray passing through a filter carrying raw binary data as transparent and dark boxes. The other “reference” beam takes a separate path, recombining with the data beam to create an interference pattern, recorded as a hologram inside a light-sensitive crystal.
- To retrieve the data, the reference beam is shone into the crystal at the same angle used during recording, which reconstructs the stored data pattern at the correct location inside.
Several companies hope to develop commercial holographic memory, which could one day store many terabytes (millions of millions of bytes) of data in a crystal the size of a sugar cube.
What is radar
- Radar is a technique for detecting objects and measuring their distances and speeds by bouncing radio waves off them.
- It developed rapidly during the Second World War and is still used in a wide range of applications, including air traffic control and weather forecasting as well as satellite mapping of the Earth’s terrain and that of other planets.
- Radar stands for “RAdio Detection And Ranging.”
- A radar dish, or antenna, transmits pulses of radio waves or microwaves that reflect off any object in their path. The reflected part of the wave’s energy returns to a receiver antenna, with the arrival time indicating the object’s distance. If the object is moving toward or away from the radar station, there is a slight difference in the frequencies of the transmitted and reflected waves due to the Doppler effect.
- Marine radars on ships prevent collisions with other ships, while meteorologists use radar to monitor precipitation.
Similar systems that use laser light instead of radio waves are called LIDAR and can measure details with higher precision.
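A small worked example of the ranging and Doppler ideas above (the echo delay, frequency, and speed are made-up values for illustration): distance follows from half the round-trip time at the speed of light, and the frequency shift of the echo reveals motion toward or away from the antenna.

```python
SPEED_OF_LIGHT = 3.0e8            # metres per second (approximate)

# Ranging: the pulse travels out and back, so distance is half the round trip.
round_trip_seconds = 0.001        # example: echo arrives 1 millisecond after the pulse
distance_m = SPEED_OF_LIGHT * round_trip_seconds / 2
print(f"target distance: {distance_m / 1000:.0f} km")       # 150 km

# Doppler: a target moving at speed v toward the radar shifts the echo frequency
# by roughly 2 * v / wavelength (the factor 2 because the wave goes out and back).
transmit_frequency_hz = 10e9      # example: a 10 GHz radar
target_speed_m_s = 100            # example: 100 m/s toward the antenna
wavelength_m = SPEED_OF_LIGHT / transmit_frequency_hz
doppler_shift_hz = 2 * target_speed_m_s / wavelength_m
print(f"Doppler shift: {doppler_shift_hz / 1000:.1f} kHz")   # about 6.7 kHz
```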
What is sonar
- Sonar is a technique that ships use to navigate and detect other vessels, or to map the ocean floor, using sound waves.
- “Passive” sonar instruments listen out for the sounds made by other ships or submarines, while “active” sonar systems emit sound waves and listen for the echoes.
- Sonar stands for “SOund NAvigation and Ranging”; the first instruments were developed rapidly during the First World War to detect enemy submarines.
Active sonar creates a pulse of sound, often called a ping, and then listens for reflections of the pulse, the arrival time of the reflections indicating the distance of an obstacle.
Outgoing pings are single-frequency tones or changing-frequency chirps, which allow more information to be extracted from the echo.
Differences in frequency between pings and echoes can allow measurement of a target’s speed, thanks to the Doppler effect.
Fishing boats use sonar to pinpoint groupings of fish, while some animals, including bats and dolphins, use similar natural echo-location to navigate or locate mates, predators, and prey.
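The same ranging arithmetic for active sonar (example numbers only), using the speed of sound in seawater, roughly 1,500 m/s, instead of the speed of light.

```python
SPEED_OF_SOUND_SEAWATER = 1500    # metres per second (approximate)

ping_to_echo_seconds = 2.0        # example: echo heard 2 seconds after the ping
distance_m = SPEED_OF_SOUND_SEAWATER * ping_to_echo_seconds / 2
print(f"obstacle distance: {distance_m:.0f} m")   # 1500 m
```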
Explain the Internet and the World Wide Web
- The Internet is a global system of interconnected computers that use the “Internet Protocol Suite” as a common language to speak to one another.
- It’s a vast network formed by myriad smaller networks run by organizations including private companies, universities, and government bodies, linked together by fiber-optic cables, phone lines, and wireless technologies.
- The World Wide Web, usually just called the web, is a system of interlinked documents accessed over the Internet.
- Web browser software allows users to view pages containing text, images, videos, and other multimedia and jump between them via “hyperlinks.”
- British computer scientist Tim Berners-Lee is credited with inventing the web in 1989 while at CERN, the European center for particle physics on the French–Swiss border.
The main mark-up language for web pages is HTML (hypertext mark-up language), which uses tags at either end of text phrases to tell a web browser how to display them—for instance, as a clickable hyperlink.
Estimates suggest more than 2 billion people worldwide currently access the web.
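A small Python illustration of the hyperlink mark-up described above, using the standard library’s html.parser to pull the link targets out of a fragment of HTML (the page content and URLs are made up for the example).

```python
from html.parser import HTMLParser

PAGE = """
<p>Read about the <a href="https://example.org/world-wide-web">web</a>
and its inventor <a href="https://example.org/tim-berners-lee">Tim Berners-Lee</a>.</p>
"""

class LinkExtractor(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # An <a href="..."> tag marks a clickable hyperlink.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    print("hyperlink ->", value)

LinkExtractor().feed(PAGE)
```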
Note on Internet security
- The Internet allows easy transfer of information, but it also allows the spread of “malware”—programs written with malicious intent.
Computer viruses are harmful programs that can transfer between computers via e-mail, aiming to delete files or disable operating systems like Microsoft Windows.
Other malware includes spyware, which might stealthily install itself on a computer and transmit the user’s secret passwords to identity thieves,
while a computer “worm” self-replicates and sends copies of itself to other computers on a network.
Networked computers need constantly updated antivirus software to detect and remove new malware, as well as firewalls that prevent unauthorized access from the outside.
Denial-of-service attacks attempt to make an organization’s website useless by bombarding it with so many communication requests that it can’t cope with legitimate traffic.
Attackers launch these attacks using networks of software agents called bots, which are often installed on other people’s computers by stealth.
Many countries deem denial-of-service attacks a criminal offence.
Explain distributed computing
- A distributed computing project is one that uses many different computers working together to solve a problem, with each computer taking charge of a small piece of the overall data processing.
- The goal is to complete the task much faster than would be possible with a single computer.
- One type of distributed computing is grid computing, in which many computers cooperate remotely, sometimes using the idle time of ordinary home computers.
An example of this is the “SETI@Home” project launched in 1999. Around eight million people have signed up to download a screensaver-like program that sifts little packets of data from the Arecibo radio telescope in Puerto Rico to look for unusual signals—some of which might be communications from intelligent alien civilizations—and return the results to project organizers.
Folding@home is a similar computing project that invites the public to use their computers to analyze protein folding. This could provide vital information that leads to new treatments for diseases such as cancer and Alzheimer’s.
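A toy sketch of the divide-the-work idea, run on a single machine with Python’s multiprocessing pool standing in for the volunteered computers: the data are split into small chunks, each worker processes one chunk, and the partial results are combined. The “analysis” here is a made-up threshold count, not the real SETI@Home or Folding@home processing.

```python
from multiprocessing import Pool

def analyse_chunk(chunk):
    # Stand-in for real work such as sifting a packet of telescope data:
    # here we just count how many values in the chunk exceed a threshold.
    return sum(1 for value in chunk if value > 0.9)

if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(1_000_000)]
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]  # 10 work units

    with Pool(processes=4) as pool:                  # 4 local worker processes
        partial_results = pool.map(analyse_chunk, chunks)

    print("signals above threshold:", sum(partial_results))
```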