Exam 1 Flashcards
refers to chapters 1-4
A(n) _____________ is a set of rules that determine what a layer would do and provides a clearly defined set of messages that software at the layer needs to understand. Pick One: agreement standard protocol regulations policy
protocol (correct)
A local area network (LAN) connects other LANs and BNs located in different areas to each other and to wide area networks in a span from 3 to 30 miles. True False
False (correct)
An intranet is a LAN that uses Internet technologies and is publicly available to people outside of the organization. True False
False (correct)
The network layer of the Internet model uses the _____________ protocol to route messages through the network. TCP HTTP FTP SMTP IP
IP
A(n) _________ is a LAN that uses the same technologies as the Internet but is open to only those inside the organization. WAN BN extranet intranet MAN
intranet
A local area network is: Pick one: a large central network that connects other networks in a distance spanning exactly 5 miles. a group of personal computers or terminals located in the same general area and connected by a common cable (communication circuit) so they can exchange information. a network spanning a geographical area that usually encompasses a city or county area (3 to 30 miles). a network spanning a large geographical area (up to 1000s of miles). a network spanning exactly 10 miles with common carrier circuits.
a group of personal computers or terminals located in the same general area and connected by a common cable (communication circuit) so they can exchange information.
Which of the following is true about ISO: Pick one: It makes technical recommendations about data communications interfaces Its name stands for International Organization for Standardization It is based in Geneva, Switzerland It is one of the most important standards-making bodies All of these
All of these
In the OSI model, the application layer is the end user’s access to the network. Pick one: True False
False
__________ ensure that hardware and software produced by different vendors work together. Pick one: Delimiters Standards ASPs RBOCs Intranets
Standards
A ___________ is similar to an intranet in that it uses Internet technologies, but is developed for users outside the organization. Pick one: Internet Usenet Wide Area Network Extranet
Extranet
One perspective of data communications and networking, as stated in the textbook, examines the management of networking technologies, including security, network design, and managing the network on a day-to-day and long-term basis. Pick one: True False
True
BYOD stands for Pick one: Bring Your Own Device Bring Your Own Database Build Your Own Device Build Your Own Database
Bring Your Own Device
Telecommunications is the transmission of voice and video as well as data and implies transmitting a longer distance than in a data communication network. Pick: True False
True
How can data communication networks affect businesses?
Data communication networks can affect businesses by being the foundations for distributed systems in which information system applications are divided among a network of computers. Data communication networks facilitate more efficient use of computers and improve the day-to-day control of a business by providing faster information flow, aiding strategic competitive advantage. They also provide message transfer services to allow computer users to talk to one another via electronic mail.
- Discuss three important applications of data communication networks in business and personal use.
Three important applications of data communication networks in business and personal use include email, videoconferencing, and the Internet.
- How do LANs differ from WANs and BNs?
A Local Area Network (LAN) is a group of microcomputers or terminals located in the same general area. A Backbone Network (BN) is a large central network that connects most everything on a single company site. A Metropolitan Area Network (MAN) encompasses a city or county area. A Wide Area Network (WAN) spans cities, states, or national boundaries. Typically, MANs and WANs use leased facilities, while LANs and BNs are usually located inside an organization and use facilities the organization owns.
- What is a circuit?
The circuit is the pathway through which the messages travel. It can be made up of a copper wire, although fiber optic cable and wireless transmission are becoming more common. A circuit can also pass across many types of physical facilities such as copper wire or fiber optic cable, but the single end-to-end connection, no matter what the equipment, is referred to as the circuit. There are many devices along the circuit’s path that perform special functions such as hubs, switches, routers, and gateways.
- What is a client?
The client is the input or output hardware device at the other end of a communication circuit. It typically provides remote users with access to the network and the data and software on the server.
- What is a server?
The server stores data or software that can be accessed by the clients, or remote users of a hardware input or output device. In client-server computing, several servers may work together over the network to support the business application.
- Why are network layers important?
Communication networks are often broken into a series of layers, each of which can be defined separately, so that vendors can develop software and hardware that work together in the overall network. These layers simplify both the development and the comprehension of complex networks. In the end, the strategy of using simpler network layers allows vastly different kinds of equipment to connect over a common platform or network, using protocols and standards that apply to each narrow slice of the network.
- Describe the seven layers in the OSI network model and what they do.
• The application layer is the application software used by the network user.
• The presentation layer formats the data for presentation to the user by accommodating different interfaces on different terminals or computers so the application program need not worry about them.
• The session layer is responsible for initiating, maintaining, and terminating each logical session between end users.
• The transport layer deals with end-to-end issues, such as procedures for entering and departing from the network, by establishing, maintaining, and terminating logical connections for the transfer of data between the original sender and the final destination of the message.
• The network layer takes the message generated by the application layer and, if necessary, breaks it into several smaller messages. It then addresses the message(s), determines their route through the network, and records message accounting information before passing it to the data link layer.
• The data link layer formats the message to indicate where it starts and ends, decides when to transmit it over the physical media, and detects and corrects any errors that occur in transmission.
• The physical layer is the physical connection between the sender and receiver, including the hardware devices (e.g., computers, terminals, and modems) and physical media (e.g., cables and satellites).
- Explain how a message is transmitted from one computer to another using layers.
• The application layer is the application software used by the network user.
• The transport layer is responsible for obtaining the address of the end user (if needed), breaking a large data transmission into smaller packets (if needed), ensuring that all the packets have been received, eliminating duplicate packets, and performing flow control so that no computer is overwhelmed by the number of messages it receives.
• The network layer takes the message generated by the application layer and, if necessary, breaks it into several smaller messages. It then addresses the message(s), determines their route through the network, and records message accounting information before passing it to the data link layer.
• The data link layer formats the message to indicate where it starts and ends, decides when to transmit it over the physical media, and detects and corrects any errors that occur in transmission.
• The physical layer is the physical connection between the sender and receiver, including the hardware devices (e.g., computers, terminals, and modems) and physical media (e.g., cables and satellites).
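To make the layering idea concrete, here is a minimal Python sketch (an illustration only, not a real protocol implementation) in which each layer wraps what it receives from the layer above in a simplified placeholder header before handing it down; the receiver's layers would unwrap the headers in reverse order.

```python
# Conceptual sketch of layered encapsulation: each layer adds its own
# "envelope" around whatever it receives from the layer above. The layer
# names follow the five-layer Internet model; the header contents are
# simplified placeholders, not real protocol formats.

def application_layer(user_data: str) -> str:
    return "HTTP-REQUEST " + user_data             # e.g., an HTTP request

def transport_layer(app_message: str) -> str:
    return "[TCP hdr: ports, seq#]" + app_message  # end-to-end delivery info

def network_layer(segment: str) -> str:
    return "[IP hdr: src/dst address]" + segment   # addressing and routing info

def data_link_layer(packet: str) -> str:
    return "[Eth hdr]" + packet + "[trailer: CRC]" # framing + error detection

def physical_layer(frame: str) -> bytes:
    return frame.encode("ascii")                   # bits on the wire

wire_bits = physical_layer(
    data_link_layer(network_layer(transport_layer(application_layer("GET /index.html"))))
)
print(wire_bits)
```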
- Describe the three stages of standardization.
The formal standardization process has three stages: specification, identification of choices, and acceptance. The specification stage consists of developing a nomenclature and identifying the problems to be addressed. In the identification of choices stage, those working on the standard identify the various solutions and choose the optimum solution from among the alternatives. Acceptance, which is the most difficult stage, consists of defining the solution and getting recognized industry leaders to agree on a single, uniform solution. ISO standards development is pursued at the national and international levels. Authorized national technical committees can be designated as Technical Advisory Groups (TAGs) to international subcommittees or workgroups.
Examples of national-level standards bodies (with the legal authority for national standards development and articulation with ISO) are:
Designation and name of national standards body (ISO member):
ANSI: American National Standards Institute
SCC: Standards Council of Canada
DGN: Dirección General de Normas
BSI: British Standards Institution
JISC: Japanese Industrial Standards Committee
AFNOR: Association française de normalisation
BIS: Bureau of Indian Standards
CSBTS: China State Bureau of Quality and Technical Supervision
- How are Internet standards developed?
The Internet Engineering Task Force (IETF; www.ietf.org) sets the standards that govern how much of the Internet will operate. Developing a standard usually takes 1-2 years. Usually, a standard begins as a protocol developed by a vendor. When a protocol is proposed for standardization, IETF forms a working group of technical experts to study it. The working group examines the protocol to identify potential problems and possible extensions and improvements, and then issues a report to IETF. If the report is favorable, the IETF issues a Request for Comment (RFC) that describes the proposed standard and solicits comments from the entire world. Once no additional changes have been identified, it becomes a Proposed Standard. Once at least two vendors have developed software based on it, and it has proven successful in operation, the Proposed Standard is changed to a Draft Standard. This is usually the final specification, although some protocols have been elevated to Internet Standards, which usually signifies a mature standard not likely to change. There is a correlation of IETF RFCs to ISO standards.
- Describe two important data communications standards-making bodies. How do they differ?
The International Organization for Standardization (ISO) makes technical recommendations about data communication interfaces. The Telecommunication Standardization Sector (ITU-T) is the technical standards-setting organization of the United Nations International Telecommunication Union (ITU). Postal Telephone and Telegraphs (PTTs) are telephone companies outside of the United States; ITU-T establishes recommendations for use by PTTs, other common carriers, and hardware and software vendors. Although this is a complicated series of acronyms, it is useful to point out that ISO created the OSI model.
Information technology standards also contribute to data communications. In the USA, the National Committee for Information Technology Standards (NCITS) has responsibility (under ANSI) for multimedia (MPEG/JPEG), intercommunication among computing devices and information systems (including the Information Infrastructure, SCSI-2 interfaces, and Geographic Information Systems), storage media (hard drives, removable cartridges), databases (including SQL3), security, and programming languages (such as C++). The NCITS T3 committee on Open Distributed Processing (ODP) is the US Technical Advisory Group (TAG) to JTC 1/SC 6/WG 7 (Subcommittee 6, Workgroup 7); JTC 1 is the ISO/IEC Joint Technical Committee 1 on Information Technology. Among NCITS/T3's current projects are Abstract Syntax Notation One (ASN.1), the OSI Directory Services (and protocols), routing information exchange protocols, and multicasting (all of considerable interest to the telecommunications industry). T3 also has US TAG responsibility for codes and character sets.
IEEE plays an important standards role for data communications, particularly in LAN technology protocols. Note that the HTML specifications state that HTML uses the ISO 8859-1 (Latin 1) character set.
- What is the purpose of a data communications standard?
The use of standards makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time. The software or hardware defined by the standard at one network layer can be easily updated, as long as the interface between that layer and the ones around it remains unchanged.
- What are three of the largest inter-exchange carriers (IXCs) in North America?
Three of the largest inter-exchange carriers (IXCs) in North America are AT&T, Sprint, and Verizon; the formerly large MCI was acquired by Verizon in a post-bankruptcy merger.
- Discuss three trends in communications and networking.
First, pervasive networking will change how and where we work and with whom we do business. Pervasive networking means that we will have high-speed communications networks everywhere, and that virtually any device will be able to communicate with any other device in the world. Prices for these networks will drop and the globalization of world economies will continue to accelerate. Second, the integration of voice, video, and data onto the same networks will greatly simplify networks and enable anyone to access any media at any point. Third, the rise of these pervasive, integrated networks will mean a significant increase in the availability of information and new information services. It is likely that application service providers will evolve that act as information utilities.
- Why has the Internet model replaced the OSI model?
The Internet model is simpler (effectively collapsing the top three layers of the OSI model into a single layer) and easier to remember and understand. Further, the ISO OSI Reference Model is the result of a formal standardization process and is technical in its presentation. By contrast, the Internet model is appropriate for those within the networking community with practical needs related to implementing the Internet and networking. However, only a few years ago the Internet model was commonly understood to have only four layers. Today, the transport layer is separately identified in the Internet model, yielding an important fifth layer. This evolution in presentation may show that at least one technical distinction from the OSI model is now considered practical as the scope, volume of traffic, and complexity of networking (and of the Internet) grow.
- In the 1980s, when we wrote the first edition of this book, there were many, many more protocols in common use at the data link, network, and transport layers than there are today. Why do you think the number of commonly used protocols at these layers has declined? Do you think this trend will continue? What are the implications for those who design and operate networks?
Today there is convergence around the non-proprietary use of TCP/IP as the protocol of choice for all networks. For the most part, network software is designed to interface with networks using this protocol. Non-proprietary means that TCP/IP is an interoperable protocol portable to any manufacturer's hardware, and all manufacturers are developing their products to use TCP/IP as their protocol of choice. This is of great benefit for those operating networks because they do not have to deal with the incompatibilities of various proprietary networks. In the past, products such as IBM's SNA and Novell's NetWare used proprietary protocols that did not interoperate as easily as today's TCP/IP-based products. The decline in the number of competing protocols is related to the emergence of TCP/IP as the universal connector, along with the rise in competition and better pricing from the vendors who build to this protocol, which should ensure the viability of this standard for a long time to come for network managers.
- The number of standardized protocols in use at the application layer has significantly increased from the 1980s to today. Why? Do you think this trend will continue? What are the implications for those who design and operate networks?
The biggest reason that there are more standardized protocols at the application layer is the predominant use of the Web and its standardized protocols and interfaces (HTTP and DHCP, for example). Many new protocols ride on top of TCP/IP networks, and some of these new protocols have been developed to enable the retrofitting of new technologies on top of an older networking architecture. On the other hand, some proprietary protocols connected with such models as IBM's SNA and DECnet have declined in significance while the importance of Internet-related protocols has grown.
- How many bits (not bytes) are there in a 10 page text document? Hint: There are approximately 350 words on a double-spaced page.
First, some assumptions must be made. Assume each word averages seven letters and there is one space between each word, and assume 8-bit ASCII. Multiply 350 words by 8 bytes (7 letters plus a space) to get 2,800 bytes per page. Multiply 2,800 bytes by 10 pages to get 28,000 bytes. Multiply 28,000 bytes by 8 bits per byte to get 224,000 bits.
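The same arithmetic as a quick Python sketch, using the assumptions above (350 words per page, 7 letters plus one space per word, 8-bit ASCII):

```python
# Rough size of a 10-page, double-spaced text document in bits.
words_per_page = 350
bytes_per_word = 7 + 1          # 7 letters plus one space, 8-bit ASCII
pages = 10

bytes_per_page = words_per_page * bytes_per_word   # 2,800 bytes
total_bytes = bytes_per_page * pages               # 28,000 bytes
total_bits = total_bytes * 8                       # 224,000 bits
print(total_bits)                                  # 224000
```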
Chapter 2: Application Layer Answers to Textbook Exercises
- What are the different types of application architectures?
Host-based (all processing done on host system and all data on host with terminals providing access), client-based (with processing done on client and all data stored on server), and client-server (balanced processing; usually host provides data access and storage while the client provides application and presentation logic).
- Describe the four basic functions of an application software package.
Data storage, data access logic, application logic, and presentation logic.
- What are the advantages and disadvantages of host-based networks versus client-server networks?
Advantages:
• Centralized security
• Integrated architecture from a single vendor
• Simpler, centralized installation
Disadvantages:
• Having all processing on the host may lead to overload
• Cost of software and upgrades; expensive infrastructure
• Terminal totally dependent on the server
- What are the advantages and disadvantages of client-server networks?
Advantages:
• Balanced processing demands
• Lower cost; inexpensive infrastructure
• Can use software and hardware from different vendors
• Scalability
Disadvantages:
• Problems with using software and/or hardware from different vendors
• More complex installation or updating (although automated installation software helps greatly in this area)
- What is middleware and what does it do?
Middleware manages client-server message transfer and shields application software from impacts of hardware changes. Middleware provides standard communication between products of different vendors through translation.
- Suppose your organization was contemplating switching from a host-based architecture to client-server. What problems would you foresee?
The infrastructure (cabling, hardware, and software) would need to be redesigned to support the client-server architecture. Someone would need to be designated to manage what would now become a local area network, so there may be a personnel impact. Security would be one area of concern, since processing can be done on individual workstations. There may also be somewhat greater complexity in upgrades, although newer software is reducing the impact of this kind of problem.
- Which is less expensive: host-based networks or client-server networks? Explain.
Client-server networks are less expensive because, in a competitive market involving multiple vendors, software and hardware upgrades cost substantially less. Upgrades for host-based networks are generally very expensive and occur in what is generally termed a “step function,” requiring large, discrete steps in expenditure. LANs can be deployed with a smoother cost curve, in less severe increments.
- Compare and contrast two-tiered, three-tiered, and n-tiered client server architectures. What are the technical differences, and what advantages and disadvantages do each offer?
Two-tiered architectures have only clients and servers. Three-tiered architectures typically separate (1) presentation logic, (2) application logic, and (3) data access logic and storage. In an n-tiered architecture, more than one tier may be used to support application logic, typically because a Web server tier is included. Three-tiered and n-tiered architectures place a greater load on the network, but they balance server load better and are more scalable.
- How does a thin client differ from a thick client?
Thick clients support all or most application logic while thin clients support little or no application logic. Development and maintenance costs for more complex thick-client environments can be higher than for thin clients.
- What are the benefits of cloud computing?
Benefits include gaining access to experts who manage the cloud, potentially lower costs, scalability, and pay-as-you-go pricing.
- Compare and contrast the three cloud computing models.
See Figure 2-7
- What is a network computer?
A network computer supports Internet access but has no local hard disk storage.
- For what is HTTP used? What are its major parts?
The standard protocol for communication between a Web browser and a Web server is Hypertext Transfer Protocol (HTTP). An HTTP request from a Web browser to a Web server has three parts. Only the first part is required; the other two are optional.
• the request line, which starts with a command (e.g., GET), provides the URL, and ends with the HTTP version number that the browser understands
• the request header, which contains a variety of optional information such as the Web browser being used (e.g., Internet Explorer), the date, and a userid and password for use if the Web page is password-protected
• the request body, which contains information sent to the server, such as information from a form
The format of an HTTP response from the server to the browser is very similar to the browser request. It has three parts, but only the last part is required; the first two are optional:
• the response status, which contains the HTTP version number the server has used, a status code (e.g., 200 means OK, 404 means page not found), and a reason phrase (a text description of the status code)
• the response header, which contains a variety of optional information such as the Web server being used (e.g., Apache), the date, the exact URL of the page in the response body, and the format used for the body (e.g., HTML)
• the response body, which is the Web page itself
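As a hedged illustration of the request parts, here is a minimal Python sketch that sends a bare-bones HTTP/1.1 GET over a socket; example.com and the header values are placeholders, and a real browser would send many more headers.

```python
# Send a minimal HTTP/1.1 request and print the response status line and
# headers. The request line, request headers, and (empty) request body are
# separated exactly as described above.
import socket

host = "example.com"                      # placeholder host
request = (
    "GET / HTTP/1.1\r\n"                  # request line: command, URL, version
    f"Host: {host}\r\n"                   # request headers
    "User-Agent: demo-client\r\n"
    "Connection: close\r\n"
    "\r\n"                                # blank line; no request body for GET
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

head, _, body = response.partition(b"\r\n\r\n")
print(head.decode())                      # response status + response headers
```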
- For what is HTML used?
HTML is the language in which Web pages are created. The response body of an HTTP response can be in any format, such as text, Microsoft Word, Adobe PDF, or a host of other formats, but the most commonly used format is HTML. The major parts of an HTML page are the heading (denoted by the <head> tag) and the body (denoted by the <body> tag).
- Describe how a Web browser and Web server work together to send a Web page to a user.
In order to get a page from the Web, the user must type the Internet Uniform Resource Locator (URL) for the page he or she wants, or click on a link that provides the URL. The URL specifies the Internet address of the Web server and the directory and name of the specific page wanted. In order for the requests from the Web browser to be understood by the Web server, they must use the same standard protocol. The standard protocol for communication between a Web browser and a Web server is Hypertext Transfer Protocol (HTTP).
- Can a mail sender use a two-tier architecture to send mail to a receiver using a three-tier architecture? Explain.
Yes. The sender using the two-tier architecture uses user agent (e-mail client) software to pass the message to his or her mail server, which sends it via SMTP to the receiver's mail server. On the receiving side, the Web server exchanges the message with the mail server (using SMTP or IMAP), and the receiver reads it by issuing an HTTP request to the Web-based e-mail application. Thus, a two-tier system easily interoperates with a three-tier architecture over the Internet using the appropriate protocols.
- Describe how mail user agents and message transfer agents work together to transfer mail messages.
The sender of an e-mail uses a user agent (an application layer software package) to write the e-mail message. The user agent sends the message to a mail server that runs a special application layer software package called a message transfer agent. These agents read the envelope and then send the message through the network (possibly through dozens of mail transfer agents) until the message arrives at the receiver’s mail server. The mail transfer agent on this server then stores the message in the receiver’s mailbox on the server. When the receiver next accesses his or her e-mail, the user agent on his or her client computer contacts the mail transfer agent on the mail server and asks for the contents of the user’s mailbox. The mail transfer agent sends the e-mail message to the client computer, which the user reads with the user agent.
- What roles do SMTP, POP, and IMAP play in sending and receiving e-mail on the Internet?
SMTP defines how message transfer agents operate and how they format messages sent to other message transfer agents. The SMTP standard covers message transmission between message transfer agents (i.e., mail server to mail server). A different standard called Post Office Protocol (POP) defines how user agents operate and how messages to and from mail transfer agents are formatted. POP is gradually being replaced by a newer standard called Internet Mail Access Protocol (IMAP). While there are several important technical differences between POP and IMAP, the most noticeable difference is that before a user can read a mail message with a POP user agent, the e-mail message must be copied to the client computer’s hard disk and deleted from the mail server. With IMAP, e-mail messages can remain stored on the mail server after they are read.
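A minimal Python sketch of the two halves of this process, using the standard smtplib and imaplib modules; the server names, accounts, and credentials are placeholders, and a real deployment would handle authentication and errors more carefully.

```python
# Sketch of Internet e-mail: the sending user agent hands the message to its
# mail server with SMTP; the receiving user agent later retrieves it from its
# own mail server with IMAP (messages can remain stored on the server).
import smtplib
import imaplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"         # placeholder addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP, fetched via IMAP.")

# Sending side: user agent -> message transfer agent (mail server)
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.starttls()
    smtp.login("alice", "password")       # placeholder credentials
    smtp.send_message(msg)

# Receiving side: user agent <- receiver's mail server, some time later
with imaplib.IMAP4_SSL("mail.example.org") as imap:
    imap.login("bob", "password")
    imap.select("INBOX")
    status, ids = imap.search(None, "UNSEEN")   # message stays on the server
```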
- What are the major parts of an e-mail message?
The major parts of an e-mail message are: • the header, which lists source and destination e-mail addresses (possibly in text form (e.g., “Susan Smith”) as well as the address itself (e.g., smiths@robert-morris.edu)), date, subject, and so on • the body, which is the message itself.
- What is a virtual server?
A virtual server is one computer that acts as several servers. Using special software such as Microsoft Virtual PC, VMware, or VirtualBox, several operating systems are installed on the same physical computer so that one computer appears as several different ones on the network.
- What is Telnet, and why is it useful?
Telnet enables users on one computer to log in to other computers on the Internet. Once Telnet makes the connection from the client to the server, a user can log in to the server or host computer in the same way as if they had dialed in with a modem; the user must know the account name and password of an authorized user. Telnet enables a person to connect to a remote computer without incurring long-distance telephone charges. Telnet is useful because it provides access to servers or host computers without sitting at those computers' keyboards. Most network managers use Telnet to work on their organization's servers, rather than physically sitting in front of them and using their keyboards.
- What is cloud computing?
With cloud computing, a company contracts with another firm to provide software services over the Internet, rather than installing the software on its own servers. The company no longer buys and manages its own servers and software, but instead pays a monthly subscription fee or a fee based on how much it uses the application.
- Explain how instant messaging works.
An instant messaging (IM) client communicates with an IM server application. Once a user is online, the server monitors connections so that multiple pre-identified clients can be notified and can decide to participate in real-time messaging. IM may also include video or audio; video exchange, of course, requires cameras. The application requires a full-duplex connection between the client and the server.
- Compare and contrast the application architecture for videoconferencing with the architecture for e-mail.
Videoconferencing must deliver real-time services demanding high capacity data transfer for both image and voice transmission. Specialized hardware (and even rooms) may be required. E-mail messages (typically without large attachments) are relatively small by comparison, can be received by any Internet-capable computer, and do not have to be consumed in real time.
- Which of the common application architectures for e-mail (two-tier client-server, Web-based) is “best”? Explain.
The best architecture for email can depend on how one wants to use e-mail. If a person wants to be able to access their e-mail from anywhere, then Web-based is best. If the person wants professional backup and storage within an organization, then two-tier client-server is best. If the person wants storage of e-mail strictly under their control and they also want to be able to access their e-mail files off-line when there is a network service interruption, then host-based is best. Employers may choose to use client-server architecture for email access within the organization and Web-based architecture for access to the same system for those times when employees are outside the company (at home, at another business, or on travel).
- Some experts argue that thin-client client-server architectures are really host-based architectures in disguise and suffer from the same old problems. Do you agree? Explain.
While thin clients have substantially less application logic than thick clients, they have sufficient application logic (for example, a Web browser, possibly with Java applets) to participate in a client-server relationship. The older host-based terminals did not have even this much application logic. While thin-client use today reflects some return to a more centralized approach, the client is likely served by multiple servers (and even multiple tiers), rather than by a single large host as in the past. Thus, the two approaches are similar, but not identical, from a technological design perspective.
Chapter 3: Physical Layer Answers to Textbook Exercises
- How does a multipoint circuit differ from a point-to-point circuit?
A point-to-point configuration is so named because it goes from one point to another (e.g., one computer to another computer). These circuits sometimes are called dedicated circuits because they are dedicated to the use of these two computers. In a multipoint configuration (also called a shared circuit), many computers are connected on the same circuit. This means that each must share the circuit with the others, much like a party line in telephone communications. The disadvantage is that only one computer can use the circuit at a time. Multipoint configurations are cheaper than point-to-point configurations.
- Describe the three types of data flows.
The three types of data flows are simplex, half-duplex and full duplex. Simplex is one-way transmission, such as that in radio or TV transmission. Half duplex is two-way transmission, but you can transmit in only one direction at a time. A half duplex communication link is similar to a walkie-talkie link; only one computer can transmit at a time. With full duplex transmission, you can transmit in both directions simultaneously, with no turnaround time.
- Describe three types of guided media.
Guided media are those in which the message flows through a physical medium such as twisted pair wire, coaxial cable, or fiber optic cable; the medium “guides” the signal. One of the most commonly used types of guided media is twisted pair wire: insulated pairs of wires (often unshielded twisted pair, UTP) that can be packed quite close together. Bundles of several thousand wire pairs are placed under city streets and in large buildings. The wires are usually twisted to minimize the electromagnetic interference between one pair and any other pair in the bundle. Coaxial cable is another commonly used guided medium. Coaxial cable has a copper core (the inner conductor) with an outer cylindrical shell for insulation; the outer shield, just under the shell, is the second conductor. Because coaxial cables have very little distortion and are less prone to interference, they tend to have low error rates. Fiber optic cable is becoming much more widely used for many applications, and its use continues to expand. Instead of carrying telecommunication signals in the traditional electrical form, this technology uses high-speed streams of light pulses from lasers or LEDs (light-emitting diodes) that carry information inside hair-thin strands of glass or plastic called optical fibers.
- Describe four types of wireless media.
Wireless media are those in which the message is broadcast through the air, such as radio, infrared, microwave, or satellite. One of the most commonly used forms of wireless media is radio. Radio data transmission uses the same basic principles as standard radio transmission. Each device or computer on the network has a radio receiver/transmitter that uses a specific frequency range that does not interfere with commercial radio stations. The transmitters are very low power, designed to transmit a signal up to 500 feet, and are often built into portable or handheld computers. Infrared transmission uses low-frequency light waves (below the visible spectrum) to carry the data through the air on a direct line-of-sight path between two points. This technology is similar to the technology used in infrared TV remote controls. It is prone to interference, particularly from heavy rain, smoke, and fog that obscure the light transmission. Infrared is not very common, but it is sometimes used to transmit data from building to building. A microwave is an extremely high frequency radio communication beam that is transmitted over a direct line-of-sight path between any two points. As its name implies, a microwave signal has an extremely short wavelength. Microwave radio transmissions perform the same functions as cables. Similar to visible light waves, microwave signals can be focused into narrow, powerful beams that can be projected over long distances. Transmission via satellite is similar to transmission via microwave except, instead of transmitting to another nearby microwave dish antenna, it transmits to a satellite 22,300 miles in space.
- How does analog data differ from digital data?
Computers produce digital data that are binary, either on or off. In contrast, telephones produce analog data whose electrical signals are shaped like the sound waves they transfer. Analog data are signals that vary continuously within a range of values (e.g., temperature is analog).
- Clearly explain the differences among analog data, analog transmission, digital data, and digital transmission.
Data can be transmitted through a circuit in the same form they are produced. Most computers, for example, transmit their data through digital circuits to printers and other attached devices. Likewise, analog voice data can be transmitted through telephone networks in analog form. In general, networks designed primarily to transmit digital computer data tend to use digital transmission, and networks designed primarily to transmit analog voice data tend to use analog transmission (at least for some parts of the transmission).
- Explain why most telephone company circuits are now digital.
Most telephone company circuits are now digital because digital transmission is “better” than analog transmission. Specifically, digital transmission offers several key benefits over analog transmission:
• Digital transmission produces fewer errors than analog transmission. Because the transmitted data are binary (only two distinct values), it is easier to detect and correct errors.
• Digital transmission is more efficient. Time division multiplexing (TDM) is more efficient than frequency division multiplexing (FDM) because TDM requires no guardbands; TDM is commonly used for digital transmission, while FDM is used for analog transmission.
• Digital transmission permits higher maximum transmission rates. Fiber optic cable, for example, is designed for digital transmission.
• Digital transmission is more secure because it is easier to encrypt.
• Digital transmission is less expensive than analog in many instances.
Finally, and most importantly, integrating voice, video, and data on the same circuit is far simpler with digital transmission.
- What is coding?
Coding is the representation of one set of symbols by another set of symbols. In data communications, this coding is a specific arrangement of binary 0s and 1s used to represent letters, numbers, and other symbols that have meaning.
- Briefly describe three important coding schemes.
There are three predominant coding schemes in use today. United States of America Standard Code for Information Interchange (USASCII), or more commonly ASCII, is the most popular code for data communications and is the standard code on most terminals and microcomputers. There are two types of ASCII: one is a 7-bit code that has 128 valid character combinations, and the other is an 8-bit code that has 256 combinations. Extended Binary Coded Decimal Interchange Code (EBCDIC) is IBM's standard information code; this code has 8 bits, giving 256 valid character combinations. A third scheme in wide use today is Unicode (commonly encoded as UTF-8), a variable-length code whose first 128 values match ASCII and which can represent characters from virtually all written languages.
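As a quick illustration of coding, this Python sketch prints the 8-bit ASCII representation of each character in a short string (the 7-bit code is the same value without the leading zero):

```python
# How a coding scheme maps symbols to bits: each character of "Hi!" as 8-bit ASCII.
for ch in "Hi!":
    print(ch, ord(ch), format(ord(ch), "08b"))
# H 72 01001000
# i 105 01101001
# ! 33 00100001
```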
- How is data transmitted in parallel?
Parallel mode is the way the internal transfer of binary data takes place inside a computer. If the internal structure of the computer is 8-bit, then all eight bits of the data element are transferred between main memory and the central processing unit simultaneously on 8 separate connections. The same is true of computers that use a 32-bit structure; all 32 bits are transferred simultaneously on 32 connections.
- What feature distinguishes serial mode from parallel mode?
Serial mode is distinguished from parallel mode by the time cycle in which the bits are transmitted. Parallel implies that all bits of a character are transmitted, followed by a time delay, and then all bits of the next character are transmitted, followed by a time delay. Serial implies that characters are sent one bit at a time, with each bit followed by a time delay. Put another way, parallel is character-by-character and serial is bit-by-bit.
- How does bipolar signaling differ from unipolar signaling? Why is Manchester encoding more popular than either?
With unipolar signaling, the voltage is always positive or negative (like a dc current). In bipolar signaling, the 1s and 0s vary from a plus voltage to a minus voltage (like an ac current). In general, bipolar signaling experiences fewer errors than unipolar signaling because the signals are more distinct. Manchester encoding is a special type of unipolar signaling in which the signal is changed from high to low or from low to high in the middle of the signal. A change from high to low is used to represent a 1 (or a 0), while the opposite (a change from low to high) is used to represent a 0 (or a 1). Manchester encoding is less susceptible to having errors go undetected, because if there is no transition in mid-signal the receiver knows that an error must have occurred.
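A minimal Python sketch of Manchester encoding; the polarity convention chosen here (1 as high-then-low, 0 as low-then-high) is an assumption for illustration, since, as noted above, it can be defined either way.

```python
# Manchester encoding sketch: every bit becomes a mid-bit transition.
# Convention assumed here: a 1 is sent as high-then-low, a 0 as low-then-high.
# A missing mid-bit transition at the receiver signals an error.
HIGH, LOW = 1, 0

def manchester_encode(bits: str) -> list[tuple[int, int]]:
    return [(HIGH, LOW) if b == "1" else (LOW, HIGH) for b in bits]

print(manchester_encode("1011"))
# [(1, 0), (0, 1), (1, 0), (1, 0)]
```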
- What are three important characteristics of a sound wave?
Sound waves have three important characteristics. The first is the height of the wave, called amplitude. Our ears detect amplitude as the loudness or volume of sound. The second characteristic is the length of the wave, usually expressed as the number of waves per second, or frequency. Frequency is expressed in hertz (Hz). Our ears detect frequency as the pitch of the sound. Human hearing ranges from about 20 hertz to about 14,000 hertz, although some people can hear up to 20,000 hertz. The third characteristic is the phase, which refers to the direction in which the wave begins.
- What is bandwidth? What is the bandwidth in a traditional North American telephone circuit?
Bandwidth refers to a range of frequencies. It is the difference between the highest and the lowest frequencies in a band; thus the bandwidth of human hearing is from 20 Hz to 14,000 Hz, or 13,980 Hz. The bandwidth of a voice-grade telephone circuit is from 0 to 4,000 Hz, or 4,000 Hz; however, not all of this is available for use by telephone or data communications equipment. To start, there is a 300-hertz guardband at the bottom of the bandwidth and a 700-hertz guardband at the top. These prevent data transmissions from interfering with other transmissions when these circuits are multiplexed using frequency division multiplexing. This leaves the bandwidth from 300 to 3,300 hertz, or a total of 3,000 Hz, for voice or data transmission.
- Describe how data could be transmitted using amplitude modulation.
With amplitude modulation (AM) (also called amplitude shift keying (ASK)), the amplitude or height of the wave is changed. One amplitude is defined to be zero, and another amplitude is defined to be a one.
- Describe how data could be transmitted using frequency modulation.
Frequency modulation (FM) (also called frequency shift keying (FSK)) is a modulation technique whereby each 0 or 1 is represented by a different number of waves per second (i.e., a different frequency). In this case, the amplitude does not vary. One frequency (i.e., a certain number of waves per second) is defined to be a one, and a different frequency (a different number of waves per second) is defined to be a zero.
- Describe how data could be transmitted using phase modulation.
Phase modulation (PM) (also called phase shift keying (PSK)), is the most difficult to understand. Phase refers to the direction in which the wave begins. Until now, the waves we have shown start by moving up and to the right (this is called a 0º phase wave). Waves can also start down and to the right. This is called a phase of 180º. With phase modulation, one phase is defined to be a zero and the other phase is defined to be a one.
- Describe how data could be transmitted using a combination of modulation techniques.
It is possible to use amplitude modulation, frequency modulation, and phase modulation techniques on the same circuit. For example, we could combine amplitude modulation with four defined amplitudes (capable of sending two bits) with frequency modulation with four defined frequencies (capable of sending two bits) to enable us to send four bits on the same symbol.
- Is the bit rate the same as the symbol rate? Explain.
The terms bit rate (i.e., the number of bits per second transmitted) and baud rate are used incorrectly much of the time. They often are used interchangeably, but they are not the same. In reality, the network designer or network user is interested in bits per second, because it is the bits that are assembled into characters, characters into words, and thus business information. Because of the confusion over the term baud rate among the general public, ITU-T now recommends the term baud rate be replaced by the term symbol rate. The bit rate and the symbol rate (or baud rate) are the same only when one bit is sent on each symbol. For example, if we use amplitude modulation with two amplitudes, we send one bit on one symbol; here the bit rate equals the symbol rate. However, if we use QAM, we can send four bits on every symbol, so the bit rate would be four times the symbol rate.
- What is a modem?
Modem is an acronym for MOdulator/DEModulator. A modem takes the digital electrical pulses received from a computer, terminal, or microcomputer and converts them into a continuous analog signal that is needed for transmission over an analog voice grade circuit. Modems are either internal (i.e., inside the computer) or external (i.e., connected to the computer by a cable).
- What is quadrature amplitude modulation (QAM)?
One popular technique is quadrature amplitude modulation (QAM). QAM involves splitting the symbol into eight different phases (three bits) and two different amplitudes (one bit), for a total of 16 different possible values. Thus, one symbol in QAM can represent four bits.
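A minimal Python sketch of the 16-QAM mapping described above (8 phases and 2 amplitudes, so 4 bits per symbol); the particular assignment of bit patterns to phase/amplitude pairs is an arbitrary choice for illustration, not a standard constellation.

```python
# 16-QAM as described above: 8 phases (3 bits) x 2 amplitudes (1 bit)
# = 16 possible symbols, so each symbol carries 4 bits.
PHASES = [0, 45, 90, 135, 180, 225, 270, 315]    # degrees
AMPLITUDES = [1.0, 2.0]                          # relative amplitudes

def bits_to_symbol(four_bits: str) -> tuple[int, float]:
    phase = PHASES[int(four_bits[:3], 2)]        # first 3 bits pick the phase
    amplitude = AMPLITUDES[int(four_bits[3], 2)] # last bit picks the amplitude
    return phase, amplitude

print(bits_to_symbol("1011"))   # (225, 2.0)
```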
- What is 64- QAM?
If we use 64-QAM, the bit rate is six times the symbol rate. A circuit with a 10 MHz bandwidth using 64-QAM can provide up to 60 Mbps.
- What factors affect transmission speed?
The factors that affect the transmission speed are the number of bits per signal sample and the number of samples per second.
- What is oversampling?
For voice digitization, one typically samples at twice the highest frequency transmitted, or a minimum of 8,000 times a second. Sampling more frequently than this will improve signal quality. For example, CDs sample at 44,100 times a second and use 16 bits per sample to produce almost error-free music.
- Why is data compression so useful?
Data compression can increase the throughput of data over a communication link literally by compressing the data. A 2:1 compression ratio means that for every two characters in the original signal, only one is needed in the compressed signal (e.g., if the original signal contained 1,000 bytes, only 500 would be needed in the compressed signal). In 1996, ITU-T revised the V.34 standard to include a higher data rate of 33.6 Kbps. This revision is popularly known as V.34+. The faster data rate is accomplished by using a new form of TCM that averages 9.8 bits per symbol (the symbol rate remains at 3,429).
- What data compression standard uses Lempel-Ziv encoding? Describe how it works.
V.42bis, the ITU-T standard for data compression, uses Lempel-Ziv encoding. As a message is being transmitted, Lempel-Ziv encoding builds a dictionary of two-, three-, and four-character combinations that occur in the message. Any time the same character pattern reoccurs in the message, the index to the dictionary entry is transmitted rather than the actual data. V.42bis compression can be added to almost any modem standard; thus a V.32bis modem providing a data rate of 14,400 bps could provide a data rate of 57,600 bps when upgraded to use V.42bis.
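To make the dictionary idea concrete, here is a generic LZW-style (Lempel-Ziv) encoder sketch in Python; it illustrates the principle of replacing repeated sequences with dictionary indexes, but it is not the actual V.42bis algorithm.

```python
# Generic LZW-style encoder sketch: build a dictionary of character sequences
# as the message is scanned and output dictionary indexes instead of repeating
# the sequences themselves.
def lzw_encode(message: str) -> list[int]:
    dictionary = {chr(i): i for i in range(256)}  # start with single characters
    current, output = "", []
    for ch in message:
        if current + ch in dictionary:
            current += ch                               # extend the known sequence
        else:
            output.append(dictionary[current])          # emit index of known sequence
            dictionary[current + ch] = len(dictionary)  # learn the new sequence
            current = ch
    if current:
        output.append(dictionary[current])
    return output

print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))  # repeats are replaced by indexes >= 256
```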
- Explain how pulse code modulation (PCM) works.
Analog voice data must be translated into a series of binary digits before they can be transmitted. With pulse amplitude modulation (PAM), the amplitude of the sound wave is sampled at regular intervals and translated into a binary number. The most commonly used type of PAM is Pulse Code Modulation (PCM). With PCM, the input voice signal is sampled 8,000 times per second. Each time the input voice signal is sampled, eight bits are generated. Therefore, the transmission speed on the digital circuit must be 64,000 bits per second (8 bits per sample x 8,000 samples per second) in order to transmit a voice signal in digital form.
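A minimal Python sketch of the sampling and quantizing steps; the 1 kHz tone and the simple linear quantizer are illustrative assumptions (real PCM telephony uses a logarithmic mu-law or A-law quantizer), but the 8,000 samples per second x 8 bits = 64,000 bps arithmetic matches the answer above.

```python
# PCM sketch: sample a tone 8,000 times per second and quantize each sample
# to one of 256 levels (8 bits), giving 8 x 8,000 = 64,000 bits per second.
import math

SAMPLE_RATE = 8000          # samples per second
TONE_HZ = 1000              # stand-in "voice" frequency for the demo

samples = []
for n in range(SAMPLE_RATE // 1000):                    # first millisecond of signal
    t = n / SAMPLE_RATE
    analog = math.sin(2 * math.pi * TONE_HZ * t)        # analog value in [-1, 1]
    quantized = round((analog + 1) / 2 * 255)           # 8-bit level, 0..255
    samples.append(quantized)

print(samples)                                          # 8 samples = 64 bits of data
print(SAMPLE_RATE * 8)                                  # 64000 bits per second
```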
- What is quantizing error?
Quantizing error is the difference between the replicated analog signal and its original form, shown with jagged “steps” rather than the original, smooth flow. Voice transmissions using digitized signals that have a great deal of quantizing error will sound metallic or machinelike to the human ear.
- What is the term used to describe the placing of two or more signals on a single circuit?
Multiplexing is the term used to describe the placing of two or more signals on a single circuit.
- What is the purpose of multiplexing?
A multiplexer puts two or more simultaneous transmissions on a single communication circuit. Multiplexing a voice telephone call means that two or more separate conversations are sent simultaneously over one communication circuit between two different cities. Multiplexing a data communication network means that two or more messages are sent simultaneously over one communication circuit. In general, no person or device is aware of the multiplexer; it is “transparent.”
- How does DSL (digital subscriber line) work?
DSL services are quite new, and not all common carriers offer them. In general, DSL services have advanced more quickly in Canada (and Europe, Australia, and Asia) than in the United States, due to their newer telephone networks from the end offices to the customer. Unlike other services that operate through the telephone network end-to-end from the sender to the receiver, DSL operates only in the local loop, from the carrier's end office to the customer's telephone. DSL uses the existing local loop cable but places one DSL network interface device (called customer premises equipment (CPE)) in the home or business and another one in the common carrier's end office. The end office DSL device is then connected to a high-speed digital line from the end office to elsewhere in the carrier's network (often an Internet service provider) using some other service (e.g., T carrier, SMDS).
- Of the different types of multiplexing, what distinguishes: (a) frequency division multiplexing (FDM), (b) time division multiplexing (TDM), (c) statistical time division multiplexing (STDM), and (d) wavelength division multiplexing (WDM)?
a. Frequency division multiplexing (FDM) divides the circuit “horizontally” so that many signals can travel a single communication circuit simultaneously. The circuit is divided into a series of separate channels, each transmitting on a different frequency, much like a series of different radio or TV stations. All signals exist in the media at the same time, but because they are on different frequencies, they do not interfere with each other.
b. Time division multiplexing (TDM) shares a communication circuit among two or more terminals by having them take turns, dividing the circuit “vertically.” One character is taken from each terminal in turn, transmitted down the circuit, and delivered to the appropriate device at the far end. Time on the circuit is allocated even when data are not being transmitted, so some capacity is wasted when terminals are idle.
c. Statistical time division multiplexing (STDM) is the exception to the rule that the capacity of the multiplexed circuit must equal the sum of the circuits it combines. STDM allows more terminals or computers to be connected to a circuit than FDM or TDM. It is called statistical because the transmission speed for the multiplexed circuit is selected based on a statistical analysis of the usage requirements of the circuits to be multiplexed. STDM is like TDM, except that each frame carries a terminal address and no blank slots are sent.
d. Wavelength division multiplexing (WDM) is a version of FDM used in fiber optic cables. WDM works by using lasers to transmit different frequencies of light (i.e., colors) through the same fiber optic cable; each channel is assigned a different frequency so that the light generated by one laser does not interfere with the light produced by another. WDM permits up to 40 simultaneous circuits, each transmitting up to 10 Gbps, giving a total network capacity in one fiber optic cable of 400 Gbps (i.e., 400 billion bits per second).
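A minimal Python sketch of the TDM round-robin idea described in (b); the terminal names and characters are placeholders. Idle slots are still transmitted, which is the wasted capacity that STDM avoids by sending addressed frames instead.

```python
# TDM sketch: take one character from each terminal in turn and interleave
# them onto the shared circuit. Idle terminals still get a slot.
terminals = {
    "T1": list("HELLO"),
    "T2": list("HI"),        # finishes early, then sends idle fill
    "T3": list("WORLD"),
}

IDLE = "-"
frames = []
for i in range(max(len(buf) for buf in terminals.values())):
    frame = "".join(buf[i] if i < len(buf) else IDLE for buf in terminals.values())
    frames.append(frame)

print(frames)   # ['HHW', 'EIO', 'L-R', 'L-L', 'O-D']
```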
- What is the function of inverse multiplexing (IMUX)?
Inverse multiplexing (IMUX) combines several low speed circuits to make them appear as one high-speed circuit to the user.
- If you were buying a multiplexer, why would you choose either TDM or FDM? Why?
If buying a multiplexer, you would choose TDM over FDM. In general, TDM is preferred to FDM, because it provides higher data transmission speeds and because TDM multiplexers are cheaper. Time division multiplexing generally is more efficient than frequency division multiplexing, because it does not need guardbands. Guardbands use “space” on the circuit that otherwise could be used to transmit data. It is not uncommon to have time division multiplexers that share a line among 32 different low speed terminals. It is easy to change the number of channels in a time division multiplexer. Time division multiplexers generally are less costly to maintain.
- Some experts argue that MODEMs may soon become obsolete. Do you agree? Why or why not?
The traditional context of MODEM no doubt has become obsolete. We can no longer consider a MODEM a device that connects two computers by merely modulating and demodulating transmission signals over the Public Switched Network at speeds ranging up to 56 Kbps. This type of MODEM now must be evaluated in light of newer technologies such as xDSL MODEMs and cable MODEMs, each of which generates, propagates, and transmits signals at far greater speeds. In addition, we must evaluate traditional MODEMs in light of newer protocols and compression techniques, which have greatly improved overall bandwidth and throughput for traditional MODEMs. In short, though many new forms of MODEMs and MODEM-supported technology have come into play, the traditional MODEM continues to be a cost-effective and flexible means of networking if your bandwidth requirements remain under the 56 Kbps threshold.
- What is the maximum capacity of an analog circuit with a bandwidth of 4,000 Hz using QAM?
Under perfect circumstances, the maximum symbol rate is about 4,000 symbols per second. If we were to use QAM (4 bits per symbol), the maximum data rate would be 4 bits per symbol X 4,000 symbols per second = 16,000 bps.
- What is the maximum data rate of an analog circuit with a 10 MHz bandwidth using 64-QAM and V.44?
A circuit with a 10 MHz bandwidth using 64-QAM can provide up to 60 Mbps. A V.44 modem can provide as much as a 6:1 compression ratio, depending on the type of data sent. Thus, the maximum data rate of 64-QAM with compression is 360 Mbps.
- What is the capacity of a digital circuit with a symbol rate of 10 MHz using Manchester encoding?
B = s * n. B = (10 MHz) * (1 bit per symbol), so the capacity is 10 Mbps.
- What is the symbol rate of a digital circuit providing 100 Mbps if it uses bipolar NRZ signaling?
B = s * n. 100 Mbps = s * (1 bit per symbol), so the symbol rate is 100 MHz.
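A quick Python sketch that applies the same formula (capacity = symbol rate x bits per symbol, times any compression ratio) to the examples above:

```python
# Capacity = symbol rate x bits per symbol (x compression ratio, if any).
def capacity_bps(symbol_rate, bits_per_symbol, compression=1):
    return symbol_rate * bits_per_symbol * compression

print(capacity_bps(4_000, 4))          # 16,000 bps: QAM (4 bits/symbol) on a 4,000 Hz voice circuit
print(capacity_bps(10_000_000, 6))     # 60,000,000 bps: 64-QAM at 10 MHz
print(capacity_bps(10_000_000, 6, 6))  # 360,000,000 bps: 64-QAM at 10 MHz with V.44 (6:1)
print(capacity_bps(10_000_000, 1))     # 10,000,000 bps: Manchester encoding, 10 MHz symbol rate

# Working backward: a 100 Mbps circuit using bipolar NRZ (1 bit per symbol)
print(100_000_000 / 1)                 # 100,000,000 symbols per second = 100 MHz symbol rate
```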
- What is VoIP?
Voice over IP (VoIP) is commonly used to transmit phone conversations over digital networks. VoIP uses digital phones with built-in codecs to convert the analog voice signal to digital.
Chapter 4: Data Link Layer Answers to Textbook Exercises
- What does the data link layer do?
The data link layer controls the way messages are sent on the physical media. The data link layer handles three functions: media access control, message delineation, and error control. The data link layer accepts messages from the network layer and controls the hardware that actually transmits them. The data link layer is responsible for getting a message from one computer to another without errors. The data link layer also accepts streams of bits from the physical layer and organizes them into coherent messages that it passes to the network layer.
- What is media access control, and why is it important?
Media access control handles when the message gets sent. Media access control becomes important when several computers share the same communication circuit, such as a point-to-point configuration with a half duplex line that requires computers to take turns, or a multipoint configuration in which several computers share the same circuit. Here, it is critical to ensure that no two computers attempt to transmit data at the same time – or if they do, there must be a way to recover from the problem. Media access control is critical in local area networks.
- Under what conditions is media access control unimportant?
With point-to-point full duplex configurations, media access control is unnecessary: there are only two computers on the circuit, and full duplex permits either computer to transmit at any time, so there is nothing to control.
- Compare and contrast roll-call polling, hub polling (or token passing), and contention.
With roll-call polling, the front end processor works consecutively through a list of clients, first polling terminal 1, then terminal 2, and so on, until all are polled. Roll-call polling can be modified to select clients in priority so that some get polled more often than others. For example, one could increase the priority of terminal 1 by using a polling sequence such as 1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9. Hub polling is often used in LAN multipoint configurations (e.g., token ring) that do not have a central host computer. One computer starts the poll and passes it to the next computer on the multipoint circuit, which sends its message and passes the poll along; this continues until the poll returns to the first computer, which restarts the process. Contention is the opposite of controlled access. Computers wait until the circuit is free (i.e., no other computers are transmitting) and then transmit whenever they have data to send. Contention is commonly used in Ethernet local area networks.
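As a rough illustration of roll-call polling with priorities, here is a short Python sketch; the terminal list and pending messages are invented for the example, not from the text:

```python
# Roll-call polling list that gives terminal 1 extra turns, matching the
# priority sequence 1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9 described above.
polling_sequence = [1, 2, 3, 1, 4, 5, 1, 6, 7, 1, 8, 9]

# Hypothetical queues of pending messages, keyed by terminal number.
pending = {1: ["order update", "status"], 4: ["sensor reading"], 9: ["log entry"]}

for terminal in polling_sequence:
    queue = pending.get(terminal, [])
    if queue:
        print(f"poll terminal {terminal}: transmit {queue.pop(0)!r}")
    else:
        print(f"poll terminal {terminal}: nothing to send")
```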
- Which is better, controlled access or contention? Explain.
The key consideration for which is better is throughput: which approach will permit the largest amount of user data to be transmitted through the network. Through most of the 1990s, contention approaches worked better than controlled approaches for small networks with low usage, because each computer can transmit when necessary without waiting for permission. In high-volume networks, where many computers want to transmit at the same time, controlled access originally prevented collisions and delivered better throughput. Today, however, contention-based systems have been improved to the point where they deliver substantially better throughput, and they are also attractive because of hardware cost considerations.
- Define two fundamental types of errors.
There are two fundamental types of errors: human errors and network errors. Human errors, such as a mistake in typing a number, usually are controlled through the application program. Network errors, such as those that occur during transmission, are controlled by the network hardware and software. There are two categories of network errors: corrupted data (data that have been changed) and lost data.
- Errors normally appear in ______________________________, which is when more than one data bit is changed by the error-causing condition.
Errors normally appear in bursts, which is when more than one data bit is changed by the error-causing condition.
- Is there any difference in the error rates of lower-speed lines and of higher-speed lines?
Yes. Normally, lower-speed lines have higher error rates because (1) leased lines can be conditioned to prevent noise, whereas dial-up lines cannot, and (2) dial-up lines have less stable transmission parameters.
- Briefly define noise.
Noise consists of undesirable electrical signals or, in the case of fiber optic cable, undesirable light. Noise is typically introduced by equipment or natural disturbances, and it can seriously degrade the performance of a communication circuit. Noise manifests itself as extra bits, missing bits, or bits that have been “flipped” (i.e., changed from 1 to 0 or vice versa).
- Describe four types of noise. Which is likely to pose the greatest problem to network managers?
The following list summarizes the major sources of error. The first six are the most important; the last three are more common in analog rather than digital circuits. Of these, impulse noise is likely to pose the greatest problem for network managers, because it is the primary source of errors in data communications.
Line outages are a catastrophic cause of errors and incomplete transmission. Occasionally, a communication circuit fails for a brief period. This type of failure may be caused by faulty telephone end office equipment, storms, loss of the carrier signal, and any other failure that causes a short circuit. When constructing and designing redundant networks that are fault survivable, this is usually called designing for the “farmer with a backhoe” problem.
White noise or Gaussian noise (the familiar background hiss or static on radios and telephones) is caused by the thermal agitation of electrons and therefore is inescapable. Even if the equipment were perfect and the wires were perfectly insulated from any and all external interference, there still would be some white noise. White noise usually is not a problem unless it becomes so strong that it obliterates the transmission. In this case, the strength of the electrical signal is increased so it overpowers the white noise; in technical terms, we increase the signal-to-noise ratio.
Impulse noise (sometimes called spikes) is the primary source of errors in data communications. Some of the sources of impulse noise are voltage changes in adjacent lines, lightning flashes during thunderstorms, fluorescent lights, and poor connections in circuits.
Cross-talk occurs when one circuit picks up signals in another. It occurs between pairs of wires that are carrying separate signals, in multiplexed links carrying many discrete signals, or in microwave links in which one antenna picks up a minute reflection from another antenna. Cross-talk between lines increases with increased communication distance, increased proximity of the two wires, increased signal strength, and higher-frequency signals. Wet or damp weather can also increase cross-talk. Like white noise, cross-talk has such a low signal strength that it normally is not bothersome.
Echoes can cause errors. Echoes are caused by poor connections that cause the signal to reflect back to the transmitting equipment. If the strength of the echo is strong enough to be detected, it causes errors. Echoes, like cross-talk and white noise, have such a low signal strength that they normally are not bothersome. In networks, echo suppressors are devices that reduce the potential for this type of error. Echoes can also occur in fiber optic cables when connections between cables are not properly aligned.
Attenuation is the loss of power a signal suffers as it travels from the transmitting computer to the receiving computer. Some power is absorbed by the medium or is lost before it reaches the receiver. This power loss is a function of the transmission method and circuit medium. High frequencies lose power more rapidly than low frequencies during transmission, so the received signal can be distorted by unequal loss of its component frequencies. Attenuation increases as frequency increases, as the diameter of the wire decreases, or as the distance of the transmission increases. Repeaters can be used in a digital environment to correct for attenuation due to distance, whereas amplifiers can be used to boost diminishing or attenuating analog signals over longer distances. A repeater will perfectly replicate the incoming, distorted digital signal and send it on deeper into the network as if new. An amplifier will boost an attenuating analog signal, but it also boosts the noise in the signal as it does so. Fewer repeaters are necessary, as compared to amplifiers, to correct for attenuation, which helps to make digital transmission more cost-effective than analog transmission in controlling for noise.
Intermodulation noise is a special type of cross-talk. The signals from two circuits combine to form a new signal that falls into a frequency band reserved for another signal. On a multiplexed line, many different signals are amplified together, and slight variations in the adjustment of the equipment can cause intermodulation noise. A maladjusted modem may transmit a strong frequency tone when not transmitting data, thus producing this type of noise.
Jitter may affect the accuracy of the data being transmitted because minute variations in amplitude, phase, and frequency always occur. The generation of a pure carrier signal in an analog circuit is impossible. The signal may be impaired by continuous and rapid gain and/or phase changes. This jitter may be random or periodic.
Harmonic distortion usually is caused by an amplifier on a circuit that does not correctly represent its output with what was delivered to it on the input side.
Phase hits are short-term shifts “out of phase,” with the possibility of a shift back into phase.
- How do amplifiers differ from repeaters?
An amplifier takes the incoming signal, increases its strength, and retransmits it on the next section of the circuit. Amplifiers are typically used on analog circuits such as the telephone company’s voice circuits. On analog circuits, it is important to recognize that the noise and distortion are also amplified along with the signal. Repeaters are commonly used on digital circuits. A repeater receives the incoming signal, translates it into a digital message, and retransmits the message. Because the message is re-created at each repeater, noise and distortion from the previous circuit are not amplified.
- What are three ways of reducing errors and the types of noise they affect?
Shielding (protecting wires by covering them with an insulating coating) is one of the best ways to prevent impulse noise, cross-talk, and intermodulation noise. Moving cables away from sources of noise (especially power sources) can also reduce impulse noise, cross-talk, and intermodulation noise. For impulse noise, this means avoiding lights and heavy machinery; locating communication cables away from power cables is always a good idea. For cross-talk, this means physically separating the cables from other communication cables. Cross-talk and intermodulation noise are often caused by improper multiplexing, so changing multiplexing techniques (e.g., from FDM to TDM), or changing the frequencies or the size of the guardbands in frequency division multiplexing, can help. Many types of noise (e.g., echoes, white noise, jitter, harmonic distortion) can be caused by poorly maintained equipment or poor connections and splices among cables; the solution here is obvious: tune the transmission equipment and redo the connections. To avoid attenuation, telephone circuits have repeaters or amplifiers spaced throughout their length.
- Describe three approaches to detecting errors, including how they work, the probability of detecting an error, and any other benefits or limitations.
Three common error detection methods are parity checking, checksum, and cyclical redundancy checking (the last two are forms of polynomial checking). One of the oldest and simplest error detection methods is parity. With this technique, one additional bit is added to each byte in the message. The value of this additional parity bit is based on the number of 1s in each byte transmitted. This parity bit is set to make the total number of ones in the byte (including the parity bit) either an even number or an odd number. Any single error (a switch of a 1 to a 0 or vice versa) will be detected by parity, but it cannot determine which bit was in error. If two bits are switched, however, the parity check will not detect any error. Parity can detect errors only when an odd number of bits have been switched; any even number of errors cancel each other out. Therefore, the probability of detecting an error, given that one has occurred, is only about 50 percent. Many networks today do not use parity because of its low error detection rate. Polynomial checking adds a character or series of characters to the end of the message based on a mathematical algorithm. With the checksum technique, a checksum (typically one byte) is added to the end of the message. The checksum is calculated by adding the decimal value of each character in the message, dividing the sum by 255, and using the remainder as the checksum. The receiver calculates its own checksum in the same way and compares it with the transmitted checksum. If the two values are equal, the message is presumed to contain no errors. Checksums detect close to 95 percent of multiple-bit burst errors. The most popular polynomial error checking scheme is the cyclical redundancy check (see the CRC answer below for more discussion). Its probability of detecting an error is nearly 100% or, in some cases, 100%.
- Briefly describe how even parity and odd parity work.
With even parity, the parity bit is set so that the total number of 1s in the byte (the seven data bits plus the parity bit) is even; for example, if the seven bits of an ASCII character contain an even number of 1s (0, 2, 4, or 6), a 0 is placed in the eighth (parity) position. With odd parity, the parity bit is set so that the total number of 1s is odd; if the seven bits already contain an odd number of 1s (1, 3, 5, or 7), a 0 is placed in the parity position, and a 1 is placed there otherwise.
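A small Python sketch of how a parity bit could be computed for a 7-bit ASCII character (the function name and bit-string representation are my own, for illustration):

```python
def parity_bit(seven_bits: str, even: bool = True) -> str:
    """Return the parity bit for a 7-bit string such as '1000001' (ASCII 'A')."""
    ones = seven_bits.count("1")
    if even:                                  # even parity: total count of 1s must be even
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"      # odd parity: total count of 1s must be odd

print(parity_bit("1000001", even=True))   # '0'  (two 1s, total already even)
print(parity_bit("1000001", even=False))  # '1'  (makes the total odd)
```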
- Briefly describe how checksum works.
With checksum error checking, a checksum (typically 1 byte) is added to the end of the message. The checksum is calculated by adding the decimal value of each character in the message, dividing the sum by 255, and then using the remainder as the checksum. The same approach is used at the receiving end: if the receiver gets the same result, the block has been received correctly.
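A minimal sketch of the one-byte checksum described above, assuming “dividing the sum by 255 and using the remainder” means taking the sum modulo 255:

```python
def checksum(message: str) -> int:
    """One-byte checksum: sum of the character values, modulo 255."""
    return sum(ord(ch) for ch in message) % 255

sent_message, sent_checksum = "HELLO", checksum("HELLO")
# The receiver recomputes the checksum over whatever it actually received.
received_ok = checksum(sent_message) == sent_checksum
print(received_ok)   # True when no characters were changed in transit
```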
- How does cyclical redundancy checking (CRC) work?
Cyclical redundancy check (CRC) adds 8, 16, 24 or 32 bits to the message. With CRC, a message is treated as one long binary number, P. Before transmission, the data link layer (or hardware device) divides P by a fixed binary number, G, resulting in a whole number, Q, and a remainder, R/G. So, P/G = Q + R/G. For example, if P = 58 and G = 8, then Q = 7 and R = 2. G is chosen so that the remainder R will be either 8 bits, 16 bits, 24 bits, or 32 bits. The remainder, R, is appended to the message as the error checking characters before transmission. The receiving hardware divides the received message by the same G, which generates an R. The receiving hardware checks to ascertain whether the received R agrees with the locally generated R. If it does not, the message is assumed to be in error.
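To make the divide-and-compare idea concrete, the integer analogy above can be checked directly, and Python's standard zlib module provides a real CRC-32 for comparing sender- and receiver-side values (the message bytes below are invented for illustration):

```python
import zlib

# Integer analogy from the answer above: P = 58 and G = 8 give Q = 7, R = 2.
P, G = 58, 8
Q, R = divmod(P, G)
print(Q, R)   # 7 2

# Real CRCs divide the message bits by a generator polynomial; Python's zlib
# exposes a standard CRC-32 we can use to compare sender and receiver values.
message = b"example frame payload"
sender_crc = zlib.crc32(message)
receiver_crc = zlib.crc32(message)    # receiver recomputes over the bits it got
print(sender_crc == receiver_crc)     # True means no error was detected
```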
- How does forward error correction work? How is it different from other error-correction methods?
Forward error correction uses codes containing sufficient redundancy to prevent errors by detecting and correcting them at the receiving end without retransmission of the original message. The redundancy, or extra bits required, varies with different schemes. It ranges from a small percentage of extra bits to 100 percent redundancy, with the number of error detecting bits roughly equaling the number of data bits.
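As a toy illustration of the forward error correction idea, here is a simple triple-repetition code; this is my own minimal example, not the specific scheme the textbook has in mind, but it shows how the receiver can correct a single flipped bit per group without retransmission:

```python
def fec_encode(bits: str) -> str:
    """Triple-repetition code: send every bit three times (200% redundancy)."""
    return "".join(b * 3 for b in bits)

def fec_decode(coded: str) -> str:
    """Majority vote over each group of three corrects any single flipped bit."""
    groups = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return "".join("1" if g.count("1") >= 2 else "0" for g in groups)

sent = fec_encode("1011")        # '111000111111'
corrupted = "110000111111"       # one bit flipped in the first group
print(fec_decode(corrupted))     # '1011' recovered without retransmission
```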
- Under what circumstances is forward error-correction desirable?
Forward error correction is commonly used in satellite transmission. A round trip from the Earth station to the satellite and back includes a significant delay. Error rates can fluctuate depending on the condition of equipment, sun spots, or the weather. Indeed, some weather conditions make it impossible to transmit without some errors, making forward error correction essential. Compared to satellite equipment costs, the additional cost of forward error correction is insignificant.
- Compare and contrast stop-and-wait ARQ and continuous ARQ.
With stop-and-wait ARQ, the sender stops and waits for a response from the receiver after each message or data packet. After receiving a packet, the receiver sends either an acknowledgment (ACK) if the message was received without error, or a negative acknowledgment (NAK) if the message contained an error. If it is an NAK, the sender resends the previous message. If it is an ACK, the sender continues with the next message. Stop-and-wait ARQ is, by definition, a half duplex transmission technique. With continuous ARQ, the sender does not wait for an acknowledgment after sending a message; it immediately sends the next one. While the messages are being transmitted, the sender examines the stream of returning acknowledgments. If it receives an NAK, the sender retransmits the needed messages. Continuous ARQ is, by definition, a full duplex transmission technique, because both the sender and the receiver are transmitting simultaneously (the sender is sending messages, and the receiver is sending ACKs and NAKs).
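A simplified Python sketch of the stop-and-wait exchange (the link and the error check are simulated with a random number, so this is illustrative only):

```python
import random

def stop_and_wait_send(packets, error_rate=0.3):
    """Send each packet, then wait for an ACK or NAK before sending the next."""
    for packet in packets:
        while True:
            damaged = random.random() < error_rate   # simulated transmission error
            reply = "NAK" if damaged else "ACK"      # receiver's acknowledgment
            print(f"sent {packet!r}, received {reply}")
            if reply == "ACK":
                break                                # move on to the next packet
            # on a NAK, loop around and resend the same packet

stop_and_wait_send(["pkt1", "pkt2", "pkt3"])
```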
- Which is the simplest (least sophisticated) protocol described in this chapter?
An argument could be made for SDLC, HDLC, or PPP; each of these is similar to the others in many ways.
- Describe the frame layouts for SDLC, Ethernet, and PPP.
Each SDLC frame begins and ends with a special bit pattern, known as the flag. The address field identifies the destination. The length of the address field is usually 8 bits but can be set at 16 bits; all computers on the same network must use the same length. The control field identifies the kind of frame that is being transmitted, either information or supervisory. An information frame is used for the transfer and reception of messages, frame numbering of contiguous frames, and the like. A supervisory frame is used to transmit acknowledgments (ACKs and NAKs). The message field is of variable length and is the user’s message. The frame check sequence field is a 16-bit or 32-bit cyclical redundancy checking (CRC) code.
For a typical Ethernet packet, the destination address specifies the receiver, while the source address specifies the sender. The length indicates the length in 8-bit bytes of the message portion of the packet. The LLC control and SNAP control are used to pass control information between the sender and receiver. These are often used to indicate the type of network layer protocol the packet contains (e.g., TCP/IP or IPX/SPX as described in Chapter 6). The maximum length of the message is 1492 bytes. The packet ends with a CRC-32 frame check sequence used for error detection.
The PPP frame is similar to the SDLC frame. The frame starts with a flag and has a one-byte address. It also contains a control field, which is rarely used. The protocol field indicates what type of data is contained. The message portion is variable in length and may be up to 1,500 bytes long. The frame check sequence is either CRC-16 or CRC-32. The frame ends with a flag.
- What is transmission efficiency?
Transmission efficiency is defined as the total number of information bits (i.e., bits in the message sent by the user) divided by the total bits in transmission (i.e., information bits plus overhead bits).
- How do information bits differ from overhead bits?
Information bits are those used to convey the user’s meaning. Overhead bits are used for purposes such as error checking, and marking the start and end of characters and packets.
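For example, with an assumed frame carrying 1,000 bytes of user data and 33 bytes of overhead (illustrative numbers, not from the text), efficiency works out as follows:

```python
def transmission_efficiency(info_bits: int, overhead_bits: int) -> float:
    """Information bits divided by total bits transmitted."""
    return info_bits / (info_bits + overhead_bits)

info_bits = 1_000 * 8       # 1,000 bytes of user data
overhead_bits = 33 * 8      # 33 bytes of headers, error checking, etc.
print(f"{transmission_efficiency(info_bits, overhead_bits):.1%}")   # about 96.8%
```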
Middleware is the software that sits between the application software on the client and the application software on the server. True or False
True
To interact with the World Wide Web, a client computer needs an application layer software package called a: Pick one: Web browser Web server Telnet package Uniform Resource Locator package Router package
Web browser
An application program function is __________, or the processing required to access data. Pick One: data storage data access logic application logic presentation logic application access storage
data access logic
The software that runs on the mail server is referred to as the ____________. Pick one: Mail transfer agent Mail user agent Microsoft Outlook Web server SMTP
Mail transfer agent
There are required and optional parts of an HTTP response. They are: Pick one: response status, response header, response body response address, response header, response body response status, response body response address, response header response status, response header
response status, response header, response body
The World Wide Web was conceived at University of Utah as part of the development of the Internet. True or False
False
An N-tiered architecture: Pick one: is generally more “scalable” than a three-tiered architecture is generally less “scalable” than a three-tiered architecture uses only two sets of computers in which the clients are responsible for the application and presentation logic, and the servers are responsible for the data uses exactly three sets of computers in which the client is responsible for presentation, one set of servers is responsible for data access logic and data storage, and application logic is spread across two or more different sets of servers puts less load on a network than a two-tiered architecture because there tends to be less communication among the servers
is generally more “scalable” than a three-tiered architecture
Scalability refers to the ability to increase or decrease the capacity of the computing infrastructure in response to changing capacity needs. True or False
True
The standard protocol for communication between a Web browser and a Web server is the web protocol. True or False
False
The standards H.320, H.323, and MPEG-2 are commonly used with Pick one: Telnet Videoconferencing Email IM Microsoft Office
Videoconferencing
The fundamental problem in client-based networks is that all data on the server must travel to the client for processing. True or False
True
Multiplexing increases the cost of provisioning network circuits. True or False
False
The “local loop” refers to the wires that run from the customer premises to the telephone switch of the telephone company. True or False
True
If each sample uses 16 bits and the number of samples taken each second is 8000; then the transmission speed on the circuit is? Pick one: 128 Kbps 64 Kbps 12800 bps 96 Kbps 32000 bps
128 Kbps
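A quick check of the arithmetic behind that answer (taking Kbps to mean 1,000 bits per second, as the answer choices do):

```python
bits_per_sample = 16
samples_per_second = 8_000
transmission_speed_bps = bits_per_sample * samples_per_second
print(transmission_speed_bps)            # 128000 bps
print(transmission_speed_bps / 1_000)    # 128.0 Kbps
```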
Microwave transmission: Pick one: is a type of high frequency radio communication requires a clear line-of-sight path is typically used for long distance data transmission does not require the laying of any cable all of these
all of these
Statistical time division multiplexing does not require the capacity of the circuit to be equal to the sum of the combined circuits. True or False
True