Virtual Workforce

Assignment 1: Discussion—Virtual Workforce

Assignment Summary: Module 7

ASSIGNMENT                                      DUE DATE
Assignment 1: Discussion—Virtual Workforce      Tuesday, December 11, 2012
Assignment 2: IT Strategy Presentation          Saturday, December 15, 2012

  

Increased competition is forcing businesses to become lean while still attracting the best employees. One method organizations can use to meet both goals is the utilization of remote workers. Technology allows organizations to select the best employees from a global workforce and gives them the flexibility to assign these resources to a changing set of tasks and projects.

Using the readings for this module, the University online library resources, and the Internet, research how technology has influenced the utilization of remote workers.

Respond to the following:

  • What are the three ways in which technology has influenced the utilization of remote workers the most?
  • What are some of the advantages and disadvantages for the virtual employee?
  • What strategies can organizations employ to capitalize on the advantages and minimize the disadvantages of using remote workers?

Give reasons and examples in support of your responses.

Write your initial response in approximately 400 words. Apply APA standards to citation of sources.

Assignment 1 Grading Criteria

Described the three ways in which technology has influenced the utilization of remote workers the most.

Explained the advantages and disadvantages for the virtual employee.

Recommended strategies that organizations can employ to capitalize on the advantages and minimize the disadvantages of using remote workers.

Assignment 2: IT Strategy Presentation

An IT strategy should create a relationship between the investment in IT and organizational strategies and objectives. IT systems leverage the value of information for an organization and therefore the strategy should demonstrate how technology provides the organization with a value-added service. In this assignment, you will develop an executive summary to show how your strategy will benefit the business goals and objectives of the organization.

Review the work you completed on your LASA 2 assignment delivered in the previous module.

Create an executive summary of your IT strategy. The presentation should be approximately 10–15 minutes and should include the following:

  • An overview, with at least one slide for each section of your strategy
  • A summary of any main conclusions or recommendations made in your IT strategy report
  • Specific details from your IT strategy report to highlight or support the summary

Use the notes feature to include detailed speaker’s notes for your presentation.

Develop an 8-slide presentation in Microsoft PowerPoint format. Apply APA standards to citation of sources.

Assignment 2 Grading Criteria

Summarized the problem/topic, findings, and conclusions/recommendations from your IT strategy report (LASA 2).

Organized the presentation to include a cohesive introduction, solid transitions, and conclusions.

Styled the presentation in a clear, appropriate, and balanced way between text and other visuals.

Wrote in a clear, concise, and organized manner; demonstrated ethical scholarship in accurate representation and attribution of sources; displayed accurate spelling, grammar, and punctuation.

From the textbook, Management information systems: Managing the digital firm (11th ed.), read the following chapter:


Chapter 7
Telecommunications, the Internet, and Wireless Technology

(Laudon 246)

Laudon, K. C., & Laudon, J. P. Management Information Systems: Managing the Digital Firm, VitalSource eBook for EDMC, 11th ed. Pearson Learning Solutions.

VIRGIN MEGASTORES KEEPS SPINNING WITH UNIFIED COMMUNICATIONS

Have you ever been in a Virgin Megastore? Inside you’ll find racks and racks of CDs, DVDs, books, video games, and clothing, with videos playing on overhead screens. You can use Virgin Vault digital kiosks to preview music, videos, and games. You might also see a DJ sitting in a booth overlooking the sales floor and spinning the latest hits or tracks from undiscovered artists. Virgin Megastores are very media- and technology-intensive.

These stores are a carefully orchestrated response to an intensely competitive environment, because the company must compete with “big box” discount chains such as Wal-Mart and online music download services. The business must be able to react instantly to sales trends and operate efficiently to keep prices down. A new CD or DVD release might achieve half of its total sales within the first couple of weeks after its release. Too much or too little of a CD in stores at a specific time can translate into large losses. Although Virgin Megastores’ inventory data warehouse based on Microsoft SQL Server database software provides up-to-the-minute information on sales and current stock levels, acting on a rapidly changing picture of supply and demand requires human communication.

Virgin Megastores USA has 1,400 employees in 11 retail locations throughout the United States. Its Los Angeles-based home office shares information with the retail stores via voice mail, e-mail, and weekly audio conference calls, which are used to discuss upcoming promotions and events, product inventory issues, and current market trends. People shied away from conference calling because of its costs, choosing a less expensive but also less immediate way of communicating, such as sending out a mass e-mail message. Recipients of that message might not respond right away.

To speed up interaction, Virgin Megastores chose unified communications technology that integrated its voice mail, e-mail, conference calling, and instant messaging into a single solution that would be a natural and seamless way of working. In the fall of 2007 it deployed Microsoft’s Office Communication Server, Office Communicator, and RoundTable conferencing and collaboration tools. The technology has presence awareness capabilities that display other people’s availability and status (such as whether the person is already using the phone, in a Web conference, or working remotely) within the Microsoft productivity software they use in the course of their work. Users can see the people they work with in one window of Office Communicator and switch from one type of messaging to another as naturally and easily as picking up a telephone.

Calls integrating audio and video are helping employees resolve issues more quickly. The company is saving $50,000 annually in conferencing costs, and now has in-house video and Web conferencing as well as audio conferencing.

Sources: Lauren McKay, “All Talk,” Customer Relationship Management Magazine, June 2008; John Edwards, “How to Get the Most from Unified Communications,” CIO, February 8, 2008; and “Virgin Megastores USA Turns Up the Volume with Unified Communications,” www.microsoft.com, accessed June 19, 2008.

Virgin Megastores USA’s experience illustrates some of the powerful new capabilities—and opportunities—provided by contemporary networking technology. The company used unified communications technology to provide managers and employees with integrated voice, e-mail, and conferencing capabilities, with the ability to switch seamlessly from one type of messaging to another. Using the technology accelerated information sharing and decision making, enabling the company to manage its inventory more precisely.

The chapter-opening diagram calls attention to important points raised by this case and this chapter. The retail music industry is exceptionally competitive and time-sensitive. To stay in the game, Virgin Megastores must be able to respond very rapidly to sales trends. The company’s outdated networking and voice technology made it difficult to do this. Management decided that new technology could provide a solution and selected a new unified communications technology platform. Switching to unified communications technology saved time and facilitated information sharing between managers and employees and between retail outlets and corporate headquarters. With more fresh information, the company is able to respond more rapidly to sales trends and adjust inventory accordingly. These improvements save time and reduce inventory costs. Virgin Megastores had to make some changes in employee job functions and work flow to take advantage of the new technology.

7.1 Telecommunications and Networking in Today’s Business World

If you run or work in a business, you can’t do without networks. You need to communicate rapidly with your customers, suppliers, and employees. Until about 1990, you would have used the postal system or telephone system with voice or fax for business communication. Today, however, you and your employees use computers and e-mail, the Internet, cell phones, and mobile computers connected to wireless networks for this purpose. Networking and the Internet are now nearly synonymous with doing business.

NETWORKING AND COMMUNICATION TRENDS

Firms in the past used two fundamentally different types of networks: telephone networks and computer networks. Telephone networks historically handled voice communication, and computer networks handled data traffic. Telephone networks were built by telephone companies throughout the twentieth century using voice transmission technologies (hardware and software), and these companies almost always operated as regulated monopolies throughout the world. Computer networks were originally built by computer companies seeking to transmit data between computers in different locations.

Thanks to continuing telecommunications deregulation and information technology innovation, telephone and computer networks are slowly converging into a single digital network using shared Internet-based standards and equipment. Telecommunications providers, such as AT&T and Verizon, today offer data transmission, Internet access, wireless telephone service, and television programming as well as voice service. Cable companies, such as Cablevision and Comcast, now offer voice service and Internet access. Computer networks have expanded to include Internet telephone and limited video services. Increasingly, all of these voice, video, and data communications are based on Internet technology.

Both voice and data communication networks have also become more powerful (faster), more portable (smaller and mobile), and less expensive. For instance, the typical Internet connection speed in 2000 was 56 kilobits per second, but today more than 60 percent of U.S. Internet users have high-speed broadband connections provided by telephone and cable TV companies running at one million bits per second. The cost for this service has fallen exponentially, from 25 cents per kilobit in 2000 to less than 1 cent today.

Increasingly, voice and data communication as well as Internet access are taking place over broadband wireless platforms, such as cell phones, handheld digital devices, and PCs in wireless networks. In fact, mobile wireless broadband Internet access (2.5G and 3G cellular, which we describe in Section 7.4) was the fastest-growing form of Internet access in 2008, growing at a 96-percent compound annual growth rate. Fixed wireless broadband (Wi-Fi) is growing at a 28-percent compound annual growth rate, the second fastest growing form of Internet access.

WHAT IS A COMPUTER NETWORK?

If you had to connect the computers of two or more employees in the same office, you would need a computer network. Exactly what is a network? In its simplest form, a network consists of two or more connected computers. Figure 7-1 illustrates the major hardware, software, and transmission components used in a simple network: a client computer and a dedicated server computer, network interfaces, a connection medium, network operating system software, and either a hub or a switch.

Each computer on the network contains a network interface device called a network interface card (NIC). Most personal computers today have this card built into the motherboard. The connection medium for linking network components can be a telephone wire, coaxial cable, or radio signal in the case of cell phone and wireless local-area networks (Wi-Fi networks).

The network operating system (NOS) routes and manages communications on the network and coordinates network resources. It can reside on every computer in the network, or it can reside primarily on a dedicated server computer for all the applications on the network. A server computer is a computer on a network that performs important network functions for client computers, such as serving up Web pages, storing data, and storing the network operating system (and hence controlling the network). Microsoft Windows Server, Linux, and Novell NetWare are the most widely used network operating systems.

Most networks also contain a switch or a hub acting as a connection point between the computers. Hubs are very simple devices that connect network components, sending a packet of data to all other connected devices. A switch has more intelligence than a hub and can filter and forward data to a specified destination on the network.
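
To make the hub/switch distinction concrete, here is a toy Python model (not real device firmware; the port names and address table are invented for illustration): a hub repeats a frame out every other port, while a switch consults a table mapping destination addresses to ports.

```python
def hub_forward(frame, ports):
    # A hub blindly repeats the frame to every port except the one it came in on.
    return [p for p in ports if p != frame["in_port"]]

def switch_forward(frame, ports, mac_table):
    # A switch looks up the destination address and forwards to one port only,
    # falling back to hub-like flooding when the address is unknown.
    dst_port = mac_table.get(frame["dst"])
    return [dst_port] if dst_port else hub_forward(frame, ports)

ports = ["port1", "port2", "port3"]
frame = {"in_port": "port1", "dst": "aa:bb:cc:dd:ee:02", "data": b"hello"}

print(hub_forward(frame, ports))                                     # ['port2', 'port3']
print(switch_forward(frame, ports, {"aa:bb:cc:dd:ee:02": "port2"}))  # ['port2']
```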

FIGURE 7-1 COMPONENTS OF A SIMPLE COMPUTER NETWORK

Illustrated here is a very simple computer network, consisting of computers, a network operating system residing on a dedicated server computer, cable (wiring) connecting the devices, network interface cards (NIC), switches, and a router.

What if you want to communicate with another network, such as the Internet? You would need a router. A router is a communications processor used to route packets of data through different networks, ensuring that the data sent gets to the correct address.

Networks in Large Companies

The network we’ve just described might be suitable for a small business. But what about large companies with many different locations and thousands of employees? As a firm grows and collects hundreds of small local-area networks (LANs), these networks can be tied together into a corporate-wide networking infrastructure. The network infrastructure for a large corporation consists of a large number of these small local-area networks linked to other local-area networks and to firmwide corporate networks. A number of powerful servers support a corporate Web site, a corporate intranet, and perhaps an extranet. Some of these servers link to other large computers supporting backend systems.

Figure 7-2 provides an illustration of these more complex, larger scale corporate-wide networks. Here you can see that the corporate network infrastructure supports a mobile sales force using cell phones; mobile employees linking to the company Web site, or internal company networks using mobile wireless local-area networks (Wi-Fi networks); and a videoconferencing system to support managers across the world. In addition to these computer networks, the firm’s infrastructure usually includes a separate telephone network that handles most voice data. Many firms are dispensing with their traditional telephone networks and using Internet telephones that run on their existing data networks (described later).

FIGURE 7-2 CORPORATE NETWORK INFRASTRUCTURE

Today’s corporate network infrastructure is a collection of many different networks, from the public switched telephone network, to the Internet, to corporate local-area networks linking workgroups, departments, or office floors.

As you can see from this figure, a large corporate network infrastructure uses a wide variety of technologies—everything from ordinary telephone service and corporate data networks to Internet service, wireless Internet, and wireless cell phones. One of the major problems facing corporations today is how to integrate all the different communication networks and channels into a coherent system that enables information to flow from one part of the corporation to another, from one system to another. As more and more communication networks become digital, and based on Internet technologies, it will become easier to integrate them.

KEY DIGITAL NETWORKING TECHNOLOGIES

Contemporary digital networks and the Internet are based on three key technologies: client/server computing, the use of packet switching, and the development of widely used communications standards (the most important of which is Transmission Control Protocol/Internet Protocol, or TCP/IP) for linking disparate networks and computers.

Client/Server Computing

We introduced client/server computing in Chapter 5. Client/server computing is a distributed computing model in which some of the processing power is located within small, inexpensive client computers, and resides literally on desktops, laptops, or in handheld devices. These powerful clients are linked to one another through a network that is controlled by a network server computer. The server sets the rules of communication for the network and provides every client with an address so others can find it on the network.

Client/server computing has largely replaced centralized mainframe computing in which nearly all of the processing takes place on a central large mainframe computer. Client/server computing has extended computing to departments, workgroups, factory floors, and other parts of the business that could not be served by a centralized architecture. The Internet is the largest implementation of client/server computing.

Packet Switching


Packet switching is a method of slicing digital messages into parcels called packets, sending the packets along different communication paths as they become available, and then reassembling the packets once they arrive at their destinations (see Figure 7-3). Prior to the development of packet switching, computer networks used leased, dedicated telephone circuits to communicate with other computers in remote locations. In circuit-switched networks, such as the telephone system, a complete point-to-point circuit is assembled, and then communication can proceed. These dedicated circuit-switching techniques were expensive and wasted available communications capacity—the circuit was maintained regardless of whether any data were being sent.

Packet switching makes much more efficient use of the communications capacity of a network. In packet-switched networks, messages are first broken down into small fixed bundles of data called packets. The packets include information for directing the packet to the right address and for checking transmission errors along with the data. The packets are transmitted over various communications channels using routers, each packet traveling independently. Packets of data originating at one source will be routed through many different paths and networks before being reassembled into the original message when they reach their destinations.

FIGURE 7-3 PACKET-SWITCHED NETWORKS AND PACKET COMMUNICATIONS

Data are grouped into small packets, which are transmitted independently over various communications channels and reassembled at their final destination.
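
A minimal Python sketch of the concept (an in-memory simulation, not a real network stack; the packet size and destination address are arbitrary): the message is sliced into numbered packets, the packets are shuffled to mimic independent routing over different paths, and the receiver reassembles them by sequence number.

```python
import random

def to_packets(message, size=8, dest="10.0.0.2"):
    # Slice the message into fixed-size bundles, each with a header that
    # carries the destination address and a sequence number.
    return [{"dest": dest, "seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # The destination sorts packets by sequence number to rebuild the message.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Packet switching slices messages into parcels.")
random.shuffle(packets)      # simulate packets traveling different paths
print(reassemble(packets))   # the original message, restored at the destination
```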

TCP/IP and Connectivity

In a typical telecommunications network, diverse hardware and software components need to work together to transmit information. Different components in a network communicate with each other only by adhering to a common set of rules called protocols. A protocol is a set of rules and procedures governing transmission of information between two points in a network.

In the past, many diverse proprietary and incompatible protocols often forced business firms to purchase computing and communications equipment from a single vendor. But today corporate networks are increasingly using a single, common, worldwide standard called Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP was developed during the early 1970s to support U.S. Department of Defense Advanced Research Projects Agency (DARPA) efforts to help scientists transmit data among different types of computers over long distances.

TCP/IP uses a suite of protocols, the main ones being TCP and IP. TCP refers to the Transmission Control Protocol (TCP), which handles the movement of data between computers. TCP establishes a connection between the computers, sequences the transfer of packets, and acknowledges the packets sent. IP refers to the Internet Protocol (IP), which is responsible for the delivery of packets and includes the disassembling and reassembling of packets during transmission. Figure 7-4 illustrates the four-layered Department of Defense reference model for TCP/IP.


1. Application layer. The application layer enables client application programs to access the other layers and defines the protocols that applications use to exchange data. One of these application protocols is the Hypertext Transfer Protocol (HTTP), which is used to transfer Web page files.


2. Transport layer. The transport layer is responsible for providing the application layer with communication and packet services. This layer includes TCP and other protocols.

FIGURE 7-4 THE TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) REFERENCE MODEL

This figure illustrates the four layers of the TCP/IP reference model for communications.


3. Internet layer. The Internet layer is responsible for addressing, routing, and packaging data packets called IP datagrams. The Internet Protocol is one of the protocols used in this layer.


4. Network interface layer. At the bottom of the reference model, the network interface layer is responsible for placing packets on and receiving them from the network medium, which could be any networking technology.

Two computers using TCP/IP are able to communicate even if they are based on different hardware and software platforms. Data sent from one computer to the other passes downward through all four layers, starting with the sending computer’s application layer and passing through the network interface layer. After the data reach the recipient host computer, they travel up the layers and are reassembled into a format the receiving computer can use. If the receiving computer finds a damaged packet, it asks the sending computer to retransmit it. This process is reversed when the receiving computer responds.
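
In practice, application programs reach TCP/IP through the operating system’s sockets interface. A minimal loopback sketch in Python (the port number is an arbitrary choice for the demo): the connect call performs TCP’s connection setup, and sequencing, acknowledgment, and retransmission all happen invisibly beneath sendall and recv.

```python
import socket
import threading

# Server side: bind a TCP socket and echo back whatever one client sends.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9090))   # loopback address; port chosen arbitrarily
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()      # completes the TCP connection with the client
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

# Client side: TCP sequences and acknowledges the packets underneath.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9090))
    cli.sendall(b"hello over TCP/IP")
    print(cli.recv(1024))       # b'hello over TCP/IP'
srv.close()
```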

7.2 Communications Networks

Let’s look more closely at alternative networking technologies available to businesses.

SIGNALS: DIGITAL VS. ANALOG

There are two ways to communicate a message in a network: either an analog signal or a digital signal. An analog signal is represented by a continuous waveform that passes through a communications medium and has been used for voice communication. The most common analog devices are the telephone handset, the speaker on your computer, and your iPod earphone, all of which create analog waveforms that your ear can hear.

A digital signal is a discrete, binary waveform, rather than a continuous waveform. Digital signals communicate information as strings of two discrete states: 1-bits and 0-bits, which are represented as on-off electrical pulses.

FIGURE 7-5 FUNCTIONS OF THE MODEM

A modem is a device that translates digital signals from a computer into analog form so that they can be transmitted over analog telephone lines. The modem also translates analog signals back into digital form for the receiving computer.

Computers use digital signals, so if you want to use the analog telephone system to send digital data, you’ll need a device called a modem to translate digital signals into analog form (see Figure 7-5). Modem stands for modulator-demodulator.

TYPES OF NETWORKS

There are many different kinds of networks and ways of classifying them. One way of looking at networks is in terms of their geographic scope (see Table 7-1).

Local-Area Networks

If you work in a business that uses networking, you are probably connecting to other employees and groups via a local-area network. A local-area network (LAN) is designed to connect personal computers and other digital devices within a half-mile or 500-meter radius. LANs typically connect a few computers in a small office, all the computers in one building, or all the computers in several buildings in close proximity. LANs can link to long-distance wide-area networks (WANs, described later in this section) and other networks around the world using the Internet.

Review Figure 7-1, which could serve as a model for a small LAN that might be used in an office. One computer is a dedicated network file server, providing users with access to shared computing resources in the network, including software programs and data files. The server determines who gets access to what and in which sequence. The router connects the LAN to other networks, which could be the Internet or another corporate network, so that the LAN can exchange information with networks external to it. The most common LAN operating systems are Windows, Linux, and Novell. Each of these network operating systems supports TCP/IP as its default networking protocol.

TABLE 7-1 TYPES OF NETWORKS

TYPE                               AREA
Local-area network (LAN)           Up to 500 meters (half a mile); an office or floor of a building
Campus-area network (CAN)          Up to 1,000 meters (a mile); a college campus or corporate facility
Metropolitan-area network (MAN)    A city or metropolitan area
Wide-area network (WAN)            A transcontinental or global area

Ethernet is the dominant LAN standard at the physical network level, specifying the physical medium to carry signals between computers; access control rules; and a standardized set of bits used to carry data over the system. Originally, Ethernet supported a data transfer rate of 10 megabits per second (Mbps). Newer versions, such as Fast Ethernet and Gigabit Ethernet, support data transfer rates of 100 Mbps and 1 gigabit per second (Gbps), respectively, and are used in network backbones.

The LAN illustrated in Figure 7-1 uses a client/server architecture where the network operating system resides primarily on a single file server, and the server provides much of the control and resources for the network. Alternatively, LANs may use a peer-to-peer architecture. A peer-to-peer network treats all processors equally and is used primarily in small networks with 10 or fewer users. The various computers on the network can exchange data by direct access and can share peripheral devices without going through a separate server.

In LANs using the Windows Server family of operating systems, the peer-to-peer architecture is called the workgroup network model in which a small group of computers can share resources, such as files, folders, and printers, over the network without a dedicated server. The Windows domain network model, in contrast, uses a dedicated server to manage the computers in the network.

Larger LANs have many clients and multiple servers, with separate servers for specific services, such as storing and managing files and databases (file servers or database servers), managing printers (print servers), storing and managing e-mail (mail servers), or storing and managing Web pages (Web servers).

Sometimes LANs are described in terms of the way their components are connected together, or their topology. There are three major LAN topologies: star, bus, and ring (see Figure 7-6).

In a star topology, all devices on the network connect to a single hub. Figure 7-6 illustrates a simple star topology in which all network traffic flows through the hub. In an extended star network, multiple layers of hubs are organized into a hierarchy.

In a bus topology, one station transmits signals, which travel in both directions along a single transmission segment. All of the signals are broadcast in both directions to the entire network. All machines on the network receive the same signals, and software installed on the clients enables each client to listen for messages addressed specifically to it. The bus topology is the most common Ethernet topology.

A ring topology connects network components in a closed loop. Messages pass from computer to computer in only one direction around the loop, and only one station at a time may transmit. The ring topology is primarily found in older LANs using Token Ring networking software.

Metropolitan- and Wide-Area Networks


Wide-area networks (WANs) span broad geographical distances—entire regions, states, continents, or the entire globe. The most universal and powerful WAN is the Internet. Computers connect to a WAN through public networks, such as the telephone system or private cable systems, or through leased lines or satellites. A metropolitan-area network (MAN) is a network that spans a metropolitan area, usually a city and its major suburbs. Its geographic scope falls between a WAN and a LAN.

FIGURE 7-6 NETWORK TOPOLOGIES

The three basic network topologies are the bus, star, and ring.

PHYSICAL TRANSMISSION MEDIA

Networks use different kinds of physical transmission media, including twisted wire, coaxial cable, fiber optics, and media for wireless transmission. Each has advantages and limitations. A wide range of speeds is possible for any given medium depending on the software and hardware configuration.

Twisted Wire

Twisted wire consists of strands of copper wire twisted in pairs and is an older type of transmission medium. Many of the telephone systems in buildings had twisted wires installed for analog communication, but they can be used for digital communication as well. Although an older physical transmission medium, the twisted wires used in today’s LANs, such as CAT5, can obtain speeds up to 1 Gbps. Twisted-pair cabling is limited to a maximum recommended run of 100 meters (328 feet).

Coaxial Cable

Coaxial cable, similar to that used for cable television, consists of thickly insulated copper wire, which can transmit a larger volume of data than twisted wire. Cable was used in early LANs and is still used today for longer (more than 100 meters) runs in large buildings. Coaxial cable has speeds up to 1 Gbps.

Fiber Optics and Optical Networks

Fiber-optic cable consists of bound strands of clear glass fiber, each the thickness of a human hair. Data are transformed into pulses of light, which are sent through the fiber-optic cable by a laser device at rates varying from 500 kilobits to several trillion bits per second in experimental settings. Fiber-optic cable is considerably faster, lighter, and more durable than wire media, and is well suited to systems requiring transfers of large volumes of data. However, fiber-optic cable is more expensive than other physical transmission media and harder to install.

Until recently, fiber-optic cable had been used primarily for the high-speed network backbone, which handles the major traffic. Now telecommunications companies are starting to bring fiber lines into the home for new types of services, such as ultra high-speed Internet access (5 to 50 Mbps) and on-demand video.

Wireless Transmission Media

Wireless transmission is based on radio signals of various frequencies.

Microwave systems, both terrestrial and celestial, transmit high-frequency radio signals through the atmosphere and are widely used for high-volume, long-distance, point-to-point communication. Microwave signals follow a straight line and do not bend with the curvature of the earth. Therefore, long-distance terrestrial transmission systems require that transmission stations be positioned about 37 miles apart. Long-distance transmission is also possible by using communication satellites as relay stations for microwave signals transmitted from terrestrial stations.

Communication satellites are typically used for transmission in large, geographically dispersed organizations that would be difficult to network using cabling media or terrestrial microwave. For instance, the global energy company BP p.l.c. uses satellites for real-time data transfer of oil field exploration data gathered from searches of the ocean floor. Using geosynchronous satellites, exploration ships transfer these data to central computing centers in the United States for use by researchers in Houston, Tulsa, and suburban Chicago. Figure 7-7 illustrates how this system works.

Cellular systems use radio waves to communicate with radio antennas (towers) placed within adjacent geographic areas called cells. Communications transmitted from a cell phone to a local cell pass from antenna to antenna—cell to cell—until they reach their final destination.

FIGURE 7-7 BP’S SATELLITE TRANSMISSION SYSTEM

Communication satellites help BP transfer seismic data between oil exploration ships and research centers in the United States.

Wireless networks are supplanting traditional wired networks for many applications and creating new applications, services, and business models. In Section 7.4 we provide a detailed description of the applications and technology standards driving the “wireless revolution.”

Transmission Speed

The total amount of digital information that can be transmitted through any telecommunications medium is measured in bits per second (bps). One signal change, or cycle, is required to transmit one or several bits; therefore, the transmission capacity of each type of telecommunications medium is a function of its frequency. The number of cycles per second that can be sent through that medium is measured in hertz—one hertz is equal to one cycle of the medium.

The range of frequencies that can be accommodated on a particular telecommunications channel is called its bandwidth. The bandwidth is the difference between the highest and lowest frequencies that can be accommodated on a single channel. The greater the range of frequencies, the greater the bandwidth and the greater the channel’s transmission capacity. Table 7-2 compares the transmission speeds of the major types of media.
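
As a worked example (the 300 Hz to 3,300 Hz range of a traditional analog voice channel is a standard illustration, not a figure from this chapter), the calculation is just a subtraction:

```python
# Classic analog voice-channel figures, used here only as an illustration.
f_low, f_high = 300, 3_300        # lowest and highest frequencies, in hertz
bandwidth = f_high - f_low        # highest minus lowest frequency
print(f"Channel bandwidth: {bandwidth} Hz")   # Channel bandwidth: 3000 Hz
```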

7.3 The Global Internet

We all use the Internet, and many of us can’t do without it. It’s become an indispensable personal and business tool. But what exactly is the Internet? How does it work, and what does Internet technology have to offer for business? Let’s look at the most important Internet features.

WHAT IS THE INTERNET?

The Internet has become the world’s most extensive, public communication system that now rivals the global telephone system in reach and range. It’s also the world’s largest implementation of client/server computing and internetworking, linking millions of individual networks all over the world. This gigantic network of networks began in the early 1970s as a U.S. Department of Defense network to link scientists and university professors around the world.

TABLE 7-2 TYPICAL SPEEDS OF TELECOMMUNICATIONS TRANSMISSION MEDIA

MEDIUM               SPEED
Twisted wire         Up to 1 Gbps
Microwave            Up to 600+ Mbps
Satellite            Up to 600+ Mbps
Coaxial cable        Up to 1 Gbps
Fiber-optic cable    Up to 6+ Tbps

Mbps = megabits per second
Gbps = gigabits per second
Tbps = terabits per second

Most homes and small businesses connect to the Internet by subscribing to an Internet service provider. An Internet service provider (ISP) is a commercial organization with a permanent connection to the Internet that sells temporary connections to retail subscribers. EarthLink, NetZero, AT&T, and Microsoft Network (MSN) are ISPs. Individuals also connect to the Internet through their business firms, universities, or research centers that have designated Internet domains.

There are a variety of services for ISP Internet connections. Connecting via a traditional telephone line and modem at a speed of 56.6 kilobits per second (Kbps) used to be the most common form of connection worldwide, but it is quickly being replaced by broadband connections. Digital subscriber line (DSL), cable, and satellite Internet connections, and T lines provide these broadband services.


Digital subscriber line (DSL) technologies operate over existing telephone lines to carry voice, data, and video at transmission rates ranging from 385 Kbps all the way up to 9 Mbps. Cable Internet connections provided by cable television vendors use digital cable coaxial lines to deliver high-speed Internet access to homes and businesses. They can provide high-speed access to the Internet of up to 10 Mbps. In areas where DSL and cable services are unavailable, it is possible to access the Internet via satellite, although some satellite Internet connections have slower upload speeds than these other broadband services.

T1 and T3 are international telephone standards for digital communication. They are leased, dedicated lines suitable for businesses or government agencies requiring high-speed guaranteed service levels. T1 lines offer guaranteed delivery at 1.54 Mbps, and T3 lines offer delivery at 45 Mbps.

INTERNET ADDRESSING AND ARCHITECTURE

The Internet is based on the TCP/IP networking protocol suite described earlier in this chapter. Every computer on the Internet is assigned a unique Internet Protocol (IP) address, which currently is a 32-bit number represented by four strings of numbers ranging from 0 to 255 separated by periods. For instance, the IP address of www.microsoft.com is 207.46.250.119.

When a user sends a message to another user on the Internet, the message is first decomposed into packets using the TCP protocol. Each packet contains its destination address. The packets are then sent from the client to the network server and from there on to as many other servers as necessary to arrive at a specific computer with a known address. At the destination address, the packets are reassembled into the original message.

The Domain Name System

Because it would be incredibly difficult for Internet users to remember strings of 12 numbers, a Domain Name System (DNS) converts IP addresses to domain names. The domain name is the English-like name that corresponds to the unique 32-bit numeric IP address for each computer connected to the Internet. DNS servers maintain a database containing IP addresses mapped to their corresponding domain names. To access a computer on the Internet, users need only specify its domain name.
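
A quick illustration with Python’s standard library (the host name is only an example, and the address returned will be whatever DNS reports when you run it): gethostbyname performs the lookup, and the familiar dotted-quad string is just a readable rendering of a single 32-bit number.

```python
import socket
import struct

ip = socket.gethostbyname("www.microsoft.com")   # DNS lookup: domain name -> IP
print(ip)                                        # e.g. '207.46.250.119'

# Unpack the dotted quad into the underlying 32-bit integer.
as_int = struct.unpack("!I", socket.inet_aton(ip))[0]
print(as_int)
```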

DNS has a hierarchical structure (see Figure 7-8). At the top of the DNS hierarchy is the root domain. The child domain of the root is called a top-level domain, and the child domain of a top-level domain is called a second-level domain. Top-level domains are two- and three-character names you are familiar with from surfing the Web, for example, .com, .edu, .gov, and the various country codes such as .ca for Canada or .it for Italy. Second-level domains have two parts, designating a top-level name and a second-level name—such as buy.com, nyu.edu, or amazon.ca. A host name at the bottom of the hierarchy designates a specific computer on either the Internet or a private network.

FIGURE 7-8 THE DOMAIN NAME SYSTEM

Domain Name System is a hierarchical system with a root domain, top-level domains, second-level domains, and host computers at the third level.
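
The hierarchy reads right to left within the name itself; a trivial sketch (the host name is taken from the examples above):

```python
host = "www.nyu.edu"
labels = host.split(".")

print("Top-level domain:   ", labels[-1])             # edu
print("Second-level domain:", ".".join(labels[-2:]))  # nyu.edu
print("Host name:          ", labels[0])              # www
```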

The most common domain extensions currently available and officially approved are shown in the following list. Countries also have domain names such as .uk, .au, and .fr (United Kingdom, Australia, and France, respectively). In the future, this list will expand to include many more types of organizations and industries.

.com     Commercial organizations/businesses
.edu     Educational institutions
.gov     U.S. government agencies
.mil     U.S. military
.net     Network computers
.org     Nonprofit organizations and foundations
.biz     Business firms
.info    Information providers

Internet Architecture and Governance

Internet data traffic is carried over transcontinental high-speed backbone networks that generally operate today in the range of 45 Mbps to 2.5 Gbps (see Figure 7-9). These trunk lines are typically owned by long-distance telephone companies (called network service providers) or by national governments. Local connection lines are owned by regional telephone and cable television companies in the United States that connect retail users in homes and businesses to the Internet. The regional networks lease access to ISPs, private companies, and government institutions.

FIGURE 7-9 INTERNET NETWORK ARCHITECTURE

The Internet backbone connects to regional networks, which in turn provide access to Internet service providers, large firms, and government institutions. Network access points (NAPs) and metropolitan area exchanges (MAEs) are hubs where the backbone intersects regional and local networks and where backbone owners connect with one another.

Each organization pays for its own networks and its own local Internet connection services, a part of which is paid to the long-distance trunk line owners. Individual Internet users pay ISPs for using their service, and they generally pay a flat subscription fee, no matter how much or how little they use the Internet. A debate is now raging on whether this arrangement should continue or whether heavy Internet users who download large video and music files should pay more for the bandwidth they consume. The Interactive Session on Organizations explores this topic, as it examines the pros and cons of network neutrality.

No one “owns” the Internet, and it has no formal management. However, worldwide Internet policies are established by a number of professional organizations and government bodies, including the Internet Architecture Board (IAB), which helps define the overall structure of the Internet; the Internet Corporation for Assigned Names and Numbers (ICANN), which assigns IP addresses; and the World Wide Web Consortium (W3C), which sets Hypertext Markup Language (HTML) and other programming standards for the Web.

These organizations influence government agencies, network owners, ISPs and software developers with the goal of keeping the Internet operating as efficiently as possible. The Internet must also conform to the laws of the sovereign nation-states in which it operates, as well as the technical infrastructures that exist within the nation-states. Although in the early years of the Internet and the Web there was very little legislative or executive interference, this situation is changing as the Internet plays a growing role in the distribution of information and knowledge, including content that some find objectionable.

INTERACTIVE SESSION: ORGANIZATIONS

SHOULD NETWORK NEUTRALITY CONTINUE?

What kind of Internet user are you? Do you primarily use the Net to do a little e-mail and look up phone numbers? Or are you online all day, watching YouTube videos, downloading music files, or playing massively multiplayer online games? If you’re the latter, you are consuming a great deal of bandwidth, and hundreds of millions of people like you might start to slow the Internet down. YouTube consumed as much bandwidth in 2007 as the entire Internet did in 2000. That’s one of the arguments being made today for charging Internet users based on the amount of transmission capacity they use.

In one November 2007 report, a research firm projected that user demand for the Internet could outpace network capacity by 2011. If this happens, the Internet might not come to a screeching halt, but users would be faced with sluggish download speeds and slow performance of YouTube, Facebook, and other data-heavy services. Other researchers believe that as digital traffic on the Internet grows, even at a rate of 50 percent per year, the technology for handling all this traffic is advancing at an equally rapid pace.

In addition to these technical issues, the debate about metering Internet use centers around the concept of network neutrality. Network neutrality is the idea that Internet service providers must allow customers equal access to content and applications, regardless of the source or nature of the content. Presently, the Internet is indeed neutral: all Internet traffic is treated equally on a first-come, first-served basis by Internet backbone owners. The Internet is neutral because it was built on phone lines, which are subject to ‘common carriage’ laws. These laws require phone companies to treat all calls and customers equally. They cannot offer extra benefits to customers willing to pay higher premiums for faster or clearer calls, a model known as tiered service.

Now telecommunications and cable companies want to be able to charge differentiated prices based on the amount of bandwidth consumed by content being delivered over the Internet. In June 2008, Time Warner Cable started testing metered pricing for its Internet access service in the city of Beaumont, Texas. Under the pilot program, Time Warner charged customers an additional $1 per month for each gigabyte of content they downloaded or sent over the bandwidth limit of their monthly plan. The company reported that 5 percent of its customers had been using half the capacity on its local lines without paying any more than low-usage customers, and that metered pricing was “the fairest way” to finance necessary investments in its network infrastructure.

This is not how Internet service has worked traditionally and contradicts the goals of network neutrality. Advocates of net neutrality are pushing Congress to regulate the industry, requiring network providers to refrain from these types of practices. The strange alliance of net neutrality advocates includes MoveOn.org, the Christian Coalition, the American Library Association, every major consumer group, many bloggers and small businesses, and some large Internet companies like Google and Amazon. Representative Ed Markey and Senators Byron Dorgan and Olympia Snowe have responded to these concerns by drafting the Internet Freedom Preservation Act and the Net Neutrality Act, which would ban discriminatory methods of managing Internet traffic. However, any legislation regarding net neutrality is considered unlikely to be passed quickly because of significant resistance by Internet service providers.

Internet service providers point to the upsurge in piracy of copyrighted materials over the Internet. Comcast, the second largest Internet service provider in the United States, reported that illegal file sharing of copyrighted material was consuming 50 percent of its network capacity. At one point Comcast slowed down transmission of BitTorrent files, used extensively for piracy and illegal sharing of copyrighted materials, including video. Comcast drew fierce criticism for its handling of BitTorrent packets, and later switched to a “platform-agnostic” approach. It currently slows down the connection of any customer who uses too much bandwidth during congested periods without singling out the specific services the customer is using. In controlling piracy and prioritizing bandwidth usage on the Internet, Comcast claims to be providing better service for its customers who are using the Web legally.

Net neutrality advocates argue that the risk of censorship increases when network operators can selectively block or slow access to certain content. There are already many examples of Internet providers restricting access to sensitive materials (such as anti-Bush comments from an online Pearl Jam concert, a text-messaging program from pro-choice group NARAL, or access to competitors like Vonage). Pakistan’s government blocked access to anti-Muslim sites and YouTube as a whole in response to content they deemed defamatory to Islam.

Proponents of net neutrality also argue that a neutral Internet encourages everyone to innovate without permission from the phone and cable companies or other authorities, and this level playing field has spawned countless new businesses. Allowing unrestricted information flow becomes essential to free markets and democracy as commerce and society increasingly move online.

Network owners believe regulation like the bills proposed by net neutrality advocates will impede U.S. competitiveness by stifling innovation and hurt customers who will benefit from ‘discriminatory’ network practices. U.S. Internet service lags behind many other nations in overall speed, cost, and quality of service, adding credibility to the providers’ arguments.

Network neutrality advocates counter that U.S. carriers already have too much power due to lack of options for service. Without sufficient competition, the carriers have more freedom to set prices and policies, and customers cannot seek recourse via other options. Carriers can discriminate in favor of their own content. Even broadband users in large metropolitan areas lack many options for service. With enough options for Internet access, net neutrality would not be such a pressing issue. Dissatisfied consumers could simply switch to providers who enforce net neutrality and allow unlimited Internet use.

The issue is a long way from resolution. Even notable Internet personalities disagree, such as the co-inventors of the Internet Protocol, Vint Cerf and Bob Kahn. Cerf favors net neutrality, saying that variable access to content would detract from the Internet’s continued ability to thrive (“allowing broadband carriers to control what people see and do online would fundamentally undermine the principles that have made the Internet such a success”). Kahn is more cautious, saying that net neutrality removes the incentive for network providers to innovate, provide new capabilities, and upgrade to new technology. Who’s right, who’s wrong? The debate continues.

Sources: Andy Dornan, “Is Your Network Neutral?” Information Week, May 18, 2008; Rob Preston, “Meter is Starting to Tick on Internet Access Pricing,” Information Week, June 9, 2008; Damian Kulash, Jr. “Beware of the New New Thing,” The New York Times, April 5, 2008; Steve Lohr, “Video Road Hogs Stir Fear of Internet Traffic Jam,” The New York Times, March 13, 2008; Peter Burrows, “The FCC, Comcast, and Net Neutrality,” Business Week, February 26, 2008; S. Derek Turner, “Give Net Neutrality a Chance,” Business Week, July 12, 2008; K.C. Jones, “Piracy Becomes Focus of Net Neutrality Debate,” Information Week, May 6, 2008; Jane Spencer, “How a System Error in Pakistan Shut YouTube,” The Wall Street Journal, February 26, 2008.

CASE STUDY QUESTIONS

1. What is network neutrality? Why has the Internet operated under net neutrality up to this point in time?

2. Who’s in favor of network neutrality? Who’s opposed? Why?

3. What would be the impact on individual users, businesses, and government if Internet providers switched to a tiered service model?

4. Are you in favor of legislation enforcing network neutrality? Why or why not?

MIS IN ACTION

1. Visit the Web site of the Open Internet Coalition and select five member organizations. Then visit the Web site of each of these organizations or surf the Web to find out more information about each. Write a short essay explaining why each organization is in favor of network neutrality.

2. Calculate how much bandwidth you consume when using the Internet every day. How many e-mails do you send daily and what is the size of each? (Your e-mail program may have e-mail file size information.) How many music and video clips do you download daily and what is the size of each? If you view YouTube often, surf the Web to find out the size of a typical YouTube file. Add up the number of e-mail, audio, and video files you transmit or receive on a typical day. (A sketch of such a tally appears below.)
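
A minimal Python sketch of the tally (every count and file size below is a placeholder to replace with your own measurements):

```python
# Placeholder daily usage figures; substitute your own counts and sizes (in MB).
email_mb = 20 * 0.05    # 20 e-mails at roughly 0.05 MB each
audio_mb = 5 * 4.0      # 5 music clips at roughly 4 MB each
video_mb = 3 * 10.0     # 3 short videos at roughly 10 MB each

total_mb = email_mb + audio_mb + video_mb
print(f"Approximate daily transfer: {total_mb:.1f} MB")
```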

The Future Internet: IPv6 and Internet2

The Internet was not originally designed to handle the transmission of massive quantities of data and billions of users. Because many corporations and governments have been given large blocks of millions of IP addresses to accommodate current and future workforces, and because of sheer Internet population growth, the world will run out of available IP addresses using the existing addressing convention by 2012 or 2013. Under development is a new version of the IP addressing schema called Internet Protocol version 6 (IPv6), which contains 128-bit addresses (2 to the power of 128, or about 3.4 × 10^38 possible unique addresses).
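
A one-line check of that count with Python’s standard library (the ipaddress module has shipped with Python since version 3.3):

```python
import ipaddress

print(2 ** 128)   # 340282366920938463463374607431768211456, about 3.4e38
print(ipaddress.ip_network("::/0").num_addresses)   # the same count, via stdlib
```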


Internet2 and Next-Generation Internet (NGI) are consortia representing 200 universities, private businesses, and government agencies in the United States that are working on a new, robust, high-bandwidth version of the Internet. They have established several new high-performance backbone networks with bandwidths ranging from 2.5 Gbps to 9.6 Gbps. Internet2 research groups are developing and implementing new technologies for more effective routing practices; different levels of service, depending on the type and importance of the data being transmitted; and advanced applications for distributed computation, virtual laboratories, digital libraries, distributed learning, and tele-immersion. These networks do not replace the public Internet, but they do provide test beds for leading-edge technology that may eventually migrate to the public Internet.

INTERNET SERVICES AND COMMUNICATION TOOLS

The Internet is based on client/server technology. Individuals using the Internet control what they do through client applications on their computers, such as Web browser software. The data, including e-mail messages and Web pages, are stored on servers. A client uses the Internet to request information from a particular Web server on a distant computer, and the server sends the requested information back to the client over the Internet. Chapters 5 and 6 describe how Web servers work with application servers and database servers to access information from an organization’s internal information systems applications and their associated databases. Client platforms today include not only PCs and other computers but also cell phones, small handheld digital devices, and other information appliances.

Internet Services

A client computer connecting to the Internet has access to a variety of services. These services include e-mail, electronic discussion groups, chatting and instant messaging, Telnet, File Transfer Protocol (FTP), and the World Wide Web. Table 7-3 provides a brief description of these services.

Each Internet service is implemented by one or more software programs. All of the services may run on a single server computer, or different services may be allocated to different machines. Figure 7-10 illustrates one way that these services can be arranged in a multitiered client/server architecture.

TABLE 7-3 MAJOR INTERNET SERVICES

CAPABILITY                        FUNCTIONS SUPPORTED
E-mail                            Person-to-person messaging; document sharing
Chatting and instant messaging    Interactive conversations
Newsgroups                        Discussion groups on electronic bulletin boards
Telnet                            Logging on to one computer system and doing work on another
File Transfer Protocol (FTP)      Transferring files from computer to computer
World Wide Web                    Retrieving, formatting, and displaying information (including text, audio, graphics, and video) using hypertext links

FIGURE 7-10 CLIENT/SERVER COMPUTING ON THE INTERNET

Client computers running Web browser and other software can access an array of services on servers over the Internet. These services may all run on a single server or on multiple specialized servers.


E-mail enables messages to be exchanged from computer to computer, with capabilities for routing messages to multiple recipients, forwarding messages, and attaching text documents or multimedia files to messages. Although some organizations operate their own internal electronic mail systems, most e-mail today is sent through the Internet. The cost of e-mail is far lower than equivalent voice, postal, or overnight delivery costs, making the Internet a very inexpensive and rapid communications medium. Most e-mail messages arrive anywhere in the world in a matter of seconds.
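
Internet e-mail is handed between servers using the Simple Mail Transfer Protocol (SMTP). A minimal sketch with Python’s standard smtplib; the server name, port, addresses, and credentials are all placeholders for whatever your mail provider actually uses.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"          # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Internet e-mail demo"
msg.set_content("Most e-mail today travels across the Internet in seconds.")

# 'mail.example.com' is a placeholder; substitute your provider's SMTP host.
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.starttls()                          # encrypt the session
    smtp.login("sender@example.com", "app-password")   # placeholder credentials
    smtp.send_message(msg)
```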

Nearly 90 percent of U.S. workplaces have employees communicating interactively using chat or instant messaging tools. Chatting enables two or more people who are simultaneously connected to the Internet to hold live, interactive conversations. Chat systems now support voice and video chat as well as written conversations. Many online retail businesses offer chat services on their Web sites to attract visitors, to encourage repeat purchases, and to improve customer service.


Instant messaging is a type of chat service that enables participants to create their own private chat channels. The instant messaging system alerts the user whenever someone on his or her private list is online so that the user can initiate a chat session with other individuals. Instant messaging systems for consumers include Yahoo! Messenger and AOL Instant Messenger. Companies concerned with security use proprietary instant messaging systems such as Lotus Sametime.

Newsgroups are worldwide discussion groups posted on Internet electronic bulletin boards on which people share information and ideas on a defined topic, such as radiology or rock bands. Anyone can post messages on these bulletin boards for others to read. Many thousands of groups exist that discuss almost all conceivable topics.

Employee use of e-mail, instant messaging, and the Internet is supposed to increase worker productivity, but the accompanying Interactive Session on Management shows that this may not always be the case. Many company managers now believe they need to monitor and even regulate their employees’ online activity. But is this ethical? Although there are some strong business reasons why companies may need to monitor their employees’ e-mail and Web activities, what does this mean for employee privacy?

Voice over IP

The Internet has also become a popular platform for voice transmission and corporate networking. Voice over IP (VoIP) technology delivers voice information in digital form using packet switching, avoiding the tolls charged by local and long-distance telephone networks (see Figure 7-11). Calls that would ordinarily be transmitted over public telephone networks instead travel over the corporate network based on the Internet Protocol, or over the public Internet. Voice calls can be made and received with a desktop computer equipped with a microphone and speakers or with a VoIP-enabled telephone.

FIGURE 7-11 HOW VOICE OVER IP WORKS

A VoIP phone call digitizes and breaks up a voice message into data packets that may travel along different routes before being reassembled at the final destination. A processor nearest the call’s destination, called a gateway, arranges the packets in the proper order and directs them to the telephone number of the receiver or the IP address of the receiving computer.
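
To make the packetizing and reassembly steps concrete, here is a minimal Python sketch of the idea shown in Figure 7-11. It is illustrative only, not a real VoIP stack: it splits a digitized voice message into numbered packets, shuffles them to mimic packets taking different routes, and reassembles them by sequence number as a gateway would.

```python
import random

def packetize(voice_bytes, size=4):
    # Split the digitized voice message into fixed-size packets,
    # each carrying a sequence number so order can be restored.
    return [(seq, voice_bytes[i:i + size])
            for seq, i in enumerate(range(0, len(voice_bytes), size))]

def reassemble(packets):
    # The gateway sorts packets by sequence number and concatenates
    # the payloads to rebuild the original voice stream.
    return b"".join(payload for _, payload in sorted(packets))

message = b"hello, this is a digitized voice sample"
packets = packetize(message)
random.shuffle(packets)  # packets may arrive out of order
assert reassemble(packets) == message
```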

INTERACTIVE SESSION: MANAGEMENT MONITORING EMPLOYEES ON NETWORKS: UNETHICAL OR GOOD BUSINESS?

As Internet use has exploded worldwide, so have the use of e-mail and the Web for personal business at the workplace. Several management problems have emerged: First, checking e-mail, responding to instant messages, or sneaking in a brief YouTube or MySpace video creates a series of nonstop interruptions that divert employee attention from the job tasks they are supposed to be performing. According to Basex, a New York City business research company, these distractions take up as much as 28 percent of the average U.S. worker’s day and result in $650 billion in lost productivity each year.

Second, these interruptions are not necessarily work-related. A number of studies have concluded that at least 25 percent of employee online time is spent on non-work-related Web surfing, and perhaps as many as 90 percent of employees receive or send personal e-mail at work.

Many companies have begun monitoring their employees’ use of e-mail, blogs, and the Internet, sometimes without the employees’ knowledge. A recent American Management Association (AMA) survey of 304 U.S. companies of all sizes found that 66 percent of these companies monitor employee e-mail messages and Web connections. Although U.S. companies have the legal right to monitor employee Internet and e-mail activity while employees are at work, is such monitoring unethical, or is it simply good business?

Managers worry about the loss of time and employee productivity when employees are focusing on personal rather than company business. Too much time on personal business, on the Internet or not, can mean lost revenue or overbilled clients. Some employees may even be billing clients for time spent trading personal stocks online or pursuing other personal business, thus overcharging those clients.

If personal traffic on company networks is too high, it can also clog the company’s network so that legitimate business work cannot be performed. Schemmer Associates, an architecture firm in Omaha, Nebraska, and Potomac Hospital in Woodbridge, Virginia, found that their computing resources were limited by a lack of bandwidth caused by employees using corporate Internet connections to watch and download video files.

When employees use e-mail or the Web at employer facilities or with employer equipment, anything they do, including anything illegal, carries the company’s name. Therefore, the employer can be traced and held liable. Managers in many firms fear that racist, sexually explicit, or other potentially offensive material accessed or traded by their employees could result in adverse publicity and even lawsuits for the firm. Even if the company is found not to be liable, responding to lawsuits could cost the company tens of thousands of dollars.

Companies also fear leakage of confidential information and trade secrets through e-mail or blogs. Ajax Boiler, based in Santa Ana, California, learned that one of its senior managers was able to access the network of a former employer and read the e-mail of that company’s human resources manager. The Ajax employee was trying to gather information for a lawsuit against the former employer.

Companies that allow employees to use personal e-mail accounts at work face legal and regulatory trouble if they do not retain those messages. E-mail today is an important source of evidence for lawsuits, and companies are now required to retain all of their e-mail messages for longer periods than in the past. Courts do not discriminate about whether e-mails involved in lawsuits were sent via personal or business e-mail accounts. Not producing those e-mails could result in a five- to six-figure fine.

U.S. companies have the legal right to monitor what employees are doing with company equipment during business hours. The question is whether electronic surveillance is an appropriate tool for maintaining an efficient and positive workplace. Some companies try to ban all personal activities on corporate networks—zero tolerance. Others block employee access to specific Web sites or limit personal time on the Web using software that enables IT departments to track the Web sites employees visit, the amount of time employees spend at these sites, and the files they download. Ajax uses software from SpectorSoft Corporation that records all the Web sites employees visit, time spent at each site, and all e-mails sent. Schemmer Associates uses OpenDNS to categorize and filter Web content and block unwanted video.

Some firms have fired employees who have stepped out of bounds. One-third of the companies surveyed in the AMA study had fired workers for misusing the Internet on the job. Among managers who fired employees for Internet misuse, 64 percent did so because the employees’ e-mail contained inappropriate or offensive language, and more than 25 percent fired workers for excessive personal use of e-mail.

No solution is problem free, but many consultants believe companies should write corporate policies on employee e-mail and Internet use. The policies should include explicit ground rules that state, by position or level, under what circumstances employees can use company facilities for e-mail, blogging, or Web surfing. The policies should also inform employees whether these activities are monitored and explain why.

The rules should be tailored to specific business needs and organizational cultures. For example, although some companies may exclude all employees from visiting sites that have explicit sexual material, law firm or hospital employees may require access to these sites. Investment firms will need to allow many of their employees access to other investment sites. A company dependent on widespread information sharing, innovation, and independence could very well find that monitoring creates more problems than it solves.

Sources: Nancy Gohring, “Over 50 Percent of Companies Fire Workers for E-Mail, Net Abuse,” InfoWorld, February 28, 2008; Bobby White, “The New Workplace Rules: No Video-Watching,” The Wall Street Journal, March 4, 2008; Maggie Jackson, “May We Have Your Attention, Please?” Business Week, June 23, 2008; Katherine Wegert, “Workers Can Breach Security Knowingly Or Not,” Dow Jones News Service, June 24, 2007; Andrew Blackman, “Foul Sents,” The Wall Street Journal, March 26, 2007.

CASE STUDY QUESTIONS

1.
Should managers monitor employee e-mail and Internet usage? Why or why not?

2.
Describe an effective e-mail and Web use policy for a company.

MIS IN ACTION

Explore the Web site of an employee monitoring software vendor such as SpectorSoft or SpyTech NetVizor and answer the following questions.

1.
What employee activities does this software track? What can an employer learn about an employee by using this software?

2.
How can businesses benefit from using this software?

3.
How would you feel if your employer used this software where you work to monitor what you are doing on the job? Explain your response.

Telecommunications service providers (such as Verizon) and cable firms (such as Time Warner and Cablevision) provide VoIP services. Skype, acquired by eBay, offers free VoIP worldwide using a peer-to-peer network, and Google has its own free VoIP service.

Although there are up-front investments required for an IP phone system, VoIP can reduce communication and network management costs by 20 to 30 percent. For example, VoIP saves Virgin Entertainment Group $700,000 per year in long-distance bills. In addition to lowering long-distance costs and eliminating monthly fees for private lines, an IP network provides a single voice-data infrastructure for both telecommunications and computing services. Companies no longer have to maintain separate networks or provide support services and personnel for each different type of network.

Another advantage of VoIP is its flexibility. Unlike the traditional telephone network, an IP network allows phones to be added or moved to different offices without rewiring or reconfiguring the network. With VoIP, a conference call is arranged by a simple click-and-drag operation on the computer screen to select the names of the conferees. Voice mail and e-mail can be combined into a single directory.

Unified Communications

In the past, each of the firm’s networks for wired and wireless data, voice communications, and videoconferencing operated independently of each other and had to be managed separately by the information systems department. Now, however, firms are able to merge disparate communications modes into a single universally accessible service using unified communications technology. As the chapter-opening case on Virgin Megastores points out, unified communications integrates disparate channels for voice communications, data communications, instant messaging, e-mail, and electronic conferencing into a single experience where users can seamlessly switch back and forth between different communication modes. Presence technology shows whether a person is available to receive a call. Companies will need to examine how work flows and business processes will be altered by this technology in order to gauge its value.

Virtual Private Networks

What if you had a marketing group charged with developing new products and services for your firm, with members spread across the United States? You would want the group to be able to e-mail each other and communicate with the home office without any chance that outsiders could intercept the communications. In the past, one answer to this problem was to work with large private networking firms that offered secure, private, dedicated networks to customers. But this was an expensive solution. A much less-expensive solution is to create a virtual private network within the public Internet.

A virtual private network (VPN) is a secure, encrypted, private network that has been configured within a public network to take advantage of the economies of scale and management facilities of large networks, such as the Internet (see Figure 7-12). A VPN provides your firm with secure, encrypted communications at a much lower cost than the same capabilities offered by traditional non-Internet providers who use their private networks to secure communications. VPNs also provide a network infrastructure for combining voice and data networks.

Several competing protocols are used to protect data transmitted over the public Internet, including Point-to-Point Tunneling Protocol (PPTP). In a process called tunneling, packets of data are encrypted and wrapped inside IP packets. By adding this wrapper around a network message to hide its content, business firms create a private connection that travels through the public Internet.
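
A toy Python sketch of the tunneling idea follows. The XOR "cipher" is only a reversible stand-in for the real encryption a protocol such as PPTP provides, and the packet layout is invented for illustration; it is not an actual IP format.

```python
KEY = 0x5A  # toy key; real VPNs negotiate strong cryptographic keys

def xor_cipher(data: bytes) -> bytes:
    # Stand-in for real encryption: reversible but NOT secure.
    return bytes(b ^ KEY for b in data)

def tunnel_wrap(inner_message: bytes, gateway: str) -> bytes:
    # Encrypt the private message, then wrap it in an "outer packet"
    # addressed to the VPN gateway; routers see only the wrapper.
    header = f"IP dst={gateway} proto=TUNNEL ".encode()
    return header + xor_cipher(inner_message)

def tunnel_unwrap(packet: bytes) -> bytes:
    # The receiving gateway strips the wrapper and decrypts.
    _, _, payload = packet.partition(b"proto=TUNNEL ")
    return xor_cipher(payload)

msg = b"quarterly marketing plan - confidential"
wire = tunnel_wrap(msg, "vpn.example.com")
assert tunnel_unwrap(wire) == msg
```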

THE WORLD WIDE WEB

You’ve probably used the World Wide Web to download music, to find information for a term paper, or to obtain news and weather reports. The Web is the most popular Internet service. It’s a system with universally accepted standards for storing, retrieving, formatting, and displaying information using a client/server architecture. Web pages are formatted using hypertext with embedded links that connect documents to one another and that also link pages to other objects, such as sound, video, or animation files. When you click a graphic and a video clip plays, you have clicked a hyperlink. A typical Web site is a collection of Web pages linked to a home page.

FIGURE 7-12 A VIRTUAL PRIVATE NETWORK USING THE INTERNET

This VPN is a private network of computers linked using a secure “tunnel” connection over the Internet. It protects data transmitted over the public Internet by encoding the data and “wrapping” them within the Internet Protocol (IP). By adding a wrapper around a network message to hide its content, organizations can create a private connection that travels through the public Internet.

Hypertext

Web pages are based on a standard Hypertext Markup Language (HTML), which formats documents and incorporates dynamic links to other documents and pictures stored in the same or remote computers (see Chapter 5). Web pages are accessible through the Internet because Web browser software operating your computer can request Web pages stored on an Internet host server using the Hypertext Transfer Protocol (HTTP). HTTP is the communications standard used to transfer pages on the Web. For example, when you type a Web address in your browser, such as www.sec.gov, your browser sends an HTTP request to the sec.gov server requesting the home page of sec.gov.
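
The same request a browser makes can be issued in a few lines of Python using the standard library's http.client module. This is a sketch: the site must be reachable, and it may answer with a redirect to its secure (HTTPS) address rather than the page itself.

```python
import http.client

# Connect to the Web server and request its home page, just as a
# browser does when you type www.sec.gov into the address bar.
conn = http.client.HTTPConnection("www.sec.gov")
conn.request("GET", "/")                 # the HTTP request
response = conn.getresponse()
print(response.status, response.reason)  # e.g., 200 OK or a redirect
print(response.read()[:200])             # first bytes of the response
conn.close()
```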

HTTP is the first set of letters at the start of every Web address, followed by the domain name, which specifies the organization’s server computer that is storing the document. Most companies have a domain name that is the same as or closely related to their official corporate name. The directory path and document name are two more pieces of information within the Web address that help the browser track down the requested page. Together, these elements make up a uniform resource locator (URL). When typed into a browser, a URL tells the browser software exactly where to look for the information. For example, in the URL http://www.megacorp.com/content/features/082602.html, http names the protocol used to display Web pages, www.megacorp.com is the domain name, content/features is the directory path that identifies where on the domain Web server the page is stored, and 082602.html is the document name, with its extension indicating that it is an HTML page.
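
Python's standard urllib.parse module splits a URL into exactly these pieces, which makes the anatomy easy to verify (the megacorp address is the fictitious example above):

```python
from urllib.parse import urlparse

parts = urlparse("http://www.megacorp.com/content/features/082602.html")
print(parts.scheme)  # 'http'             -> the protocol
print(parts.netloc)  # 'www.megacorp.com' -> the domain name
print(parts.path)    # '/content/features/082602.html'
                     #    -> directory path plus document name
```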

Web Servers

A Web server is software for locating and managing stored Web pages. It locates the Web pages requested by a user on the computer where they are stored and delivers the Web pages to the user’s computer. Server applications usually run on dedicated computers, although they can all reside on a single computer in small organizations.

The most common Web server in use today is Apache HTTP Server, which controls 60 percent of the market. Apache is an open source product that is free of charge and can be downloaded from the Web. Microsoft’s Internet Information Services (IIS) is the second most commonly used Web server, with a 40-percent market share.
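
A few lines of Python's built-in http.server module are enough to stand up a toy Web server that locates files in the current directory and delivers them to browsers; this is a sketch of the locate-and-deliver job that Apache or IIS perform at far greater scale.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler maps each requested URL path to a file in
# the current directory and sends it back: locate, then deliver.
server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving on http://localhost:8000 ...")
server.serve_forever()  # press Ctrl+C to stop
```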

Searching for Information on the Web

No one knows for sure how many Web pages there really are. The surface Web is the part of the Web that search engines visit and about which information is recorded. For instance, Google visited about 50 billion pages in 2008, although publicly it acknowledges indexing more than 25 billion. But there is a “deep Web” that contains an estimated 800 billion additional pages, many of them proprietary (such as the pages of The Wall Street Journal Online, which cannot be visited without an access code) or stored in protected corporate databases.

Search Engines Obviously, with so many Web pages, finding specific Web pages that can help you or your business, nearly instantly, is an important problem. The question is, how can you find the one or two pages you really want and need out of billions of indexed Web pages? Search engines attempt to solve the problem of finding useful information on the Web nearly instantly, and, arguably, they are the “killer app” of the Internet era. Today’s search engines can sift through HTML files, files of Microsoft Office applications, and PDF files, with developing capabilities for searching audio, video, and image files. There are hundreds of different search engines in the world, but the vast majority of search results are supplied by three top providers: Google, Yahoo!, and Microsoft.

Web search engines started out in the early 1990s as relatively simple software programs that roamed the nascent Web, visiting pages and gathering information about the content of each page. The first search engines were simple keyword indexes of all the pages they visited, leaving users with lists of pages that may not have been truly relevant to their searches.

In 1994, Stanford University computer science students David Filo and Jerry Yang created a hand-selected list of their favorite Web pages and called it “Yet Another Hierarchical Officious Oracle,” or Yahoo!. Yahoo! was not initially a search engine but rather an edited selection of Web sites organized by categories the editors found useful, but it has since developed its own search engine capabilities.

In 1998, Larry Page and Sergey Brin, two other Stanford computer science students, released their first version of Google. This search engine was different: Not only did it index each Web page’s words but it also ranked search results based on the relevance of each page. Page patented the idea of a page ranking system (PageRank System), which essentially measures the popularity of a Web page by calculating the number of sites that link to that page. Brin contributed a unique Web crawler program that indexed not only keywords on a page but also combinations of words (such as authors and the titles of their articles). These two ideas became the foundation for the Google search engine. Figure 7-13 illustrates how Google works.
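
A minimal Python sketch of the page ranking idea: each page's score is repeatedly redistributed along its outgoing links until the scores settle into a popularity measure. The four-page link graph is invented for illustration, and Google's production algorithm adds many refinements.

```python
# Hypothetical link graph: page -> pages it links to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85  # chance a surfer follows a link vs. jumping randomly

for _ in range(50):  # iterate until the scores converge
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        for target in outlinks:
            new_rank[target] += damping * rank[page] / len(outlinks)
    rank = new_rank

# C, the most linked-to page, ends up with the highest score.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```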

Web sites for locating information such as Yahoo!, Google, and MSN have become so popular and easy to use that they also serve as major portals for the Internet (see Chapter 10). Their search engines have become major shopping tools by offering what is now called search engine marketing. When users enter a search term at Google, MSN, Yahoo!, or any of the other sites serviced by these search engines, they receive two types of listings: sponsored links, for which advertisers have paid to be listed (usually at the top of the search results page), and unsponsored “organic” search results. In addition, advertisers can purchase tiny text boxes on the side of the Google and MSN search results page. The paid, sponsored advertisements are the fastest-growing form of Internet advertising and are powerful new marketing tools that precisely match consumer interests with advertising messages at the right moment (see the chapter-ending case study). Search engine marketing monetizes the value of the search process.

FIGURE 7-13 HOW GOOGLE WORKS

The Google search engine is continuously crawling the Web, indexing the content of each page, calculating its popularity, and storing the pages so that it can respond quickly to user requests to see a page. The entire process takes about one-half second.

In 2008, 71 million people each day in the United States alone used a search engine, producing over 10 billion searches a month. There are hundreds of search engines but the top three (Google, Yahoo!, and MSN) account for 90 percent of all searches (see Figure 7-14).

Although search engines were originally built to search text documents, the explosion in online video and images has created a demand for search engines that can quickly find specific videos. The words “dance,” “love,” “music,” and “girl” are all exceedingly popular in titles of YouTube videos, and searching on these keywords produces a flood of responses even though the actual contents of the video may have nothing to do with the search term. Searching videos is challenging because computers are not very good or quick at recognizing digital images. Some search engines have started indexing movie scripts so that it will be possible to search on dialogue to find a movie. One of the most popular video search engines is Blinkx.com, which stores 18 million hours of video and employs a large group of human classifiers who check the contents of uploaded videos against their titles.

FIGURE 7-14 TOP U.S. WEB SEARCH ENGINES

Google is the most popular search engine on the Web, handling 60 percent of all Web searches.

Sources: Based on data from Nielsen Online and MegaView Search, 2008.

Intelligent Agent Shopping Bots Chapter 11 describes the capabilities of software agents with built-in intelligence that can gather or filter information and perform other tasks to assist users. Shopping bots use intelligent agent software for searching the Internet for shopping information. Shopping bots such as MySimon or Froogle can help people interested in making a purchase filter and retrieve information about products of interest, evaluate competing products according to criteria the users have established, and negotiate with vendors for price and delivery terms. Many of these shopping agents search the Web for pricing and availability of products specified by the user and return a list of sites that sell the item along with pricing information and a purchase link.

Web 2.0

If you’ve shared photos over the Internet at Flickr or another photo site, blogged, looked up a word on Wikipedia, or contributed information yourself, you’ve used services that are part of Web 2.0. Today’s Web sites don’t just contain static content—they enable people to collaborate, share information, and create new services online. Web 2.0 refers to these second-generation interactive Internet-based services.

The technologies and services that distinguish Web 2.0 include cloud computing, software mashups and widgets, blogs, RSS, and wikis. Mashups and widgets, which we introduced in Chapter 5, are software services that enable users and system developers to mix and match content or software components to create something entirely new. For example, Yahoo’s photo storage and sharing site Flickr combines photos with other information about the images provided by users and tools to make it usable within other programming environments.

These software applications run on the Web itself instead of the desktop and bring the vision of Web-based computing closer to realization. With Web 2.0, the Web is not just a collection of destination sites, but a source of data and services that can be combined to create applications users need. Web 2.0 tools and services have fueled the creation of social networks and other online communities where people can interact with one another in the manner of their choosing.

A blog, the popular term for a Weblog, is an informal yet structured Web site where subscribing individuals can publish stories, opinions, and links to other Web sites of interest. Blogs have become popular personal publishing tools, but they also have business uses (see Chapters 10 and 11). For example, Wells Fargo uses blogs to help executives communicate with employees and customers. One of these blogs is dedicated to student loans.

If you’re an avid blog reader, you might use RSS to keep up with your favorite blogs without constantly checking them for updates. RSS, which stands for Rich Site Summary or Really Simple Syndication, syndicates Web site content so that it can be used in another setting. RSS technology pulls specified content from Web sites and feeds it automatically to users’ computers, where it can be stored for later viewing.

To receive an RSS information feed, you need to install aggregator or news reader software that can be downloaded from the Web. (Microsoft Internet Explorer 7 includes RSS reading capabilities.) Alternatively, you can establish an account with an aggregator Web site. You tell the aggregator to collect all updates from a given Web page, or list of pages, or gather information on a given subject by conducting Web searches at regular intervals. Once subscribed, you automatically receive new content as it is posted to the specified Web site. A number of businesses use RSS internally to distribute updated corporate information. Wells Fargo uses RSS to deliver news feeds that employees can customize to see the business news of greatest relevance to their jobs.
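
Because an RSS feed is just structured XML, a bare-bones aggregator step can be sketched with Python's standard library. The feed address below is a placeholder, and real feeds vary in dialect, which dedicated feed-parsing libraries handle more robustly.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/blog/rss.xml"  # placeholder feed address

# Pull the feed, then list each entry's title and link -- the same
# "collect updates" step an aggregator runs on a schedule.
with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

for item in tree.iter("item"):  # RSS 2.0 wraps each entry in <item>
    print(item.findtext("title"), "->", item.findtext("link"))
```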

Blogs allow visitors to add comments to the original content, but they do not allow visitors to change the original posted material. Wikis, in contrast, are collaborative Web sites where visitors can add, delete, or modify content on the site, including the work of previous authors. Wiki comes from the Hawaiian word for “quick.” Probably the best-known wiki site is Wikipedia, the massive online open-source encyclopedia to which anyone can contribute. But wikis are also used for business. For example, Motorola sales representatives use wikis for sharing sales information. Instead of developing a different pitch for every client, reps reuse the information posted on the wiki.

Web 3.0: The Future Web

Every day about 75 million Americans enter 330 million queries to search engines. How many of these 330 million queries produce a meaningful result (a useful answer in the first three listings)? Arguably, fewer than half. Google, Yahoo!, Microsoft, and Amazon are all trying to increase the odds of people finding meaningful answers to search engine queries. But with over 50 billion Web pages indexed, the means available for finding the information you really want are quite primitive, based on the words used on the pages, and the relative popularity of the page among people who use those same search terms. In other words, it’s hit and miss.

To a large extent, the future of the Web involves developing techniques to make searching the 50 billion Web pages more productive and meaningful for ordinary people. Web 1.0 solved the problem of obtaining access to information. Web 2.0 solved the problem of sharing that information with others, and building new Web experiences. Web 3.0 is the promise of a future Web where all this digital information, all these contacts, can be woven together into a single meaningful experience.

Sometimes this is referred to as the Semantic Web. “Semantic” refers to meaning. Most of the Web’s content today is designed for humans to read and for computers to display, not for computer programs to analyze and manipulate. Search engines can discover when a particular term or keyword appears in a Web document, but they do not really understand its meaning or how it relates to other information on the Web. You can check this out on Google by entering two searches. First, enter “Paris Hilton”. Next, enter “Hilton in Paris”. Because Google does not understand ordinary English, it has no idea that you are interested in the Hilton Hotel in Paris in the second search. Because it cannot understand the meaning of pages it has indexed, Google’s search engine returns the most popular pages for those queries where ‘Hilton’ and ‘Paris’ appear on the pages.
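
You can mimic that keyword-only behavior in a few lines of Python. A naive engine that merely checks which words occur on a page cannot tell the two queries apart; the two toy documents are invented for illustration.

```python
docs = {
    "celebrity-news": "paris hilton attends a premiere",
    "hotel-review": "the hilton in paris has great views",
}

def keyword_match(query, docs):
    # A keyword engine returns every page containing all query words,
    # ignoring word order and meaning entirely.
    words = set(query.lower().split()) - {"in"}  # drop a stop word
    return [name for name, text in docs.items()
            if words <= set(text.split())]

print(keyword_match("Paris Hilton", docs))     # both documents match
print(keyword_match("Hilton in Paris", docs))  # same result: no semantics
```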

First described in a 2001 Scientific American article, the Semantic Web is a collaborative effort led by the World Wide Web Consortium to add a layer of meaning atop the existing Web to reduce the amount of human involvement in searching for and processing Web information (Berners-Lee et al., 2001).

Views on the future of the Web vary, but they generally focus on ways to make the Web more “intelligent,” with machine-facilitated understanding of information promoting a more intuitive and effective user experience. For instance, let’s say you want to set up a party with your tennis buddies at a local restaurant Friday night after work. One problem is that you had earlier scheduled to go to a movie with another friend. In a Semantic Web 3.0 environment, you would be able to coordinate this change in plans with the schedules of your tennis buddies, the schedule of your movie friend, and make a reservation at the restaurant all with a single set of commands issued as text or voice to your handheld smartphone. Right now, this capability is beyond our grasp.

Work proceeds slowly on making the Web a more intelligent experience, in large part because it is difficult to make machines, including software programs, that are truly intelligent like humans. But there are other views of the future Web. Some see a 3D Web where you can walk through pages in a 3D environment. Others point to the idea of a pervasive Web that controls everything from the lights in your living room, to your car’s rear view mirror, not to mention managing your calendar and appointments.

Other complementary trends leading toward a future Web 3.0 include more widespread use of cloud computing and SaaS business models, ubiquitous connectivity among mobile platforms and Internet access devices, and the transformation of the Web from a network of separate siloed applications and content into a more seamless and interoperable whole. These more modest visions of the future Web 3.0 are more likely to be realized in the near term.

INTRANETS AND EXTRANETS

Organizations use Internet networking standards and Web technology to create private networks called intranets. We introduced intranets in Chapter 1, explaining that an intranet is an internal organizational network that provides access to data across the enterprise. It uses the existing company network infrastructure along with Internet connectivity standards and software developed for the World Wide Web. Intranets create networked applications that can run on many different kinds of computers throughout the organization, including mobile handheld computers and wireless remote access devices.

Whereas the Web is available to anyone, an intranet is private and is protected from public visits by firewalls—security systems with specialized software to prevent outsiders from entering private networks. Intranet software technology is the same as that of the World Wide Web. A simple intranet can be created by linking a client computer with a Web browser to a computer with Web server software using a TCP/IP network and a firewall.

Extranets

A firm creates an extranet to allow authorized vendors and customers to have limited access to its internal intranet. For example, authorized buyers could link to a portion of a company’s intranet from the public Internet to obtain information about the costs and features of the company’s products. The company uses firewalls to ensure that access to its internal data is limited and remains secure; firewalls also authenticate users, making sure that only authorized users access the site.

Both intranets and extranets reduce operational costs by providing the connectivity to coordinate disparate business processes within the firm and to link electronically to customers and suppliers. Extranets often are employed for collaborating with other companies for supply chain management, product design and development, and training efforts.

7.4 The Wireless Revolution

If you have a cell phone, do you use it for taking and sending photos, sending text messages, or downloading music clips? Do you take your laptop to class or to the library to link up to the Internet? If so, you’re part of the wireless revolution! Cell phones, laptops, and small handheld devices have morphed into portable computing platforms that let you perform some of the computing tasks you used to do at your desk.

Wireless communication helps businesses more easily stay in touch with customers, suppliers, and employees and provides more flexible arrangements for organizing work. Wireless technology has also created new products, services, and sales channels, which we discuss in Chapter 10.

If you require mobile communication and computing power or remote access to corporate systems, you can work with an array of wireless devices: cell phones, personal digital assistants, and smartphones. Personal computers are also starting to be used in wireless transmission.


Personal digital assistants (PDAs) are small, handheld computers featuring applications such as electronic schedulers, address books, memo pads, and expense trackers. Models with digital cell phone capabilities such as e-mail messaging, wireless access to the Internet, voice communication, and digital cameras are called smartphones.

CELLULAR SYSTEMS

Cell phones and smartphones have become all-purpose devices for digital data transmission. In addition to voice communication, mobile phones are now used for transmitting text and e-mail messages, instant messaging, digital photos, and short video clips; for playing music and games; for surfing the Web; and even for transmitting and receiving corporate data. For example, Aflac, the giant insurance company, has an application that delivers information on policy servicing questions, the status of claims payments, and customers’ existing or past policies to the smartphones of its entire field force (Sacco, 2008).

Within a few years, a new generation of mobile processors and faster mobile networks will enable these devices to function as digital computing platforms performing many of the tasks of today’s PCs. Smartphones will have the storage and processing power of a PC and be able to run all of your key applications and access all of your digital content.

Cellular Network Standards and Generations

Digital cellular service uses several competing standards. In Europe and much of the rest of the world outside the United States, the standard is Global System for Mobile Communications (GSM). GSM’s strength is its international roaming capability. There are GSM cell phone systems in the United States, including T-Mobile and AT&T.

The major standard in the United States is Code Division Multiple Access (CDMA), which is the system used by Verizon and Sprint. CDMA was developed by the military during World War II. It transmits over several frequencies, occupies the entire spectrum, and randomly assigns users to a range of frequencies over time. In general, CDMA is cheaper to implement, is more efficient in its use of spectrum, and provides higher quality throughput of voice and data than GSM.

Earlier generations of cellular systems were designed primarily for voice and limited data transmission in the form of short text messages. Wireless carriers are now rolling out more powerful cellular networks called third-generation or 3G networks, with transmission speeds ranging from 144 Kbps for mobile users in, say, a car, to more than 2 Mbps for stationary users. This is sufficient transmission capacity for video, graphics, and other rich media, in addition to voice, making 3G networks suitable for wireless broadband Internet access. Many of the cellular handsets available today are 3G-enabled, including the newest version of Apple’s iPhone.

3G networks are widely used in Japan, South Korea, Taiwan, Hong Kong, Singapore, and parts of northern Europe, but such services are not yet available in many U.S. locations. To compensate, U.S. cellular carriers have upgraded their networks to support higher-speed transmission. These interim 2.5G networks provide data transmission rates ranging from 60 to 354 Kbps, enabling cell phones to be used for Web access, music downloads, and other broadband services. AT&T’s EDGE network used by the first-generation iPhone is an example. PCs equipped with a special card can use these broadband cellular services for ubiquitous wireless Internet access.

The next complete evolution in wireless communication, termed 4G, will be entirely packet-switched and capable of providing between 1 Mbps and 1 Gbps speeds, with premium quality and high security. Voice, data, and high-quality streaming video will be available to users anywhere, anytime. International telecommunications regulatory and standardization bodies are working for commercial deployment of 4G networks between 2012 and 2015.

WIRELESS COMPUTER NETWORKS AND INTERNET ACCESS

If you have a laptop computer, you might be able to use it to access the Internet as you move from room to room in your dorm, or table to table in your university library. An array of technologies provide high-speed wireless access to the Internet for PCs and other wireless handheld devices as well as for cell phones. These new high-speed services have extended Internet access to numerous locations that could not be covered by traditional wired Internet services.

Bluetooth

Bluetooth is the popular name for the 802.15 wireless networking standard, which is useful for creating small personal-area networks (PANs). It links up to eight devices within a 10-meter area using low-power, radio-based communication and can transmit up to 722 Kbps in the 2.4-GHz band.

Wireless phones, pagers, computers, printers, and computing devices using Bluetooth communicate with each other and even operate each other without direct user intervention (see Figure 7-15). For example, a person could direct a notebook computer to send a document file wirelessly to a printer. Bluetooth connects wireless keyboards and mice to PCs or cell phones to earpieces without wires. Bluetooth has low-power requirements, making it appropriate for battery-powered handheld computers, cell phones, or PDAs.

Although Bluetooth lends itself to personal networking, it has uses in large corporations. For example, FedEx drivers use Bluetooth to transmit the delivery data captured by their handheld PowerPad computers to cellular transmitters, which forward the data to corporate computers. Drivers no longer need to spend time docking their handheld units physically in the transmitters, and Bluetooth has saved FedEx $20 million per year.


FIGURE 7-15 A BLUETOOTH NETWORK (PAN)

Bluetooth enables a variety of devices, including cell phones, PDAs, wireless keyboards and mice, PCs, and printers, to interact wirelessly with each other within a small 30-foot (10-meter) area. In addition to the links shown, Bluetooth can be used to network similar devices to send data from one PC to another, for example.

Wi-Fi

The 802.11 set of standards for wireless LANs is also known as Wi-Fi. There are three standards in this family: 802.11a, 802.11b, and 802.11g. 802.11n is an emerging standard for increasing the speed and capacity of wireless networking.

The 802.11a standard can transmit up to 54 Mbps in the unlicensed 5-GHz frequency range and has an effective distance of 10 to 30 meters. The 802.11b standard can transmit up to 11 Mbps in the unlicensed 2.4-GHz band and has an effective distance of 30 to 50 meters, although this range can be extended outdoors by using tower-mounted antennas. The 802.11g standard can transmit up to 54 Mbps in the 2.4-GHz range. 802.11n will transmit at more than 100 Mbps.

802.11b was the first wireless standard to be widely adopted for wireless LANs and wireless Internet access. 802.11g is increasingly used for this purpose, and dual-band systems capable of handling 802.11b and 802.11g are available.

In most Wi-Fi communications, wireless devices communicate with a wired LAN using access points. An access point is a box consisting of a radio receiver/transmitter and antennas that links to a wired network, router, or hub.

Figure 7-16 illustrates an 802.11 wireless LAN operating in infrastructure mode that connects a small number of mobile devices to a larger wired LAN. Most wireless devices are client machines. The servers that the mobile client stations need to use are on the wired LAN. The access point controls the wireless stations and acts as a bridge between the main wired LAN and the wireless LAN. (A bridge connects two LANs based on different technologies.)

Laptop PCs now come equipped with chips to receive Wi-Fi signals. Older models may need an add-in wireless network interface card.

FIGURE 7-16 AN 802.11 WIRELESS LAN

Mobile laptop computers equipped with network interface cards link to the wired LAN by communicating with the access point. The access point uses radio waves to transmit network signals from the wired network to the client adapters, which convert them into data that the mobile device can understand. The client adapter then transmits the data from the mobile device back to the access point, which forwards the data to the wired network.

Wi-Fi and Wireless Internet Access

The 802.11 standard also provides wireless access to the Internet using a broadband connection. In this instance, an access point plugs into an Internet connection, which could come from a cable TV line or DSL telephone service. Computers within range of the access point use it to link wirelessly to the Internet.

Businesses of all sizes are using Wi-Fi networks to provide low-cost wireless LANs and Internet access. Wi-Fi hotspots are springing up in hotels, airport lounges, libraries, cafes, and college campuses to provide mobile access to the Internet. Dartmouth College is one of many campuses where students now use Wi-Fi for research, course work, and entertainment.


Hotspots typically consist of one or more access points positioned on a ceiling, wall, or other strategic spot in a public place to provide maximum wireless coverage for a specific area. Users in range of a hotspot are able to access the Internet from laptops, handhelds, or cell phones that are Wi-Fi enabled, such as Apple’s iPhone. Some hotspots are free or do not require any additional software to use; others may require activation and the establishment of a user account by providing a credit card number over the Web.

Wi-Fi technology poses several challenges, however. Right now, users cannot freely roam from hotspot to hotspot if these hotspots use different Wi-Fi network services. Unless the service is free, users would need to log on to separate accounts for each service, each with its own fees.

One major drawback of Wi-Fi is its weak security features, which make these wireless networks vulnerable to intruders. We provide more detail about Wi-Fi security issues in Chapter 8.

Another drawback of Wi-Fi networks is susceptibility to interference from nearby systems operating in the same spectrum, such as wireless phones, microwave ovens, or other wireless LANs. Wireless networks based on the 802.11n specification will solve this problem by using multiple wireless antennas in tandem to transmit and receive data and technology to coordinate multiple simultaneous radio signals. This technology is called MIMO (multiple input multiple output).

WiMax

A surprisingly large number of areas in the United States and throughout the world do not have access to Wi-Fi or fixed broadband connectivity. The range of Wi-Fi systems is no more than 300 feet from the base station, making it difficult for rural groups that don’t have cable or DSL service to find wireless access to the Internet.

The IEEE developed a new family of standards known as WiMax to deal with these problems. WiMax, which stands for Worldwide Interoperability for Microwave Access, is the popular term for IEEE Standard 802.16, known as the “Air Interface for Fixed Broadband Wireless Access Systems.” WiMax has a wireless access range of up to 31 miles, compared to 300 feet for Wi-Fi and 30 feet for Bluetooth, and a data transfer rate of up to 75 Mbps. The 802.16 specification has robust security and quality-of-service features to support voice and video.

WiMax antennas are powerful enough to beam high-speed Internet connections to rooftop antennas of homes and businesses that are miles away. Sprint Nextel is building a national WiMax network to support video, video calling, and other data-intensive wireless services, and Intel makes special chips that facilitate WiMax access from mobile computers.

RFID AND WIRELESS SENSOR NETWORKS

Mobile technologies are creating new efficiencies and ways of working throughout the enterprise. In addition to the wireless systems we have just described, radio frequency identification systems and wireless sensor networks are having a major impact.

Radio Frequency Identification (RFID)


Radio frequency identification (RFID) systems provide a powerful technology for tracking the movement of goods throughout the supply chain. RFID systems use tiny tags with embedded microchips containing data about an item and its location to transmit radio signals over a short distance to RFID readers. The RFID readers then pass the data over a network to a computer for processing. Unlike bar codes, RFID tags do not need line-of-sight contact to be read.

The RFID tag is electronically programmed with information that can uniquely identify an item plus other information about the item, such as its location, where and when it was made, or its status during production. Embedded in the tag is a microchip for storing the data. The rest of the tag is an antenna that transmits data to the reader.

The reader unit consists of an antenna and radio transmitter with a decoding capability attached to a stationary or handheld device. The reader emits radio waves in ranges anywhere from 1 inch to 100 feet, depending on its power output, the radio frequency employed, and surrounding environmental conditions. When an RFID tag comes within the range of the reader, the tag is activated and starts sending data. The reader captures these data, decodes them, and sends them back over a wired or wireless network to a host computer for further processing (see Figure 7-17). Both RFID tags and antennas come in a variety of shapes and sizes.

FIGURE 7-17 HOW RFID WORKS

RFID uses low-powered radio transmitters to read data stored in a tag at distances ranging from 1 inch to 100 feet. The reader captures the data from the tag and sends them over a network to a host computer for processing.

Active RFID tags are powered by an internal battery and typically enable data to be rewritten and modified. Active tags can transmit for hundreds of feet but cost $5 and upward per tag. Automated toll-collection systems such as New York’s E-ZPass use active RFID tags.

Passive RFID tags do not have their own power source and obtain their operating power from the radio frequency energy transmitted by the RFID reader. They are smaller, lighter, and less expensive than active tags, but only have a range of several feet.

In inventory control and supply chain management, RFID systems capture and manage more detailed information about items in warehouses or in production than bar coding systems. If a large number of items are shipped together, RFID systems track each pallet, lot, or even unit item in the shipment. This technology may help companies such as Wal-Mart improve receiving and storage operations by improving their ability to “see” exactly what stock is stored in warehouses or on retail store shelves.

Wal-Mart has installed RFID readers at store receiving docks to record the arrival of pallets and cases of goods shipped with RFID tags. The RFID reader reads the tags a second time just as the cases are brought onto the sales floor from backroom storage areas. Software combines sales data from Wal-Mart’s point-of-sale systems and the RFID data regarding the number of cases brought out to the sales floor. The program determines which items will soon be depleted and automatically generates a list of items to pick in the warehouse to replenish store shelves before they run out. This information helps Wal-Mart reduce out-of-stock items, increase sales, and further shrink its costs.
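
A simplified Python sketch of that replenishment logic, with invented item names and thresholds: shelf stock is estimated from RFID case counts minus point-of-sale sales, and items falling below a reorder point go on the pick list.

```python
# Hypothetical data: cases brought to the sales floor (from RFID reads)
# and units sold (from point-of-sale systems), per item.
UNITS_PER_CASE = 12
cases_on_floor = {"towels": 4, "soap": 2, "shampoo": 5}
units_sold = {"towels": 40, "soap": 20, "shampoo": 10}
REORDER_POINT = 10  # replenish when estimated shelf stock drops below this

pick_list = []
for item, cases in cases_on_floor.items():
    on_shelf = cases * UNITS_PER_CASE - units_sold.get(item, 0)
    if on_shelf < REORDER_POINT:
        pick_list.append(item)

print(pick_list)  # ['towels', 'soap'] -> send to the warehouse for picking
```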

The cost of RFID tags used to be too high for widespread use, but now it is approaching 10 cents per passive tag in the United States. As the price decreases, RFID is starting to become cost-effective for some applications.

In addition to installing RFID readers and tagging systems, companies may need to upgrade their hardware and software to process the massive amounts of data produced by RFID systems—transactions that could add up to tens or hundreds of terabytes.

Special software is required to filter, aggregate, and prevent RFID data from overloading business networks and system applications. Applications will need to be redesigned to accept massive volumes of frequently generated RFID data and to share those data with other applications. Major enterprise software vendors, including SAP and Oracle-PeopleSoft, now offer RFID-ready versions of their supply chain management applications.
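
One common filtering step is deduplicating the flood of repeated reads a reader emits while a tagged case sits in range. A minimal Python sketch with made-up tag IDs keeps only the first read of each tag within a time window before the data are passed on to business applications.

```python
# Raw reads: (timestamp in seconds, tag ID). A reader reports the same
# tag many times per second while the tag remains in range.
raw_reads = [(0.0, "TAG1"), (0.2, "TAG1"), (0.4, "TAG2"),
             (0.5, "TAG1"), (9.0, "TAG1")]
WINDOW = 5.0  # suppress repeats of a tag seen within this many seconds

last_seen = {}
filtered = []
for ts, tag in raw_reads:
    if tag not in last_seen or ts - last_seen[tag] >= WINDOW:
        filtered.append((ts, tag))  # a genuinely new observation
    last_seen[tag] = ts             # refresh the suppression window

print(filtered)  # [(0.0, 'TAG1'), (0.4, 'TAG2'), (9.0, 'TAG1')]
```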

Wireless Sensor Networks

If your company wanted state-of-the-art technology to monitor building security or detect hazardous substances in the air, it might deploy a wireless sensor network. Wireless sensor networks (WSNs) are networks of interconnected wireless devices that are embedded into the physical environment to provide measurements of many points over large spaces. These devices have built-in processing, storage, and radio frequency sensors and antennas. They are linked into an interconnected network that routes the data they capture to a computer for analysis.

These networks range from hundreds to thousands of nodes. Because wireless sensor devices are placed in the field for years at a time without any maintenance or human intervention, they must have very low power requirements and batteries capable of lasting for years.

Figure 7-18 illustrates one type of wireless sensor network, with data from individual nodes flowing across the network to a server with greater processing power. The server acts as a gateway to a network based on Internet technology.

FIGURE 7-18 A WIRELESS SENSOR NETWORK

The small circles represent lower-level nodes and the larger circles represent high-end nodes. Lower-level nodes forward data to each other or to higher-level nodes, which transmit data more rapidly and speed up network performance.

Wireless sensor networks are valuable in areas such as monitoring environmental changes; monitoring traffic or military activity; protecting property; efficiently operating and managing machinery and vehicles; establishing security perimeters; monitoring supply chain management; or detecting chemical, biological, or radiological material.

7.5 Hands-on MIS Projects

The projects in this section give you hands-on experience evaluating and selecting communications technology, using spreadsheet software to improve selection of telecommunications services, and using Web search engines for business research.

Management Decision Problems

1.
Your company supplies ceramic floor tiles to Home Depot, Lowe’s, and other home improvement stores. You have been asked to start using radio frequency identification tags on each case of the tiles you ship to help your customers improve the management of your products and those of other suppliers in their warehouses. Use the Web to identify the cost of hardware, software, and networking components for an RFID system for your company. What factors should be considered? What are the key decisions that have to be made in determining whether your firm should adopt this technology?

2.
BestMed Medical Supplies Corporation sells medical and surgical products and equipment from over 700 different manufacturers to hospitals, health clinics, and medical offices. The company employs 500 people at seven different locations in western and midwestern states, including account managers, customer service and support representatives, and warehouse staff. Employees communicate via traditional telephone voice services, e-mail, instant messaging, and cell phones. Management is inquiring about whether the company should adopt a system for unified communications. What factors should be considered? What are the key decisions that have to be made in determining whether to adopt this technology? Use the Web, if necessary, to find out more about unified communications and its costs.

Improving Decision Making: Using Spreadsheet Software to Evaluate Wireless Services

Software skills: Spreadsheet formulas, formatting

Business skills: Analyzing telecommunications services and costs

In this project, you’ll use the Web to research alternative wireless services and use spreadsheet software to calculate wireless service costs for a sales force.

You would like to equip your sales force of 35, based in Cincinnati, Ohio, with mobile phones that have capabilities for voice transmission, text messaging, and taking and sending photos. Use the Web to select a wireless service provider that provides nationwide service as well as good service in your home area. Examine the features of the mobile handsets offered by each of these vendors. Assume that each of the 35 salespeople will need to spend three hours per day during business hours (8 a.m. to 6 p.m.) on mobile voice communication, send 30 text messages per day, and send five photos per week. Use your spreadsheet software to determine the wireless service and handset that will offer the best pricing per user over a two-year period. For the purposes of this exercise, you do not need to consider corporate discounts.
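
Before building the spreadsheet, the two-year total-cost formula can be prototyped in a few lines of Python; the carriers and prices below are invented placeholders to be replaced with real pricing researched on the Web.

```python
SALESPEOPLE = 35
MONTHS = 24
texts_per_month = 30 * 22  # 30 texts/day over ~22 business days
photos_per_month = 5 * 4   # 5 photos/week over ~4 weeks

# Hypothetical plans: monthly fee plus per-text and per-photo charges.
plans = {
    "Carrier A": {"monthly": 79.99, "per_text": 0.00, "per_photo": 0.25},
    "Carrier B": {"monthly": 69.99, "per_text": 0.05, "per_photo": 0.00},
}

for name, p in plans.items():
    per_user_month = (p["monthly"]
                      + texts_per_month * p["per_text"]
                      + photos_per_month * p["per_photo"])
    total = per_user_month * SALESPEOPLE * MONTHS
    print(f"{name}: ${total:,.2f} for the sales force over two years")
```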

Achieving Operational Excellence: Using Web Search Engines for Business Research

Software skills: Web search tools

Business skills: Researching new technologies

This project will help develop your Internet skills in using Web search engines for business research.

You want to learn more about ethanol as an alternative fuel for motor vehicles. Use the following search engines to obtain that information: Yahoo!, Google, and MSN. If you wish, try some other search engines as well. Compare the volume and quality of information you find with each search tool. Which tool is the easiest to use? Which produced the best results for your research? Why?

Learning Track Modules

The following Learning Tracks provide content relevant to topics covered in this chapter:

1. Computing and Communications Services Provided by Commercial Communications Vendors

2. Broadband Network Services and Technologies

3. Cellular System Generations

4. Wireless Applications for Customer Relationship Management, Supply Chain Management, and Healthcare

5. Web 2.0

Review Summary

1.

What are the principal components of telecommunications networks and key networking technologies?

A simple network consists of two or more connected computers. Basic network components include computers, network interfaces, a connection medium, network operating system software, and either a hub or a switch. The networking infrastructure for a large company includes the traditional telephone system, mobile cellular communication, wireless local-area networks, videoconferencing systems, a corporate Web site, intranets, extranets, and an array of local and wide-area networks, including the Internet.

Contemporary networks have been shaped by the rise of client/server computing, the use of packet switching, and the adoption of Transmission Control Protocol/Internet Protocol (TCP/IP) as a universal communications standard for linking disparate networks and computers, including the Internet. Protocols provide a common set of rules that enable communication among diverse components in a telecommunications network.

2.

What are the main telecommunications transmission media and types of networks?

The principal physical transmission media are twisted copper telephone wire, coaxial copper cable, fiber-optic cable, and wireless transmission. Twisted wire enables companies to use existing wiring for telephone systems for digital communication, although it is relatively slow. Fiber-optic and coaxial cable are used for high-volume transmission but are expensive to install. Microwave and communications satellites are used for wireless communication over long distances.

Local-area networks (LANs) connect PCs and other digital devices together within a 500-meter radius and are used today for many corporate computing tasks. Network components may be connected together using a star, bus, or ring topology. Wide-area networks (WANs) span broad geographical distances, ranging from several miles to continents, and are private networks that are independently managed. Metropolitan-area networks (MANs) span a single urban area.

Digital subscriber line (DSL) technologies, cable Internet connections, and T1 lines are often used for high-capacity Internet connections.

Cable Internet connections provide high-speed access to the Web or corporate intranets at speeds of up to 10 Mbps. A T1 line supports a data transmission rate of 1.544 Mbps.
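
For a rough sense of what these rates mean in practice, the sketch below computes nominal transfer times for a 100-megabyte file; the DSL figure is an assumed typical rate (the summary quotes no single DSL speed), and real-world throughput is lower than nominal speeds.

```python
# Nominal time to move a 100 MB file over each link type.
FILE_MEGABITS = 100 * 8  # 100 megabytes = 800 megabits

links_mbps = {
    "T1 line (1.544 Mbps)": 1.544,
    "DSL (assumed 3 Mbps)": 3.0,
    "Cable (up to 10 Mbps)": 10.0,
}

for name, mbps in links_mbps.items():
    minutes = FILE_MEGABITS / mbps / 60
    print(f"{name}: ~{minutes:.1f} minutes")
```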

3. How do the Internet and Internet technology work and how do they support communication and e-business?

The Internet is a worldwide network of networks that uses the client/server model of computing and the TCP/IP network reference model. Every computer on the Internet is assigned a unique numeric IP address. The Domain Name System (DNS) converts IP addresses to more user-friendly domain names. Worldwide Internet policies are established by organizations and government bodies, such as the Internet Architecture Board and the World Wide Web Consortium.
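
A minimal illustration of what the DNS does, using Python's standard socket library (the domain is a placeholder, and the address returned depends on your resolver):

```python
# Resolve a user-friendly domain name to its numeric IP address via DNS.
import socket

domain = "www.example.com"
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")
```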

Major Internet services include e-mail, newsgroups, chatting, instant messaging, Telnet, FTP, and the World Wide Web. Web pages are based on Hypertext Markup Language (HTML) and can display text, graphics, video, and audio. Web site directories, search engines, and RSS technology help users locate the information they need on the Web. RSS, blogs, and wikis are features of Web 2.0. Web technology and Internet networking standards provide the connectivity and interfaces for internal private intranets and private extranets that can be accessed by many different kinds of computers inside and outside the organization.

Firms are also starting to realize economies by using Internet VoIP technology for voice transmission and by using virtual private networks (VPNs) as low-cost alternatives to private WANs.

4. What are the principal technologies and standards for wireless networking, communications, and Internet access?

Cellular networks are evolving toward high-speed, high-bandwidth, digital packet-switched transmission. Broadband 3G networks are capable of transmitting data at speeds ranging from 144 Kbps to more than 2 Mbps. However, 3G services are still not available in most U.S. locations, so U.S. cellular carriers have upgraded their networks to support higher-speed transmission. These interim 2.5G networks provide data transmission rates ranging from 60 to 354 Kbps, enabling cell phones to be used for Web access, music downloads, and other broadband services.

Major cellular standards include Code Division Multiple Access (CDMA), which is used primarily in the United States, and Global System for Mobile Communications (GSM), which is the standard in Europe and much of the rest of the world.

Standards for wireless computer networks include Bluetooth (802.15) for small personal-area networks (PANs), Wi-Fi (802.11) for local-area networks (LANs), and WiMax (802.16) for metropolitan-area networks (MANs).


5. Why are radio frequency identification (RFID) and wireless sensor networks valuable for business?

Radio frequency identification (RFID) systems provide a powerful technology for tracking the movement of goods by using tiny tags with embedded data about an item and its location. RFID readers read the radio signals transmitted by these tags and pass the data over a network to a computer for processing. Wireless sensor networks (WSNs) are networks of interconnected wireless sensing and transmitting devices that are embedded into the physical environment to provide measurements of many points over large spaces.
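
As a toy sketch of the kind of processing described here, the snippet below turns a list of invented tag reads into a per-item movement history; a real deployment would receive these events over the network from RFID readers rather than from a hard-coded list.

```python
# Toy RFID event processing: build a movement history per tag ID.
from collections import defaultdict

# Each read is (tag ID, reader location), as reported by an RFID reader.
reads = [
    ("TAG-0001", "loading-dock"),
    ("TAG-0002", "loading-dock"),
    ("TAG-0001", "warehouse-aisle-7"),
    ("TAG-0001", "shipping-bay"),
]

history = defaultdict(list)
for tag, location in reads:
    history[tag].append(location)

for tag, path in history.items():
    print(f"{tag}: {' -> '.join(path)} (last seen: {path[-1]})")
```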

Key Terms


3G networks, Bandwidth, Blog, Bluetooth, Broadband, Bus topology, Cable Internet connections, Cell phone, Chat, Coaxial cable, Digital subscriber line (DSL), Domain name, Domain Name System (DNS), E-mail, Fiber-optic cable, File Transfer Protocol (FTP), Firewalls, Hertz, Hotspots, Hubs, Hypertext Transfer Protocol (HTTP), Instant messaging, Internet Protocol (IP) address, Internet service provider (ISP), Internet2, Local-area network (LAN), Metropolitan-area network (MAN), Microwave, Modem, Network interface card (NIC), Network operating system (NOS), Packet switching, Peer-to-peer, Personal-area networks (PANs), Personal digital assistants (PDAs), Protocol, Radio frequency identification (RFID), Ring topology, Router, RSS, Search engines, Search engine marketing, Semantic Web, Shopping bots, Smartphones, Star topology, Switch, T1 lines, Telnet, Topology, Transmission Control Protocol/Internet Protocol (TCP/IP), Twisted wire, Unified communications, Uniform resource locator (URL), Virtual private network (VPN), Voice over IP (VoIP), Web 2.0, Web 3.0, Web site, Wide-area networks (WANs), Wi-Fi, Wiki, WiMax, Wireless sensor networks (WSNs)

Review Questions

1. What are the principal components of telecommunications networks and key networking technologies?

• Describe the features of a simple network and the network infrastructure for a large company.

• Name and describe the principal technologies and trends that have shaped contemporary telecommunications systems.

2. What are the main telecommunications transmission media and types of networks?

• Name the different types of physical transmission media and compare them in terms of speed and cost.

• Define a LAN, and describe its components and the functions of each component.

• Name and describe the principal network topologies.

3. How do the Internet and Internet technology work and how do they support communication and e-business?

• Define the Internet, describe how it works, and explain how it provides business value.

• Explain how the Domain Name System (DNS) and IP addressing system work.

• List and describe the principal Internet services.

• Define and describe VoIP and virtual private networks, and explain how they provide value to businesses.

• List and describe alternative ways of locating information on the Web.

• Compare Web 2.0 and Web 3.0.

• Define and explain the difference between intranets and extranets. Explain how they provide value to businesses.

4. What are the principal technologies and standards for wireless networking, communications, and Internet access?

• Define Bluetooth, Wi-Fi, WiMax, and 3G networks.

• Describe the capabilities of each and for which types of applications each is best suited.

5. Why are RFID and wireless sensor networks (WSNs) valuable for business?

• Define RFID, explain how it works and how it provides value to businesses.

• Define WSNs, explain how they work, and describe the kinds of applications that use them.

Discussion Questions

1. It has been said that within the next few years, smartphones will become the single most important digital device we own. Discuss the implications of this statement.

2. Should all major retailing and manufacturing companies switch to RFID? Why or why not?

Video Cases

You will find video cases illustrating some of the concepts in this chapter on the Laudon Web site along with questions to help you analyze the cases.

Collaboration and Teamwork: Evaluating Smartphones

Form a group with three or four of your classmates. Compare the capabilities of Apple’s iPhone with a smartphone handset from another vendor with similar features. Your analysis should consider the purchase cost of each device, the wireless networks where each device can operate, service plan and handset costs, and the services available for each device. You should also consider other capabilities of each device, including the ability to integrate with existing corporate or PC applications. Which device would you select? What criteria would you use to guide your selection? If possible, use Google Sites to post links to Web pages, team communication announcements, and work assignments; to brainstorm; and to work collaboratively on project documents. Try to use Google Docs to develop a presentation of your findings for the class.

Case Study: Google Versus Microsoft: Clash of the Technology Titans

Google and Microsoft, two of the most prominent technology companies to arise in the past several decades, are poised to square off for dominance of the workplace, the Internet, and the technological world. In fact, the battle is already well underway. Both companies have already achieved dominance in their areas of expertise. Google has dominated the Internet, while Microsoft has dominated the desktop. But both are increasingly seeking to grow into the other’s core businesses. The competition between the companies promises to be fierce.

The differences in the strategies and business models of the two companies illustrate why this conflict will shape our technological future. Google began as one search company among many. But the effectiveness of its PageRank search algorithm and online advertising services, along with its ability to attract the best and brightest minds in the industry, have helped Google become one of the most prominent companies in the world. The company’s extensive infrastructure allows it to offer the fastest search speeds and a variety of Web-based products.

Microsoft grew to its giant stature on the strength of its Windows operating system and Office desktop productivity applications, which are used by 500 million people worldwide. Sometimes vilified for its anti-competitive practices, the company and its products are nevertheless staples for businesses and consumers looking to improve their productivity with computer-based tasks.

Today, the two companies have very different visions for the future, influenced by the continued development of the Internet and increased availability of broadband Internet connections. Google believes that the maturation of the Internet will allow more and more computing tasks to be performed via the Web, on computers sitting in data centers rather than on your desktop. This idea is known as cloud computing, and it is central to Google’s business model going forward. Microsoft, on the other hand, has built its success around the model of desktop computing. Microsoft’s goal is to embrace the Internet while persuading consumers to retain the desktop as the focal point for computing tasks.

Only a small handful of companies have the cash flow and manpower to manage and maintain a cloud, and Google and Microsoft are among them. With a vast array of Internet-based products and tools for online search, online advertising, digital mapping, digital photo management, digital radio broadcasting, and online video viewing, Google has pioneered cloud computing. It is obviously banking that Internet-based computing will supplant desktop computing as the way most people work with their computers. Users would use various connectivity devices to access applications from remote servers stored in data centers, as opposed to working locally from their machine.

One advantage to the cloud computing model is that users would not be tied to a particular machine to access information or do work. Another is that Google would be responsible for most of the maintenance of the data centers that house these applications. But the disadvantages of the model are the requirement of an Internet connection to use the applications, as well as the security concerns surrounding Google’s handling of your information. Google is banking on the increasing ubiquity of the Internet and availability of broadband and Wi-Fi connections to offset these drawbacks.

Microsoft already has several significant advantages that will help it remain relevant even if cloud computing is as good as Google advertises. The company has a well-established and popular set of applications that many consumers and businesses feel comfortable using. When Microsoft launches a new product, users of Office products and Windows can be sure that they will know how to use the product and that it will work with their system.

And Google itself claims that it isn’t out to supplant Microsoft, but rather provide products and services that will be used in tandem with Microsoft applications. Dave Girouard, president of Google’s Enterprise division, says that “people are just using both [Google products and Office] and they use what makes sense for a particular task.”

But cloud computing nevertheless represents a threat to Microsoft's core business model, which revolves around the desktop as the center for all computing tasks. If, rather than buying software from Microsoft, consumers can instead buy access to applications stored on remote servers at a much lower cost, the desktop suddenly no longer occupies that central position. In the past, Microsoft used the popularity of its Windows operating system (found on 95 percent of the world's personal computers) and Office to destroy competing products such as Netscape Navigator, Lotus 1-2-3, and WordPerfect. But Google's offerings are Web-based, and thus not reliant on Windows or Office. Google believes that the vast majority of computing tasks, around 90 percent, can be done in the cloud. Microsoft disputes this claim, calling it grossly overstated.

Microsoft clearly wants to bolster its Internet presence in the event that Google is correct. Its recent attempt to acquire the Internet portal Yahoo! indicates this desire. No other company would give Microsoft more Internet search market share than Yahoo!. Google controls over 60 percent of the Internet search market, with Yahoo! a distant second at just over 20 percent and Microsoft third at under 10 percent. While a combined Microsoft-Yahoo! would still trail Google by a wide margin, the merger would at least increase the possibility of dethroning Google. Microsoft's initial buyout attempts were met with heavy resistance from Yahoo!.

With its attempted acquisition of Yahoo!, Microsoft wanted not only to bolster its Internet presence but also to end the threat of an advertising deal between Google and Yahoo!. In June 2008, those chances diminished further due to a partnership under which Yahoo! will outsource a portion of its advertising to Google. Google plans to deliver some of its ads alongside some of the less profitable areas of Yahoo!'s search, since Google's technology is far more sophisticated and generates more revenue per search than any competitor's. Yahoo! recently introduced a comprehensive severance package that critics dismissed as a 'poison pill' intended to make the company less appealing as an acquisition target for Microsoft. In response to this and other moves he considered incompetent, billionaire investor Carl Icahn has built up a large stake in Yahoo! and has agitated for a change in its leadership and a reopening of negotiations with Microsoft, but the Google-Yahoo! advertising deal casts doubt on whether Microsoft can actually pull off a buyout.

With or without Yahoo!, Microsoft's online presence will need a great deal of improvement. Its online services division's performance has worsened while Google's has improved: the division lost $732 million in 2007 and was on track for an even worse year in 2008, while Google earned $4.2 billion in profits over the same 2007 span.

Microsoft’s goals are to “innovate and disrupt in search, win in display ads, and reinvent portal and social media experiences.” Its pursuit of Yahoo! suggests skepticism even on Microsoft’s own part that the company can do all of this on its own. Developing scale internally is far more difficult than simply buying it outright. In attempting to grow into this new area, Microsoft faces considerable challenges. The industry changes too quickly for one company to be dominant for very long, and Microsoft has had difficulty sustaining its growth rates since the Internet’s inception. Even well-managed companies encounter difficulties when faced with disruptive new technologies, and Microsoft may be no exception.

Google faces difficulties of its own in its attempts to encroach on Microsoft's turf. The centerpiece of its efforts is the Google Apps suite, a series of Web-based applications that includes Gmail, instant messaging, a calendar, word processing, presentation, and spreadsheet applications (Google Docs), and tools for creating collaborative Web sites. These applications are simpler versions of Microsoft Office applications; Google offers basic versions for free and 'Premier' editions for a fraction of Office's price. Subscribing to the Premier edition of Google Apps costs $50 per person per year, as opposed to approximately $500 per person per year for Microsoft Office.

Google believes that most Office users don't need the advanced features of Word, Excel, and other Office applications and would have a great deal to gain by switching to Google Apps. Small businesses, for example, might prefer cheaper, simpler versions of word processing, spreadsheet, and electronic presentation applications because they don't require the complex features of Microsoft Office. Microsoft disputes this, saying that Office is the result of many years and dollars of research into what consumers want and that consumers are very satisfied with its products. Many businesses agree, saying that they are reluctant to move away from Office because it is the 'safe choice'. These firms are often concerned that, with their data not stored on-site, they may be in violation of laws such as Sarbanes-Oxley, which requires companies to maintain their data and report it to the government upon request.

Microsoft is also offering more software features and Web-based services to bolster its online presence. These include SharePoint, a Web-based collaboration and document management platform, and Microsoft Office Live, which provides Web-based services for e-mail, project management, and organizing information, as well as online extensions to Office.

The battle between Google and Microsoft isn't just being waged in the area of office productivity tools. The two companies are trading blows in a multitude of other fields, including Web browsers, Web maps, online video, cell phone software, and online health recordkeeping tools. Salesforce.com (see the Interactive Session in Chapter 5) represents the site of another conflict between the two giants. Microsoft has attempted to move in on the software-as-a-service model popularized by Salesforce.com, offering a competing CRM product for a fraction of the cost. Google has gone the opposite route, partnering with Salesforce to integrate Salesforce's CRM applications with Google Apps and creating a new sales channel to market Google Apps to businesses that have already adopted Salesforce CRM software.

Additionally, both companies are attempting to open themselves up as platforms to developers. Google has already launched its Google App Engine, which allows outside programmers to develop and launch their own applications for minimal cost. In a move that represented a drastic change from its previous policy, Microsoft announced that it would reveal many key details of its software that it had previously kept secret, so programmers will have an easier time building services that work with Microsoft programs. Microsoft's secrecy once helped it control the marketplace by forcing other companies to use Windows rather than develop alternatives, but if it can't do the same to Google Apps, it makes sense to try a different approach to attract developers.

Time will tell whether or not Microsoft is able to fend off Google’s challenge to its dominance in the tech industry. Many other prominent companies have fallen victim to paradigm shifts, such as mainframes to personal computers, traditional print media to Internet distribution, and, if Google has its way, personal computers to cloud computing.


Sources:
Clint Boulton, “Microsoft Marks the Spot,” eWeek, May 5, 2008; Andy Kessler, “The War for the Web,” The Wall Street Journal, May 6, 2008; John Pallatto and Clint Boulton, “An On-Demand Partnership” and Clint Boulton, “Google Apps Go to School,” eWeek, April 21, 2008; Miguel Helft, “Ad Accord for Yahoo! and Google,” The New York Times, June 13, 2008 and “Google and Salesforce Join to Fight Microsoft,” The New York Times, April 14, 2008; Clint Boulton, “Google Tucks Jotspot into Apps,” eWeek, March 3, 2008; Robert A. Guth, Ben Worthen, and Charles Forelle, “Microsoft to Allow Software Secrets on Internet,” The Wall Street Journal, February 22, 2008; J. Nicholas Hoover, “Microsoft-Yahoo! Combo Would Involve Overlap—and Choices,” Information Week, February 18, 2008; Steve Lohr, “Yahoo! Offer is Strategy Shift for Microsoft,” The New York Times, February 2, 2008; and John Markoff, “Competing as Software Goes to Web,” The New York Times, June 5, 2007.

CASE STUDY QUESTIONS

1. Define and compare the business strategies and business models of Google and Microsoft.

2. Has the Internet taken over the PC desktop as the center of the action? Why or why not?

3. Why did Microsoft attempt to acquire Yahoo!? How did it affect its business model? Do you believe this was a good move?

4. What is the significance of Google Apps to Google’s future success?

5. Would you use Google Apps instead of Microsoft Office applications for computing tasks? Why or why not?

6. Which company and business model do you believe will prevail in this epic struggle? Justify your answer.

(Laudon 247)

Laudon, Kenneth C., and Jane P. Laudon. Management Information Systems: Managing the Digital Firm, 11th Edition (VitalSource eBook for EDMC). Pearson Learning Solutions.

Futurists and innovators can teach each other lessons to help their ideas succeed

Innovators and futurists ought to have a symbiotic relationship. Often, they do not.

The futurist aims to help us understand how trends and events will shape the future, so that we can chart our business and policy courses to bring us to a future that most appeals to us. The innovator, on the other hand, aims to realize a possible future by getting ideas (i.e., possibilities for the future) adopted as practice in our communities.

Many would-be innovators ask in frustration, Why do my own good ideas often go by the wayside and other people’s bad ideas get adopted? Why do I invest enormous time and resources to systematically generate new ideas, only to see much of my effort go to waste? Leaders in all fields fret and fume over these questions. They want to improve their innovation success rates.

Increasing success and reducing wasted effort on the path to innovation are very important goals. Many people believe innovation is the key to economic development, technological progress, competitiveness, and business survival. Policies that enhance a nation’s ability to be innovative are constantly in public discussion and are hot topics among politicians and business leaders. Futurists collaborating with innovators can contribute to these goals.

I have been investigating these questions for many years and have learned many things that I wish I knew when I was younger. Based on these investigations, my colleague, Robert Dunham, and I wrote a book, The Innovator’s Way (MIT Press, 2010, innovators-way.com). I will share here some excerpts from the book as a guide for innovators — and futurists — who are trying to get their ideas adopted.


The Work of Futurists

Most futurists see their mission as investigating how social, economic, and technological developments will shape the future. Futurists help others understand and respond to the coming changes. They also help apply anticipatory thinking to issues facing education, business, and government. They do this by a variety of methods, of which these three are the most common:

1. Revelation of current realities. Sometimes the prevailing common-sense interpretation of what is happening and how it will shape the future is not well grounded. It is a belief, but is not supported by data and observation. Futurists examine the data and propose new, well-grounded interpretations. They then examine how policy and action might change to align with the reality.

Peter Drucker was a master at this. His book The New Realities (Harper-Business, 1989) is loaded with examples. My favorite was his chapter “When the Russian Empire Is Gone,” in which he analyzed economic data, conversations of politicians and the media, and moods of Russian citizens to conclude that the Soviet Union would soon fall. The collapse occurred within a year of when the book was published, much sooner than he expected.

2. Extrapolation of trends. When a trend can be detected in some measure of performance, futurists can calculate future values of that measure and draw conclusions about the consequences. In 1965, Gordon Moore noticed a trend in computer chips: Every year, the transistor count doubled for about the same price (“Cramming More Components into Integrated Circuits,” in Electronics Magazine 38, April 1965). Many people started using Moore’s law to gauge whether the computing power available in a few years would support their new technology offerings. Though not a law of nature, it became a guiding principle that has sustained the computer chip industry for nearly 50 years.
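
A quick way to feel the force of that observation is to compound the doubling, as in the sketch below; the 64-transistor 1965 baseline is illustrative only, and the one-year doubling period is taken from Moore's original paper (he later revised it to roughly two years, which is why the one-year rate overshoots so dramatically).

```python
# Compound Moore's original one-year doubling from an illustrative baseline.
base_year, base_count = 1965, 64

for year in (1975, 1985, 1995):
    doublings = year - base_year  # one doubling per year, per the 1965 paper
    print(f"{year}: ~{base_count * 2**doublings:,} transistors")
```

Even under these coarse assumptions, the compounding shows why planners could count on computing power that did not yet exist when they designed their offerings.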

In The Age of Spiritual Machines (Viking, 1999), Ray Kurzweil claimed that the same trend was evident in four previous generations of information technologies and would be present in technologies that supersede silicon. Based on that, he extrapolated 50 years into the future to predict a “singularity” around 2030, when he believes artificial brains will become intelligent.

In A Vision for 2012 (Fulcrum, 2008), John L. Petersen noticed deep trends in economic data that would lead to crushing public debt, unsustainable government programs, rising food prices, rising fuel prices, and social unrest. Many of his predictions have been borne out.

On the other side, in The Social Life of Information (Harvard Business, 2000), John Seely Brown and Paul Duguid warn against overconfidence in trend extrapolation, because social systems often resist and redirect changes in technology. They exposed a series of major predictions that never came to pass and that contributed to the dot-com bust of 2002.

3. Scenarios. A scenario is a story that lays out in some detail what the future might look like under certain assumptions about trends and other factors. Futurists usually offer several scenarios under different assumptions. The method is useful to help people see how they might react to different futures, and then try to influence policies and trends so that the most attractive futures come to be. Futurists do not offer scenarios as predictions. They often evaluate the probabilities of the various futures they lay out.

Let’s take a look at the work of innovators for overlaps. Before we do that, we need to have a good definition of innovation.


In Search of the Meaning of Innovation

Innovation is one of the most studied subjects of all time, but there is considerable disagreement about what innovation is. The most common notions are that innovation is a mysterious talent, a disposition of some people’s DNA, a process that can be controlled by savvy managers, or a flash of genius. Less common notions about innovation involve adoption, diffusion, and new behaviors. Thus, the recommendations of different authors about how to achieve innovation lead in conflicting directions.

There is agreement that success of an innovation means adoption. However, successes are few and precious. Business surveys reveal that only about 4% of innovation initiatives meet their financial objectives. Patent office statistics show that only about 0.2% of patents make a return on the inventor’s investment. The National Research Council reported in 1986 that the U.S. government’s track record of promoting innovation through university research is not as good as is commonly believed: Fewer than 25% of innovations can be connected to published research ideas.

It appears that we collectively share a misunderstanding of innovation and therefore experience great difficulty in achieving it. No wonder our methods are ineffective.

The low success rate of innovation initiatives is often explained as an inevitable consequence of the uncertainty of the marketplace. We are often asked to rejoice that the prevailing 4% success rate is so high. If low success is certain, a company’s best strategy is to “take many shots on goal.” However, this strategy is available to only a few companies that can afford to let 96% of their research and development go to waste. For the rest of us, achieving innovation looks like a crapshoot.

In The Innovator’s Way, Bob Dunham and I concluded that the notions based on idea generation led to the fewest successes, whereas the notions based on adoption led to the most successes. Since we were interested in success and in the innovator skills that generate it, we used the second notion as our definition: Innovation is adoption of new practice in a community. There are three key words in this definition:

1. Community. The set of people who change. The community can be small, such as a family; medium, such as a business’s customers; or large, such as a nation or the world.

2. Practice. Habits, routines, and processes that people embody. Embody means that people engage with the practice skillfully and without conscious thought. The ability to perform is not the same as applying a mental concept.

3. Adoption. The members of the community make a commitment to learn and embody a new practice. They will make such a commitment only if they see sufficient value in the new practice and are willing to sacrifice the previous practice to get it.

Notice that this definition covers many types of innovation. The Internet is a set of technologies that support new practices, including browsing, searching, online shopping, social networking, blogging, and texting. Mothers Against Drunk Driving (MADD) inspired new practices backed by laws to take drunk drivers off the roads. Sustainable architects have introduced new construction practices that produce buildings with no carbon footprint. Heads of families have adopted small business practices to help them balance income and expense and pay off debt. The key to success is adoption of practices, not the invention of ideas.

Unfortunately, the notion that innovation comes from clever ideas is enshrined in popular mythology. It is certainly true that ideas are necessary for innovation, but, as we will discuss, ideas are never sufficient. Company or public policies aimed at stimulating creativity, producing more ideas, or encouraging inventors do a disservice by getting everyone to focus too much on ideas at the expense of adoption. We call this imbalance the invention myth — the belief that invention of new ideas is the driver of innovation. The invention myth has led many people down the path to failure in their innovation initiatives.

Then what is a balanced and holistic view of innovation? The Eight Ways framework is our answer.


The Work of Innovators

The eight ways are practices that innovators use to produce the eight essential outcomes for innovation. Their names are listed on the accompanying wheel diagram and in the "Structure of the Innovation Practices" table at the end of this article. Taken together, these practices define what it means to be a skillful innovator.

The wheel diagram suggests that the practices are not performed sequentially in numerical order. Instead, the innovator moves constantly among them, refining the results of earlier actions after seeing their consequences. It is better to think of the practices being done in parallel. That is why they must be learned as skills. The innovator must be able to do them well without thinking about them.

The "Structure of the Innovation Practices" table gives more detail. The first two practices are the main work of invention, and the next three are the main work of adoption. Although these five tend to be done sequentially, they are not strictly sequential. Each of the final three practices creates an environment for effective conduct of all the other practices. The environment is important: The innovator has to execute the innovation commitments, proactively promote the innovation, and be sensitive to how other people listen and react.

The specification of each practice has two parts. The anatomy describes the structure of the practice when it goes well and produces its outcome. The characteristic breakdowns are the most common obstacles that arise in trying to complete the practice. The innovator moves toward the desired outcome and copes with breakdowns as they arise. The breakdowns are not mere annoyances. Coping with them is a normal part of the process.


Example: The World Wide Web

Tim Berners-Lee is widely known for creating the World Wide Web, considered one of the great innovations of the twentieth century. His parents were both part of the Ferranti Atlas Project at the University of Manchester in England in the 1950s. After earning a graduate degree in physics in 1976 from Queen's College, Oxford, he worked as a software engineer at Plessey Systems, a telecommunications company, and then at D. G. Nash Ltd., where he wrote text-processing software for intelligent printers and a multitasking operating system. He was fascinated by a question, first raised by his father, of whether computers could be used to link information rather than simply compute numbers. In 1980, he went to CERN, the European high energy physics research laboratory, with this question on his mind.

Berners-Lee saw a huge disharmony between the actual direction of the Internet and the information-sharing visions of its pioneers in the 1960s. He felt a burning desire to do something about it. Given his dream about information sharing through linking, the esoteric world of hypertext was an obvious place to look for a key to an information-sharing Internet.

In his spare time, he worked on a program called Enquire that could link information on any computer with any other. He began to envision CERN not as a network of separate computers, but as a single information space consolidated across many computers. In 1989, he wrote “Information Management: A Proposal” to create a hypertext system at CERN linking all its computers and documents into a single web from which information could be quickly retrieved from anywhere in CERN. At first his proposal was ignored, but with help from prominent computer engineer Robert Cailliau, he got the attention of CERN’s leadership. In 1990, they gave him the go-ahead to make a prototype, which he built on a NeXT computer.

The prototype included HTML, a new markup language for documents containing hyperlinks; HTTP, a new protocol for downloading an object designated by a hyperlink; URL, an Internet-compatible scheme for global names; and a graphical user interface. He drew on well-known ideas and practices, including Gopher (University of Minnesota’s file-fetching system), FRESS and ZOG (hypertext document management systems), SGML (the digital publishing markup language), TCP/IP and FTP (standard Internet protocols), operating systems (the global identifier concept of capability systems, which had been on the Plessey computers), and Usenet news and discussion groups.
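
As a minimal sketch of how those pieces fit together, the snippet below uses Python's standard library (it requires a network connection, and the URL is a placeholder): the URL globally names a document, HTTP downloads it, and the HTML in the response is what a browser would render.

```python
# URL names the document, HTTP fetches it, HTML is what comes back.
from urllib.request import urlopen

url = "http://www.example.com/"
with urlopen(url) as response:          # HTTP GET over TCP/IP
    html = response.read().decode("utf-8", errors="replace")

print(html[:80])                        # the start of the HTML markup
```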

He put up the first Web page at CERN in November 1990. He released and tested browser prototypes at CERN in 1991, and gave his first external demonstration at the Hypertext 1991 research conference, a natural audience for the idea. It was an immediate success and inspired others to build Web sites. The first non-CERN Web site went up at SLAC (the Stanford Linear Accelerator Center) in December 1991. Web sites began to proliferate; there were 200 by 1993. With Mosaic, the universal free browser released by Marc Andreessen at the University of Illinois in 1993, the World Wide Web took off exponentially. During the 1990s, many new industries and companies formed, including e-commerce (selling through online stores via a Web interface), online publishing, digital libraries, and firms such as eBay, Google, Amazon.com, and Yahoo!, fueling the Internet business boom (and bust).

Berners-Lee had no master plan, business plan, or any other formal document outlining a strategy for the Web. Instead, he insisted that all programmers working on Web software adhere to a small set of simple core principles: openness to everyone, no single controlling authority, universal identifiers, a markup language HTML, and a protocol HTTP. He steadfastly maintained that these principles were the essence of the World Wide Web; all else would be a distraction. He analyzed all new proposals to make sure they were true to these principles.

Building political support for the Web while advancing the Web technology became his central passion. Cailliau helped him build support within CERN. In 1994, he worried that commercial companies might get into a competition over who owned the Web, in violation of his core principle of openness. Michael Dertouzos at MIT helped establish the World Wide Web Consortium, W3C, modeled after the successful MIT X Windows consortium. This consortium eventually attracted over 400 companies, which collaborated on the development of Web standards and tools; it became an engine of innovation for the Web. The W3C was an open-software, consensus-based organization that issued nonbinding recommendations, which became de facto standards once consortium members adopted them.

Berners-Lee himself refused to set up a private company so that he could benefit financially from his technology. It belongs to the world, he said.

Here is a summary of how Berners-Lee engaged the eight innovation practices.

* Sensing: In the 1980s, he saw a disharmony between the actual direction of the Internet (e-mail and file transfer) and its promise (semantic web of all human knowledge). This bothered him. It moved him to do something about it.

* Envisioning: He envisioned a system of hypertext-linked documents; any one could link to any other. Mouse-clicking a link would cause the system to retrieve the target document. The system architecture would consist of HTTP, HTML, URLs, and a browser. Common tasks such as scheduling meetings, looking up citations, and getting mail and news would be easy in this system.

* Offering: In 1989, he offered to build such a system at CERN. At first his offer was spurned, but with advice from colleagues he reformulated his offer around CERN document retrieval needs and got permission to build a prototype on a NeXT machine. He demonstrated the prototype at the 1991 Hypertext research conference, got strong positive responses, and solicited implementations of Web servers.

* Adopting: He visited many sites and attended many conferences to tell people about his system, always soliciting new servers, software, and browsers. In 1993, Marc Andreessen, a student at the University of Illinois, made Mosaic, the first universal, easy-to-install graphical browser. After that, users adopted the Web like wildfire.

* Sustaining: In 1994, he founded the World Wide Web Consortium, hosted by MIT and CERN, to preserve the Web in the public domain by creating open software and standards for the Web. Over 400 organizations eventually joined W3C; it became an engine of innovation for the Web.

* Executing: He put together programming teams and solicited others to do the same, so that good Web software was developed and made available for anyone to use. He set clear principles for design and implementation of all Web software.

* Leading: At every opportunity, he recruited ever-larger numbers of followers and Web supporters. He articulated a small set of guiding principles for Web development and stuck with them. He refused to let the Web “go private” or to become wealthy from his own invention. He said the cause was too important and too big for his personal considerations to influence.

* Embodying: He embodied his set of core principles for the Web and practiced them everywhere he went. Through well-designed software and later through tutorials in the W3C, he helped Web users embody the new practices of linking, clicking, and browsing.


Extension to Teams, Networks, and Organizations

The Eight Ways of Innovation have been presented as personal skills. They are the skills of serial innovators, who are good at all eight.

But what happens if you are strong at several but not all? For example, you could be a good inventor and storyteller, but you dislike anything having to do with offering or adopting. The obvious thing to do is team up with others who are good at the practices you are not good at. With good coordination, the team as a whole can do all eight practices and be positioned for success at its innovations.

The same is true at a larger scale for organizations. A well-designed organization can, through good internal coordination, take individuals skilled in some of the practices and produce teams good at all of them. Those organizations can become very successful at innovation.

Networks can also be very good at innovation, if they have people who are good at each of the practices and use the network as a means to find each other and coordinate. Open source software communities, such as the W3C, illustrate this.

In all cases, the eight practices are embodied in the innovative individual, team, organization, or network. The eight practices must always be present in order for individuals or collectives to be successful at innovation.


Collaborations with Futurists

The work of futurists and innovators most closely aligns in the Sensing and Envisioning practices. Futurists are good at turning up new possibilities and formulating stories about what the world would be like if the possibility were made real. Innovators can use their help.

The standard futurist scenario is not necessarily a compelling vision story. A vision story is not the same as a vision, which is a committed declaration about a future; a vision story is a compelling narrative that connects the vision to the concerns of the people and provokes their care and commitment. A good vision story inspires your audience to:

* Believe that there is a better future, well worth sacrificing what they now do to gain it.

* See that a blind spot has kept them from seeing this future sooner.

* Trust in your ability and commitment to make it happen.

* Ask for more conversation about this future.

Futurists collaborating with innovators can convert scenarios into vision stories.

There are two other places in the innovation process where futurists can help innovators. One is in the Offering practice. Even if listeners are attracted by an innovator’s vision of an attractive future, they are often reluctant to sign on because the innovator has not shown them a credible, risk-managing path from the present to the future. Many futurists have well-honed skills at finding paths from the present to the desired future.

Futurists can also help innovators in the Adopting and Sustaining practices. In both cases, innovators are quite likely to encounter resistance from some subset of the community that feels threatened by the change. Resistance is a major impediment to adoption. Many futurists are skilled at examining communities as social systems and noticing where support for and resistance to change are most likely to come from.


Achieving Adoption

Innovation is the adoption of new practice in a community. It is not a mysterious talent, a product of good DNA, a management process, or a flash of genius. It is the outcome of an innovator — individual or team — skillfully performing the eight practices. The eight practices share four main features:

1. They are fundamentally conversations. Innovators perform them by engaging in the right conversations.

2. They are universal. Every innovator, and every innovative organization, engages in all of them in some way.

3. They are essential. If any practice fails to produce its outcome, the entire process of innovation fails.

4. They are embodied. They manifest in bodily habits and performance patterns that require no thought or reflection to perform.

These practices are consistent with the notion that the future is malleable. We are innovators when we shape it and influence how it evolves. The eight practices tell us how to go about doing that successfully. We as futurists can collaborate with innovators to help them improve success, especially in the Sensing, Envisioning, Offering, Adopting, and Sustaining practices.

[Figure: The Eight Ways of Innovation (wheel diagram, not reproduced here)]


Structure of the Innovation Practices

The main work of invention:

1. Sensing: Locate and articulate a new possibility, often in disharmonies or incongruous events.

2. Envisioning: Tell a compelling story about the world when the possibility is realized.

The main work of adoption:

3. Offering: Offer to produce the outcome; gain a commitment to consider it.

4. Adopting: Gain commitment to try it for the first time, and overcome resistance to the change.

5. Sustaining: Gain commitment to stick with the new practice over time, integrating it into the environment.

The environment for the other practices:

6. Executing: Create an environment for effectively managing all commitments to completion.

7. Leading: Proactively mobilize people to generate the outcomes of the other practices.

8. Embodying: Instill the new practice into the practices of the community.


By Peter J. Denning

Peter J. Denning is Distinguished Professor of Computer Science and director of the Cebrowski Institute for Information Innovation at the Naval Postgraduate School in Monterey, California. He is editor of ACM Ubiquity, an online magazine about the future, and is a past president of ACM (the Association for Computing Machinery). E-mail pjd@nps.edu.

Copyright of Futurist is the property of World Future Society and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.
