Can you imagine a future where computers monitor humans from birth, predict sickness, and help us heal faster? Or a time when chronically ill or elderly persons can live at home and be monitored by instruments that a home nurse or caregiver can use?
Judith Donath, a fellow at Harvard University’s Berkman Center for Internet and Society, predicts that individualized healthy diets based on each person’s unique genetics, location, and activities will be common in the future, while drugstores will have booths that function as remote examining, treatment, and simple surgery rooms. In 1950, few could imagine the impact computers would have on everyday life in the year 2000. Today, nearly everyone has a mobile phone, email has replaced physical letters, and online markets are challenging the economics of brick-and-mortar retailers.
The Emergence of Killer Applications (Apps)
Merriam-Webster defines “killer app” as “a computer application of such great value or popularity that it assures the success of the technology with which it is associated.” PC Magazine calls it “the first of a new breed.” To a layman, a killer app is a computer application that saves money, time, or energy, makes the user safer, or enhances the experiences of the user to the degree that it must be acquired and used.
The 1979 appearance of the first killer app, VisiCalc, ignited widespread business and personal use of computers – use that couldn’t have been conceived of in the early 1940s, when the first computers were developed. According to the Computer History Museum, computer use in its initial stages was limited to research laboratories, large companies, and the federal government.
Personal computers (PCs) appeared in the early 1970s with the introduction of the microprocessor, integrated circuit boards, and solid state memory. The first commercially accepted PCs (Apple II, PET 2000, and TRS-80) were introduced in 1977 but remained niche products for the scientific community and hobbyists. According to a 1983 article in InfoWorld, only a half-million microcomputers were in place in 1980, and they were primarily used to play simple electronic games.
A History of Killer Applications
The first killer app for personal computers is recognized as VisiCalc, a 1979 electronic spreadsheet application that replaced manual financial spreadsheets, which were tediously constructed and replete with errors and erasures. VisiCalc was only available for the Apple II, and it made Apple a commercial success by stimulating the sale of 750,000 Apple II systems by 1982. The software was also the first program to be accepted by the business market. A review by Creative Computing magazine called the program “reason enough for owning a computer.”
Subsequent spreadsheet programs Lotus 1-2-3 and Excel spurred sales of other models of personal computers, notably those produced by IBM. These later programs were innovative rather than revolutionary, adding features that improved the user experience. Nevertheless, they might be classified as killer apps due to their dominant market shares.
The first word-processing software to offer WYSIWYG (“what you see is what you get”) editing debuted in 1979. A review in InfoWorld called the program “the best-selling word-processing program for personal computers and the standard by which other word-processing programs are measured.”
Subsequent programs continued to evolve. WordPerfect became the number-one word-processing program in 1986 and held that position until it was replaced by Microsoft Word for Windows in 1991, according to UT Dallas. The latter’s dominance may have resulted as much from innovative marketing (free giveaways, integration with other software) as from a superior customer experience. According to a 1990 report by the U.S. Department of Labor, the introduction of word-processing technologies increased office productivity by 15% to 20%, changing both the demand for clerical employees and the nature of their work.
PowerPoint, presentation software developed by Forethought, Inc., was acquired by Microsoft in 1987. Bundled with other office software (Word and Excel), the program was officially released in 1990 – coincidentally, the same day the Windows operating system was launched.
According to Bloomberg Business, the program had been installed on no fewer than one billion computers by 2012, with an estimated 350 PowerPoint presentations given every second around the globe. It is available in multiple languages and dominates other presentation software with a 95% market share.
The Influence of the Internet
Before the Internet, personal computers were primarily standalone or linked to small office networks. Widespread networks were limited to mainframe computers, a subset of computer science researchers, and closed communities of scholars. In 1986, the National Science Foundation – modeling its network after the U.S. government’s ARPANET – developed the protocols and policies that led to the Internet as we know it today.
Electronic mail (email) was the first Internet killer application, driving Internet use much as the light bulb drove the acceptance of electricity. While electronic messaging systems – such as facsimile (fax) transmissions – had been around for decades, they required that sender and recipient be online at the same time, similar to instant messaging today.
The ability to store and forward messages was the critical feature leading to the explosion of email use. According to the technology market research firm Radicati Group, Inc., there were an estimated 4.1 billion email accounts at the end of 2014, accounting for 191.4 billion emails each day – more than two million individual messages per second.
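The per-second figure follows directly from the daily volume; a quick sanity check (using Radicati’s 2014 estimate and assuming the load is spread evenly across the day):

```python
# Convert an estimated daily email volume into a per-second rate.
emails_per_day = 191.4e9           # Radicati Group estimate for 2014
seconds_per_day = 24 * 60 * 60     # 86,400 seconds in a day
emails_per_second = emails_per_day / seconds_per_day
print(f"{emails_per_second:,.0f} emails per second")  # roughly 2.2 million
```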
With the growth of the Internet, killer apps that took advantage of this new connectivity began to appear.
Mosaic, one of the first web browsers (and the first to display images inline with text), was developed by the National Center for Supercomputing Applications in 1993 and distributed free to noncommercial users. Gary Wolfe, writing in the October 1994 edition of Wired, claimed that Mosaic was the most pleasurable way to find information on the Internet: “In the 18 months since it has been released, Mosaic has incited a rush of excitement and commercial energy unprecedented in the history of the Net.”
While Mosaic has been replaced over time by browsers like Internet Explorer, Firefox, Chrome, and others, many of its features have been retained in these newer programs.
Napster, the first peer-to-peer file-sharing service, was developed in 1999 by two Northeastern University students and allowed users to share and download MP3 files over the Internet. The company was shuttered in 2001 after losing a lawsuit brought by the Recording Industry Association of America.
At the time, Doug McFarland, president of Media Metrix, said that Napster was “one of the fastest growing software companies Media Metrix ever reported.” In February 2001, the service had an estimated 26.4 million users worldwide. According to CNET, many feel that Napster demonstrated the power of the web to deliver music and led to the debut of Apple’s iTunes two years later.
One early social network, which debuted in 2002, was the first such service to grow its membership into the millions. Originally funded by venture capitalists, its founders turned down a buy-out offer from Google in 2003 and kept the company private. While it has since faded into obscurity, The Next Web considers it the “granddaddy of modern social networks” like Facebook and LinkedIn.
Google Search (Post-Internet)
In 1996, Stanford University graduate students and entrepreneurs Sergey Brin and Larry Page introduced a revolutionary Internet search engine. According to the authors of “The Google Story,” the program has had an impact on access to information equivalent to that of the Gutenberg printing press centuries earlier.
Google’s patented PageRank algorithm replaced older keyword-only search technology with rankings based on human-generated links and previous searches. Its logic is premised on the belief that the more links and searches point to a piece of information, the more relevant and important that information is likely to be to the user.
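The link-analysis idea can be sketched with a toy version of the algorithm. This is illustrative only: the three-page graph, damping factor, and iteration count below are assumptions for the example, not Google’s production details.

```python
# Minimal PageRank sketch: a page's rank is the share of rank flowing to it
# from the pages that link to it, plus a small "teleport" term.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# "b" links out but receives no links, so it ends up with the lowest rank.
web = {"a": ["c"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(web)
print(ranks)
```

The key design property is that rank flows along links, so a page that many well-ranked pages point to ends up well ranked itself, with no human editor in the loop.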
Google Search dominates search engine market share today, according to Net Market Share, with two out of every three users worldwide and more than three times the combined volume of its two closest competitors, Yahoo and Bing. Search Engine Land claims that the program presently accounts for more than a trillion searches a year.
Impact of Increased Bandwidth on Killer Apps
The growth of revolutionary new computer applications depends on network bandwidth – the rate at which bits of information move across the network. Common measures of speed are megabits per second (Mbps, millions of bits per second) and gigabits per second (Gbps, billions of bits per second).
To understand how speed affects your browsing experience, consider the following:
- A 100-megabyte file of 20 songs requires 16 seconds to download at a speed of 50/50 Mbps and 1.6 seconds at 500/500 Mbps
- A 250-megabyte file of 50 high-resolution photographs requires 40 seconds to download at 50/50 Mbps and only four seconds at 500/500 Mbps
- A 759-megabyte file of a one-hour video requires about two minutes to download at 50/50 Mbps and 12 seconds at 500/500 Mbps
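These figures follow from simple division – file size in bits over connection speed in bits per second. A sketch, assuming decimal megabytes, 8 bits per byte, and no protocol overhead:

```python
# Download time = file size (in bits) / connection speed (in bits per second).
def download_seconds(size_megabytes: float, speed_mbps: float) -> float:
    bits = size_megabytes * 8 * 1_000_000      # 8 bits per byte
    return bits / (speed_mbps * 1_000_000)

for size_mb, label in [(100, "20 songs"), (250, "50 photos"), (759, "1-hour video")]:
    for speed_mbps in (50, 500):
        secs = download_seconds(size_mb, speed_mbps)
        print(f"{label} ({size_mb} MB) at {speed_mbps} Mbps: {secs:.1f} s")
```

Real-world times are somewhat longer, since connections rarely sustain their advertised peak rate.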
As more and more applications are available in the cloud – effectively turning personal computers into terminals that act as conduits to central processing centers – bandwidth becomes increasingly important. As NPR points out, it has also become a battleground between providers and users.
On the one hand, the providers of bandwidth – internet service providers (ISPs) like cable and telephone companies – want to control its availability and use through tiered pricing. In other words, the more broadband you use, the more you pay. On the other hand, according to The Atlantic, retailers and content providers such as Netflix want net neutrality where all traffic is treated the same, regardless of its broadband requirements.
The 2014 “Cost of Connectivity” report by the Open Technology Institute found that Americans pay more for slower Internet access than residents of many other industrialized countries. Claire Cain Miller, writing in The New York Times, notes that downloading a high-definition movie takes about seven seconds in Seoul, Hong Kong, Tokyo, Zurich, Bucharest, and Paris, at a cost of $30 per month. Residents of Los Angeles, New York, and Washington, D.C., using the fastest Internet connection available need 1.4 minutes to download the same movie and pay $300 per month for the privilege.
“The reason we lag behind other countries is not technology, but economics,” claims Columbia Law School professor Tim Wu. “The average market has one or two serious Internet providers, and they set their prices at monopoly or duopoly rates.”
While control over broadband speed and cost is currently being contested, there is agreement that the next generation of connected experiences depends on greater and cheaper bandwidth. According to a recent report by Akamai Technologies, the global average connection speed was 4.5 Mbps, with a peak of 26.9 Mbps, at the end of 2014. The United States had an average connection speed of 11.1 Mbps and a peak connection speed of 49.4 Mbps, ranking 16th worldwide. However, only 39% of U.S. connections were above 10 Mbps, and one-quarter averaged less than 4 Mbps. At these speeds, truly revolutionary applications are restricted.
Fortunately, gigabit speed networks transferring billions of bits per second are beginning to appear in pockets around the United States. These networks can transfer information 50 to 100 times faster than most users now enjoy.
Google built its first Google Fiber network in Kansas City, and has announced plans to build a similar network in Austin, Texas. AT&T expects to build gigabit networks in 100 cities, and there are regional efforts to build high-speed networks in other locations around the country including Colorado Springs, Brooklyn, and San Francisco. Kathryn Campbell, partner with interactive marketing firm Primitive Spark, Inc., claims that “no question, bandwidth will play the same kind of transformational role in reshaping society that railroads and freeways played in our past.”
Future Killer Applications
Peter Drucker, management consultant and author of 33 books on business, once said, “Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.” Despite the uncertainty of forecasting, there are some applications that experts who work within the industry anticipate being available by 2025, including the following.
Customized, Real-Time Healthcare
The delivery and cost of healthcare are destined to change, according to Hal Varian, chief economist of Google. “The big story here is in continuous health monitoring… It will be much cheaper and more convenient to have that monitoring take place outside the hospital… Indeed, the home-security system will include health monitoring as a matter of course.” Varian believes that robotic and remote surgery may become common as broadband capacity increases.
Mark Kaganovich, CEO of SolveBio, agrees that healthcare may be profoundly affected by greater connectivity and speed. “Drugs will be developed precisely for the molecular profiles of an individual’s ailment [without side effects since they are targeted to individuals]. Diseases will have new names: they will no longer be referred to as vague groupings of symptoms but rather exact molecular pathways (instead of ‘colon cancer’ it will be the exactness and pathways disrupted).”
Fully interactive, immersive 3D experiences delivered through persistent high-quality video and audio are expected to have a massive impact on entertainment, travel, and education. Campbell believes that the “holodeck” concept popularized by the Star Trek franchise is possible. Physical travel won’t be necessary as people instantly meet face-to-face in cyberspace. Today’s video conferencing will be replaced by instantaneous, lifelike video interaction that requires no setup or configuration.
Alison Alexander, a professor of journalism at the University of Georgia, believes that applications in the future won’t be tied to reality, but imagination: “Forget reality, live in your selected world. Visit wherever and whenever.” Rather than being tied to images and recordings, students can access and experience interactive and immersive virtual reality environments.
According to Andrew Connell, chief technology officer of virtual reality company Virtalis, 3D models allow students “to reach in with their hands and really dig about inside a product to explore, learn about, and improve it.” Connell believes in using video game-like experiences with students “who’ve grown up with very adaptable thumbs playing on their Xboxes.”
Martin Ford, author of “The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,” believes that the combination of increased computational power and interconnected machines may produce the next killer app: artificial intelligence (AI). When John McCarthy coined the term at Dartmouth in 1956, the idea that a computer might learn and make decisions comparable to humans seemed an impossible goal.
Evidence of computer-automated systems for perception, learning, understanding, and reasoning is already all around us:
- GPS systems cut through the complexity of millions of routes to find the best one to take based on the user’s criteria.
- Smartphones understand human speech, and assistants such as Siri, Cortana, and Google Now are getting better at understanding our intentions when we give them instructions.
- Cars from Google and Tesla can drive themselves, autopilot systems direct airplanes around the world, and robotic surgeons are more exact than their human counterparts.
“We are moving into an era where the smart device becomes the assistant to the knowledge worker and to, frankly, everyone doing everything,” says Internet law expert Robert Cannon. Today, networked devices interact continuously, creating information.
Might a new era of super-smart computers benefit their creators? According to Medium, “AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and the elderly to walk, run, and even dance.” Ray Kurzweil, director of engineering at Google and author of “How to Create a Mind” and four other books on AI, believes that artificial intelligence is the pivotal step in addressing the grand challenges of humanity. According to CNN, Kurzweil also predicts that our brains may be able to connect directly with the cloud via nanobots in the 2030s.
While AI might help humanity solve some of its greatest problems, some scientists believe that unregulated development of AI could be a threat to humanity. Patrick Gray, a technology writer for TechRepublic, claims that a machine with “access to everything from the entirety of human wisdom via the Internet to connected financial markets and power grids could acquire knowledge, modify itself based upon that knowledge, and continue the cycle.” In other words, mankind couldn’t pull its plug.
Elon Musk, the creator of the Tesla automobile, is especially concerned that uncontrolled development of AI may have disastrous effects, tweeting on August 2, 2014, that AI is “potentially more dangerous than nukes.” According to CNET, Musk has also said, “With artificial intelligence, we are summoning the demon.”
Stephen Hawking, one of the world’s preeminent scientists, believes “the development of full artificial intelligence could spell the end of the human race.” In January 2015, Bill Gates, founder of Microsoft, expressed reservations about AI, saying he didn’t understand why some people are not concerned.
Killer apps are best recognized in hindsight. Most observers agree that there has not been a true killer app for years; rather, we’ve seen incremental innovations in areas like computer processing and data-transfer speeds. New applications in personalized healthcare, virtual reality, and artificial intelligence are likely on the horizon for the next decade, but their impact remains uncertain.
As Tiffany Shlain, filmmaker and host of “The Future Starts Here,” responded to a Pew Research Center survey, “We have no idea what new apps will exist when every human on the planet is online. We could have never predicted Google or Twitter. I can’t wait to see what 2025 will bring.”
What is your all-time favorite killer app?