COMPLEMENTARY BUSINESSES

Complementarity between computers and humans isn’t just a macro-scale fact. It’s also the path to building a great business. I came to understand this from my experience at PayPal. In mid-2000, we had survived the dot-com crash and we were growing fast, but we faced one huge problem: we were losing upwards of $10 million to credit card fraud every month. Since we were processing hundreds or even thousands of transactions per minute, we couldn’t possibly review each one—no human quality control team could work that fast. So we did what any group of engineers would do: we tried to automate a solution. First, Max Levchin assembled an elite team of mathematicians to study the fraudulent transfers in detail.

Then we took what we learned and wrote software to automatically identify and cancel bogus transactions in real time. But it quickly became clear that this approach wouldn’t work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn’t adapt in response.

The fraudsters’ adaptive evasions fooled our automatic detection algorithms, but we found that they didn’t fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy. Thanks to this hybrid system—we named it “Igor,” after the Russian fraudster who bragged that we’d never be able to stop him—we turned our first quarterly profit in the first quarter of 2002 (as opposed to a quarterly loss of $29.3 million one year before). The FBI asked us if we’d let them use Igor to help detect financial crime. And Max was able to boast, grandiosely but truthfully, that he was “the Sherlock Holmes of the Internet Underground.”

This kind of man-machine symbiosis enabled PayPal to stay in business, which in turn enabled hundreds of thousands of small businesses to accept the payments they needed to thrive on the internet. None of it would have been possible without the man-machine solution—even though most people would never see it or even hear about it.

I continued to think about this after we sold PayPal in 2002: if humans and computers together could achieve dramatically better results than either could attain alone, what other valuable businesses could be built on this core principle? The next year, I pitched Alex Karp, an old Stanford classmate, and Stephen Cohen, a software engineer, on a new startup idea: we would use the human-computer hybrid approach from PayPal’s security system to identify terrorist networks and financial fraud. We already knew the FBI was interested, and in 2004 we founded Palantir, a software company that helps people extract insight from divergent sources of information. The company is on track to book sales of $1 billion in 2014, and Forbes has called Palantir’s software the “killer app” for its rumored role in helping the government locate Osama bin Laden.

We have no details to share from that operation, but we can say that neither human intelligence by itself nor computers alone will be able to make us safe. America’s two biggest spy agencies take opposite approaches: the Central Intelligence Agency is run by spies who privilege humans; the National Security Agency is run by generals who prioritize computers. CIA analysts have to wade through so much noise that it’s very difficult to identify the most serious threats. NSA computers can process huge quantities of data, but machines alone cannot authoritatively determine whether someone is plotting a terrorist act. Palantir aims to transcend these opposing biases: its software analyzes the data the government feeds it—phone records of radical clerics in Yemen or bank accounts linked to terror cell activity, for instance—and flags suspicious activities for a trained analyst to review.
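The division of labor in that kind of system is easy to sketch. The short Python fragment below is a minimal illustration, not PayPal’s or Palantir’s actual software; the features, weights, and threshold are all invented. The computer clears the high-volume easy cases automatically and routes only the suspicious minority to a queue for a human’s final judgment.

    # Minimal human-in-the-loop sketch; the scoring heuristic is invented.
    def risk_score(txn):
        """Crude suspicion score in [0, 1]; a real system would learn this."""
        score = 0.0
        if txn["amount_usd"] > 1000:
            score += 0.4
        if txn["account_age_days"] < 7:
            score += 0.3
        if txn["ip_country"] != txn["card_country"]:
            score += 0.3
        return min(score, 1.0)

    def route(txn, review_queue, threshold=0.5):
        """Approve the obvious cases; flag the rest for a human analyst."""
        score = risk_score(txn)
        if score < threshold:
            return "approved"              # machine handles the easy volume
        review_queue.append((score, txn))  # human makes the final judgment
        return "flagged"

    queue = []
    txn = {"amount_usd": 2500, "account_age_days": 2,
           "ip_country": "RO", "card_country": "US"}
    print(route(txn, queue))   # 'flagged' -- lands in the analysts' queue

The machine supplies triage at a speed no human team could match; the analyst supplies the adaptive judgment the software lacks. Moving the threshold trades analyst workload against fraud risk.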
In addition to helping find terrorists, analysts using Palantir’s software have been able to predict where insurgents plant IEDs in Afghanistan; prosecute high-profile insider trading cases; take down the largest child pornography ring in the world; support the Centers for Disease Control and Prevention in fighting foodborne disease outbreaks; and save both commercial banks and the government hundreds of millions of dollars annually through advanced fraud detection. Advanced software made this possible, but even more important were the human analysts, prosecutors, scientists, and financial professionals without whose active engagement the software would have been useless.

Think of what professionals do in their jobs today. Lawyers must be able to articulate solutions to thorny problems in several different ways—the pitch changes depending on whether you’re talking to a client, opposing counsel, or a judge. Doctors need to marry clinical understanding with an ability to communicate it to non-expert patients. And good teachers aren’t just experts in their disciplines: they must also understand how to tailor their instruction to different individuals’ interests and learning styles. Computers might be able to do some of these tasks, but they can’t combine them effectively. Better technology in law, medicine, and education won’t replace professionals; it will allow them to do even more.

LinkedIn has done exactly this for recruiters. When LinkedIn was founded in 2003, its founders didn’t poll recruiters to find discrete pain points in need of relief, and they didn’t try to write software that would replace recruiters outright. Recruiting is part detective work and part sales: you have to scrutinize applicants’ history, assess their motives and compatibility, and persuade the most promising ones to join you. Effectively replacing all those functions with a computer would be impossible. Instead, LinkedIn set out to transform how recruiters did their jobs. Today, more than 97% of recruiters use LinkedIn and its powerful search and filtering functionality to source job candidates, and the network also creates value for the hundreds of millions of professionals who use it to manage their personal brands. If LinkedIn had tried simply to replace recruiters with technology, it wouldn’t have a business today.
THE IDEOLOGY OF COMPUTER SCIENCE

Why do so many people miss the power of complementarity? It starts in school. Software engineers tend to work on projects that replace human efforts because that’s what they’re trained to do. Academics make their reputations through specialized research; their primary goal is to publish papers, and publication means respecting the limits of a particular discipline. For computer scientists, that means reducing human capabilities into specialized tasks that computers can be trained to conquer one by one.

Just look at the trendiest fields in computer science today. The very term “machine learning” evokes imagery of replacement, and its boosters seem to believe that computers can be taught to perform almost any task, so long as we feed them enough training data. Any user of Netflix or Amazon has experienced the results of machine learning firsthand: both companies use algorithms to recommend products based on your viewing and purchase history. Feed them more data and the recommendations get ever better. Google Translate works the same way, providing rough but serviceable translations into any of the 80 languages it supports—not because the software understands human language, but because it has extracted patterns through statistical analysis of a huge corpus of text.
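The mechanics are unglamorous. Here is a toy version of that kind of pattern extraction, with an invented three-user purchase history; production recommenders are vastly more sophisticated, but the principle of counting co-occurrences rather than understanding products is the same.

    # Toy recommender: rank items by how often they are bought together.
    from collections import defaultdict
    from itertools import combinations

    purchases = [          # one basket per user (invented data)
        {"A", "B", "C"},
        {"A", "B"},
        {"B", "C"},
    ]

    co_counts = defaultdict(int)
    for basket in purchases:
        for x, y in combinations(sorted(basket), 2):
            co_counts[(x, y)] += 1
            co_counts[(y, x)] += 1

    def recommend(owned, k=2):
        """Suggest unseen items that co-occur most with what the user has."""
        scores = defaultdict(int)
        for item in owned:
            for (a, b), n in co_counts.items():
                if a == item and b not in owned:
                    scores[b] += n
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend({"A"}))   # ['B', 'C'] -- B pairs with A twice, C once

Note that the program never learns what “A” is; it only counts. Feed it more baskets and the counts, and therefore the recommendations, get better.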
The other buzzword that epitomizes a bias toward substitution is “big data.” Today’s companies have an insatiable appetite for data, mistakenly believing that more data always creates more value. But big data is usually dumb data. Computers can find patterns that elude humans, but they don’t know how to compare patterns from different sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst (or the kind of generalized artificial intelligence that exists only in science fiction).

We have let ourselves become enchanted by big data only because we exoticize technology. We’re impressed with small feats accomplished by computers alone, but we ignore big achievements from complementarity because the human contribution makes them less uncanny. Watson, Deep Blue, and ever-better machine learning algorithms are cool. But the most valuable companies in the future won’t ask what problems can be solved with computers alone. Instead, they’ll ask: how can computers help humans solve hard problems?
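To make the “dumb data” claim above concrete, here is one last toy sketch: invented numbers and a crude two-standard-deviation rule, standing in for no particular product. Finding the anomaly is trivial for the machine; saying what it means is not.

    # The machine surfaces the outlier; only an analyst can interpret it.
    from statistics import mean, stdev

    daily_logins = [102, 98, 110, 95, 101, 97, 480, 103]  # invented data

    mu, sigma = mean(daily_logins), stdev(daily_logins)
    outliers = [x for x in daily_logins if abs(x - mu) > 2 * sigma]
    print(outliers)  # [480] -- a breach? a promotion? the data won't say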
