20 years of Google and an uncertain future with big data


Twenty years ago this week, a couple of computer science students at Stanford University launched technology that crawled the world wide web to index pages.

The computer algorithm underpinning the Google search engine blew everything else out of the water at the time – AltaVista, Lycos and Yahoo! just couldn’t deliver search results as relevant.

Two decades on, it is almost comical how dependent we are on the search giant to serve up information to help us make decisions.

Google now has eight products – Gmail, Drive, Maps, YouTube, Android, Chrome, Play and Search – that each have one billion users.

LMGTFY, or “Let Me Google That For You”, has become a cultural meme aimed at those too lazy or ignorant to use the search engine to find their own answers.

Recently, on a bicycle trip through France, I was completely reliant on Google Maps on my smartphone to guide me from town to town. More than once I backed my own sense of direction over the guiding blue line of Google, only to have to pedal out of a dead-end cobblestone alleyway.

Most of us are merely bemused and thankful when Facebook automatically identifies our friends’ faces in photos or Amazon suggests exactly the type of running shoes we’ve been looking for.

But extrapolate that algorithmic ability to make decisions on our behalf into all facets of life and you have something decidedly more powerful – and more scary than amusing.

Big data, big problem?

European legislators and regulators have been the first to respond to the dominance of big tech, and of Google in particular, with anti-trust action and new laws. The General Data Protection Regulation (GDPR), introduced across the European Union in May and responsible for the deluge of privacy policy updates across the web, is Europe starting to call time on the wanton big data harvest the tech companies have been engaged in.

That data is the raw feedstock for increasingly sophisticated algorithms that build on the computer science that gave birth to Google’s search engine, now applied across a multitude of different domains.

For Israeli history professor and best-selling author Yuval Noah Harari, the rise of big data algorithms powered by artificial intelligence is one of the biggest challenges humanity faces.

Harari is not a tech guru, but his dispassionate and informed study of where technology is taking us represents some of the best current thinking on the matter.

His first big hit, Sapiens, sold eight million copies and traced where humanity has come from. I consider it a must-read. He turned futurist in Homo Deus to look at how the forces of artificial intelligence and biotechnology will redefine what it means to be human. It also blew my mind.

21 Lessons for the 21st Century, his latest work, is a bit uneven as it tackles some of the pressing issues facing the world now – from climate change to the failures of liberal capitalism.

But Harari is at his most compelling when he discusses the impact of technological disruption – and the rise of big data.

What happens, he asks, when we start to entrust algorithms to make the big life decisions on our behalf? Will we gradually lose the ability to think for ourselves? Will we lose free will?

“At present we trust Netflix to recommend movies and Google Maps to choose whether we turn right or left,” he writes.

“But once we begin to count on AI to decide what to study, where to work and whom to marry, human life will cease to be a drama of decision making.”

Addicted to algorithms

If that all sounds far-fetched, just Google the latest developments coming out of the field of artificial intelligence. In the medical field in particular, AI is leaping ahead, rivalling the ability of humans to detect and diagnose cancer. Algorithms are also deciding whether those appearing before some US courts are eligible for bail.

“The temptation to rely on algorithms is likely to increase,” Harari argues.

“Hacking human decision making will not only make human decision making more reliable, but will also simultaneously make human feelings less reliable.”

Algorithms will undoubtedly make life better for us in many ways, replacing gut feelings with evidence-based decision making. But at the same time they could undermine individual freedom, which is central to what makes us human. That’s the issue we currently face and are ill-prepared to deal with.

So what’s the answer? Harari doesn’t really have one. He wants better protection of the data we generate and currently give away in return for access to “free” internet services. But he points out that we didn’t give our approval to the rise of the internet. It developed in the domain of engineers and computer geeks, not politicians.

Where’s the oversight?

Mark Zuckerberg’s recent interrogation on Capitol Hill revealed the woeful lack of understanding of the power of the tech titans among the political elite in the US. The situation is no better in New Zealand.

However, the Labour-New Zealand First Government has made noises about introducing some oversight and transparency into the use of algorithms, at least in the public sector. That was a hot issue for open government and communications and digital media minister Clare Curran – at least it was before she was dumped from those portfolios for her inappropriate communications over the recruitment of a chief technology officer for the country, and subsequently resigned from her remaining portfolio as broadcasting minister.

Curran may have displayed staggeringly bad judgement in both her broadcasting and tech portfolios over the way she carelessly bypassed protocols, but she was the one minister in the current Government who understood the power of algorithms and the need to act to vet their usage – at least by government departments.

She co-opted University of Otago experts to advise the Government on the issue and was openly talking about that advice leading to an official government function for data governance and algorithm oversight. That work must continue, and if we do finally end up with a CTO, those issues must be given due attention.

Of course, tackling the private sector’s use of algorithms will be much more problematic – they are the lifeblood of tech companies and transparency could be seen as an innovation killer.

Technology’s march seems relentless, but as Harari points out in his latest book, we don’t have to accept and apply everything our smartest technologists come up with.

The rise of algorithms must take a different path to that of the internet. It will require more oversight, more checks and balances – and a more engaged and competent political establishment that understands the influence of tech on the economy, society and democracy itself.

The stakes are higher now, as Google’s AI alone displays abilities far beyond what the company’s search engine was capable of in 1998.

The future of us as human beings mustn’t resemble Harari’s bleakest potential scenario – which is the trajectory we are currently on: “tiny chips inside a massive data processing system that nobody really understands”.

21 Lessons for the 21st Century by Yuval Noah Harari, 372pp, Jonathan Cape $34.95
