27 September 2023

Google vs Microsoft: The good, bad, and ugly of the AI arms race


Ben Dickson* discusses the impact that the ‘AI arms race’ could have on internet users.

The past weeks have seen escalating competition between Microsoft and Google over large language models—or, more precisely, Google trying hard to protect its search business against Microsoft and OpenAI’s large language models.

The two tech giants are in an intensifying tug of war over how we will access information in the future, matching research with research, product with product, and investment with investment.

Since OpenAI released ChatGPT in November, there has been a lot of speculation about the large language model’s killer application(s).

One of the topics brought up again and again is ChatGPT and other LLMs making Google Search obsolete.

I’m still sticking to my previous argument that nothing like ChatGPT will replace Google Search outright.

But there is now a lot of room for disruption and unbundling, and that could become a big threat to Google, which makes most of its revenue from search.

The competition between Google and Microsoft can be positive for the search engine market.

But there is also a bad side to all of this, especially how it affects the smaller artificial intelligence companies and the future of AI research.

And there is an even uglier side, which is the broader effects that Google and Microsoft’s hasty moves will have on internet users.

The Good: AI brings innovation to search

Google owns more than 90 per cent of the search engine market.

It has been the undisputed player, protected by its vast network of users and advertisers, its revenue and spending power, and its position as the default search engine in the two most popular browsers (Chrome and Safari) as well as Android and iOS.

This dominant position has allowed Google to enjoy its market share without much pressure to innovate.

To be sure, Google Search has seen some changes here and there in the past years, including better question-answering and deep learning enhancements.

But the core of the search experience has not changed, and many users are complaining that it has gotten worse over the years, with more ads being displayed above the fold.

Now, Microsoft is suggesting a new search experience on its Edge browser, in which you can use the classic search model along with a conversational interface powered by ChatGPT (or rather an enhanced version of the LLM that provides links to sources).

The new search model has several problems to solve, including covering the costs of inference, monetizing the results, and more.

But Microsoft has nothing to lose.

Its share of the search engine market is negligible, and it has a lot of room to experiment with new products and business models and bleed money in the process.

Google, on the other hand, has everything to lose.

Search is its largest source of revenue, and losing even a fraction of its market share would have dire consequences.

For the moment, Google’s response to Microsoft’s Bing+ChatGPT project was a hasty demo of its own LLM, Bard, which got its facts wrong and resulted in a $100 billion drop in the company’s market cap.

Google must now innovate and reinvent itself or see other tech companies chip away at its search engine business.

But the war is not over yet.

Google still has a lot of money to throw at the problem.

And the bad parts of the competition are starting to show.

The Bad: AI research will be centralised in big tech

As the competition heats up, the tech giants will try to outmanoeuvre each other in various ways.

One of them is launching new products and adding AI capabilities to their existing offerings.

But another shortcut to innovation is partnering with and possibly acquiring startups and research labs that can provide them with more AI firepower and protect their turf.

We can see this in Microsoft’s latest multi-billion-dollar extension in its partnership with OpenAI and Google’s recent $300-million investment in Anthropic, an AI research lab founded by former OpenAI scientists.

Microsoft has already used its exclusive license to OpenAI’s technology to enhance its products with advanced machine learning models.

Microsoft Azure has become the exclusive cloud provider for OpenAI’s extremely expensive research.

And Google Cloud Platform has become Anthropic’s cloud provider.

I’m worried about this cycle repeating itself as the AI arms race between Google and Microsoft intensifies.

The two tech giants will try to outspend each other to bring more AI startups to their side.

And there are quite a few to choose from, including Cohere AI, Stability AI, Midjourney, You.com, Perplexity AI, Copy.ai, and Hugging Face.

While the startups and labs—most of which are not yet profitable—will benefit greatly from the infusion of cash and subsidised cloud resources, they will find themselves caught in the crossfire between Google and Microsoft.

They will gradually lose their freedom and flexibility and become beholden to the short-term interests of their financial backers.

This means more work on technologies that can be quickly monetised and less research on unexplored (and unprofitable) topics.

We’re already seeing this happen with OpenAI.

While LLMs and generative AI are fascinating and still have much to deliver, they aren’t the only promising research area that can help us decipher the mysteries of intelligence.

But OpenAI has mostly lost its interest in these other areas (robotics, game-playing AI, etc.) as it has become more focused on the kind of technologies that Microsoft will benefit from.

The Ugly: Poor AI content will litter the internet

The current craze around grabbing market share in generative AI is not like the early “move fast and break things” days of Facebook.

These are two very large companies whose products have billions of users.

Any change will have immediate worldwide impact.

But they are growing hastier in rolling out new AI-powered features.

And wherever there is haste, there will be damaging consequences.

In a way, this reminds me of the craze surrounding the internet of things (IoT) in the mid-2010s.

Manufacturers rushed to hop on the “smart” device bandwagon, shipping half-baked IoT solutions that were riddled with security holes.

As a result, billions of insecure devices were connected to the internet, many of which had no means to patch their vulnerabilities.

They were later exploited to stage distributed denial of service (DDoS) attacks, espionage, and other malicious activities at scale.

It took several damaging incidents for device makers to seriously consider developing security standards for IoT and industrial IoT (IIoT).

We are still exploring the legal, ethical, and social implications of large language models.

LLMs such as ChatGPT hallucinate false facts, a problem that has not been completely solved yet.

Online publications can use them to generate articles that are linguistically sound but factually wrong.

We’ve seen a glimpse of how this can happen in CNET’s failed experiment to use AI to generate SEO content that had erroneous information.

While CNET eventually retracted and corrected the articles, there is nothing preventing other actors from generating massive amounts of SEO-focused articles at very low cost without taking care to verify the integrity of the content.

This could eventually dilute the web and search results with poorly written content produced en masse, making the experience worse for users.

There are also concerns about students using LLMs to generate their homework or cheat in exams.

While I think this is a secondary concern, I believe it shows that many of our social structures, including the education system, need to adapt to this new reality.

And these changes might not happen as fast as tech companies will roll out LLM-powered products.

A bigger concern is security and privacy.

In their haste to make LLMs available to every business, tech companies have created tools that let you fine-tune the models with your proprietary data, no data engineering skills required.

However, training and fine-tuning machine learning models in the wrong way can cause the models to leak sensitive data.

And this can turn into a nightmare as the integration of LLMs into productivity tools becomes more widespread.

Research in LLMs and other areas of generative AI can have great benefits for humanity.

But diving headlong into an unmeasured competition over market share can have unsavoury consequences.

We’ve seen this happen time and again.

And unfortunately, we’re seeing another such episode unfold before our eyes.

*Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business and politics.

This article first appeared at bdtechtalks.com.
