Note: This post is also shared on LinkedIn.

Recent concerns about competition for Google Search (see, e.g., the SeekingAlpha article “Alphabet shares tumble as Apple exec expresses search concerns”) overlook the company’s fundamental strengths. My take:

Google’s latest model, Gemini 2.5 Pro, appears to be significantly ahead of competing models on many benchmarks. On the 2025 USAMO, for example, it scored 25% while every other model stayed below 5%.

My point is: if Google wanted to build a better version of Perplexity.ai, it would not be a technical challenge for them; it is purely a matter of decisions and strategy. Whether such a product would succeed is harder to gauge (see Shaz Ansari’s insight on that). We all remember the Google+ fiasco, where technology was not the core issue (the extreme winner-takes-all network effects in that space were).

Google is fundamentally an R&D firm. When we invest, we are buying into the expense side of its income statement (R&D, innovation), not just the assets on its balance sheet. How good is Google at innovating? How effective has it been at transforming those expenses into future assets?

Look at Google’s capabilities in AI, leaving aside LLMs and Search (which is, by the way, packed with machine learning and other sophisticated algorithms that fall under the broad AI umbrella):

  • Google developed its own computing chip, the TPU (Tensor Processing Unit), as far back as ~2013 (very significant today; bear with me).
  • The game of Go, long thought unsolvable by machines, was conquered in 2016 by DeepMind, part of Google.
  • Computers became better than humans at face recognition (~2015), with Google leading the pack (e.g., FaceNet).
  • Google open-sourced its TensorFlow software in 2015. Ken Griffin, in a recent interview, praised Google and touted TensorFlow’s benefits: “TensorFlow, which Google gifted to the world, and it’s one of the greatest gifts ever given to humanity.” He also stated: “I don’t think [GenAI] is going to revolutionize most of what we do in finance.”
  • Google published the 2017 “Transformer paper” (“Attention Is All You Need”), introducing the architecture behind most modern LLMs (ChatGPT, Claude, etc.).
  • AlphaFold (~2021) by DeepMind/Google is a scientific revolution whose medical impact might eventually dwarf that of LLMs.

And there are many other contributions, not only in AI but also in fields such as quantum computing and self-driving cars (Waymo).

Google holds key advantages: superior tech like Gemini 2.5 Pro/Deep Search, potentially better economics (OpenAI is reportedly unprofitable, even on its premium plan), and massive hardware scale. Do not just count NVIDIA purchases: in 2024, Google’s >1.6M chips (mostly custom TPUs, largely used for inference) likely dwarfed MSFT/OpenAI’s estimated <700k (including custom chips; see the FT piece “Microsoft acquires twice as many Nvidia AI chips as tech rivals”).