#464: Google And OpenAI Are Redefining Technology, & More
1. Google And OpenAI Are Redefining Technology

Last week, during I/O 2025, Google announced that it will embed generative AI across its ecosystem, including Android, Chrome, Search, and a new wearable device. Its long-term goal is to develop a universal AI agent that operates seamlessly across software and hardware environments.
Google’s Gemini AI platform is now deeply integrated1 into Android, Chrome, and Search. In Search, the new "AI Mode" enables users to pose complex, multi-part questions and receive comprehensive, AI-generated responses, while Gemini enhances Chrome browsing with content summaries and more intuitive navigation.
Android's latest update also includes Gemini Live, which integrates camera input, voice recognition, and web data to perform real-time tasks and make smartphones more context-aware and proactive. Google also showcased its foray into AI-driven wearables with Android XR, an extended reality operating system designed for smart glasses and other XR devices. Developed in collaboration with partners like Samsung and Warby Parker, Android XR aims to deliver immersive, Gemini-powered experiences that redefine the way users interact with digital content in physical spaces.
Two days later, OpenAI announced its $6.5 billion acquisition2 of io, the hardware startup founded by former Apple design chief Jony Ive. The acquisition signals OpenAI’s ambitious entry into consumer hardware with AI-native devices that transcend traditional screens and interfaces.
While product details are under wraps, the collaboration between OpenAI and Ive’s design firm, LoveFrom, suggests a focus on creating intuitive, human-centric devices that integrate AI seamlessly into daily life. The partnership could deliver groundbreaking products that challenge current paradigms of personal technology.
The announcements from both Google and OpenAI highlight how AI could reshape operating systems, browsers, search experiences, and hardware design. As these tech titans chart new territory, users should expect AI-powered devices and services to become integral to everyday life, delivering more personalized, efficient, and immersive experiences.
2. Anthropic’s Claude 4 Is Pushing The Frontier Of AI Software Generation

Last week, Anthropic unveiled Claude 4,3 its latest large language model in the race for coding-focused AI. Offered in two sizes, a frontier-grade Opus model for the most demanding use cases and a lean, cost-efficient Sonnet variant, Claude 4 doubles down on Anthropic’s competitive advantage in software development performance. Our research suggests that such advanced agentic models will unlock new paradigms in autonomous coding, enabling engineers to prototype, iterate, and ship products faster than ever before.
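For a concrete sense of how developers tap these models, the sketch below shows a minimal call to a Claude model through Anthropic's Messages API in Python. The specific model identifier and prompt are assumptions for illustration only; consult Anthropic's documentation for current Claude 4 model names.

```python
# Minimal sketch, assuming the Anthropic Python SDK is installed and an
# ANTHROPIC_API_KEY is set in the environment. The model identifier below is
# an assumption for illustration, not a confirmed product name.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Claude Sonnet 4 identifier
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Write a Python function that parses an ISO 8601 timestamp "
                "and returns a timezone-aware datetime object."
            ),
        }
    ],
)

print(response.content[0].text)  # the model's generated code
```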
On SWE-bench Verified, an agentic coding benchmark, Claude Sonnet 4 demonstrated its dominance. Out of the box, the model scored 72.7%, surpassing OpenAI’s o3 reasoning model at 69.1% and Claude 3.7’s 62.3%. When allowed to generate multiple candidate solutions and select the best, Sonnet 4’s score leapt to 80.2%, well above the previous state of the art.
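The jump from 72.7% to 80.2% reflects a best-of-n evaluation setting: sample several candidate solutions and keep the one a verifier scores highest. The sketch below is a hypothetical illustration of that selection step, not Anthropic's evaluation harness; generate_patch and run_tests are stand-in functions, not real APIs.

```python
# Hypothetical illustration of best-of-n candidate selection, as used in
# agentic coding evaluations: generate several candidate patches, score each
# with a verifier (e.g., the repository's test suite), and keep the best.
from typing import Callable, List


def best_of_n(generate_patch: Callable[[], str],
              run_tests: Callable[[str], float],
              n: int = 8) -> str:
    """Sample n candidate patches and return the one with the highest test score."""
    candidates: List[str] = [generate_patch() for _ in range(n)]
    scores = [run_tests(patch) for patch in candidates]
    best_index = max(range(n), key=lambda i: scores[i])
    return candidates[best_index]
```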
While its consumer footprint is dwarfed by that of ChatGPT, Anthropic leads on developer-centric software platforms. On OpenRouter,4 an API aggregator for large language models, Claude 3.7 led the leaderboard with 1.15 trillion tokens processed in the month ended May 15, edging out Google’s Gemini 2.0 Flash at 935 billion and OpenAI’s GPT-4o mini at 651 billion. In other words, consumers have crowned one champion, but coders have chosen another. As a coding assistant, Claude Code is consolidating Anthropic’s position as the leader in software automation.
3. As AI Agents Commoditize Drug Discovery, Private Data Will Be Key To Creating Value

Recently, FutureHouse demonstrated5 the first AI-orchestrated, closed-loop drug discovery platform. "Robin" is a multi-agent system applied to dry age-related macular degeneration (dAMD), a disease with significant unmet need and limited treatment options. While current dAMD therapies target the complement pathway, Robin identified a novel strategy: enhancing phagocytosis in retinal pigment epithelial (RPE) cells, a core dysfunction in the disease. In ~2.5 months, using public data from PubMed and ~1–1.5 full-time equivalents (FTEs) at an estimated cost of $150,000–200,000, Robin proposed ripasudil, a ROCK inhibitor previously approved for glaucoma, and validated it experimentally. Importantly, ripasudil outperformed other ROCK inhibitors (e.g., Y-27632) in boosting phagocytosis and upregulating lipid efflux genes like ABCA1, demonstrating that AI can drive hypothesis generation and iterative testing with minimal human input.
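Conceptually, a closed-loop system like Robin alternates between agents that mine the literature for candidate interventions and agents that design experiments and interpret results. The outline below is a hypothetical sketch of such a loop under our own naming assumptions, not FutureHouse's implementation; in Robin's case, the wet-lab step was still executed by human researchers.

```python
# Hypothetical outline of a closed-loop, multi-agent discovery cycle.
# All agent objects and method names are illustrative stand-ins.
def discovery_loop(literature_agent, experiment_runner, analysis_agent,
                   max_rounds: int = 5):
    # Propose initial hypotheses from public literature
    # (e.g., "ROCK inhibition boosts RPE phagocytosis").
    hypotheses = literature_agent.propose_initial_hypotheses()
    for _ in range(max_rounds):
        # Design assays for the current hypotheses (e.g., a phagocytosis assay).
        experiments = analysis_agent.design_experiments(hypotheses)
        # Execute the experiments; in practice this step may be human-run lab work.
        results = [experiment_runner(e) for e in experiments]
        # Interpret the data and refine or discard hypotheses for the next round.
        hypotheses = analysis_agent.update_hypotheses(hypotheses, results)
        if analysis_agent.converged(hypotheses):
            break
    return hypotheses
```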
That said, our research suggests that true differentiation will come not from AI alone but from AI in conjunction with lab automation and proprietary biological data. Companies like Recursion, with industrialized wet-lab systems, and those like Tempus AI, which anchor AI in real-world patient data, are well positioned to surface novel biology and achieve discovery at scale. According to our research, as AI tooling commoditizes, companies that can operationalize data-rich, automated discovery platforms are likely to pull away from the pack, fulfilling unmet needs and creating long-term shareholder value.
-
1. Chiu, J. 2025. “Everything Google Announced at I/O 2025.” Wired.
2. OpenAI. 2025. “Sam & Jony introduce io.”
3. Anthropic. 2025. “Introducing Claude 4.”
4. See OpenRouter. 2025. “LLM Rankings, All Categories, May 20, 2024 – May 15, 2025.” Accessed via: web.archive.org/web/20250515094355/https://openrouter.ai/rankings?view=month
5. Ghareeb, A.E. et al. 2025. “Robin: A multi-agent system for automating scientific discovery.” arXiv.