
OpenAI's Browser Ambition: A Strategic Move or a Distraction?

Artificial Intelligence Browser Wars
Brian Fertig
Technology Pioneer, Scout and Reconnoiter

OpenAI to release its own web browser
#

The internet is once again abuzz with news from OpenAI, this time about their rumored plans to develop a web browser capable of competing with industry leaders like Chrome. While many are speculating about the features such a browser might offer or how an AI-powered browser could enhance our online experience, I find myself focusing on two central concerns.

  • First, creating and distributing a new browser does not advance generative AI. Sure, it might bring agentic qualities to a browser, but those qualities and agents could just as easily be added to Chrome or other browsers via extensions (a minimal sketch of what that could look like follows this list). Rather than an altruistic advancement in AI, this feels like a move that is more about asserting power and control.
  • Second, if OpenAI is choosing to diversify its offerings into the browser market, how can this not be a distraction from its stated mission of delivering safe AGI? When a company takes its eye off the ball of its primary mission, one of two things is typically true: either it is stuck and cannot advance further, or the stated mission is not the true mission of the company. In the case of OpenAI, I'm concerned that both may be true.
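
On that first point: wiring agentic behavior into an existing browser is not exotic engineering. Below is a minimal, hypothetical sketch of an extension content script that reads the current page and asks a model for a suggestion. The endpoint, API key, and response shape are placeholders I have made up for illustration, not any real API.

```ts
// content-script.ts: a minimal sketch of an "agentic" helper living inside an
// ordinary Chrome/Chromium extension. The endpoint, API key, and response
// shape are hypothetical placeholders; the point is only that "read the page,
// ask a model, surface a suggestion" fits in a plain extension.

const AI_ENDPOINT = "https://api.example-llm.test/v1/complete"; // hypothetical endpoint
const API_KEY = "YOUR_KEY_HERE"; // placeholder, not a real credential

async function suggestNextAction(): Promise<string> {
  // Use a slice of the visible page text as context for the model.
  const pageText = document.body.innerText.slice(0, 4000);

  const response = await fetch(AI_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      prompt: `The user is viewing this page:\n${pageText}\n\nSuggest one helpful next step.`,
    }),
  });

  const data: { completion?: string } = await response.json();
  return data.completion ?? "";
}

// Log the suggestion; a real extension would render it in a popup or sidebar.
suggestNextAction().then((suggestion) => {
  if (suggestion) console.log("Agent suggestion:", suggestion);
});
```

Whether that runs inside Chrome, Edge, Firefox, or a brand-new browser shell makes little difference to the AI itself, which is part of why a standalone browser reads to me as a distribution play rather than a research advance.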

LLMs are not advancing as much as promised
#

While LLMs are still advancing, they are not advancing as much as promised. Obviously, 2022’s release of ChatGPT was huge and sparked a multi-billion-dollar AI race.

Since this AI race started, OpenAI and other big tech companies have released refined versions of their models, steadily improving the quality of the outputs. We have also seen advancements that allow smaller models to outperform much larger ones and to shrink to a size that runs well on consumer-grade hardware. And lastly, we have seen the introduction of software, or agents, that more effectively incorporate generative AI into workflows and information systems.

But we have also been told, or at minimum it has been implied, that AGI is just around the corner, and that just hasn't happened. AGI, for those unfamiliar with the distinction, stands for Artificial General Intelligence. The difference between what we have today and AGI is, to put it simply, thinking capability. What we have today are predictive models that do a fantastic job of using deep learning mathematics to determine the next word in a sentence based on the context of the words and discussion that came before. Start a conversation with “hello” and the LLM is likely to respond with “Hi” or “Hello” back. In reality, our brains work very similarly to an LLM. If you consider it, you will realize that when you start talking, you have not yet thought of every word you are going to say and in what sequence. Your brain just keeps adding words until you’ve covered the context of your thought, just as the LLM does.
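
To make that “keep adding words” loop concrete, here is a deliberately tiny sketch: a bigram model that repeatedly appends the most likely next word given only the previous one. The corpus is made up, and a real LLM conditions on far richer context with a neural network rather than word counts, but the generation loop has the same shape.

```ts
// A toy "predict the next word" loop: the same generate-one-word-at-a-time
// shape an LLM uses, shrunk down to bigram counts over a made-up corpus.

const corpus = "hello there hello friend how are you doing today friend".split(" ");

// Count how often each word follows each other word.
const nextCounts = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const [cur, next] = [corpus[i], corpus[i + 1]];
  const counts = nextCounts.get(cur) ?? new Map<string, number>();
  counts.set(next, (counts.get(next) ?? 0) + 1);
  nextCounts.set(cur, counts);
}

// Greedy generation: keep appending the most likely next word.
function generate(start: string, maxWords: number): string[] {
  const words = [start];
  for (let i = 0; i < maxWords; i++) {
    const counts = nextCounts.get(words[words.length - 1]);
    if (!counts) break; // no continuation seen in the training text
    const best = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
    words.push(best);
  }
  return words;
}

console.log(generate("hello", 5).join(" "));
```

Nothing in that loop sits down and thinks; it scores candidates for the next word given what came before, and scale has mostly made that scoring better.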

Unlike human cognition, current Large Language Models (LLMs) lack the ability to think creatively or generate truly novel ideas. While LLMs can provide extensive and accurate information — such as a detailed history of England — they struggle when asked to produce original content, such as a stand-up comedy routine. This highlights a key limitation: LLMs are highly effective at prediction and pattern-matching, but they are not yet capable of the kind of creative thinking that defines human intelligence.

For a few years now, I’ve seen Sam Altman teasing that what they are seeing in their testing is truly remarkable, that the world may not be ready for it, and that we are on the verge of changing society forever. But despite the promises, what we’ve actually seen released from OpenAI amounts to refinements and improvements on what was already there.

There was testimony before Congress about how we had to build safety measures into LLMs or people would use AI to create novel viruses or chemical weapons. None of this has happened. We were told that software engineering jobs would be replaced, yet without the thinking component, AI remains a tool that lets software engineers work more quickly, not one that replaces them.

In short, it appears that not only OpenAI but the broader field of large language models as a whole has yet to overcome the significant challenge of achieving Artificial General Intelligence (AGI). Despite the promises and advancements, we are still far from developing systems capable of true human-level thinking.

OpenAI is about profit and control, not about making the world a better place
#

While I have no doubt that the advancements OpenAI has made have helped society in many ways, they are still a profit-driven company. They went from being a non-profit when they started back in 2015, to a for-profit, and now to a public benefit corporation (PBC), which is still a for-profit entity with some legally obligated conditions to consider the public good. I give this about as much credence as I do Google’s old “don’t be evil” motto: it’s lip service. Many big tech companies want to dazzle you with how quirky and visionary their leaders are. They want you to believe they are working on really fascinating things that are going to make our world a better place. And while many of the tools they produce do help the world, the end goal is profit, profit, profit. Don’t look at how they treat the user (the user IS the product); look at how they treat those who want to advertise with them.

I do not fault OpenAI for pursuing a profit-driven model; after all, in today’s business environment, companies must remain financially viable to continue operating. However, it is unfortunate that these companies often appear more focused on projecting an idealistic image than on delivering substantive progress toward their stated missions.

Information drives dollars
#

Google derives nearly all of its money from search advertising, which is bolstered by the profiles it is able to build on its users. The introduction of its ‘free’ tools like Google Analytics and Google Tag Manager has dramatically boosted its ability to track and categorize customers, browsing behavior, and other profile-type information that is then leveraged in marketing and advertising. The bottom line is that the amount of information Google has on its users, and the number of eyeballs exposed to Google, have allowed it to become one of the most powerful companies, if not the most powerful company, in the entire history of the world.
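
As a rough illustration of what that ‘free’ instrumentation amounts to on a page, here is a sketch of the gtag.js-style pattern those tools use. The measurement ID and event payload are placeholders, and this shows the general pattern rather than Google’s exact snippet.

```ts
// A sketch of the gtag.js-style pattern behind "free" tools like Google
// Analytics and Tag Manager: page context gets queued on a dataLayer and
// shipped to Google, where it feeds the behavioral profiles the ad business
// runs on. The measurement ID and event payload below are placeholders.
export {}; // make this a module so the global augmentation is allowed

declare global {
  interface Window {
    dataLayer: unknown[];
  }
}

window.dataLayer = window.dataLayer || [];
function gtag(...args: unknown[]): void {
  window.dataLayer.push(args);
}

gtag("js", new Date());
gtag("config", "G-XXXXXXXXXX"); // placeholder measurement ID

// Every page view (plus any custom event a site owner wires up) becomes
// another data point tied to the visitor.
gtag("event", "page_view", {
  page_location: window.location.href,
  page_referrer: document.referrer,
});
```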

If you think about entering the browser marketplace right now, one of two things is true: either you fork Chromium and add some unique selling proposition on top of it (while remaining beholden to Google), or you are in for a massive fight. Market share in the browser space is something Google will fight tooth and nail for. If OpenAI is going to enter this space with a proprietary browser, they are likely going to have to devote a considerable amount of time and energy to it if they hope to have any level of success.

And what does it mean if OpenAI needs to spend a considerable amount of time and energy on the browser wars? It means they aren’t spending as much time on their primary mission, which is, again, why I think they’re stuck. This may be OpenAI hedging against the possibility that the uniqueness of their LLM product won’t hold up long term, and pivoting instead to a more agentic way of marketing themselves.

If that winds up being the case, I expect the focus to be on trying to steal market share from Google and funnel eyeballs and data to OpenAI, with less effort going toward achieving AGI or pushing AI technology beyond where it is today. I don’t see anywhere near as much benefit to the average consumer in OpenAI going this route, but this is the sometimes sad side effect of capital-driven technology development.

Final Thoughts
#

In conclusion, while OpenAI’s move into the browser market may seem like an ambitious step forward, it raises important questions about their priorities and whether this shift is a distraction from their core mission of advancing AGI. As we continue to follow developments in both AI and the browser space, it’s clear that the balance between innovation and focus will be critical.