Is ChatGPT Microsoft's Google Killer?
Following its public release by OpenAI and Microsoft's subsequent announcement of a major investment in the technology, ChatGPT has quickly captured the attention of even the least tech-savvy.
What is ChatGPT?
For an excellent introduction to ChatGPT and how it works, I would refer you to Stephen Wolfram’s article “What Is ChatGPT Doing … and Why Does It Work?”.
First and foremost, ChatGPT is a chatbot that can generate human-like responses to any query by using its neural networks to predict the most suitable sequence of words that follow a question. In that way, it seems to know the answer to, or at least responds to, any question because it has been trained on a wide and extensive body of textual information. And it is this access to arcane facts, words, books, and historical writings that enables it to mimic human behavior. It can help you draw up vacation plans, fix your grammar, write poetry, and answer trivia. It is the machine incarnation of a know-it-all, a Sheldon Cooper on steroids.
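To make the prediction idea concrete, here is a deliberately tiny sketch. ChatGPT actually uses a large transformer neural network over tokens; this toy bigram model merely illustrates the core principle of choosing the most likely next word based on patterns seen in training text. The corpus and words below are my own invented example.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' -- it follows 'the' most often
```

The crucial point is that the model never "knows" anything; it only knows which words tend to follow which, which is also why it can fluently produce text that is statistically plausible but factually wrong.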
A gossipy smart aleck
And like most smart alecks, it has a fatal flaw. If it cannot find a precise answer to a question, it will predict, fabricate, and invent possible answers. In summary, it will lie. ChatGPT is a gossip machine. This should not come as a surprise to us since human language has evolved to a point where almost everybody lies at least some of the time, to varying degrees, in everyday conversation. Some of these are harmless white lies ("how are you?" – "I’m good"), while others are fabrications, obfuscations, half-truths ("you look great, honey," "I missed the train, sorry," "my internet went down"), platitudes ("nobody’s perfect"), conspiracy theories ("aliens built the pyramids"), and blatant falsehoods. So ChatGPT, in aiming to mimic human conversation, also lies and will likely do more and more of it as it evolves.
The classic machine learning approach to correcting this flaw is reinforcement learning from human feedback – an army of designated human reviewers evaluates the responses, reinforcing good behaviors, like answering a question correctly and truthfully, and punishing bad behavior, like lying. Extensive reinforcement during training is one of the reasons ChatGPT does not spout abusive or racist language. But as any parent who has meted out a time-out to a lying child knows, reinforcement is challenging to implement, especially at scale, because consensus on what counts as a fabrication – and on when a lie is acceptable – is difficult to attain. We should therefore expect chatbots derived from ChatGPT to have distinct personalities, shaped largely by their reinforcement process. And so a seemingly innocuous question like "Who is Dr. Chandrasekaran of QDT?" receives a completely fabricated and false response from ChatGPT. Like any smart aleck, ChatGPT is trained to respond, and therefore a false answer is better than no answer, as long as it is not abusive.
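A minimal sketch of the idea, under stated assumptions: in real RLHF, human ratings train a separate reward model that steers the chatbot during fine-tuning; here the ratings are simply averaged and used to pick the best of several candidate answers. The "Dr. X" answers and the scores are hypothetical examples of my own.

```python
# Hypothetical human feedback: each candidate answer has ratings
# (+1 = truthful/helpful, -1 = fabricated) from three reviewers.
feedback = {
    "I don't know who that is.": [1, 1, 1],
    "Dr. X is a famous astronaut.": [-1, -1, 1],       # a fabrication
    "Dr. X won a Nobel Prize in 1987.": [-1, -1, -1],  # a fabrication
}

def reward(answer):
    """Average human rating for a candidate answer."""
    scores = feedback[answer]
    return sum(scores) / len(scores)

def best_of_n(candidates):
    """Pick the candidate with the highest average human reward."""
    return max(candidates, key=reward)

print(best_of_n(list(feedback)))  # the honest "I don't know" wins
```

Note where the difficulty lies: the code is trivial, but producing consistent ratings is not – if the three reviewers disagree on what counts as a lie, the reward signal (and hence the bot's "personality") shifts with them.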
OK, so what can this know-it-all do?
Like any kid genius, ChatGPT’s weakness is also its strength. Under the right human supervision, it can perform wondrous tasks: correcting your grammar, proofreading complex texts, translating between multiple languages, and taking on mundane writing tasks with ease. These capabilities can be integrated immediately into automated writing software, document editors, customer-service chatbots, email marketing tools, and similar applications.
Another strength is its ability to write good code under the right supervision and reinforcement. It helps that most code written today is copied, derived, or transformed from older code. To build new logic, coders often turn to GitHub or Stack Overflow for libraries of pre-written code. ChatGPT has absorbed much of this material and can aid in seemingly complex coding tasks such as reading and conceptualizing unstructured data, optimizing functions, and improving code efficiency and readability. It can even add comments to previously uncommented code. Having experimented with this personally, I believe variations of this use case will significantly increase the efficiency of software development. Coders will need to evolve their skills to direct, correct, and redirect the AI engine behind ChatGPT, instead of coding from scratch. The fictional partnership of Tony Stark and his AI Jarvis is becoming a reality. And that may be the greatest innovation from ChatGPT.
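To illustrate the kind of transformation I mean, here is a toy before-and-after of my own devising – not actual ChatGPT output. The "before" function is uncommented and quadratic-time; the "after" is the sort of rewrite an assistant can suggest when asked to improve efficiency and readability.

```python
# Before: uncommented, quadratic-time duplicate check.
def has_dupes(xs):
    for i in range(len(xs)):
        for j in range(len(xs)):
            if i != j and xs[i] == xs[j]:
                return True
    return False

# After: same behavior, linear time, with comments explaining intent.
def has_duplicates(items):
    """Return True if any value appears more than once."""
    seen = set()
    for item in items:
        if item in seen:   # already encountered -> duplicate found
            return True
        seen.add(item)
    return False

print(has_dupes([1, 2, 3, 2]), has_duplicates([1, 2, 3, 2]))  # True True
```

The human's job in this workflow is exactly the supervision described above: confirm the two versions really are equivalent (here, the rewrite also assumes the items are hashable) before accepting the suggestion.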
Microsoft vs. Others
Microsoft has made its intentions clear. It is doubling down on ChatGPT, with ambitious plans to integrate it seamlessly into its applications - Bing, Teams, Azure, and Office. Competitors such as Google (with Bard) and Meta (with LLaMA) will aim to close the gap. Microsoft, with its recent growth and its push toward the cloud with Azure, is well positioned to dominate this race.
However, history suggests that Microsoft will not succeed in capitalizing on this first-mover advantage. Why? Because Microsoft, in nearly every domain, has been a follower and not a leader. It is a perpetual #2, whether it is AWS vs. Azure, PlayStation vs. Xbox, Zoom vs. Teams, Google vs. Bing, iPad vs. Surface Pro, Google Apps vs. Office, Chrome vs. Edge, Gmail vs. Outlook, WhatsApp vs. Skype, Oracle vs. SQL Server, or Android vs. Windows. The last great product Microsoft built was…Windows 3, maybe? The evidence suggests this is not accidental but a deficiency in Microsoft's cultural ethos. In all of the comparisons above, the leaders differ (Amazon, Sony, Zoom, Google, Apple, Meta, Oracle), but the follower is always the same: Microsoft has a finger in every pie but has not baked any of them. It is a culture entrenched in following a leader and doing just enough to be a #2 with significant market share. In a few of these areas it briefly led the market but quickly lost the leadership position - the classic example being Internet Explorer, which displaced Netscape only to be overtaken by Google Chrome.
Key breakthrough innovations
In each case, the technology leaders achieved market domination through a key initial breakthrough: for Google it was search powered by the PageRank algorithm, for Zoom it was the user experience of a video conference, and for Oracle it was its database architecture. Significantly, however, innovation leaders struggle to grow past their initial grand innovation; their later work never quite achieves the same impact. Like Einstein, whose brilliant early theories of relativity and quantum physics were never matched by his attempts to unify gravity and quantum mechanics, like Pink Floyd, who arguably never surpassed “The Dark Side of the Moon”, or like Harper Lee after “To Kill a Mockingbird”, history abounds with geniuses who failed to surpass their first grand achievement. “Wish You Were Here” and “The Wall” are great albums, but “The Dark Side of the Moon” stands alone. Perhaps great companies are not unlike these creative geniuses. Google has used its search algorithms to power advertising, Gmail, and a host of applications, but the truth is that Google is where it is because it is the king of search. Notably, Google has failed in its ventures into hardware (Motorola, Google Glass), while products driven directly by its search technology have succeeded.
History repeats itself
An additional damning precedent for Microsoft is its previous foray into chatbots – the Tay bot of 2016 – which went horribly wrong, possibly because of the limitations of this technology outlined earlier. So, if history is any indication of future performance, Microsoft-ChatGPT is likely to be superseded by Meta-LLaMA or Google-Bard. Google and Meta both have distinct competitive advantages over Microsoft in access to proprietary data: Google, with its search engine, and Meta, with its social network feeds, can fuel a language model with much more oomph. I foresee that, with faint echoes of the annoying paper clip, Microsoft will remain the reliable #2, an also-ran, the almost-but-not-quite-there.
P.S. As for what ChatGPT said about me:
No, I did not attend school at the University of Waterloo, nor do I have any association with the University of Toronto. I have not worked at IBM or Oracle, and Qualetics Data Tools is a non-existent company. I do have a degree, but in Electrical Engineering, not Computer Science, and I am an entrepreneur who serves as the CEO of a data analytics company. In summary, ChatGPT = Some Facts + Lots of Imagination.