Evaluating the AI Arms Race: Meta’s AI, ChatGPT, and Google’s Gemini
We’re currently witnessing an intense AI arms race among three major chatbot services backed by tech giants: Meta’s AI, OpenAI’s ChatGPT, and Google’s Gemini. Since ChatGPT opened the doors to the world of generative AI and its myriad applications, the competition between these services has escalated rapidly.
Rapid Growth and Evolution
The pace of growth and evolution of these chatbots is remarkable. Let’s evaluate their current progress by examining their performance in several everyday categories such as emails, math, recipes, programming, and more. For this analysis, ChatGPT was running the GPT-4 model.
Email Writing
Many professionals use AI for routine tasks, so I asked all three chatbots to “write me an email for work asking for a project extension.” Each chatbot produced a well-written, polite, and professional email template that could be personalized with specific details. In this task, Meta AI, ChatGPT, and Google Gemini all performed perfectly.
Recipe Generation
Next, I asked the chatbots to “give me a recipe for chili.” Each provided accurate and detailed recipes with slight variations. However, the difference lay in sourcing the recipes. Meta AI and Gemini included sources and even linked to the original websites, with Gemini providing additional recipe links. In contrast, ChatGPT did not provide any sources, raising concerns about the originality and safety of the recipe. For recipe reliability, Meta AI and Gemini are preferable as they allow for source verification.
News Summarization
I then asked the chatbots to “give me a bulleted list of the latest news for [current date].” Each quickly produced headlines but with minimal context. Both ChatGPT and Meta AI directly linked to the news outlets they cited, offering verifiable sources. Gemini mentioned various news sites but did not provide direct links. For reliable news sourcing, ChatGPT and Meta AI are superior.
Solving Math Problems
I presented two math problems to the chatbots: one algebra and one geometry problem.
- “Determine all possible values of the expression A³ + B³ + C³ − 3ABC where A, B, and C are nonnegative integers.”
- “In triangle ABC, let G be the centroid, and let I be the center of the inscribed circle. Let α and β be the angles at the vertices A and B, respectively. Suppose that the segment IG is parallel to AB and that β = 2 arctan(1/3). Find α.”
All three chatbots solved the first problem correctly using different methods. The second problem, however, proved challenging. ChatGPT nearly solved it but never committed to a final answer. Gemini worked through the setup but stopped at a theoretical solution without a numeric value. Only Meta AI arrived at a definitive final answer. For solving math problems, Meta AI is the best choice.
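For context on the first problem, the standard approach uses the identity A³ + B³ + C³ − 3ABC = (A + B + C)(A² + B² + C² − AB − BC − CA), which suggests the attainable values are exactly the nonnegative integers not congruent to 3 or 6 mod 9. A short brute-force sketch (assuming Node.js; the function name is mine, not from any chatbot's answer) lets you check that pattern empirically:

```javascript
// Brute-force check of which values A^3 + B^3 + C^3 - 3ABC can take
// for nonnegative integers A, B, C. A sketch, not a proof: the
// factorization (A+B+C)(A^2+B^2+C^2-AB-BC-CA) supplies the real argument.
function attainableUpTo(limit, maxVar) {
  const found = new Set();
  for (let a = 0; a <= maxVar; a++) {
    for (let b = 0; b <= maxVar; b++) {
      for (let c = 0; c <= maxVar; c++) {
        const v = a ** 3 + b ** 3 + c ** 3 - 3 * a * b * c;
        if (v <= limit) found.add(v); // v is never negative for nonnegative a, b, c
      }
    }
  }
  return found;
}
```

Running `attainableUpTo(50, 30)` and comparing against the residues mod 9 shows values like 3, 6, 12, and 15 never appear, while everything else up to 50 does.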
Programming Tasks
I asked the chatbots to create a variant of the game tic-tac-toe with a 12-by-12 grid, using HTML and JavaScript. Meta AI and ChatGPT delivered the complete code as requested. Gemini, however, substituted CSS where HTML was requested; the two serve different purposes and are not interchangeable, so its output did not meet the spec. For reliable programming assistance, Meta AI and ChatGPT are the go-tos.
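The article doesn’t reproduce the generated code, but the heart of any such program is a win check generalized beyond 3-in-a-row. Here is a minimal JavaScript sketch of that logic (my own illustration, not any chatbot’s output; the five-in-a-row win condition is an assumption, since the prompt leaves it unspecified):

```javascript
// Win check for an N-by-N tic-tac-toe variant.
// board: array of N rows, each an array of 'X', 'O', or null.
// k: marks in a row needed to win (assumed 5 here; the prompt
// for the 12-by-12 variant leaves this open).
function hasWin(board, k, player) {
  const n = board.length;
  // Scan right, down, and both diagonals from every cell.
  const dirs = [[0, 1], [1, 0], [1, 1], [1, -1]];
  for (let r = 0; r < n; r++) {
    for (let c = 0; c < n; c++) {
      for (const [dr, dc] of dirs) {
        let count = 0;
        for (let i = 0; i < k; i++) {
          const rr = r + dr * i;
          const cc = c + dc * i;
          if (rr < 0 || rr >= n || cc < 0 || cc >= n || board[rr][cc] !== player) break;
          count++;
        }
        if (count === k) return true;
      }
    }
  }
  return false;
}
```

In a full HTML/JavaScript version, this function would be called after each click on a grid cell, with the board state kept in a 12-element array of 12-element arrays.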
Mock Interviews
Lastly, I asked the chatbots to simulate a mock interview for a role as a computing staff writer at a major online tech publication. Each chatbot approached the task differently but all provided valuable mock interviews. These simulations serve as excellent starting points for interview preparation.
Conclusion: Meta AI Leads the Pack
After evaluating the results, Meta AI emerges as the best overall chatbot, consistently delivering reliable responses across a variety of prompts. ChatGPT ranks in the middle, showing significant improvement from its older versions. Unfortunately, Google’s Gemini lags behind, struggling with consistency and still catching up to its competitors.
This analysis highlights the dynamic and competitive nature of the AI chatbot landscape, with each service striving to outdo the others in functionality and reliability.