OpenAI is training a successor to GPT-4. Here are 3 major upgrades to expect from GPT-5
Although OpenAI's most recent release, GPT-4o, significantly advanced the capabilities of large language models (LLMs), the company is already working on its next flagship model, GPT-5.
Also: How to use ChatGPT Plus: From GPT-4o to interactive whiteboard
In the lead-up to its spring event, which included the announcement of GPT-4o, many hoped the company would launch the highly anticipated GPT-5. To curb speculation, CEO Sam Altman even posted on X that it would be "not gpt-5, not a search engine".
Now, just two weeks later, in a blog post announcing a new Safety and Security Committee formed by the OpenAI board to recommend safety and security decisions, the company confirmed that it is training its next flagship model, which most likely refers to GPT-5, the successor to GPT-4.
“OpenAI recently began training its next frontier model, and we anticipate the resulting systems will take us to the next level of capability on the path towards AGI [artificial general intelligence]”, the company said in a blog post.
While it may take months, if not longer, before GPT-5 is available to customers (LLMs can take a long time to train), here are some expectations of what OpenAI's next-generation model will be able to do, ranked from least to most interesting.
Better accuracy
Following past trends, we can expect GPT-5 to respond more accurately, because it will be trained on more data. Generative AI models like ChatGPT rely on their training data to power the answers they produce. Therefore, the more data a model is trained on, the better it becomes at producing coherent content, leading to better performance.
Also: How to use ChatGPT to create charts and tables with Advanced Data Analysis
With each model released to date, the amount of training data and the number of parameters have increased. For example, reports say GPT-3.5 has 175 billion parameters, while GPT-4 has around one trillion. We will likely see an even bigger leap with the release of GPT-5.
Multimodal enhancement
When predicting the capabilities of GPT-5, we can look at the differences between each flagship model since GPT-3.5, including GPT-4 and GPT-4o. With each jump, the model got smarter and saw upgrades across the board, including in price, speed, context length, and supported modalities.
GPT-3.5 can only take in and produce text. With GPT-4 Turbo, users can input text and images and receive text output. With GPT-4o, users can input any combination of text, audio, images, and video and receive any combination of text, audio, and image output.
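To make that concrete, here is a minimal sketch of what multimodal input already looks like with GPT-4o via OpenAI's Python SDK. The image URL and prompt are placeholders, and the call assumes an OPENAI_API_KEY environment variable is set.

```python
# Minimal sketch: sending a text + image request to GPT-4o using
# OpenAI's Python SDK (pip install openai). The image URL below is
# a placeholder; the client reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)  # text output describing the image
```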
Also: What does GPT stand for? Understanding GPT-3.5, GPT-4, GPT-4o, and more
Following this trend, the next step for GPT-5 would be the ability to output video. In February, OpenAI unveiled its text-to-video model Sora, which could be integrated into GPT-5 to generate video output.
Ability to act autonomously (AGI)
Chatbots are undeniably impressive AI tools capable of helping humans with many tasks, including writing code, Excel formulas, essays, resumes, applications, charts and tables, and more. However, there is a growing desire for AI that knows what you want done and can do it with minimal instruction: artificial general intelligence, or AGI.
With AGI, a user could ask the agent to accomplish an end goal, and it could produce the result by deducing what needs to be done, planning how to do it, and executing the task. For example, in an ideal scenario where GPT-5 achieves AGI, a user could make a request like "Order a burger from McDonald's for me," and the AI would complete a series of tasks, including opening the McDonald's website and entering your order, address, and payment method. All you'd have to worry about is eating the burger.
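To illustrate the deduce-plan-execute loop described above, here is a purely hypothetical Python sketch of how such an agent might be structured. The call_llm helper and the tool functions (open_website, place_order) are illustrative stand-ins, not a real OpenAI API.

```python
# Hypothetical sketch of a plan-then-execute agent loop. Every name
# here (call_llm, TOOLS, the tool functions) is illustrative; this is
# the shape of the idea, not a real product API.

def open_website(url: str) -> str:
    """Placeholder tool: pretend to open a page in a browser."""
    return f"opened {url}"

def place_order(item: str, address: str, payment: str) -> str:
    """Placeholder tool: pretend to fill in and submit an order form."""
    return f"ordered {item} for delivery to {address}, paid via {payment}"

TOOLS = {"open_website": open_website, "place_order": place_order}

def call_llm(prompt: str) -> list:
    """Stand-in for a model call that returns a plan as (tool, args) steps."""
    return [
        ("open_website", {"url": "https://www.mcdonalds.com"}),
        ("place_order", {"item": "burger",
                         "address": "123 Main St",
                         "payment": "card on file"}),
    ]

def run_agent(goal: str) -> None:
    # 1. Deduce and plan: ask the model to break the goal into tool calls.
    plan = call_llm(f"Plan the steps needed to: {goal}")
    # 2. Execute: run each planned step with the matching tool.
    for tool_name, args in plan:
        result = TOOLS[tool_name](**args)
        print(f"{tool_name}: {result}")

run_agent("Order a burger from McDonald's for me")
```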
Also: What is artificial general intelligence, really? Conquering the final stage of the AI arms race
Startup Rabbit is trying to accomplish a similar goal with its R1 device, creating a gadget that uses agents to deliver a seamless experience for real-world tasks, such as booking an Uber or ordering food. The device has sold out multiple times, even though it can't yet perform the more advanced tasks described above.
As the next frontier of AI, AGI could radically upgrade the kind of assistance we receive from AI and completely change the way we think about assistants. Instead of just telling us what the weather is like, AI assistants will be able to help us complete tasks from start to finish, which, if you ask me, is something to look forward to.