OpenAI, a startup backed by Microsoft Corp, has begun rolling out GPT-4, an advanced artificial intelligence model that builds on the technology behind the immensely popular ChatGPT.
WHAT IS THE DIFFERENCE BETWEEN GPT-4 AND GPT-3.5?
Compared to GPT-3.5, which only accepts text prompts, the latest version of the large language model - GPT-4 - can also process image inputs to identify objects in a picture and analyze them. Additionally, while GPT-3.5 is restricted to producing responses of approximately 3,000 words, GPT-4 can generate responses that exceed 25,000 words.
Moreover, GPT-4 is better at refusing requests for disallowed content: it is 82% less likely than its predecessor to generate inappropriate responses, and it scores 40% higher on certain factuality tests. Developers using GPT-4 can also customize the AI's tone and verbosity, allowing for more flexibility.
For instance, GPT-4's conversation style can be tailored to a Socratic method of questioning, answering questions with further questions. In contrast, the previous version of the technology had a fixed tone and style. According to OpenAI, ChatGPT users will soon be able to modify the chatbot's tone and style of responses as well.
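As a rough sketch of how this customization works: developers steer the model's behavior by placing a "system" message at the start of the conversation sent to the chat API. The snippet below only assembles an illustrative request payload for a Socratic-style tutor; the exact wording of the steering message is an assumption, and no API call is actually made.

```python
# Illustrative only: build a chat-completion request body that steers
# GPT-4 toward a Socratic style via the "system" message. This sketch
# constructs the payload without calling any API.

def build_socratic_request(user_question: str) -> dict:
    """Assemble a chat request asking the model to respond in a
    Socratic style, guiding the user with questions rather than
    giving answers directly."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a tutor who always responds in the Socratic "
                    "style. Never give the answer directly; instead, ask "
                    "guiding questions that help the user reason it out."
                ),
            },
            {"role": "user", "content": user_question},
        ],
    }

request = build_socratic_request("Why does the moon have phases?")
print(request["messages"][0]["role"])  # prints "system"
```

The key design point is that the steering instruction lives in a separate "system" role rather than being mixed into the user's text, so the same application can swap tones without changing user prompts.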
GPT-4's development aims to improve the model's "alignment" - that is, to make it follow user intentions more accurately while generating more truthful and less offensive or dangerous output.
More importantly, GPT-4 is "multimodal," meaning it can generate content from both image and text inputs.
WHAT ARE THE CAPABILITIES OF GPT-4?
GPT-4 has surpassed its previous version in terms of performance on standardized tests such as the U.S. bar exam and the Graduate Record Examination (GRE). Additionally, OpenAI President Greg Brockman showcased GPT-4's capability to assist individuals in calculating their taxes.
During the demonstration, GPT-4 was shown a photograph of a hand-drawn mock-up for a basic website and converted it into a working one. Moreover, Be My Eyes - an app that aids visually impaired individuals - will soon feature a virtual volunteer tool powered by GPT-4.
WHAT ARE THE LIMITATIONS OF GPT-4?
OpenAI has acknowledged that GPT-4 has limitations similar to its previous versions and is not as capable as humans in many real-world scenarios. The issue of "hallucinations", or inaccurate responses, has been a challenge for many AI programs, including GPT-4.
OpenAI also warns that GPT-4 can rival human propagandists in many domains, especially when paired with a human editor. For instance, when asked how to get two parties to disagree with each other, GPT-4 generated plausible suggestions.
According to OpenAI CEO Sam Altman, GPT-4 is the company's "most capable and aligned" model yet, though it remains flawed. GPT-4's knowledge is limited to events before September 2021, the point at which the vast majority of its training data ends. Additionally, GPT-4 does not learn from experience.
WHO HAS ACCESS TO GPT-4?
Although GPT-4 can process both text and image inputs, the image-input feature is not yet available to the public. The text-input feature is open only to ChatGPT Plus subscribers and to software developers on a waitlist. The subscription plan, which costs $20 per month, was introduced in February and provides subscribers with faster response times and priority access to new features and upgrades. GPT-4 already powers Microsoft's Bing AI chatbot and some features of Duolingo's subscription tier for language learning.