OpenAI has launched GPT-4, the latest iteration of the large language model behind its ChatGPT AI.
According to the OpenAI website, GPT-4 is 40 percent more likely to produce factual responses than GPT-3.5 – a significant improvement in accuracy.
Another example is its performance on the Uniform Bar Exam: GPT-4 now places in the 90th percentile, whereas its predecessor languished in the tenth.
Moreover, GPT-4 is 82 percent less likely to respond to requests for disallowed content.
So, not only is it smarter, but it is considerably safer.
As such, existing business applications of GPT may receive a shot in the arm. However, the new features GPT-4 brings may open up many more possibilities.
The Three New Capabilities of GPT-4
GPT-4 users will notice it is a little slower than GPT-3.5. Yet, thanks to its heightened ability to parse much more intricate data, they will enjoy responses with a higher degree of reasoning and concision.
Such enhancements have paved the way for three eye-catching new features: mimicking, image recognition, and longer context.
Consider mimicking first. According to OpenAI’s new webpage:
“GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.”
The text exemplifies how the new model can adapt to the user and mimic a particular style that a writer likes to use. As such, it may help brands create a consistent brand voice in the future.
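In practice, style mimicking of this sort is usually driven by the prompt. Below is a minimal sketch of how a brand might assemble such a prompt using the OpenAI chat-message format (role/content dictionaries); the sample voice passages and the `build_style_prompt` helper are hypothetical placeholders, not part of OpenAI’s API.

```python
# Sketch: assemble a chat prompt that asks the model to match a brand voice.
# The sample passages and helper name below are illustrative assumptions.

BRAND_SAMPLES = [
    "Hey there! We keep things simple, friendly, and jargon-free.",
    "No fine print, no fuss. Just tools that work for you.",
]

def build_style_prompt(samples, draft):
    """Build chat messages asking the model to rewrite a draft
    in the voice demonstrated by the sample passages."""
    examples = "\n".join(f"- {s}" for s in samples)
    return [
        {"role": "system",
         "content": ("You are a copy editor. Match the brand voice "
                     f"shown in these samples:\n{examples}")},
        {"role": "user",
         "content": f"Rewrite the following draft in that voice:\n{draft}"},
    ]

messages = build_style_prompt(BRAND_SAMPLES, "Our product has many features.")
```

The resulting `messages` list is what would be sent in a chat completion request; the more representative the sample passages, the more consistent the mimicked voice tends to be.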
Next, GPT-4 allows for visual inputs: it can caption, classify, and analyze images.
On its website, OpenAI gives an excellent example of a possible use case that spins from this new capability. It feeds GPT-4 a photo of multiple ingredients and asks: “What can I make with these?” GPT-4 then generates a list of potential meals.
From this, consider the possible future applications in healthcare. Users could send an image of their problem into the system, such as a skin condition. GPT-4 may then tell them what the problem likely is and recommend possible treatments – all without a doctor in the loop.
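To illustrate what sending an image to such a system might look like, here is a hypothetical request payload. Image input was not generally available at GPT-4’s launch, so the exact schema shown (text plus image content parts in one chat message) is an assumption for illustration only, not confirmed API behavior.

```python
import base64

# Hypothetical sketch: package an image and a question as one chat message.
# The content-part schema here is an assumption, not a documented contract.

def build_image_query(image_bytes, question):
    """Encode an image as base64 and pair it with a text question."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }

msg = build_image_query(b"\xff\xd8fake-jpeg-bytes",
                        "What can I make with these?")
```

A healthcare-style query would swap in the photo of the skin condition and a question such as the one in the text above; the structure stays the same.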
Finally, GPT-4 can now handle larger text inputs – up to 25,000 words at once. As a result, it is easier to create long-form content such as blogs, reports, and product manuals.
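Even with a roughly 25,000-word window, longer documents still have to be split before processing. A minimal word-count chunker is sketched below; the limit and helper name are illustrative, and real models count tokens rather than words, so word counts are only an approximation.

```python
# Sketch: split a long document into pieces that fit a word-count budget.
# max_words=25_000 mirrors the figure in the text; actual model limits
# are measured in tokens, so treat this as an approximation.

def chunk_words(text, max_words=25_000):
    """Split text into consecutive pieces of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

pieces = chunk_words("word " * 60_000, max_words=25_000)
# 60,000 words split into chunks of 25,000, 25,000, and 10,000 words
```

Each chunk can then be summarized or analyzed separately, with the partial results stitched back together afterwards.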
How Are Brands Already Using GPT-4?
Unsurprisingly, Bing was one of the early adopters, as Microsoft increased its investment in OpenAI while – rather worryingly – laying off its ethical AI team at the same time.
Bing is already using GPT-4 to further its bid to change how people search for information online and – ultimately – dethrone Google.
Yet, it is not the only early adopter. Consider OpenAI’s work with Be My Eyes, an app that assists the visually impaired by helping them accomplish various tasks.
With GPT-4, its creators have developed a virtual volunteer, harnessing the new image recognition capabilities to ingest images, identify them, and offer assistance based on what GPT-4 sees.
Meanwhile, Khan Academy has also partnered with OpenAI, paving the way for the institution to implement “Khanmigo”, a GPT-4-powered virtual tutor that supports its students.
Other fascinating early applications of GPT-4 include:
- Stripe scanning inbound communications, assessing the customer’s syntax, and spotting potential fraudsters.
- The Government of Iceland preserving the native Icelandic language.
- Duolingo automating conversations across multiple languages that explain grammatical rules to its Max subscribers.
- Morgan Stanley managing, organizing, and searching its colossal content library.
As GPT-4 rolls out, brands will likely uncover many more use cases. Yet, for now, users can only access GPT-4 via the ChatGPT Plus subscription service.
Moreover, OpenAI has warned that it will introduce usage caps, anticipating a significant capacity strain. It has also floated the possibility of a separate subscription service for GPT-4 in the near future.
How Could This Impact Customer Experience?
By mimicking the style of particular writers, GPT-4 may allow businesses to more easily create a consistent tone of voice across their platforms and customer engagement channels.
Yet, this capability could also aid the development of conversational AI, as bots may mimic empathy and a certain level of reasoning.
The image recognition capabilities within GPT-4 will also further the development of conversational AI – opening up more opportunities to automate customer queries.
Moreover, as the Stripe example underlines, it may make biometrics capabilities more accessible, helping contact centers fight fraud.
GPT-4 could also enhance the agent-assist capabilities many CCaaS providers have already built using GPT-3.5 – perhaps by improving suggested responses via text-based channels.
Returning to conversational AI, it is likely that GPT-4 will generate better training data for bots and even help build the design flows for new customer intents.
These are only a handful of examples, and many bright minds from across the CX space will think up many more possibilities to harness the new features of GPT-4.
Nonetheless, even the latest iteration of GPT must come with a warning label attached. It will still make mistakes and invent facts when it has limited data to draw from.
Thankfully, OpenAI is attempting to train it to distinguish factual from inaccurate statements. Yet, brands must keep its limitations front of mind – especially for customer-facing use cases.
Learn more about the existing applications of ChatGPT in CX by reading our article: How Is ChatGPT Changing CX?