Two Minute Technology: GPT-3
What is GPT-3?
GPT-3, the third-generation Generative Pre-trained Transformer, is a neural network machine learning model trained on internet text to generate many kinds of human-like text. Developed by OpenAI, it needs only a small amount of input text to produce large volumes of relevant, sophisticated machine-generated text.
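To make that concrete, here is a minimal sketch of prompting GPT-3 through OpenAI's API. It assumes the classic openai Python package and an API key in an environment variable; the model name and prompt are illustrative, not a prescribed setup.

```python
# Minimal sketch: a short prompt in, machine-generated text out.
# Assumes the legacy `openai` package (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A small amount of input text is enough; GPT-3 continues it.
response = openai.Completion.create(
    model="text-davinci-002",  # illustrative GPT-3 model name
    prompt="Explain what a transformer neural network is in two sentences.",
    max_tokens=100,            # upper bound on generated length
    temperature=0.7,           # higher values = more varied output
)

print(response.choices[0].text.strip())
```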
GPT-3's deep learning neural network has 175 billion machine learning parameters, which made it the largest language model ever produced at the time of its 2020 release. To put that scale in perspective, Microsoft's Turing NLG, the previous record-holder, had 17 billion parameters. As a result, GPT-3 outperformed every prior language-prediction model, generating text convincing enough to seem as though a human could have written it. Below are two example conversations using GPT-3 technology. The first video is a conversation on existential themes between two AIs. The second video is a conversation between a person and an AI.
What can GPT-3 do?
The focus of GPT-3 is on generating natural human language text. Producing understandable content is a real challenge for machines that don't truly grasp the complexities and nuances of language. That is why GPT-3 was trained on vast amounts of text found on the Internet to learn how to produce realistic human writing.
GPT-3 has been used to create articles, poetry, stories, news reports, websites and dialogue. It is also used for automated conversational tasks, responding to whatever text a person types into a computer or phone with a new piece of text appropriate to the context, the way virtual assistants do. GPT-3 also powers chatbots in customer service centers that help answer customer questions. Researchers and developers are working to expand GPT-3's uses. For example, some researchers are building AI therapists on top of GPT-3 models. This video shows an example of how GPT-3 could be used to create an AI therapist. One potential improvement to this model could be slowing down its responses, which can lend a more human touch to the interaction. What do you think could improve this AI therapist?
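As an illustration of the conversational use case, here is a hedged sketch of a bare-bones GPT-3 chatbot loop that keeps the running dialogue in the prompt so each reply has context. The model name, prompt format, and stop sequence are assumptions for illustration, not a specific product's recipe.

```python
# Sketch of a simple GPT-3 chatbot: the conversation so far is
# appended to the prompt so every reply sees the full context.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = "The following is a conversation with a helpful assistant.\n"

while True:
    user_input = input("You: ")
    if not user_input:
        break
    history += f"Human: {user_input}\nAssistant:"
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative model name
        prompt=history,
        max_tokens=150,
        temperature=0.7,
        stop=["Human:"],  # stop before the model writes the user's next turn
    )
    reply = response.choices[0].text.strip()
    history += f" {reply}\n"
    print(f"Assistant: {reply}")
```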
Risks and Limitations
The biggest limitation is that GPT-3 is not continually learning. Since it is pre-trained, it has no ongoing long-term memory that learns after each interaction. Like all neural networks, GPT-3 lacks the ability to explain and interpret why certain inputs result in specific outputs. Furthermore, it has a limited input size: its context window of roughly 2,048 tokens is shared between the prompt and the generated output, so a user cannot feed it long documents as input.
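As a rough illustration of that limit, the sketch below counts prompt tokens with OpenAI's tiktoken tokenizer, assuming the r50k_base encoding used by the original GPT-3 models, and checks whether a prompt leaves room for a completion inside the 2,048-token window.

```python
# Sketch: check a prompt against GPT-3's ~2,048-token context window.
# Assumes the `tiktoken` package (pip install tiktoken); "r50k_base"
# is the encoding associated with the original GPT-3 models.
import tiktoken

CONTEXT_WINDOW = 2048   # shared between the prompt and the completion
MAX_COMPLETION = 256    # tokens we want to reserve for the reply

enc = tiktoken.get_encoding("r50k_base")

def fits_in_context(prompt: str) -> bool:
    """Return True if the prompt leaves room for the completion."""
    n_tokens = len(enc.encode(prompt))
    return n_tokens + MAX_COMPLETION <= CONTEXT_WINDOW

print(fits_in_context("A short prompt easily fits."))  # True
```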
A primary concern with GPT-3 is the risk of machine learning bias. Since the model was trained on Internet text, many of the biases that humans exhibit on the Internet are reflected in GPT-3's output. For example, researchers at the Middlebury Institute of International Studies found that GPT-3 is particularly adept at generating radical text that mimics conspiracy theorists and white supremacists. The quality of this generated text is high enough that people worry GPT-3 will be used to create "fake news" articles. For more information on GPT-3, feel free to check out OpenAI's website at the following link!
GPT-3 Powers the Next Generation of Apps
Message from the Blogger
Hello! Thank you for reading this blog post. I hope you enjoyed it and learned something new today. See you next time!