To make auto-generated text more human and helpful, OpenAI created the Generative Pre-trained Transformer 3 (GPT-3) language model, which generates realistic, human-like text.
With just a small quantity of input text, GPT-3 has been used to write articles, poems, stories, news reports, and dialogue, and it can produce large volumes of high-quality copy.
GPT-3 was developed by OpenAI, an AI research firm co-founded by Elon Musk, and is considered one of the most significant and usable advances in AI in years. It has 175 billion learnable parameters, making it ten times larger than any language model constructed before it.
Learn how to use GPT-3 in a variety of exciting and helpful ways, from employing it as a writing aid to developing a complex chatbot designed to converse with you about your favourite topics.
Working with OpenAI
OpenAI Playground is a web-based platform for experimenting with and prototyping GPT-3-based solutions. Here, we will work on a project and try out a few prototypes that solve a variety of problems.
We’ll also look at how to convert the work you completed in the Playground into a standalone Python programme in the next blog.
To follow the examples in this tutorial, all you need is an OpenAI GPT-3 licence.
Currently, OpenAI runs a beta programme for GPT-3, and you can request a beta licence from them directly.
You only need Python 3.6 or newer installed if you are interested in writing GPT-3 applications in Python.
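If you do plan to write Python code against GPT-3 later, the shape of a request is worth seeing up front. The sketch below assumes the beta-era `openai` Python package and its `Completion.create` call; the helper names are my own, and the parameter values mirror the Playground settings discussed later in this post.

```python
import os

# Hypothetical helper: collect the Playground-style settings into the
# keyword arguments the beta-era Completion API expects.
def build_completion_request(prompt):
    return {
        "engine": "davinci",   # the most advanced model at the time of writing
        "prompt": prompt,
        "temperature": 0,      # deterministic output, as in our first tests
        "max_tokens": 64,      # the Playground's default response length
    }

def complete(prompt):
    # Only hit the API when a key is configured; otherwise return None.
    api_key = os.environ.get("OPENAI_API_KEY")
    if api_key is None:
        return None
    import openai  # assumes the beta-era `openai` package is installed
    openai.api_key = api_key
    response = openai.Completion.create(**build_completion_request(prompt))
    return response["choices"][0]["text"]
```

We will see what each of these parameters does in the Playground sections that follow.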
The OpenAI Playground
As mentioned above, GPT-3 must be trained to produce the desired text output, and this can easily be done in the OpenAI Playground.
Before starting, let’s understand the main aspects of this interface:
On the right side, the sidebar has options to configure the kind of output GPT-3 is expected to produce.
There is a large text area where you interact with the GPT-3 engine. The paragraph in bold font is what GPT-3 receives as input. The key aspect of training the engine is teaching it what type of text to generate, by giving it examples.
The second appearance of the prefix is the last part of the input: we are giving GPT-3 a section that has a prefix and text, followed by a line that has the prefix alone. This hints to GPT-3 that it needs to produce some text to finish the second paragraph, matching the tone and style of the first.
When you are done with the training text and have set the options as required, press “Submit” at the bottom; GPT-3 looks over the input text and generates more text to match. If you press “Submit” again, GPT-3 runs again and produces another block of text.
Log in to the OpenAI Playground and make yourself familiar with the interface.
In the navigation bar, at the top right corner, select one of the available language models:
In this tutorial, we are going to use the DaVinci model, which is the most advanced right now, so make sure you select it too. Once you learn how to work with the Playground, you can try the other models as well.
Above the text area, there is another dropdown with the label “Load a preset…”
Here, OpenAI provides ready-to-use presets for different applications of GPT-3.
Select the “English to French” preset. After choosing a preset, the text area contents are updated with a predetermined training text for it. The settings in the right sidebar are also updated.
In the case of the “English to French” template, the text shows a few English sentences, each with its French translation:
As in the example above, each line starts with a prefix. Since this application has English lines and French lines, the two prefixes differ, to help GPT-3 make sense of the pattern.
Note: there is an empty English prefix at the bottom of the text, where we can enter the text we would like GPT-3 to translate to French. Enter a sentence in English and then press the “Submit” button to have GPT-3 generate the French translation. Here is the example that I used:
Each preset provided by OpenAI is easy to use and accessible, so it is a great idea to try the others. The “Q&A” and “Summarize for a 2nd grader” presets are highly recommended.
Creating GPT-3 Applications
The presets provided by the OpenAI Playground are fun to work with, but you can also bring your own ideas to the GPT-3 engine. In this section, we are going to explore the options the Playground provides for creating applications.
Generating a GPT-3-based solution requires writing the input text that trains the engine and adjusting the sidebar settings to your requirements.
To follow along with the examples in this section, make sure to reset the Playground back to its default settings. To do this, remove all the text from the text area, and if you have selected a preset, click the “x” next to its name to remove it.
The most important setting for controlling the output of the GPT-3 engine is ‘temperature’, which controls the randomness of the generated text. A value of 0 makes the engine deterministic: it will always generate the same output for a given input text. A value of 1 makes the engine take the most risks and be at its most creative.
Let’s start prototyping an application by setting the temperature to 0. The “Top P” parameter also has some control over the randomness of the response, so make sure that it is at its default value of 1. Set all the other parameters to their default values as well.
This configuration makes GPT-3 behave predictably, which is a good starting point for trying things out.
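Temperature is easier to reason about with a toy example. The sketch below is my own illustration, not OpenAI’s code: it shows how dividing made-up token scores by the temperature before a softmax reshapes the distribution. At 0 the top token always wins; higher values flatten the odds and let less likely tokens through.

```python
import math

# Illustrative only: how temperature reshapes a probability distribution
# over candidate next tokens. GPT-3's real vocabulary and scores are far
# larger; the logits passed in here are made-up numbers.
def apply_temperature(logits, temperature):
    if temperature == 0:
        # Degenerate case: always pick the single most likely token.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With `apply_temperature([2.0, 1.0], 0.5)` the leading token dominates far more than with `apply_temperature([2.0, 1.0], 2.0)`, which matches the behaviour we will observe in the Playground below.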
You can type text and press “Submit” to see how GPT-3 adds more text. In the example below, the text ‘Python is’ was typed and the rest was generated by GPT-3.
Note that GPT-3 does not like input strings that end in a space, as this causes unpredictable behaviour. You may be tempted to add a space after the last word of your input, but keep in mind that this can cause problems. If by mistake you leave one or more spaces at the end of your input, the Playground will show a warning.
Raise the temperature to 0.5, remove the text generated above, and click “Submit” again with the text ‘Python is’. Now GPT-3 has more freedom while completing the sentence. As a result:
Try different temperature values to see how GPT-3 becomes more or less creative in its responses. Then set the temperature back to 0 and rerun the original ‘Python is’ request.
You may have noticed that GPT-3 often stops in the middle of a sentence. You can use the “Response Length” setting to control how much text is generated.
The default response length is 64, meaning that GPT-3 will add 64 tokens to the text, with a token being roughly a word or a punctuation mark. (Internally, GPT-3 works with subword tokens, so a long or unusual word may count as more than one token.)
With the temperature set to 0, a length of 64 tokens, and the text ‘Python is’, press “Submit”. You can then press “Submit” a second time to have GPT-3 append another set of 64 tokens at the end.
The output above ends with an incomplete sentence. To avoid this problem, you can set the length to a larger value than you need and then delete the incomplete part at the end.
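If you want a rough feel for how long your prompt or response is under the blog’s working definition of a token, a quick estimate is easy to script. This counter is an approximation of my own; GPT-3’s real tokenizer splits text into subword pieces, so treat it only as a ballpark figure.

```python
import re

# Rough token estimate: count words and individual punctuation marks.
# This follows the approximate definition used in this post, not GPT-3's
# actual subword tokenizer.
def estimate_tokens(text):
    return len(re.findall(r"\w+|[^\w\s]", text))
```

For example, `estimate_tokens("Python is a programming language.")` counts five words plus the full stop, giving 6.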
Using a short prefix for each line of text can help GPT-3 understand better what response is expected. Consider the following scenario: we want GPT-3 to generate names for metasyntactic variables that we can use in our code. These are commonly used placeholder variables in coding examples, such as foo and bar.
GPT-3 can be trained by showing it one of these variables and letting it generate more. We can use foo as input again, but this time we’ll press Enter and move the cursor to a new line to tell GPT-3 that the response should appear on the next line. Sadly, this does not work, because GPT-3 does not understand what we want:
GPT-3 cannot tell that we want more lines like the one we entered.
Let us try adding a prefix. With var: foo as our input, we also type var: on the second line to force GPT-3 to follow our pattern. Now that the second line is incomplete, unlike the first, we are giving a clear hint that we want “something like foo” added to it.
The “Stop Sequences” option, which appears at the bottom of the right sidebar, lets you define one or more sequences that, when generated, force GPT-3 to stop.
Following the example from the previous section, we would like to get only one new variable each time we invoke the GPT-3 engine. Since we are prefixing every line with var: and priming the engine by inserting the prefix alone on the last line of the input, we can use that same prefix as a stop sequence.
Find “Stop Sequences” in the sidebar and enter var: followed by the Tab key.
Now change the input text to include var: foo on the first line and just var: on the second, then click “Submit”. The outcome is a single variable:
Type var: again in the third line of the input text and submit, and you will get one more.
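Client code can mimic what a stop sequence does. The helper below is my own sketch of the behaviour: it cuts the generated text as soon as the sequence appears, which is what the Playground (and the API’s `stop` parameter) performs for you on the server side.

```python
# Sketch of stop-sequence behaviour: truncate the generated text at the
# first occurrence of the stop string, so only one completion survives.
def apply_stop_sequence(generated, stop):
    index = generated.find(stop)
    return generated if index == -1 else generated[:index]
```

So if GPT-3 kept going and produced `" bar\nvar: baz"`, applying the stop sequence `"var:"` would keep only `" bar\n"`, the single variable we asked for.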
Every time we want to request a response we have to type the prefix for the line GPT-3 has to complete.
In the settings, the “Inject Start Text” option tells the Playground what text to automatically append to the input before sending a request to GPT-3. Place the cursor in this field and type var:
With the start text injected, you can reduce the input to the single line var: foo. Press Enter to move the cursor to the beginning of the second line, then press “Submit” to display the next variable. Each time you submit, you’ll get a new one, with the prefix appended automatically.
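In code, “Inject Start Text” amounts to appending the prefix to your input before the request is sent, so you never have to retype the prefix for the line GPT-3 must complete. A minimal sketch of that behaviour (the helper name is my own):

```python
# Sketch of "Inject Start Text": append the start text to the user's input
# so the prompt always ends with the prefix GPT-3 should complete.
def build_prompt(user_text, inject_start_text="var:"):
    return user_text + "\n" + inject_start_text
```

Given the input `"var: foo"`, the prompt sent to the engine becomes `"var: foo\nvar:"`, exactly the two-line pattern we typed by hand earlier.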
Using multiple prefixes
The variable name generator we built in the previous sections takes the straightforward approach of showing GPT-3 a text sample and asking for more of the same.
Another way to engage with GPT-3 is to have it perform some type of analysis and transformation on the input text before generating the result. As an example of this type of interaction, we’ve only seen the English to French translation preset thus far. Other options include Q&A chatbots, having GPT-3 fix grammatical mistakes in the input text, and even more esoteric options like having it transform English design instructions into HTML.
The user and GPT-3 have a conversation in these projects, which necessitates the usage of two prefixes to distinguish between lines that belong to the user and lines that belong to GPT-3.
To showcase this approach, we will create a tweet bot that receives a complicated concept from the user and responds with a simple explanation that a five-year-old could comprehend.
Restore your Playground to its original state by clicking the trash icon.
We’ll show GPT-3 an example interaction to help us build the tweet generator. The key: prefix will be used on the line that shows the item we want explained, while tweet: will be used on the line that contains the explanation.
Make sure to use simple words in the response used for training, so that other responses are generated in a similar style.
Set the “Inject Start Text” option to [enter]tweet: to have the Playground inject the prefix for the GPT-3 line automatically.
We must also set a stop sequence so that GPT-3 knows when to stop. We can use key: here, to ensure that GPT-3 knows the “key” lines do not need to be generated. Keep in mind that you must press the Tab key to finish entering the stop sequence in this field.
Because the stop sequence is how we get GPT-3 to halt, I’ve set the response length to the maximum of 512. I’ve also reduced the temperature slider to 0.25 to keep the replies from being overly exaggerated or random, but this is an area where you may experiment with different settings to see what works best for you.
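Taken together, the tweet bot’s Playground settings map directly onto API-style request parameters. The sketch below assumes the beta-era Completion API; the training example text and helper name are invented for illustration, so substitute your own.

```python
# Invented one-shot training example; replace with your own key:/tweet: pair.
TRAINING_TEXT = (
    "key: neural network\n"
    "tweet: It is a computer program that learns from examples, "
    "a bit like how you learn by practising.\n"
)

# Hypothetical helper mirroring the Playground settings described above.
def build_tweet_request(concept):
    # Inject start text: the prompt ends with the "tweet:" prefix to complete.
    prompt = TRAINING_TEXT + "key: " + concept + "\ntweet:"
    return {
        "engine": "davinci",
        "prompt": prompt,
        "temperature": 0.25,  # keep replies from being too random
        "max_tokens": 512,    # large; the stop sequence ends the reply early
        "stop": "key:",       # don't let GPT-3 invent the next question
    }
```

Passing this dictionary to `openai.Completion.create(**build_tweet_request("gravity"))` would reproduce in Python what we just configured by hand in the Playground.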
If you started experimenting with the tweet generator in the last section, you may have noticed that every time you wish to ask the bot a new question, you have to retype the key: prefix.
The “Inject Restart Text” option in the sidebar can automatically insert the next prefix for you by appending some text after the GPT-3 response. I’ve typed in the key: prefix, followed by a space.
It’s much easier to interact with GPT-3 and have it explain things to us now!
The “Top P” option
The “Top P” parameter is an alternative way to control the randomness and creativity of the text GPT-3 generates. According to the OpenAI guidelines, only one of Temperature and Top P should be utilised, so when using one, make sure the other is set to 1.
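For intuition, Top P (nucleus) sampling can be sketched in a few lines: keep only the smallest set of most likely candidate tokens whose cumulative probability reaches p, then renormalise and sample from that reduced set. The probabilities below are invented for illustration; this is my own sketch, not OpenAI’s implementation.

```python
# Sketch of nucleus (Top P) filtering over a toy probability distribution.
def top_p_filter(probs, p):
    # Consider candidates from most to least likely.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break  # the "nucleus" now covers probability mass p
    # Zero out everything outside the nucleus and renormalise.
    filtered = [probs[i] if i in kept else 0.0 for i in range(len(probs))]
    norm = sum(filtered)
    return [f / norm for f in filtered]
```

With `p=0.5` and candidates `[0.5, 0.3, 0.2]`, only the top token survives; raising `p` to 0.8 lets the second token back in, which is why larger Top P values feel more creative.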
I wanted to observe how the GPT-3 responses differed when Top P was used instead of Temperature, so I set Temperature to 1 and Top P to 0.25:
As you can see, there isn’t much of a difference, but I believe the responses are of lower quality. Consider the time travel answer, which is a terrible explanation, as well as how GPT-3 repeats the concept of finding information in two of the answers.
I increased Top P to 0.5 to see if I could improve these answers:
At this value, GPT-3’s replies become more ambiguous and informal.
After experimenting with several projects and using both Temperature and Top P, I’ve concluded that Top P provides better control for applications where GPT-3 is expected to generate text with accuracy and correctness, whereas Temperature is best for applications that require unique, creative, or even amusing responses.
Even so, for this application I’ve found that utilising Top P with a value of 0.5 produces the best results.
We’ve experimented with the majority of the configuration choices and have a first interesting application in the form of the tweet bot.
You should save the bot and the parameters that you found to work best for you before we move on to the next project.
Begin by resetting the text to include only the training portion, along with the definition of a microphone on the third line, plus the key: prefix. Once the text has been reset to the initial training, use the floppy disc icon to save the project as a preset:
For each saved preset, you can provide a name and a description.
If you want to share this preset with others, use the share button; you’ll be given a URL to send to your friends:
Penalties for frequency and presence
Let’s have a look at two more of the options we haven’t explored yet. The “Frequency Penalty” and “Presence Penalty” sliders let you determine the level of repetition GPT-3 is permitted in its responses.
The frequency penalty reduces the likelihood of a word being chosen again the more times it has already been used. The presence penalty does not take into account how often a word is used, only whether it appears in the text.
The distinction between these two options is subtle, but think of Frequency Penalty as a way to avoid word repetitions and Presence Penalty as a way to avoid topic repetitions.
I haven’t been able to fully figure out how these two options work in practice. In general, I’ve observed that even with these settings at zero, GPT-3 rarely repeats itself, thanks to the randomization provided by the Temperature and/or Top P parameters. In the few instances when I did notice significant repetition, I simply set both sliders to 1 and the problem was solved.
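OpenAI’s API documentation does describe the arithmetic behind these sliders: a token’s score is reduced by its repetition count times the frequency penalty, plus a flat presence penalty once it has appeared at all. A sketch of that adjustment (helper name and example numbers are mine):

```python
# Sketch of the penalty adjustment applied to a candidate token's score
# before sampling, following the formula in OpenAI's API documentation.
def penalised_score(logit, count, frequency_penalty, presence_penalty):
    score = logit
    score -= count * frequency_penalty              # grows with each repetition
    score -= presence_penalty if count > 0 else 0.0 # one-off deduction
    return score
```

A token that has already appeared three times with both penalties at 1 loses four points of score, while an unseen token is untouched, which matches the word-versus-topic intuition above.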
The “Best Of” option
GPT-3 can generate several responses to a query when the “Best Of” option is set. The Playground then chooses the best one and presents it.
I haven’t found a good use for this option because I’m not sure how a decision is made between several options. When this option is set to a value other than 1, the Playground stops displaying responses as they are generated in real-time, as it needs to receive the entire list of responses before selecting the best one.
Displaying Word Probabilities
“Show Probabilities,” the last option in the options sidebar, is a debugging feature that lets you examine why certain tokens were chosen.
Activate the tweet bot preset once again. Set “Show Probabilities” to “Most Likely”, then restart the bot using the word “book” as the input. The resulting text is colourized:
The darker a word’s background, the more likely it was to be chosen. When you click a word, a list of all the words considered in that location of the text appears. Above, you can see that I selected the term “written,” which has a light hue, and it turned out to be the second most popular option after “pages.” Due to the randomization of the Top P and/or Temperature parameters, this word was chosen instead of the favourite.
When this option is set to “Least Likely,” the colourization is reversed, with darker backgrounds allocated to words that were selected although they were not a likely choice.
If you choose “Full Spectrum,” the least and most likely words will be colourized, with green tones for the most likely and reds for the least likely.
GPT-3 is humanizing communication with audiences and customers in multiple ways, from answering queries to generating realistic human text. It can be steered through presets, temperature, response length, prefixes, stop sequences, and more, not only to produce text summaries and programming code automatically, but also to chat, create content, write an email or blog post, and even play chess or chit-chat with you.
It outperforms fine-tuned language models trained on task-specific data across a variety of language tasks, using just a few examples and a task description. GPT-3 also does well on non-language tasks that involve reasoning, such as arithmetic.
No doubt it is as helpful as it is exciting.
To learn more about the workings of GPT-3 and how to effectively use it in your business, feel free to contact us at: firstname.lastname@example.org