Let’s Think Step by Step – Solving Complex Word Problems with GPT-3
In their paper Large Language Models are Zero-Shot Reasoners, researchers from the University of Tokyo reported an interesting discovery: adding the phrase “Let’s think step by step” to a prompt, right before the model begins its answer, dramatically increases the accuracy of GPT-3’s responses.
To see this for myself, I tried it with the obvious prompt “What is the meaning of life?” and got a very elaborate – and wise – answer!
Understanding the GPT-3 Model and Its Use Cases
GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model that uses deep learning to produce human-like text. Developed by OpenAI, it is one of the most advanced natural language processing models available today. It was trained on a massive dataset of over 45 TB of text, making it one of the largest language models ever created.
GPT-3 can be used for various tasks such as question answering, summarization, machine translation, dialogue generation, and more. It has also been used to generate code from natural language descriptions.
Using “Let’s Think Step by Step” Prompt to Increase Accuracy
“Let’s think step by step” can be used to increase accuracy when using GPT-3 to solve complex word problems. The technique is simple: append the phrase “Let’s think step by step” to the prompt, right where the model’s answer begins, to guide GPT-3 through its reasoning process. For example, to solve a math problem like “What is 6 + 7?”, we would add the phrase after the question and let the model reason its way to the answer.
Let's think step by step:
First, let's start with 6 + 7 = ?
We know that 6 + 7 = 13
So our answer is 13
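The prompt construction itself can be sketched in a few lines of Python. This is a minimal illustration, not an official API: the helper name build_cot_prompt and the Q:/A: layout are assumptions, though the layout follows the format used in the paper.

```python
# Zero-shot chain-of-thought prompting: append the trigger phrase
# where the model's answer begins, so its completion starts by reasoning.
# `build_cot_prompt` is a hypothetical helper name for this sketch.

COT_TRIGGER = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Place the question, then open the answer with the reasoning trigger."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = build_cot_prompt("What is 6 + 7?")
print(prompt)
# Q: What is 6 + 7?
# A: Let's think step by step.
```

The resulting string would then be sent to GPT-3 as the prompt of a completion request; the model continues from “Let’s think step by step.” and produces the intermediate reasoning before the final answer.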
This technique can also be used with more complex questions or prompts such as “What is the meaning of life?” By adding “Let’s think step by step” before this prompt, GPT-3 was able to provide an elaborate and insightful answer.
The key insight here is that providing guidance on how we want the model to reason—even at a basic level—can result in dramatic improvements in accuracy.
Using “Let’s think step by step” at the start of the model’s answer can significantly improve accuracy when using GPT models for complex tasks such as question answering or code generation. Although this technique does not guarantee correct results, it does help guide GPT models toward accurate solutions.