Getting set up with ChatGPT
Before we dive into the documentation, be sure you have a chat-based Large Language Model (LLM) environment set up. This could be a service you sign up for online (like ChatGPT) or a locally hosted open-source model—whatever suits your needs. Once you have an environment running, you should have a console or interface in which you can type prompts and receive responses.
Most chat-based LLMs provide:
A text input area (where you type your prompts).
A workspace that keeps track of your ongoing conversation.
The ability to generate responses based on your most recent input and the conversation history.
For example, if you want to test your setup, you might type something like:
"Write some lyrics in the style of Metallica."
After the model responds, you can follow up with:
"Rewrite the above in the style of Dr. Seuss."
Observe how the LLM’s response changes based on the conversation history. If you see appropriate responses, then your environment is working as expected!
This idea of “remembering” previous prompts and responses is crucial for developers. We often refer back to previously generated snippets—whether it’s debugging an exception, converting code from one language to another, or integrating a new library. The technical term for a model’s ability to incorporate previous conversation text is prompt context, and it’s one of five key prompting concepts we’ll explore in more detail throughout this documentation.
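One way to picture prompt context is as a running message list: each new request carries the whole conversation so far, not just the latest prompt. The sketch below illustrates this with the role/content message format that most chat APIs use; `build_request` is a hypothetical helper for illustration, not part of any particular SDK, and no real model is called.

```python
# A minimal sketch of prompt context: each request to a chat model
# includes the full conversation so far, not just the newest message.

def build_request(history, new_prompt):
    """Append the new user prompt to the running history and return
    the full message list that would be sent to the model."""
    history.append({"role": "user", "content": new_prompt})
    return list(history)

history = []

# First turn: the request contains only one message.
request = build_request(history, "Write some lyrics in the style of Metallica.")
print(len(request))  # 1

# Pretend the model replied; record its answer in the history.
history.append({"role": "assistant", "content": "(generated lyrics...)"})

# Second turn: the follow-up is sent *with* the earlier turns, which is
# how the model knows what "the above" refers to.
request = build_request(history, "Rewrite the above in the style of Dr. Seuss.")
print(len(request))  # 3
```

Because the follow-up request includes both the original prompt and the model’s reply, the model can resolve references like “the above”—which is exactly what you observed in the Metallica/Dr. Seuss test.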