
Prompt Stores

Understanding how prompt stores can be used to speed up the generative development process

Why Prompt Stores?

When working on a project with specific tools—for instance, React, Next.js, Chakra UI, Yup, and Axios—it can be tedious to re-specify them every single time you ask for a new component or feature. Prompt stores let you define shorthands or references up front, so you don’t have to restate them repeatedly.

  • Shorthand Example: You might tell the LLM, “When I say myStack, I mean React, Next.js, Chakra UI, Yup, and Axios.” Then, in subsequent prompts, you can just say “create a sign-up page using myStack” and the LLM will remember the specifics.
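
The same idea can be scripted when you call an LLM programmatically rather than through a chat UI. Below is a minimal sketch, assuming a plain JavaScript object as the store; the promptStore and expandPrompt names are illustrative, not part of any library:

// A prompt store can be as simple as a lookup table of shorthands.
const promptStore = {
  myStack: "React, Next.js, Chakra UI, Yup, and Axios",
};

// Expand every known shorthand before the prompt is sent to the model.
function expandPrompt(prompt, store) {
  return Object.entries(store).reduce(
    (text, [name, value]) => text.replaceAll(name, value),
    prompt
  );
}

// expandPrompt("create a sign-up page using myStack", promptStore)
// -> "create a sign-up page using React, Next.js, Chakra UI, Yup, and Axios"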

How Context Windows Work

Most chat-based LLMs maintain a context window (as discussed in the previous section)—a chunk of recent conversation that the model uses to inform its next response. When you reference a shorthand like myStack in subsequent prompts, the model looks back at that conversation context to recall what myStack was defined to include. However, each LLM has a token limit for how much text it can remember at once (its “window size”).

  • If your conversation exceeds that limit, older prompts can drop out of context and the model may “forget” them.

  • You can manage this by reintroducing important information or storing it in a separate reference outside the immediate conversation, as sketched below.
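
As an illustration of that bookkeeping, here is a minimal sketch of a rolling message buffer. The four-characters-per-token ratio is a rough assumption, not how any particular model tokenizes, and both helper names are hypothetical:

// Rough token estimate: ~4 characters per token (an assumption, not a real tokenizer).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Drop the oldest messages until the conversation fits the window.
function trimToWindow(messages, maxTokens) {
  const trimmed = [...messages];
  let total = trimmed.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (total > maxTokens && trimmed.length > 1) {
    const dropped = trimmed.shift(); // the oldest message "falls out" of context
    total -= estimateTokens(dropped.content);
  }
  return trimmed;
}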

Incorporating Vector Databases

To extend the context beyond what the LLM’s built-in window can handle, some workflows integrate vector databases (a minimal retrieval sketch follows the list):

  1. Embedding and Storage – You convert text (e.g., your project specs) into vector embeddings and store them in a vector database.

  2. Context Retrieval – When you query your LLM, relevant chunks of text can be pulled from the database and appended to your prompt, effectively refreshing the model’s memory.

  3. Scalable Knowledge Base – This approach can scale well for large projects, where you might have extensive documentation or domain-specific details that need to be referenced repeatedly.
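
As a minimal sketch of steps 1 and 2, here the embed() function is a stand-in for a real embedding model (stubbed with a trivial character-frequency vector so the sketch runs), and a plain in-memory array stands in for the vector database:

// Hypothetical embedding function; in practice this calls an embedding model.
// Stubbed with a character-frequency vector purely so the sketch is runnable.
function embed(text) {
  const vector = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) vector[i] += 1;
  }
  return vector;
}

// In-memory stand-in for a vector database: an array of { text, vector } entries.
const vectorStore = [];

// Step 1: embed a chunk of text and store it alongside its vector.
function storeChunk(text) {
  vectorStore.push({ text, vector: embed(text) });
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Step 2: pull the k most similar chunks and prepend them to the prompt.
function buildPrompt(query, k = 3) {
  const queryVector = embed(query);
  const context = [...vectorStore]
    .sort((a, b) => cosine(b.vector, queryVector) - cosine(a.vector, queryVector))
    .slice(0, k)
    .map((entry) => entry.text)
    .join("\n");
  return context + "\n\n" + query;
}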

Practical Example

Without Prompt Stores

“Create a contact form using React, Next.js, Chakra UI, Yup, and Axios.”

“Create a sign-up page using React, Next.js, Chakra UI, Yup, and Axios.”

…

You’re repeating the entire stack each time, which can become cumbersome.

With Prompt Stores and Context

  1. Define your stack once:

    “When I say myStack, please recall that it’s React, Next.js, Chakra UI, Yup, and Axios.”

  2. Use your shorthand:

    “Now, create a sign-up page using myStack.”

The LLM references the conversation’s context window (and potentially a vector database if you’ve set one up) to remember your definition of myStack.


Key Takeaways

  • Prompt Stores help you establish shorthand references to reduce repetition.

  • Context Windows enable the LLM to recall recent details in a conversation, but they have token limits.

  • Vector Databases can scale context management by storing and retrieving important chunks of text outside the LLM’s built-in memory.

  • Efficiency & Consistency: By combining prompt stores and context windows, you can rapidly iterate on your application’s features without constantly re-describing your tech stack or project constraints.

Example

Create a prompt store to store languages and libraries

Prompt

when I write bicycle-app-config remember that I mean "next.js, chakra ui, yup, and axios"

Response

Sure, I'll keep that in mind! When you write bicycle-app-config, I'll assume that you mean a configuration that includes Next.js, Chakra UI, Yup, and Axios.

Discussion

Perfect. Now that we have a prompt store that retains the context of the languages and libraries we use, let's see it in practice.

Prompt

create a contact form, bicycle-app-config

Response

Sure, here's an example of how you can create a contact form in a Next.js app with Chakra UI, Yup, and Axios, using the bicycle-app-config:

import React from "react";
import { Box, Button, FormControl, FormErrorMessage, FormLabel, Input, Textarea } from "@chakra-ui/react";
import { useFormik } from "formik";
import * as Yup from "yup";
import axios from "axios";

const contactSchema = Yup.object().shape({
  name: Yup.string()
    .required("Name is required")
    .matches(/^[a-zA-Z\s]*$/, "Invalid name format"),
  email: Yup.string().email("Invalid email format").required("Email is required"),
  message: Yup.string().required("Message is required"),
});

const ContactForm = () => {
  const formik = useFormik({
    initialValues: {
      name: "",
      email: "",
      message: "",
    },
    validationSchema: contactSchema,
    onSubmit: async (values, { setSubmitting }) => {
      try {
        const response = await axios.post("/api/contact", values);
        console.log(response.data);
      } catch (error) {
        console.error(error);
      } finally {
        setSubmitting(false);
      }
    },
  });

  return (
    <Box>
      <FormControl id="name" isInvalid={formik.touched.name && formik.errors.name}>
        <FormLabel>Name</FormLabel>
        <Input type="text" {...formik.getFieldProps("name")} />
        <FormErrorMessage>{formik.errors.name}</FormErrorMessage>
      </FormControl>
      <FormControl id="email" isInvalid={formik.touched.email && formik.errors.email}>
        <FormLabel>Email</FormLabel>
        <Input type="email" {...formik.getFieldProps("email")} />
        <FormErrorMessage>{formik.errors.email}</FormErrorMessage>
      </FormControl>
      <FormControl id="message" isInvalid={formik.touched.message && formik.errors.message}>
        <FormLabel>Message</FormLabel>
        <Textarea {...formik.getFieldProps("message")} />
        <FormErrorMessage>{formik.errors.message}</FormErrorMessage>
      </FormControl>
      <Button mt="4" colorScheme="teal" isLoading={formik.isSubmitting} onClick={formik.handleSubmit}>
        Submit
      </Button>
    </Box>
  );
};

export default ContactForm;

In this example, we're using Chakra UI's FormControl, FormErrorMessage, FormLabel, Input, and Textarea components to create a form that includes the fields for name, email, and message. We're also using the useFormik hook from formik to manage the form state and handle form submission.

We're passing the contact form schema to the validationSchema prop of the useFormik hook to validate the form data against the schema. The isInvalid prop of the FormControl component is set to true if the field has been touched and there is a validation error.

Finally, we're using the getFieldProps function from formik to bind each form input to the corresponding formik field, and using the handleSubmit function from formik to handle form submission. We're also using Axios to make an asynchronous POST request to the /api/contact endpoint when the form is submitted.

Note that in order for this example to work, you'll need to create an API route in your Next.js app at /pages/api/contact.js that handles the form submission and sends an email or performs other actions based on the submitted data.
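
For reference, here is a minimal sketch of such a route using the Next.js pages router, with the actual delivery step (sending an email, writing to a database, etc.) left as a placeholder:

// pages/api/contact.js
export default function handler(req, res) {
  if (req.method !== "POST") {
    return res.status(405).json({ message: "Method not allowed" });
  }

  const { name, email, message } = req.body;

  // Placeholder: send an email, write to a database, etc.
  console.log("Contact submission:", { name, email, message });

  return res.status(200).json({ message: "Message received" });
}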

Discussion

Using prompt stores can greatly expedite development while bringing consistency and standardization to the responses you receive. The concept of prompt stores can be applied not only to languages, but also to libraries, UI layouts, business rules, formats, security schemas, user scenarios, or just about any repeatable concept you can think of.
