Truth
Understanding the scope of truth related to generative AI
Most large language models are not trained to lie. They simply evaluate the prompt against their model (the data they were trained on) and return a response. These models are updated at a point in time, meaning they may have been built on data that is days, months, or years old. And while responses can sound very confident, AI can be wrong because it misunderstands context or application, or simply because its knowledge is out of date.
Time Accuracy
While this applies to all questions, it is especially true in software development due to the fast pace of change within languages, libraries, and platforms.
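One way to see this drift in practice is to compare a model's answer with a live source of truth. The sketch below is only illustrative: it assumes Python 3, and the package name is an arbitrary example of a fast-moving library. It queries PyPI's public JSON API for the current release, something a model trained months ago cannot know.

import json
import urllib.request

# Fetch the current release of a fast-moving package from the live
# PyPI registry. "requests" is just an illustrative choice.
package = "requests"
url = f"https://pypi.org/pypi/{package}/json"

with urllib.request.urlopen(url) as resp:
    latest = json.load(resp)["info"]["version"]

print(f"The live registry says {package} is at version {latest}.")
# An LLM asked the same question answers from its training cutoff,
# which may be several releases behind this value.

Comparing the two answers makes the staleness concrete: the registry reflects today, while the model reflects its last training update.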
Scope
Another limitation of LLMs is the scope of their knowledge. Say an updated charting library is being used in an existing development framework like React. A particular exception and its solution may not be something the model was trained on, so it may be unable to answer that question. Or if a certain method in a framework was never documented publicly, the LLM may respond that the method does not exist when it really does.
Summary
No matter how rapidly AI is refined, there will always be the possibility that it creates code resulting in exceptions it cannot solve. I have encountered a few situations where GPT continually generates clean code that compiles without issue, then suddenly gets caught on something it cannot resolve. Have patience and run through a structured troubleshooting process; one is provided at the end of this course as a resource for anytime you run into an issue while developing. To be successful, expect AI to be a tool for building applications, not an "app generator," at this point in time.
How to check when a model's data was last updated
Checking when a model's data was last updated will likely vary per model.
For ChatGPT, simply ask it as a prompt.
Prompt
Last updated
Response
This model's data was last updated in September 2021.
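You can also ask the same question programmatically. Below is a minimal sketch using the OpenAI Python SDK; the model name is a placeholder (substitute whichever model you are actually using) and an OPENAI_API_KEY environment variable is assumed.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model directly when its training data was last updated.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use the model you actually run
    messages=[
        {"role": "user", "content": "When was your training data last updated?"}
    ],
)

print(response.choices[0].message.content)
# Expect an answer like the one above; treat it as the boundary of
# what the model can reliably know, not as a guarantee of accuracy.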