Unfinished Responses

How to handle unfinished responses in generative AI

Many large language models limit how much text they will return in a single response, typically through a maximum output token count. A response can also be cut off partway through by a network outage.

To handle this, most chat-style LLMs accept a follow-up "please continue" message: because the partial answer is still in the conversation context, the model can infer where it left off and resume from that point.
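
Below is a minimal sketch of this pattern, assuming the OpenAI Python SDK; the model name, token limit, and the `ask_with_continuation` helper are illustrative placeholders, not part of any particular product's API. The key idea is to check whether the response stopped because of the output-token limit and, if so, append the partial answer plus a "please continue" message and call the model again.

```python
# Sketch of "please continue" handling, assuming the OpenAI Python SDK.
# Model name, token limit, and helper name are illustrative.
from openai import OpenAI

client = OpenAI()

def ask_with_continuation(prompt: str, model: str = "gpt-4o-mini",
                          max_output_tokens: int = 256,
                          max_rounds: int = 5) -> str:
    """Ask the model, sending 'please continue' until the answer is complete."""
    messages = [{"role": "user", "content": prompt}]
    parts = []

    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            max_tokens=max_output_tokens,
        )
        choice = response.choices[0]
        parts.append(choice.message.content or "")

        # finish_reason == "length" means the response hit the output-token
        # limit; anything else (e.g. "stop") means the model finished normally.
        if choice.finish_reason != "length":
            break

        # Keep the partial answer in the conversation so the model can infer
        # where it stopped, then ask it to continue.
        messages.append({"role": "assistant", "content": choice.message.content or ""})
        messages.append({"role": "user", "content": "Please continue from where you left off."})

    return "".join(parts)

if __name__ == "__main__":
    print(ask_with_continuation("List the first 100 prime numbers, one per line."))
```

This keeps the continuation logic on the client side: the loop stops as soon as the model signals a normal stop, and `max_rounds` caps the number of follow-up calls so a runaway response cannot loop forever.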
