This integration lets you easily add observability and monitoring to your LLM application built on Vercel’s AI SDK. Instrumentation is available for the two main methods of the Vercel AI SDK: generateText and streamText.
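
For instance, here is a minimal setup sketch, assuming the Literal AI TypeScript client and its instrumentation.vercel.instrument helper; the wrapped function names are illustrative:

TypeScript
import { generateText as baseGenerateText, streamText as baseStreamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { LiteralClient } from '@literalai/client';

// Assumes LITERAL_API_KEY is set in the environment
const literalClient = new LiteralClient();

// Wrap the Vercel AI SDK methods so their calls are logged to Literal AI
const generateText = literalClient.instrumentation.vercel.instrument(baseGenerateText);
const streamText = literalClient.instrumentation.vercel.instrument(baseStreamText);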

With Threads and Steps

In most cases, you will want to keep track of the generations from your application by grouping them into Threads or Steps. This is especially useful when you want to understand the context in which a generation was made, or to compare different generations.

You can attach a generation to a Thread or a Step by passing that Thread or Step as the literalAiParent parameter. The value should be an instance of a Thread or a Step that you have created using the Literal AI SDK.

TypeScript
export async function POST(req: Request) {
  // `question` is assumed to come from the JSON request body
  const { question } = await req.json();

  // Create (or update) the Thread that groups this generation
  const thread = await literalClient.thread({ name: "Example" }).upsert();

  // `generateText` is the instrumented version from the setup above
  const { text } = await generateText({
    model: openai('gpt-3.5-turbo'),
    prompt: question,
    // Attach the generation to the Thread
    literalAiParent: thread,
  });

  return { text };
}
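
The streaming method works the same way. Here is a minimal sketch reusing the instrumented streamText and literalClient from the setup above; toTextStreamResponse is the AI SDK helper that returns the text stream as an HTTP response:

TypeScript
export async function POST(req: Request) {
  const { question } = await req.json();

  const thread = await literalClient.thread({ name: "Example" }).upsert();

  // `streamText` is the instrumented version from the setup above
  const result = await streamText({
    model: openai('gpt-3.5-turbo'),
    prompt: question,
    literalAiParent: thread,
  });

  // Stream the generated text back to the client as it is produced
  return result.toTextStreamResponse();
}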

Cookbooks

You can find more involved examples in our Cookbooks repository:

  • This chatbot uses Vercel AI SDK’s useChat hook in the frontend
  • This example uses the Vercel AI SDK integration in the backend