This code integrates an asynchronous OpenAI client with a Literal AI client to build a conversational agent. It uses Literal AI's step decorators for structured logging and tool orchestration within the conversational flow. The agent processes user messages, decides when to invoke tools from a predefined set, and generates responses, with a maximum iteration limit to prevent infinite tool-calling loops.
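The agent loop described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the actual integration code: `fake_llm`, `run_agent`, `TOOLS`, and `MAX_ITERATIONS` are hypothetical names, and the LLM call is stubbed out; in the real integration the loop would call the async OpenAI client and the run and tool functions would carry Literal AI's step decorators.

```python
import asyncio
import json

MAX_ITERATIONS = 5  # cap on tool-call rounds to prevent infinite loops

# Hypothetical tool registry; in the real integration each tool function
# would be wrapped with a Literal AI step decorator of type "tool".
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny"})

TOOLS = {"get_weather": get_weather}

async def fake_llm(messages):
    """Stand-in for an async chat-completion call (assumption for this
    sketch): request a tool call once, then produce a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather", "arguments": {"city": "Paris"}}}
    return {"content": "It is sunny in Paris."}

async def run_agent(user_message: str) -> str:
    # In the real code this function would carry a Literal AI step
    # decorator of type "run" so the whole loop is logged as one run.
    messages = [{"role": "user", "content": user_message}]
    for _ in range(MAX_ITERATIONS):
        response = await fake_llm(messages)
        tool_call = response.get("tool_call")
        if tool_call is None:
            return response["content"]  # model produced a final answer
        # Execute the requested tool and feed its result back to the model.
        result = TOOLS[tool_call["name"]](**tool_call["arguments"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: maximum iteration limit reached."

print(asyncio.run(run_agent("What's the weather in Paris?")))
# → It is sunny in Paris.
```

The iteration cap is the important design point: each loop round is one LLM decision, so bounding the rounds bounds both latency and cost even if the model keeps requesting tools.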

This example demonstrates thread-based monitoring, which groups related steps into conversational threads so each conversation can be tracked and analyzed in detail.
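Conceptually, thread-based monitoring means every logged step is attributed to an enclosing conversation. The stand-in below illustrates that grouping with a plain context manager; the class and method names (`ThreadLogger`, `log_step`) are invented for this sketch, though the real Literal AI SDK exposes a similar thread context on its client.

```python
import contextlib
import uuid

class ThreadLogger:
    """Toy logger that tags every step with the active thread (illustration
    only; not the Literal AI SDK)."""

    def __init__(self):
        self.records = []
        self._current = None

    @contextlib.contextmanager
    def thread(self, name: str):
        # Open a thread: every step logged inside belongs to it.
        self._current = {"id": str(uuid.uuid4()), "name": name}
        try:
            yield self._current
        finally:
            self._current = None

    def log_step(self, kind: str, payload: str):
        self.records.append(
            {"thread": self._current["name"], "kind": kind, "payload": payload}
        )

logger = ThreadLogger()
with logger.thread(name="weather-chat"):
    logger.log_step("user_message", "What's the weather?")
    logger.log_step("llm_call", "tool decision")
    logger.log_step("assistant_message", "It is sunny.")

print([r["kind"] for r in logger.records])
# → ['user_message', 'llm_call', 'assistant_message']
```

Because every record carries its thread, a monitoring backend can reassemble the full conversation, including the intermediate LLM calls and tool runs, rather than showing isolated events.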

With the Literal AI integration, you can visualize threads, runs, and LLM calls directly on the Literal AI platform, improving the transparency and debuggability of your AI-driven applications.