Overview
Literal AI is an all-in-one observability, evaluation, and analytics platform for building production-grade LLM apps. It offers multimodal logging, including vision, audio, and video.
It covers a wide range of use cases, from conversational applications to task automation. Literal AI works with any LLM framework, such as Chainlit and LangChain, and integrates with many LLM providers, including OpenAI, Anthropic, and Mistral.
Literal AI is developed by the builders of Chainlit, the open-source Conversational AI Python framework.
Literal AI Platform Example Thread
Key Features
Observability
Monitor your multimodal LLM app (steps, feedback, prompts, token consumption) in minutes with our SDKs. Literal AI provides a unified view of all your data in one place.
Evaluation
Evaluate your threads and runs in real time using off-the-shelf and custom evaluators. Create datasets that mix production data and handwritten examples to run non-regression tests.
Prompt Collaboration
Safely design, test, debug, version, and deploy prompts directly from Literal AI.
Monitoring
Monitor the performance of your LLM system in production. View LLM metrics in a dashboard, set automated rules, and gather product and user analytics.
Next up
Get Started
Install the Literal AI SDK and get your API key.
Learn more about integrations
Learn how to use OpenAI, LangChain, and Chainlit with Literal AI.
More
Use this documentation to:
- Learn: Concepts, Integrations
- Find SDK references: Python, TypeScript
- Get inspired: Guides, Cookbook