Monitoring
Literal AI provides a Dashboard where you can monitor the usage of your application.
Metrics
Various charts inform you about:
- Performance: measure latency and token throughput per second.
- Quality: view human feedback scores, evaluation experiment scores, and product metrics.
- Cost: see the token usage per model within a time window.
- Volume: track the number of threads over time, the number of (active) users, the amount of feedback, and the token usage.
These metrics can be displayed over a specified time window.
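To illustrate how a windowed metric like token throughput works, here is a minimal sketch in plain Python. The record structure and function are hypothetical, not the Dashboard's actual implementation: each record holds a timestamp, the number of completion tokens, and the generation latency, and throughput is total tokens divided by total latency within the window.

```python
from datetime import datetime, timedelta

# Hypothetical generation records: (timestamp, completion_tokens, latency_seconds).
generations = [
    (datetime(2024, 1, 1, 12, 0), 120, 1.5),
    (datetime(2024, 1, 1, 12, 5), 300, 2.0),
    (datetime(2024, 1, 2, 9, 30), 80, 0.8),  # falls outside the window below
]

def throughput_in_window(records, start, end):
    """Average tokens generated per second for records inside [start, end)."""
    in_window = [(tok, lat) for ts, tok, lat in records if start <= ts < end]
    total_tokens = sum(tok for tok, _ in in_window)
    total_latency = sum(lat for _, lat in in_window)
    return total_tokens / total_latency if total_latency else 0.0

window_start = datetime(2024, 1, 1)
tps = throughput_in_window(generations, window_start, window_start + timedelta(days=1))
print(round(tps, 1))  # (120 + 300) / (1.5 + 2.0) = 120.0 tokens/second
```

Narrowing or widening the window changes which records are aggregated, which is what the Dashboard's time-window selector does for each chart.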
Literal AI Dashboard