All changes and improvements to Literal AI are listed here. For changes in the SDKs, see the Python SDK or TypeScript SDK changelogs.

Literal AI cloud is currently compatible with:

0.0.613-beta (July 23rd, 2024)

New features

  • Datasets: you can now create a dataset from a CSV file
  • Onboarding: empty pages on a new project will now include code snippets and instructions to start sending data to Literal AI
  • Navigation: the sidebar has been revamped for flatter navigation between platform modules

Improvements

  • A new, tighter tree view that better displays Chain of Thought reasoning
  • Add new Run/Generation filters when creating or updating an evaluation rule: model, duration, prompt lineage, prompt version
  • Improve editing of Azure OpenAI credentials
  • Various improvements on platform deployment, both for cloud and self-hosting

Bug fixes

  • Fix a bug with Azure OpenAI in the prompt playground and other LLM calls
  • The credentials table will now refresh correctly after creating or updating an item
  • Local LLMs are now correctly handled by the prompt playground and other LLM calls
  • Fix a bug on the prompt playground where changing settings could reset the message list

0.0.612-beta (July 10th, 2024)

New features

  • When scoring a step through the platform, we now track the user who created the score
  • We are preparing the platform for the upcoming release of the Annotation Queue
  • A new chart on the dashboard shows the number of runs per day per run name
  • Upon signup, a new account will now contain a default project populated with Threads, Steps, Datasets, etc.

Improvements

  • We are rolling out a new Role system. Possible roles are now: Admin, AI Engineer, and Domain Expert
  • We have revamped the creation, editing, and deletion of Rules for Online Evaluation
  • We have improved screen space management in tables, notably when displaying code previews
  • Some design tweaks on the dashboard and dark mode

Fixes

  • Fixed a bug where the generation was not correctly displayed in a run
  • Fixed a bug where some logos would not display correctly in dark mode
  • Fixed a bug where the scores API could break if no generationId was provided

0.0.611-beta (July 1st, 2024)

New features

  • Easily navigate the Runs view with arrow keys
  • You can now filter Runs/Generations by score presence
  • You can now bulk add Generations to Datasets from the UI
  • Dark theme for the diff editor, box plots and toasters
  • Added a new “Run” chart to the dashboard

Improvements

  • This version ships the first iteration of our UI revamp
  • Images are now zoomable in the Prompt Playground

0.0.610-beta (June 24, 2024)

Improvements

  • Update the feedback button
  • Extend “Rules” table with filters and pagination
  • Update “is null” and “is not null” filters with a more explicit behavior
  • Improve the score element UI

Fixes

  • Fixed an issue where annotators could not access content
  • Fixed an issue when double-clicking on a date-picker
  • Fixed an issue related to “Generation” links

0.0.609-beta (June 17, 2024)

New features

  • Added a “maintainer” role for the project, which allows write access while preventing the user from managing the project

Improvements

  • Simplify the Generation and Step data handling
  • Rules can now be updated directly
  • In “Experiment” you can now see the diff between inputs and outputs columns
  • The navigation is improved
  • Score templates can now be accessed in the “Evaluate” section
  • When scoring with a “categorical” score, the category name is now used rather than the raw value
  • In the dashboards we no longer display nullish values as 0
  • Rules now have their own detail page

Fixes

  • Fixed an issue where prompt playground settings were not correctly persisted
  • Fixed an issue where some step rows were duplicated
  • Fixed an issue with the “dataset link”
  • Fixed an issue where it was not possible to select custom models in the playground
  • Fixed an issue with the “Generations” page pagination

0.0.608-beta (June 4, 2024)

New features

  • The compare feature is now available! It allows you to compare generations.
  • In the coming week, we will debut self-service distribution of the Literal AI platform for self-hosting
  • A new user role has been added: “Annotator”, a user who can add tags and scores to the observability entities (e.g. thread, step…) and has no access outside of those.
  • Project administrators can now pick a user’s role when inviting them.

Improvements

  • Literal AI API keys are now shared across the project. Previously, admins could create “personal” API keys. Access is still restricted to admins.
  • We’re continuing our push towards a more consistent - and prettier - User Experience:
    • We’ve switched to a more vibrant color scheme
    • Made some visual tweaks on the Dashboard page
    • Observability items such as Threads, Runs, and others will now display as full pages rather than side-panes
    • And lots of other improvements across the platform
  • Some changes to the way the platform is deployed, both on our end and for our on-premise users:
    • Improved and centralized environment management
    • The Portkey AI Gateway is now handled directly inside the Node process
    • The BUCKET_NAME environment variable is no longer mandatory. Trying to store objects will log errors but not disrupt the rest of the operations
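
    Below is a minimal, hypothetical sketch of the new behavior (the storeAttachment helper and its signature are invented for illustration; this is not the platform’s actual code, which runs inside the Node process):

    ```typescript
    // Illustration only: object storage becomes a no-op (with an error log)
    // when BUCKET_NAME is not configured, instead of failing the whole request.
    async function storeAttachment(key: string, body: Buffer): Promise<void> {
      const bucket = process.env.BUCKET_NAME; // now optional
      if (!bucket) {
        // Log the problem, but let the rest of the operations proceed.
        console.error(`Cannot store object "${key}": BUCKET_NAME is not set`);
        return;
      }
      // ... upload `body` to the configured bucket here ...
    }
    ```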

Fixes

  • This week’s release sees a big focus on performance, especially on the Users and individual Thread pages
  • We’ve also chased and squashed a few bugs related to:
    • Signing Attachment URLs (for object storage like S3)
    • Conflicts on unique userId
    • A visual bug on initialization of “continuous” score templates
  • Audio attachments now resolve correctly.
  • Links on prompt versions now point to the correct prompt.
  • The correct projects are now shown when accessing the prompt playground.

0.0.607-beta (May 27, 2024)

New features

  • Added online evaluations to score LLM generations on the fly.
  • Created the /api/my-project endpoint to quickly access a project ID with an API key (see the sketch after this list)
  • Brushed up the Dashboard page with:
    • Browser-level customizable layouts of charts
    • Filters on each chart to select relevant data - also saved at the browser level
  • For token usage specifically, we offer multiple visualizations
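
    A quick sketch of calling the new /api/my-project endpoint; the cloud base URL and the x-api-key header are assumptions about the deployment, so adjust them to your setup:

    ```typescript
    // Fetch the current project's ID using a Literal AI API key.
    // Base URL and header name are assumptions; adjust for self-hosted deployments.
    const response = await fetch("https://cloud.getliteral.ai/api/my-project", {
      headers: { "x-api-key": process.env.LITERAL_API_KEY ?? "" },
    });
    if (!response.ok) throw new Error(`Request failed with status ${response.status}`);
    console.log(await response.json()); // expected to contain the project ID
    ```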

Improvements

  • Improved the look of Step badges, specifically colors.
  • Sharing threads now requires an additional privilege
  • Improved UX on text, audio, image and video attachments in Step details
  • Prompt versions show a visual “Open” button to jump to the Prompt Playground
  • Revamped the UI look of the side navigation
  • Stop sequences on Prompt Playground now show visual cues
  • Removed UUID columns across tables to improve readability
  • JSON & Text previews come with full screen & copy/paste options

Fixes

  • Newly created API keys no longer contain special characters

0.0.606-beta (May 20, 2024)

Breaking Changes

  • Dataset: Renamed the intermediary steps’ expectedOutput field to output. In a Dataset item’s Intermediary Steps, the expectedOutput field is now simply output, because it holds the actual output of the LLM. This breaks backward compatibility for users relying on DatasetItem.intermediarySteps.expectedOutput; DatasetItem.expectedOutput itself remains unchanged.
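
    If your code reads intermediary steps from dataset items, a migration along these lines may be needed; the IntermediaryStep type and stepOutput helper below are hypothetical sketches, and only the field rename itself comes from the note above:

    ```typescript
    // Before this release: item.intermediarySteps[i].expectedOutput
    // After this release:  item.intermediarySteps[i].output
    // The top-level DatasetItem.expectedOutput is unchanged.
    type IntermediaryStep = { output?: unknown; expectedOutput?: unknown };

    function stepOutput(step: IntermediaryStep): unknown {
      // Prefer the new field name, falling back to the old one for
      // items fetched or exported before this release.
      return step.output ?? step.expectedOutput;
    }
    ```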

New Features

  • Attachments now come with preview widgets (multi-modality)

Improvements

  • A page change on tables now scrolls back to the top of the table
  • Threads without a name no longer fall back to showing their ID; they are now shown as N/A
  • We reduced the indent of navigation sub-menus
  • The prompt playground now persists the credentials for your session
  • Improved user feedback options from the UI
  • JSONs in tables now display on multiple lines with syntax highlighting
  • Improved dashboard performance with data fetch in separate requests

Fixes

  • Fixed creation of attachments and scores when step doesn’t exist
  • Fixed thread duplication when filtering on errors
  • Fixed the upserts of step input/output to prevent exceeding the size limit

0.0.605-beta (May 13, 2024)

New Features

  • Support GPT-4o as an LLM model
  • We now display a diff of the prompt settings when saving a prompt version
  • Steps now support tags

Improvements

  • We now populate the dataset item intermediarySteps when adding a step with child steps
  • The API credentials in the prompt playground have been moved
  • The Generation details view now has a link to the prompt
  • Support temperature settings higher than 1 for compatible LLMs

Fixes

  • Fix display bugs in prompt playground
  • Fix a bug where we allowed very large JSON inputs (see the size-check sketch after this list)
    • metadata is now limited to 1 MB
    • Step input and expectedOutput are limited to 3 MB
  • Fix a bug where full-text searching threads would lead to a spike in CPU usage
  • Fix a rare bug that could occur when ingesting multiple steps with a new tag
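
    A small, hypothetical client-side guard mirroring these limits (the assertWithinLimit helper is invented for illustration, and the use of binary megabytes is an assumption):

    ```typescript
    // Hypothetical pre-flight check mirroring the new server-side limits:
    // metadata up to 1 MB, step input/expectedOutput up to 3 MB (per the note above).
    const MB = 1024 * 1024;

    function assertWithinLimit(name: string, value: unknown, limitBytes: number): void {
      const bytes = Buffer.byteLength(JSON.stringify(value ?? null), "utf8");
      if (bytes > limitBytes) {
        throw new Error(`${name} is ${bytes} bytes, above the ${limitBytes}-byte limit`);
      }
    }

    // Example usage before sending a step:
    // assertWithinLimit("metadata", step.metadata, 1 * MB);
    // assertWithinLimit("input", step.input, 3 * MB);
    ```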

0.0.604-beta (May 6, 2024)

New Features

  • UI/UX: Page headers on the Literal AI platform now include a button that links to the documentation, improving the platform’s UX.
  • Release status page: Literal AI now has a release status page at https://literalai.betteruptime.com/, where you can see the uptime of the services.
  • Experiments/Score: You are now able to create Scores directly in Experiments.
  • Threads: There is now a search bar in the Threads table.

Improvements

  • Minor UI updates to:
    • Sidebar navigation
    • Scores table
    • Table filters
  • Tags: Pressing Enter now creates a Tag
  • Warn on dataset deletion
  • Persist playground settings
  • Warn when creating a prompt

Fixes

  • Fixed the dashboard evolution badge tooltip showing the wrong period

0.0.603-beta (April 29, 2024)

New Features

  • Credentials: You can now share your LLM credentials to better collaborate through the prompt playground.

Improvements

  • Dashboard: A new comparison badge on the dashboard displays how the data has evolved.
  • Thread and Dataset UI: At the top of a Thread or Dataset page, the location is now shown as a breadcrumb. This prevents getting lost in sheets and improves navigation.
  • Settings UI: Split Settings menu in the UI into sub-menus for General, LLM and Team.
  • Prompts: New Created by column on Prompt Version table, which improves the table display.

Fixes

  • Prompt Playground: Fixed model select overflow (a minimal change: long model names in the select are now truncated with an ellipsis when the width is reduced)
  • Experiments: In comparison mode, parameters are now more explicit. In addition, the charts were inverted, which is now fixed.
  • Filters: Added handling for edge cases of the is null and not in filters on tags. This fixes the tags filters in tables, which were not working as intended before.
  • Tags: Newly created Tags are now visible in the UI when a Thread, Step or Generation page is refreshed. Tags are now refetched on page refresh.
  • Tags: Tags can now be added to generations at creation time.