Maxim AI release notes

✅ Bring your own evaluators into Maxim using webhooks

 

New

  
  • We now support adding custom evaluators (API-based evaluators) using webhooks.
  • You can expose your evaluator's configuration, i.e., its endpoint, method, and payload requirements, and we call this webhook while running test cases (see the sketch below).
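For illustration, here is a minimal sketch of what such a webhook endpoint could look like, assuming an Express server written in TypeScript. The request fields (input, output) and response fields (score, reasoning) are assumptions for this example, not a required schema; they should match the payload configuration you expose in Maxim.

import express from "express";

const app = express();
app.use(express.json());

// Hypothetical evaluator endpoint that Maxim calls for each test case.
app.post("/evaluate", (req, res) => {
    const { input, output } = req.body; // assumed payload fields
    // Toy scoring logic: pass if the output references the input.
    const passed = typeof output === "string" && output.includes(input ?? "");
    res.json({
        score: passed ? 1 : 0,
        reasoning: passed ? "Output references the input" : "Output does not reference the input",
    });
});

app.listen(3000);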
 

Improvement

  
  • Custom AI Evaluators now support the context field. This helps you evaluate test set entries more accurately.
 

Fix

  
  • We fixed our dataset import from CSV to consider the context field.

📀 Add dataset entries using our SDK

 

New

  
  • We now support adding entries to a dataset using our SDK. To add an entry:
//... initialize SDK using API key
await maxim.addDatasetEntries("dataset-id", [
    {
        input: {
            type: VariableType.TEXT,
            payload: "Hello",
        },
        expectedOutput: {
            type: VariableType.TEXT,
            payload: "Hello",
        },
        context: {
            type: VariableType.JSON,
            payload: '{"key":"value"}',
        },
    },
]);

expectedOutput and context are optional fields. View complete documentation here: https://www.getmaxim.ai/docs/nodejssdk#dataset-management

๐Ÿท๏ธ Filter prompts using tags, folders and folder tags

 

New

  
  • You can now use tags on prompts and/or folders to retrieve prompts using the Maxim SDK. This gives you more flexibility to fetch prompts in bulk for a workflow or by any other tag.

Examples

  1. Filtering prompts using tags
const prompts = await maximClient.getPrompts(
    new QueryBuilder()
        .and()
        .deploymentVar("Environment", "prod")
        .tag("CustomerId", "123")
        .build(),
);
  2. Filtering using a folder
const folder = ...
const prompts = await maximClient.getPrompts(
    new QueryBuilder()
        .and()
        .folder(folder.id)
        .deploymentVar("Environment", "prod")
        .build(),
);

🌎 Add context to your prompts and workflows

 

New

  
  • To re-create real-world scenarios, you can now attach context information to your prompts in the prompt or prompt-experiment playground.


  • We have added support for context in datasets and workflows as well.


🧠 Now you can send message history with prompt and prompt experiment runs

 

New

  
  • Prompts and Prompt Experiments now support sending chat history with each run to test your chatbot-like workflows quickly.
  • Head to any Prompt or Prompt Experiment and configure the Memory field.
 

Fix

  
  • We have patched some UX inconsistencies in the Prompt view.

🧪 Iterate on your prompts using Prompt Experiments

 

New

  
  • We are introducing Prompt Experiments to help you iterate and evaluate prompts side-by-side.
  • To start, click on Prompt Experiments from the top bar or the Compare button on any prompt.

Now deploy your prompts more confidently with Deployment Variables

 

New

  

We have added deployment variables to prompts, giving you enhanced control over how and where your prompts are deployed.

How to Use Deployment Variables:

  1. Navigate to the Deploy prompt dialog by clicking the Deploy button.
  2. Click Configure Deployment Fields to access the deployment variables section.
  3. Set up your conditions by adding the appropriate deployment variables.

Once you deploy, you can see all active deployments on the prompt view.

You can use our 5-line JS SDK integration or a single API call to use these prompts in your service. More info 👉 https://getmaxim.ai/docs/nodejssdk
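For example, once a prompt is deployed with an Environment deployment variable set to prod, you can fetch it with the SDK using the same QueryBuilder pattern shown in the tag-filtering example above ("Environment" and "prod" are just example values here):

// Fetch prompts deployed for a given environment.
const prodPrompts = await maximClient.getPrompts(
    new QueryBuilder()
        .and()
        .deploymentVar("Environment", "prod")
        .build(),
);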

Post-Script Support in Workflows

 

New

  
  • We are excited to introduce Post-Script Support for workflows: you can now add custom JavaScript functions to process the payload after a response is received within a workflow.

You can add/update post-scripts from your HTTP workflow UI.
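As a rough sketch, a post-script might parse the raw response and keep only the fields you care about downstream. The shape of the object your script receives depends on your workflow's HTTP response; the OpenAI-style chat completion body assumed below is only an example.

// Hypothetical post-script: assumes the function receives the raw HTTP
// response and returns the processed payload. Adjust the field access to
// match your workflow's actual response shape.
function postScript(response) {
    const body = typeof response.body === "string"
        ? JSON.parse(response.body)
        : response.body;
    // Keep only the assistant's answer for downstream evaluation.
    return { answer: body?.choices?.[0]?.message?.content ?? "" };
}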

Get notified after each test run

 

New

  
  • We have added webhook support to notify you about different events inside your Maxim account, starting with Test Run results.
  • You can add your webhooks from the settings dialog.


  • Click on Add New


  • You will get test run notifications in the corresponding Slack channel once you add the webhook.


Improved workflow execution

 

New

  
  • HTTP workflows now support advanced settings like controlling the timeout for your workflow execution.


 

Improvement

  
  • Fine-tuned workflow execution, which significantly reduces overall test execution time.