Monitoring#

The monitoring mode of a project helps you keep track of model health in production and set up alerts for when your model is not performing as expected. You will use the methods described on this page to create an inference pipeline, publish production data, and upload reference datasets.

To use these methods, you must have:

  1. Authenticated, using openlayer.OpenlayerClient

  2. Created a project, using openlayer.OpenlayerClient.create_project

Related guide: How to set up monitoring.
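
A minimal sketch of these prerequisites, assuming an API key generated on the Openlayer platform and that the task type enum is importable from openlayer.tasks; the project name and task type below are placeholders.

    import openlayer
    from openlayer.tasks import TaskType

    # Authenticate with your Openlayer API key (placeholder value)
    client = openlayer.OpenlayerClient("YOUR_API_KEY_HERE")

    # Create the project that will hold the inference pipeline
    # (name and task type are placeholders -- use your own)
    project = client.create_project(
        name="Churn Prediction",
        task_type=TaskType.TabularClassification,
    )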

Creating and loading inference pipelines#

The inference pipeline represents a model deployed in production. It is part of an Openlayer project and is what enables the monitoring mode.

Project.create_inference_pipeline(*args, ...)

Creates an inference pipeline in an Openlayer project.

Project.load_inference_pipeline(*args, **kwargs)

Loads an existing inference pipeline from an Openlayer project.
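
A short sketch of both calls, using the project object from the previous example; the pipeline name and description are placeholders. Typically, create_inference_pipeline is called once and load_inference_pipeline is used in subsequent runs.

    # Create the inference pipeline the first time (name/description are placeholders)
    inference_pipeline = project.create_inference_pipeline(
        name="production",
        description="Monitors the model deployed to production.",
    )

    # In later runs, load the existing pipeline instead of recreating it
    inference_pipeline = project.load_inference_pipeline(name="production")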

Tracing#

If you have a multi-step system (e.g., RAG), you can trace all the steps in the system by decorating the functions with the @trace() decorator.

openlayer.tracing.tracer.trace(*step_args, ...)

Decorator to trace a function.
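
A sketch of tracing a two-step, RAG-style system. The retrieval and generation functions are hypothetical stand-ins for your own steps; only the @trace() decorator and its import path come from the API above.

    from openlayer.tracing import tracer

    @tracer.trace()
    def retrieve_context(query: str) -> str:
        # Hypothetical retrieval step -- replace with your vector store lookup
        return "retrieved context for: " + query

    @tracer.trace()
    def generate_answer(query: str, context: str) -> str:
        # Hypothetical generation step -- replace with your LLM call
        return f"answer to '{query}' given '{context}'"

    @tracer.trace()
    def answer_question(query: str) -> str:
        # The nested calls are captured as steps of a single trace
        context = retrieve_context(query)
        return generate_answer(query, context)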

Publishing production data#

LLMs#

If you are using an OpenAI LLM, you can simply switch monitoring on and off with a single line of code.

openlayer.llm_monitors.OpenAIMonitor([...])

Monitor inferences from OpenAI LLMs and upload traces to Openlayer.

openlayer.llm_monitors.AzureOpenAIMonitor([...])

Monitor inferences from Azure OpenAI LLMs and upload traces to Openlayer.
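
A sketch of wrapping an OpenAI client with the monitor, assuming the monitor accepts the client through a client keyword argument; the constructor arguments have varied across versions of the library, so check the class signatures above before relying on them.

    import openai
    from openlayer import llm_monitors

    openai_client = openai.OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder key

    # Wrapping the client here is assumed to switch monitoring on; calls made
    # through openai_client are then traced and uploaded to Openlayer.
    monitor = llm_monitors.OpenAIMonitor(client=openai_client)

    # Requests made with the wrapped client are recorded as inferences
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "How do I monitor my LLM?"}],
    )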

Traditional ML models#

For traditional ML models and other LLM providers, you can publish production data with the following methods.

InferencePipeline.publish_batch_data(*args, ...)

Publishes a batch of production data to the Openlayer platform.

InferencePipeline.stream_data(*args, **kwargs)

Streams production data to the Openlayer platform.

InferencePipeline.update_data(*args, **kwargs)

Updates values for data already on the Openlayer platform.

InferencePipeline.publish_ground_truths(...)

(Deprecated since version 0.1.0a21.)
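
A sketch of publishing a small batch of tabular predictions and streaming a single row, using the inference_pipeline object created earlier. The DataFrame columns and config keys are illustrative assumptions; match them to your own schema and the method signatures above.

    import pandas as pd

    # Illustrative batch of production rows -- column names are placeholders
    batch_df = pd.DataFrame(
        {
            "inference_id": ["a1", "a2"],
            "age": [34, 52],
            "balance": [1200.0, 830.5],
            "prediction": [0, 1],
        }
    )

    # Config keys below are assumptions -- check publish_batch_data's signature
    batch_config = {
        "inferenceIdColumnName": "inference_id",
        "predictionsColumnName": "prediction",
    }

    inference_pipeline.publish_batch_data(batch_df=batch_df, batch_config=batch_config)

    # stream_data follows the same idea, one row at a time
    inference_pipeline.stream_data(
        stream_data={"inference_id": "a3", "age": 41, "balance": 990.0, "prediction": 0},
        stream_config=batch_config,
    )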

Uploading reference datasets#

Reference datasets can be uploaded to an inference pipeline to enable data drift goals. The production data will be compared to the reference dataset to measure drift.

InferencePipeline.upload_reference_dataset(...)

Uploads a reference dataset saved as a CSV file to an inference pipeline.

InferencePipeline.upload_reference_dataframe(...)

Uploads a reference dataset (a pandas dataframe) to an inference pipeline.
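
A sketch of uploading a training set as the reference dataset, again using the inference_pipeline object from above. The columns and dataset config keys are illustrative assumptions modeled on the batch config example; check upload_reference_dataframe's signature for the exact arguments.

    import pandas as pd

    # Reference (e.g., training) data -- columns are placeholders
    reference_df = pd.DataFrame(
        {
            "age": [29, 47, 38],
            "balance": [640.0, 2100.0, 1500.0],
            "label": [0, 1, 0],
        }
    )

    # Config keys are assumptions -- see upload_reference_dataframe's signature
    dataset_config = {
        "labelColumnName": "label",
        "featureNames": ["age", "balance"],
    }

    inference_pipeline.upload_reference_dataframe(
        dataset_df=reference_df,
        dataset_config=dataset_config,
    )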