openlayer.Project.create_inference_pipeline

Project.create_inference_pipeline(*args, **kwargs)

Creates an inference pipeline in an Openlayer project.

An inference pipeline represents a model that has been deployed in production.

Parameters:
name : str

Name of your inference pipeline. If not specified, the name will be set to "production".

Important

The inference pipeline name must be unique within a project.

description : str, optional

Inference pipeline description. If not specified, the description will be set to "Monitoring production data.".

reference_df : pd.DataFrame, optional

Dataframe containing your reference dataset. Providing it at creation time is optional: you can also add it later with the InferencePipeline.upload_reference_dataframe or InferencePipeline.upload_reference_dataset methods. Not needed if reference_dataset_file_path is provided.

reference_dataset_file_path : str, optional

Path to the reference dataset CSV file. Providing it at creation time is optional: you can also add it later with the InferencePipeline.upload_reference_dataframe or InferencePipeline.upload_reference_dataset methods. Not needed if reference_df is provided.

reference_dataset_config : Dict[str, any], optional

Dictionary containing the reference dataset configuration. This is not needed if reference_dataset_config_file_path is provided.

reference_dataset_config_file_path : str, optional

Path to the reference dataset configuration YAML file. This is not needed if reference_dataset_config is provided.

Returns:
InferencePipeline

An object that is used to interact with an inference pipeline on the Openlayer platform.

Examples

Related guide: How to set up monitoring.

Instantiate the client and retrieve an existing project:

>>> import openlayer
>>>
>>> client = openlayer.OpenlayerClient('YOUR_API_KEY_HERE')
>>>
>>> project = client.load_project(
...     name="Churn prediction"
... )

With the Project object retrieved, you can create an inference pipeline:

>>> inference_pipeline = project.create_inference_pipeline(
...     name="XGBoost model inference pipeline",
...     description="Online model deployed to SageMaker endpoint.",
... )
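
If you already have a reference dataset at hand, you can attach it at creation time by passing reference_df together with reference_dataset_config (or the corresponding file-path arguments). The snippet below is a minimal sketch: the config keys shown ("label", "classNames", "featureNames", "labelColumnName") and the file path are illustrative assumptions, so check the dataset configuration reference for the exact schema expected by your task type.

>>> import pandas as pd
>>>
>>> # Dataset used as the drift baseline (path is illustrative)
>>> reference_df = pd.read_csv("reference_dataset.csv")
>>>
>>> # Assumed config keys -- verify against the dataset configuration docs
>>> reference_dataset_config = {
...     "label": "reference",
...     "classNames": ["Retained", "Churned"],
...     "featureNames": ["CreditScore", "Age", "Balance"],
...     "labelColumnName": "Churned",
... }
>>>
>>> inference_pipeline = project.create_inference_pipeline(
...     name="XGBoost model inference pipeline",
...     description="Online model deployed to SageMaker endpoint.",
...     reference_df=reference_df,
...     reference_dataset_config=reference_dataset_config,
... )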

With the InferencePipeline object created, you can upload a reference dataset (used to measure drift) and publish production data to the Openlayer platform. Refer to InferencePipeline.upload_reference_dataset and InferencePipeline.publish_batch_data for detailed examples.
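
For orientation, a minimal sketch of those next steps follows. The keyword arguments (dataset_df, dataset_config, batch_df, batch_config) and the production_df / production_config placeholders are assumptions rather than the documented signatures; follow the references above for the authoritative usage.

>>> # Attach (or replace) the reference dataset after creation.
>>> # Keyword names are assumed -- see InferencePipeline.upload_reference_dataframe.
>>> inference_pipeline.upload_reference_dataframe(
...     dataset_df=reference_df,
...     dataset_config=reference_dataset_config,
... )
>>>
>>> # Publish a batch of production data for monitoring.
>>> # Keyword names are assumed -- see InferencePipeline.publish_batch_data.
>>> inference_pipeline.publish_batch_data(
...     batch_df=production_df,
...     batch_config=production_config,
... )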