Project.load_inference_pipeline(*args, **kwargs)

Loads an existing inference pipeline from an Openlayer project.

Parameters

name : str, optional
    Name of the inference pipeline to be loaded. The name of the inference pipeline is the one displayed on the Openlayer platform. If not specified, will try to load the inference pipeline named "production".


If you haven’t created the inference pipeline yet, you should use the create_inference_pipeline method.


Returns

InferencePipeline
    An object used to interact with an inference pipeline on the Openlayer platform.


Related guide: How to set up monitoring.

Examples

Instantiate the client and load a project:

>>> import openlayer
>>> client = openlayer.OpenlayerClient('YOUR_API_KEY_HERE')
>>> project = client.load_project(name="Churn prediction")

With the Project object retrieved, you can load the inference pipeline:

>>> inference_pipeline = project.load_inference_pipeline(
...     name="XGBoost model inference pipeline",
... )

With the InferencePipeline object retrieved, you can upload a reference dataset (used to measure drift) and publish production data to the Openlayer platform. Refer to InferencePipeline.upload_reference_dataset and InferencePipeline.publish_batch_data for detailed examples.
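As a sketch of that next step (the feature and prediction column names below are purely illustrative, and the publish call is shown commented out because it requires a live client), production data is typically assembled as a pandas DataFrame before being published:

```python
import pandas as pd

# Hypothetical batch of production rows; the column names here are
# illustrative only, not required by Openlayer.
batch = pd.DataFrame(
    {
        "CreditScore": [620, 710],
        "Balance": [1000.0, 250.5],
        "prediction": [1, 0],
    }
)

# With the inference_pipeline loaded as in the example above, the batch
# would then be published with something like (not executed here):
# inference_pipeline.publish_batch_data(batch_df=batch, batch_config=config)
print(len(batch))
```

The reference dataset passed to InferencePipeline.upload_reference_dataset would follow the same tabular shape, so drift can be measured column by column against production batches.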