openlayer.llm_monitors.OpenAIMonitor

class openlayer.llm_monitors.OpenAIMonitor(client=None, publish=None)

Monitor inferences from OpenAI LLMs and upload traces to Openlayer.

Parameters:
client : openai.api_client.Client

    The OpenAI client. It is required if you are using openai>=1.0.0.

Examples

Say you have a GPT model you want to monitor. You can turn on Openlayer monitoring in three steps:

  1. Set the environment variables:

export OPENAI_API_KEY=<your-openai-api-key>
export OPENLAYER_API_KEY=<your-openlayer-api-key>
export OPENLAYER_PROJECT_NAME=<your-project-name>
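If you prefer to configure these from Python rather than your shell, a minimal sketch using os.environ (set them before instantiating any clients; the variable names are the same ones listed above):

>>> import os
>>>
>>> os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
>>> os.environ["OPENLAYER_API_KEY"] = "<your-openlayer-api-key>"
>>> os.environ["OPENLAYER_PROJECT_NAME"] = "<your-project-name>"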
  2. Instantiate the monitor:

>>> from openlayer import llm_monitors
>>> from openai import OpenAI
>>>
>>> openai_client = OpenAI()
>>> monitor = llm_monitors.OpenAIMonitor(client=openai_client)
  3. Use the OpenAI model as you normally would:

From this point onward, you can make requests to your model as usual:

>>> openai_client.chat.completions.create(
...     model="gpt-3.5-turbo",
...     messages=[
...         {"role": "system", "content": "You are a helpful assistant."},
...         {"role": "user", "content": "How are you doing today?"},
...     ],
... )

The trace of this inference request is automatically uploaded to your Openlayer project.
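Monitoring is transparent to your code: the call still returns a regular openai response object, which you can use exactly as before. For example:

>>> response = openai_client.chat.completions.create(
...     model="gpt-3.5-turbo",
...     messages=[{"role": "user", "content": "How are you doing today?"}],
... )
>>> print(response.choices[0].message.content)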

Methods

get_cost_estimate(num_input_tokens, ...)
    Returns the cost estimate for a given model and number of tokens.
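A minimal usage sketch (the num_output_tokens and model keyword names are assumptions, since the full signature is elided above):

>>> # Hypothetical call; parameter names beyond num_input_tokens are assumed
>>> monitor.get_cost_estimate(
...     num_input_tokens=100,
...     num_output_tokens=50,
...     model="gpt-3.5-turbo",
... )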

monitor_thread_run(run)
    Monitor a run from an OpenAI assistant.
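For example, after retrieving a completed run from the OpenAI Assistants API, you might pass it to the monitor (a sketch; only the monitor_thread_run(run) signature above is confirmed, and the placeholder IDs are yours to fill in):

>>> run = openai_client.beta.threads.runs.retrieve(
...     thread_id="<your-thread-id>", run_id="<your-run-id>"
... )
>>> if run.status == "completed":
...     monitor.monitor_thread_run(run)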

start_monitoring()
    (Deprecated) Start monitoring the OpenAI assistant.

stop_monitoring()
    (Deprecated) Stop monitoring the OpenAI assistant.