pyspark - How to use input_example in an MLflow-logged ONNX model in Databricks to make predictions?

I logged an ONNX model (converted from a PySpark model) in MLflow like this:

with mlflow.start_run() as run:
    mlflow.onnx.log_model(
        onnx_model=my_onnx_model,
        artifact_path="onnx_model",
        input_example=input_example,
    )

where input_example is a pandas DataFrame that gets saved to the run's artifacts.
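
For reference, a minimal sketch of what such an input example might look like (the column names here are purely illustrative, not taken from the actual model):

import pandas as pd

# Hypothetical feature columns; replace with the columns your model expects
input_example = pd.DataFrame({
    "feature_1": [0.5, 1.3],
    "feature_2": [2.0, 0.7],
})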

On the Databricks experiments page, I can see the logged model along with an input_example.json file that indeed contains the data I provided as input_example when logging the model.

How can I now use that data to make predictions and test whether the ONNX model was logged correctly? On the model artifacts page in the Databricks UI, I see:

from mlflow.models import validate_serving_input

model_uri = 'runs:/<some-model-id>/onnx_model'

# The logged model does not contain an input_example.
# Manually generate a serving payload to verify your model prior to deployment.
from mlflow.models import convert_input_example_to_serving_input

# Define INPUT_EXAMPLE via assignment with your own input example to the model
# A valid input example is a data instance suitable for pyfunc prediction
serving_payload = convert_input_example_to_serving_input(INPUT_EXAMPLE)

# Validate the serving payload works on the model
validate_serving_input(model_uri, serving_payload)
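
Building on that snippet, one way to test the logged model is to load the input example that was saved with it and feed it back through the validation helpers. The following is a minimal sketch, assuming the model was logged with input_example as shown above; it uses the standard MLflow APIs mlflow.artifacts.download_artifacts and mlflow.models.Model.load_input_example, and <some-model-id> is the same placeholder as in the snippet:

import mlflow
from mlflow.models import Model, convert_input_example_to_serving_input, validate_serving_input

model_uri = "runs:/<some-model-id>/onnx_model"

# Download the model artifacts locally and load the logged input example
local_path = mlflow.artifacts.download_artifacts(model_uri)
input_example = Model.load(local_path).load_input_example(local_path)

# Convert the example to a serving payload and validate it against the model
serving_payload = convert_input_example_to_serving_input(input_example)
validate_serving_input(model_uri, serving_payload)

# Alternatively, load the model as a pyfunc and predict on the example directly
# (assuming the ONNX pyfunc wrapper accepts the DataFrame's columns as inputs)
pyfunc_model = mlflow.pyfunc.load_model(model_uri)
predictions = pyfunc_model.predict(input_example)
print(predictions)

If validate_serving_input raises no errors and the pyfunc predict call returns sensible output, that suggests the ONNX model was logged correctly.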
