MLflow

Users can now use MLflow with BentoML through the following APIs: load and load_runner, as shown below:

import bentoml
import mlflow
import pandas as pd

# `load` the model back into memory:
model = bentoml.mlflow.load("mlflow_sklearn_model:latest")
model.predict(pd.DataFrame([[1, 2, 3]]))

# Load a given tag and run it under the `Runner` abstraction with `load_runner`
runner = bentoml.mlflow.load_runner("mlflow_sklearn_model:latest")
runner.run_batch([[1,2,3,4,5]])

BentoML also offers import_from_uri, which enables users to import any MLflow model into BentoML:

import bentoml
from pathlib import Path

# assume there is a folder named sklearn_clf in the current working directory
uri = Path("sklearn_clf").resolve()

# import the model and get back its tag
tag = bentoml.mlflow.import_from_uri("sklearn_clf_model", uri)

# the URI can also point to an S3 bucket
s3_tag = bentoml.mlflow.import_from_uri("sklearn_clf_model", "s3://my_sklearn_model")

Note

You can find more MLflow examples in our gallery repo.

bentoml.mlflow.import_from_uri(name, uri, *, labels=None, custom_objects=None, metadata=None)

Imports an MLflow model into the BentoML modelstore via the given URI.

Parameters
  • name (str) – Name for the MLflow model to be saved under the BentoML modelstore.

  • uri (str) –

    The URI accepts all artifact-referencing formats defined by MLflow. All accepted URI formats (from the MLflow documentation):

    • /Users/me/path/to/local/model

    • relative/path/to/local/model

    • s3://my_bucket/path/to/model

    • hdfs://<host>:<port>/<path>

    • runs:/<mlflow_run_id>/run-relative/path/to/model

    • models:/<model_name>/<model_version>

    • models:/<model_name>/<stage>

  • labels (Dict[str, str], optional, default to None) – user-defined labels for managing models, e.g. team=nlp, stage=dev

  • custom_objects (Dict[str, Any], optional, default to None) – user-defined additional python objects to be saved alongside the model, e.g. a tokenizer instance, preprocessor function, model configuration json

  • metadata (Dict[str, Any], optional, default to None) – Custom metadata for given model.

  • model_store (ModelStore, default to BentoMLContainer.model_store) – BentoML modelstore, provided by DI Container.

Returns

A Tag object that can be used to retrieve the model with bentoml.mlflow.load().

Return type

Tag

Example:

from sklearn import svm, datasets

import mlflow
import bentoml

# Load training data
iris = datasets.load_iris()
X, y = iris.data, iris.target

# Model Training
clf = svm.SVC()
clf.fit(X, y)

# Wrap up as a custom pyfunc model
class ModelPyfunc(mlflow.pyfunc.PythonModel):

    def load_context(self, context):
        self.model = clf

    def predict(self, context, model_input):
        return self.model.predict(model_input)

# Log model
with mlflow.start_run() as run:
    model = ModelPyfunc()
    mlflow.pyfunc.log_model("model", python_model=model)
    print("run_id: {}".format(run.info.run_id))

model_uri = f"runs:/{run.info.run_id}/model"

# import the logged model from the `runs:/` URI into the BentoML modelstore:
tag = bentoml.mlflow.import_from_uri("model", model_uri)

# from MLFlow Models API
model_tag = bentoml.mlflow.import_from_uri("mymodel", "models:/mymodel/1")
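
The optional labels, custom_objects, and metadata parameters can be passed along with the import. A minimal sketch below, reusing model_uri from the example above; the label and metadata values are hypothetical:

import bentoml

# import with user-defined labels and metadata (hypothetical values)
tag = bentoml.mlflow.import_from_uri(
    "model",
    model_uri,
    labels={"team": "nlp", "stage": "dev"},
    metadata={"description": "sklearn SVC wrapped as a pyfunc model"},
)

# the returned tag can later be used to load the model back
model = bentoml.mlflow.load(tag)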

bentoml.mlflow.load(tag, model_store=<BentoMLContainer.model_store>)

Load a model from the BentoML local modelstore with the given tag.

Parameters
  • tag (Union[str, Tag]) – Tag of a saved model in BentoML local modelstore.

  • model_store (ModelStore, default to BentoMLContainer.model_store) – BentoML modelstore, provided by DI Container.

Returns

An instance of mlflow.pyfunc.PyFuncModel loaded from the BentoML modelstore.

Return type

mlflow.pyfunc.PyFuncModel

Examples:

import bentoml

model = bentoml.mlflow.load("mlflow_sklearn_model")
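
Since the returned object is a plain mlflow.pyfunc.PyFuncModel, it can be used for inference directly. A minimal sketch, assuming the model was trained on iris-like data; the input values are illustrative:

import bentoml
import pandas as pd

# load the pyfunc model and run a prediction
# (input shape must match what the model was trained on)
model = bentoml.mlflow.load("mlflow_sklearn_model")
prediction = model.predict(pd.DataFrame([[5.1, 3.5, 1.4, 0.2]]))
print(prediction)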

bentoml.mlflow.load_runner(tag, name=None)

A Runner represents a unit of serving logic that can be scaled horizontally to maximize throughput. bentoml.mlflow.load_runner implements a Runner class that either wraps the model in a PyFuncModel, or infers the underlying MLflow flavor in order to load BentoML's internal Runner implementation, which is optimized for the BentoML runtime.

Parameters

tag (Union[str, Tag]) – Tag of a saved model in BentoML local modelstore.

Returns

Runner instances loaded from bentoml.mlflow.

Return type

bentoml._internal.runner.Runner

Note

Currently this returns an instance of _PyFuncRunner, which is the base runner. For the best performance, we recommend using the corresponding framework's Runner implementation provided by BentoML.

On our roadmap, the intention for this API is to load the corresponding framework runner automatically. For example:

import bentoml

# this tag `mlflow_pytorch` is imported from MLflow
# when a user calls `bentoml.mlflow.load_runner`, it should return
# `bentoml._internal.frameworks.pytorch.PyTorchRunner` instead,
# and kwargs can be passed directly to `PyTorchRunner`
runner = bentoml.mlflow.load_runner("mlflow_pytorch", **kwargs)

Examples:

import bentoml

runner = bentoml.mlflow.load_runner("mlflow_sklearn_model:latest")
runner.run_batch([[1,2,3,4,5]])
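
A runner is typically used inside a bentoml.Service for serving. A minimal sketch below; the service name, tag, and I/O descriptors are illustrative assumptions:

import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# wrap the runner in a Service so it can be served with `bentoml serve`
runner = bentoml.mlflow.load_runner("mlflow_sklearn_model:latest")
svc = bentoml.Service("mlflow_svc", runners=[runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def predict(input_arr: np.ndarray) -> np.ndarray:
    return runner.run(input_arr)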

bentoml.mlflow.save(*args, **kwargs)

BentoML does not provide a save() API for MLflow. Calling bentoml.mlflow.save will raise a BentoMLException:

  • If you are currently working with mlflow.<flavor>.save_model, we suggest replacing mlflow.<flavor>.save_model with BentoML’s save API, since BentoML also supports the majority of the ML frameworks that MLflow supports. The example below shows how to migrate PyTorch code from MLflow to BentoML:

- import mlflow.pytorch
+ import bentoml

# PyTorch model logic
...
model = EmbeddingBag()
- mlflow.pytorch.save_model(model, "embed_bag")
+ bentoml.pytorch.save("embed_bag", model)

  • If you want to import MLflow models from a local directory or a given file path, you can utilize bentoml.mlflow.import_from_uri:

import mlflow.pytorch
+ import bentoml

path = "./my_pytorch_model"
mlflow.pytorch.save_model(model, path)
+ tag = bentoml.mlflow.import_from_uri("mlflow_pytorch_model", path)

  • If your current workflow with MLflow involves log_model() as well as importing models from the MLflow Registry, you can import those models directly into the BentoML modelstore using bentoml.mlflow.import_from_uri. We also accept the MLflow runs:/ syntax as well as model registry URIs. An example showing how to integrate your current log_model() workflow with mlflow.sklearn into BentoML:

import mlflow.sklearn
+ import mlflow
+ import bentoml

# Log sklearn model `sk_learn_rfr` and register as version 1
...
reg_name = "sk-learn-random-forest-reg-model"
artifact_path = "sklearn_model"
mlflow.sklearn.log_model(
    sk_model=sk_learn_rfr,
    artifact_path=artifact_path,
    registered_model_name=reg_name
)

# refers to https://www.mlflow.org/docs/latest/tracking.html#logging-functions
+ current_run = mlflow.active_run().info.run_id
+ uri = "runs:/%s/%s" % (current_run, artifact_path)
+ tag = bentoml.mlflow.import_from_uri("runs_mlflow_sklearn", uri)

An example showing how to import a model from the MLflow model registry into the BentoML modelstore. For this use case, we recommend loading the model into memory first with mlflow.<flavor>.load_model, then saving it using the corresponding BentoML save() API:

import mlflow.sklearn
+ import bentoml

reg_model_name = "sk-learn-random-forest-reg-model"
model_uri = "models:/%s/1" % reg_model_name
+ loaded = mlflow.sklearn.load_model(model_uri, *args, **kwargs)
+ tag = bentoml.sklearn.save("my_model", loaded, *args, **kwargs)

# you can also use `bentoml.mlflow.import_from_uri` to import the model directly
# after defining `model_uri`
+ import bentoml
+ tag = bentoml.mlflow.import_from_uri("my_model", model_uri)