API InputAdapters (formerly Handlers)

DataframeInput

class bentoml.adapters.DataframeInput(orient=None, typ='frame', input_dtypes=None, is_batch_input=True, **base_kwargs)
DataframeInput expects inputs from an HTTP request or CLI arguments that can be converted into a pandas DataFrame. It passes the DataFrame down to the user-defined API function, and returns the response for a REST API call or prints the result for a CLI call.

Parameters
  • orient (str or None) – Incoming JSON orient format for reading JSON data. Default is None, which means detect automatically.

  • typ (str) – Type of object to recover when reading JSON with pandas. Default is 'frame'.

  • input_dtypes ({str: str} or [str]) – Describes the expected data types of the input dataframe. It must be either a dict mapping column names to data types, or a list of data types ordered by column index in the dataframe.

Raises
  • ValueError – Incoming data is missing required columns in input_dtypes

  • ValueError – Incoming data format cannot be handled; only JSON and CSV are supported
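The orient values accepted here follow pandas.read_json. As a sketch of what the adapter does conceptually with a JSON request body in records orient (the payload below is illustrative; the real adapter also accepts CSV and handles batching):

```python
import io
import json

import pandas as pd

# A request body in "records" orient: one JSON object per row
payload = json.dumps([{"col1": 1, "col2": "a"}, {"col1": 2, "col2": "b"}])

# Conceptually what the adapter does before calling the API function:
# parse the body into a DataFrame, one row per record, one column per key
df = pd.read_json(io.StringIO(payload), orient="records")
```

The resulting df is what the user-defined API function receives as its argument.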

JsonInput

class bentoml.adapters.JsonInput(is_batch_input=False, **base_kwargs)

JsonInput parses a REST API request or CLI command into parsed_jsons (a list of JSON-serializable Python objects) and passes it down to the user-defined API function.

How to upgrade from LegacyJsonInput (JsonInput before 0.8.3)

To enable micro-batching for an API with JSON inputs, a custom bento service should use JsonInput and modify the handler method like this:

```
@bentoml.api(input=LegacyJsonInput())
def predict(self, parsed_json):
    result = do_something_to_json(parsed_json)
    return result
```

becomes:

```
@bentoml.api(input=JsonInput())
def predict(self, parsed_jsons):
    results = do_something_to_list_of_json(parsed_jsons)
    return results
```

For clients, the request format is the same as with LegacyJsonInput: each request contains a single JSON object.
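The key difference is the batch contract: with JsonInput the function receives a list of parsed JSON objects and must return a list of results of the same length, in the same order, so the micro-batcher can route each result back to the request it came from. A sketch with a hypothetical stand-in for do_something_to_list_of_json:

```python
def do_something_to_list_of_json(parsed_jsons):
    # One result per input object, order preserved; the logic here is
    # illustrative only (it just counts keys in each object)
    return [{"num_keys": len(obj)} for obj in parsed_jsons]

results = do_something_to_list_of_json([{"a": 1}, {"a": 1, "b": 2}])
```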

TfTensorInput

class bentoml.adapters.TfTensorInput(method='predict', is_batch_input=True, **base_kwargs)

Tensor input adapter for TensorFlow models. Transforms incoming tensor data from an HTTP request, CLI, or lambda event into a tf tensor. The behaviour is compatible with the TensorFlow Serving REST API:
  • https://www.tensorflow.org/tfx/serving/api_rest#classify_and_regress_api
  • https://www.tensorflow.org/tfx/serving/api_rest#predict_api

Parameters

method (str) – Equivalent of the serving API methods: predict, classify, or regress

Raises

BentoMLException – raised when the request's Content-Type is not currently supported by BentoML
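For reference, a predict-style request body in the TensorFlow Serving format looks like the following (the values are illustrative; the adapter converts each instance into a tensor before calling the API function):

```python
import json

# "instances" carries one input per example; here, two 3-element vectors
body = json.dumps({"instances": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]})

# Server side: the adapter parses the body and batches the instances
parsed = json.loads(body)
batch = parsed["instances"]
```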

ImageInput

class bentoml.adapters.ImageInput(accept_image_formats=None, pilmode='RGB', is_batch_input=False, **base_kwargs)

Transforms incoming image data from an HTTP request, CLI, or lambda event into a numpy array and passes it down to the user-defined API function.

Parameters
  • accept_image_formats (string[]) – A list of acceptable image formats. The default value is loaded from the BentoML config key 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • pilmode (string) – The pilmode used when reading an image file into a numpy array. The default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

Raises

ImportError – imageio package is required to use ImageInput
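With pilmode='RGB', the API function receives each image as a numpy array of shape (height, width, 3). A sketch of a typical first step inside such a function; the scaling choice is an assumption about the downstream model, not something ImageInput does for you:

```python
import numpy as np

def preprocess(image_array):
    # Convert uint8 pixels (0-255) to float32 in [0.0, 1.0],
    # the range many models are trained on
    return image_array.astype(np.float32) / 255.0

img = np.full((2, 2, 3), 255, dtype=np.uint8)  # tiny all-white RGB image
out = preprocess(img)
```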

LegacyImageInput

class bentoml.adapters.LegacyImageInput(input_names='image', accept_image_formats=None, pilmode='RGB', **base_kwargs)

Note: LegacyImageInput is identical to the ImageHandler prior to BentoML version 0.8.0; it is kept here to make upgrading easier. If you are starting a new model serving project, use ImageInput instead. LegacyImageInput will be deprecated in release 1.0.0.

Transforms incoming image data from an HTTP request, CLI, or lambda event into a numpy array and passes it down to the user-defined API function.

Parameters
  • input_names (string[]) – A tuple of acceptable input names for HTTP requests. The default value is ('image',)

  • accept_image_formats (string[]) – A list of acceptable image formats. The default value is loaded from the BentoML config key 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • pilmode (string) – The pilmode used when reading an image file into a numpy array. The default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

Raises

ImportError – imageio package is required to use LegacyImageInput

FastaiImageInput

class bentoml.adapters.FastaiImageInput(input_names='image', accept_image_formats=None, convert_mode='RGB', div=True, cls=None, after_open=None, **base_kwargs)

InputAdapter specialized for handling image input following fastai conventions, passing objects of type fastai.vision.Image to the user API function and providing options such as div, cls, and after_open.

Parameters
  • input_names ([str]) – A tuple of acceptable input names for HTTP requests. The default value is ('image',)

  • accept_image_formats ([str]) – A list of acceptable image formats. The default value is loaded from the BentoML config key 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • convert_mode (str) – The pilmode used when reading an image file into a numpy array. The default value is 'RGB'. Find more information at https://imageio.readthedocs.io/en/stable/format_png-pil.html

  • div (bool) – If True, pixel values are divided by 255 to become floats between 0.0 and 1.0

  • cls (Class) – Parameter from fastai.vision open_image, default is fastai.vision.Image

  • after_open (func) – Parameter from fastai.vision open_image, default is None

Raises
  • ImportError – imageio package is required to use FastaiImageInput

  • ImportError – fastai package is required to use FastaiImageInput

ClipperInput

A special group of adapters that are designed to be used when deploying with Clipper.

class bentoml.adapters.ClipperBytesInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Bytes

class bentoml.adapters.ClipperFloatsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Floats

class bentoml.adapters.ClipperIntsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Ints

class bentoml.adapters.ClipperDoublesInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Doubles

class bentoml.adapters.ClipperStringsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Strings
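As a sketch of the data shapes involved: a handler behind ClipperFloatsInput receives a batch of float lists, one per Clipper query, and should return one result per input in the same order. The handler name and logic below are illustrative, not part of the BentoML API:

```python
def predict(float_inputs):
    # float_inputs: a batch, one list of floats per Clipper query;
    # return exactly one result per input, preserving order
    return [sum(xs) for xs in float_inputs]

outputs = predict([[1.0, 2.0], [0.5, 0.5, 0.5]])
```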