API InputAdapters (formerly Handlers)

DataframeInput

class bentoml.adapters.DataframeInput(orient=None, typ='frame', input_dtypes=None, is_batch_input=True, **base_kwargs)
DataframeInput expects inputs from an HTTP request or CLI arguments that can be converted into a pandas DataFrame. It passes the DataFrame down to the user-defined API function, then returns the response for a REST API call or prints the result for a CLI call.

Parameters
  • orient (str or None) – Incoming JSON orient format for reading JSON data. Default is None, which means the orient is detected automatically.

  • typ (str) – Type of object to recover when reading JSON with pandas. Default is 'frame'.

  • input_dtypes ({str: str} or [str]) – Describes the expected data types of the input dataframe; it must be either a dict of column name to data type, or a list of data types listed by column index in the dataframe.

Raises
  • ValueError – Incoming data is missing required columns in input_dtypes

  • ValueError – Incoming data format cannot be handled; only JSON and CSV are supported
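
A minimal usage sketch, assuming a scikit-learn model stored as a 'model' artifact with SklearnModelArtifact (artifact, service, and column names here are illustrative):

```python
from bentoml import BentoService, api, artifacts
from bentoml.artifact import SklearnModelArtifact
from bentoml.adapters import DataframeInput

@artifacts([SklearnModelArtifact('model')])
class TitanicSurvivalPrediction(BentoService):

    @api(input=DataframeInput(input_dtypes={'age': 'int', 'fare': 'float'}))
    def predict(self, df):
        # df is a pandas.DataFrame assembled from the JSON or CSV request body
        return self.artifacts.model.predict(df)
```

A matching request could then send JSON records, for example `curl --header "Content-Type: application/json" --request POST --data '[{"age": 22, "fare": 7.25}]' localhost:5000/predict`.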

JsonInput

class bentoml.adapters.JsonInput(is_batch_input=False, **base_kwargs)

JsonInput parses a REST API request or CLI command into parsed_jsons (a list of JSON-serializable objects in Python) and passes it down to the user-defined API function.

How to upgrade from LegacyJsonInput (JsonInput before 0.8.3)

To enable micro-batching for an API with JSON inputs, a custom bento service should use JsonInput and modify the handler method like this:

```
@bentoml.api(input=LegacyJsonInput())
def predict(self, parsed_json):
    results = self.artifacts.classifier([parsed_json['text']])
    return results[0]
```

—>

```
@bentoml.api(input=JsonInput())
def predict(self, parsed_jsons):
    results = self.artifacts.classifier([j['text'] for j in parsed_jsons])
    return results
```

For clients, the request format is the same as with LegacyJsonInput: each request contains a single JSON object.

`curl -i --header "Content-Type: application/json" --request POST --data '{"text": "best movie ever"}' localhost:5000/predict`

LegacyJsonInput

class bentoml.adapters.LegacyJsonInput(is_batch_input=False, **base_kwargs)

LegacyJsonInput parses a REST API request or CLI command into parsed_json (a dict in Python) and passes it down to the user-defined API function.

TfTensorInput

class bentoml.adapters.TfTensorInput(method='predict', is_batch_input=True, **base_kwargs)

Tensor input adapter for TensorFlow models. Transforms incoming tf tensor data from an HTTP request, CLI, or lambda event into a tf tensor. The behaviour should be compatible with the TensorFlow Serving REST API:

  • https://www.tensorflow.org/tfx/serving/api_rest#classify_and_regress_api

  • https://www.tensorflow.org/tfx/serving/api_rest#predict_api

Parameters

method (str) – equivalent of the serving API methods: predict, classify, or regress

Raises

BentoMLException – the request Content-Type is not currently supported by BentoML
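
Since the adapter follows the TensorFlow Serving predict request format, a request body carries the batch under an "instances" key. A minimal sketch of a service using this adapter, assuming a TensorFlow model stored as a 'model' artifact via TensorflowSavedModelArtifact and directly callable on the input tensor (artifact and service names are illustrative):

```python
import bentoml
from bentoml.artifact import TensorflowSavedModelArtifact
from bentoml.adapters import TfTensorInput

@bentoml.artifacts([TensorflowSavedModelArtifact('model')])
class TfPredictService(bentoml.BentoService):

    @bentoml.api(input=TfTensorInput())
    def predict(self, tensor):
        # tensor is a tf.Tensor assembled from the "instances" field of the request;
        # assumes the stored SavedModel can be called on it directly
        return self.artifacts.model(tensor)
```

A matching request might then look like `curl --header "Content-Type: application/json" --request POST --data '{"instances": [[1.0, 2.0, 5.0]]}' localhost:5000/predict`, with the inner shape depending on the model.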

ImageInput

class bentoml.adapters.ImageInput(accept_image_formats=None, pilmode='RGB', is_batch_input=False, **base_kwargs)

Transforms incoming image data from an HTTP request, CLI, or lambda event into a numpy array.

Handles incoming image data from different sources, transforms it into a numpy array, and passes it down to user-defined API functions.

  • If you want to operate on the raw image file stream or PIL.Image objects, use the lower-level alternative FileInput.

Parameters
  • accept_image_formats (string[]) – A list of acceptable image formats. Default value is loaded from the bentoml config 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • pilmode (string) – The pilmode to be used for reading an image file into a numpy array. Default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

Raises

ImportError – imageio package is required to use ImageInput

Example

```python
from bentoml import BentoService, api, artifacts
from bentoml.artifact import TensorflowArtifact
from bentoml.adapters import ImageInput

CLASS_NAMES = ['cat', 'dog']

@artifacts([TensorflowArtifact('classifier')])
class PetClassification(BentoService):

    @api(input=ImageInput())
    def predict(self, image_ndarrays):
        results = self.artifacts.classifier.predict(image_ndarrays)
        return [CLASS_NAMES[r] for r in results]
```
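
A client could then POST the raw image bytes to this endpoint, for example `curl -i --header "Content-Type: image/jpeg" --request POST --data-binary @sample_pet.jpg localhost:5000/predict` (one possible request shape; the exact accepted request formats depend on the BentoML version).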

MultiImageInput

class bentoml.adapters.MultiImageInput(input_names='image', accepted_image_formats=None, pilmode='RGB', is_batch_input=False, **base_kwargs)
Parameters
  • input_names (string[]) – A tuple of acceptable input names for the HTTP request. Default value is ('image',)

  • accepted_image_formats (string[]) – A list of acceptable image formats. Default value is loaded from the bentoml config 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • pilmode (string) – The pilmode to be used for reading an image file into a numpy array. Default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

Raises

ImportError – imageio package is required to use MultiImageInput

Example usage:

>>> from bentoml import BentoService
>>> from bentoml.adapters import MultiImageInput
>>> import bentoml
>>>
>>> class MyService(BentoService):
>>>     @bentoml.api(input=MultiImageInput(input_names=('imageX', 'imageY')))
>>>     def predict(self, image_groups):
>>>         for image_group in image_groups:
>>>             image_array_x = image_group['imageX']
>>>             image_array_y = image_group['imageY']

The endpoint could then be used with an HTML form that sends multipart data, like the example below

>>> <form action="http://localhost:8000" method="POST"
>>>       enctype="multipart/form-data">
>>>     <input name="imageX" type="file">
>>>     <input name="imageY" type="file">
>>>     <input type="submit">
>>> </form>

Or the following cURL command

>>> curl -F imageX=@image_file_x.png \
>>>      -F imageY=@image_file_y.jpg \
>>>      http://localhost:8000

LegacyImageInput

class bentoml.adapters.LegacyImageInput(input_names='image', accept_image_formats=None, pilmode='RGB', **base_kwargs)

*This LegacyImageInput is identical to the ImageHandler prior to BentoML version 0.8.0; it is kept here to make it easier for users to upgrade. If you are starting a new model serving project, use ImageInput instead. LegacyImageInput will be deprecated in release 1.0.0.*

Transforms incoming image data from an HTTP request, CLI, or lambda event into a numpy array.

Handles incoming image data from different sources, transforms it into a numpy array, and passes it down to user-defined API functions.

Parameters
  • input_names (string[]) – A tuple of acceptable input names for the HTTP request. Default value is ('image',)

  • accept_image_formats (string[]) – A list of acceptable image formats. Default value is loaded from the bentoml config 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • pilmode (string) – The pilmode to be used for reading an image file into a numpy array. Default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

Raises

ImportError – imageio package is required to use LegacyImageInput

FastaiImageInput

class bentoml.adapters.FastaiImageInput(input_names='image', accept_image_formats=None, convert_mode='RGB', div=True, cls=None, after_open=None, **base_kwargs)

InputAdapter specialized for handling image input following fastai conventions: it passes an object of type fastai.vision.Image to the user API function and provides options such as div, cls, and after_open.

Parameters
  • input_names ([str]) – A tuple of acceptable input names for the HTTP request. Default value is ('image',)

  • accept_image_formats ([str]) – A list of acceptable image formats. Default value is loaded from the bentoml config 'apiserver/default_image_input_accept_file_extensions', which is set to ['.jpg', '.png', '.jpeg', '.tiff', '.webp', '.bmp'] by default. A list of all supported formats can be found here: https://imageio.readthedocs.io/en/stable/formats.html

  • convert_mode (str) – The pilmode to be used for reading an image file into a numpy array. Default value is 'RGB'. Find more information at: https://imageio.readthedocs.io/en/stable/format_png-pil.html

  • div (bool) – If True, pixel values are divided by 255 to become floats between 0. and 1.

  • cls (Class) – Parameter from fastai.vision open_image, default is fastai.vision.Image

  • after_open (func) – Parameter from fastai.vision open_image, default is None

Raises
  • ImportError – imageio package is required to use FastaiImageInput

  • ImportError – fastai package is required to use FastaiImageInput
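
A minimal sketch, assuming a fastai v1 Learner stored as a 'learner' artifact via FastaiModelArtifact (artifact and service names are illustrative):

```python
import bentoml
from bentoml.artifact import FastaiModelArtifact
from bentoml.adapters import FastaiImageInput

@bentoml.artifacts([FastaiModelArtifact('learner')])
class PetClassifier(bentoml.BentoService):

    @bentoml.api(input=FastaiImageInput())
    def predict(self, image):
        # image is a fastai.vision.Image built from the uploaded file
        category, _, _ = self.artifacts.learner.predict(image)
        return str(category)
```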

FileInput

class bentoml.adapters.FileInput(**base_kwargs)

Transforms incoming file data from an HTTP request, CLI, or lambda event into a file stream object.

Handles incoming file data from different sources, transforms it into file streams, and passes them down to user-defined API functions.

Parameters

None

Example

```python
import bentoml
from PIL import Image
import numpy as np

from bentoml.artifact import PytorchModelArtifact
from bentoml.adapters import FileInput

FASHION_MNIST_CLASSES = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                         'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

@bentoml.env(pip_dependencies=['torch', 'pillow', 'numpy'])
@bentoml.artifacts([PytorchModelArtifact('classifier')])
class PyTorchFashionClassifier(bentoml.BentoService):

    @bentoml.api(input=FileInput())
    def predict(self, file_streams):
        img_arrays = []
        for fs in file_streams:
            im = Image.open(fs).convert(mode="L").resize((28, 28))
            img_array = np.array(im)
            img_arrays.append(img_array)

        inputs = np.stack(img_arrays, axis=0)

        outputs = self.artifacts.classifier(inputs)
        return [FASHION_MNIST_CLASSES[c] for c in outputs]
```
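
A client could then upload the file in the request body, for example `curl -i --header "Content-Type: image/png" --request POST --data-binary @sample.png localhost:5000/predict` (one possible request shape; the exact accepted formats depend on the BentoML version).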

ClipperInput

A special group of adapters that are designed to be used when deploying with Clipper.

class bentoml.adapters.ClipperBytesInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Bytes

class bentoml.adapters.ClipperFloatsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Floats

class bentoml.adapters.ClipperIntsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Ints

class bentoml.adapters.ClipperDoublesInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Doubles

class bentoml.adapters.ClipperStringsInput(output_adapter=None, http_input_example=None, **base_config)

ClipperInput that deals with input type Strings
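
These adapters are attached to an API the same way as the other input adapters. A minimal sketch using ClipperFloatsInput (the service name and toy logic are illustrative, and the exact shape of the parsed inputs depends on the Clipper adapter):

```python
import bentoml
from bentoml.adapters import ClipperFloatsInput

class ClipperExampleService(bentoml.BentoService):

    @bentoml.api(input=ClipperFloatsInput())
    def predict(self, inputs):
        # inputs: a batch of float lists, one entry per Clipper query (assumed shape)
        return [sum(xs) for xs in inputs]
```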