Diffusers#

This is an API reference for 🤗 Diffusers in BentoML. Please refer to the Diffusers guide for more information about how to use Hugging Face Diffusers in BentoML.

bentoml.diffusers.import_model(name: Tag | str, model_name_or_path: str | os.PathLike[str], *, proxies: dict[str, str] | None = None, revision: str = 'main', variant: str | None = None, pipeline_class: str | type[diffusers.DiffusionPipeline] | None = None, sync_with_hub_version: bool = False, signatures: dict[str, ModelSignatureDict | ModelSignature] | None = None, labels: dict[str, str] | None = None, custom_objects: dict[str, t.Any] | None = None, external_modules: t.List[ModuleType] | None = None, metadata: dict[str, t.Any] | None = None) bentoml.Model[source]#

Import a Diffusion model from an artifact URI into the BentoML model store.

Parameters:
  • name – The name to give to the model in the BentoML store. This must be a valid Tag name.

  • model_name_or_path

    Can be either:

    • A string, the repo id of a pretrained pipeline hosted inside a model repo on https://huggingface.co/. Valid repo ids have to be located under a user or organization name, like CompVis/ldm-text2im-large-256.

    • A path to a directory containing pipeline weights saved using [~DiffusionPipeline.save_pretrained], e.g., ./my_pipeline_directory/.

  • proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

  • revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id. Since huggingface.co uses a git-based system for storing models and other artifacts, revision can be any identifier allowed by git.

  • variant (str, optional) – Variant of the model to import. For example, "DeepFloyd/IF-I-XL-v1.0" has "fp16" and "fp32" variants. Importing a specific variant may save download bandwidth and local disk space.

  • sync_with_hub_version (bool, defaults to False) – If sync_with_hub_version is true, the version of the imported model is kept in sync with the corresponding model version on the Hugging Face Hub.

  • signatures – Signatures of predict methods to be used. If not provided, the signatures default to {"__call__": {"batchable": False}}. See ModelSignature for more details.

  • labels – A default set of management labels to be associated with the model. For example: {"training-set": "data-v1"}.

  • custom_objects – Custom objects to be saved with the model. An example is {"my-normalizer": normalizer}. Custom objects are serialized with cloudpickle.

  • metadata

    Metadata to be associated with the model. An example is {"param_a": .2}.

    Metadata is intended for display in a model management UI and therefore all values in metadata dictionary must be a primitive Python type, such as str or int.

Returns:

A Model instance referencing a saved model in the local BentoML model store.

Example:

import bentoml

bentoml.diffusers.import_model(
    'my_sd15_model',
    "runwayml/stable-diffusion-v1-5",
    signatures={
        "__call__": {"batchable": False},
    }
)
bentoml.diffusers.save_model(name: Tag | str, pipeline: diffusers.DiffusionPipeline, *, signatures: dict[str, ModelSignatureDict | ModelSignature] | None = None, labels: dict[str, str] | None = None, custom_objects: dict[str, t.Any] | None = None, external_modules: t.List[ModuleType] | None = None, metadata: dict[str, t.Any] | None = None) bentoml.Model[source]#

Save a DiffusionPipeline to the BentoML model store.

Parameters:
  • name – The name to give to the model in the BentoML store. This must be a valid Tag name.

  • pipeline – Instance of the Diffusers pipeline to be saved.

  • signatures – Signatures of predict methods to be used. If not provided, the signatures default to {"__call__": {"batchable": False}}. See ModelSignature for more details.

  • labels – A default set of management labels to be associated with the model. For example: {"training-set": "data-v1"}.

  • custom_objects – Custom objects to be saved with the model. An example is {"my-normalizer": normalizer}. Custom objects are serialized with cloudpickle.

  • metadata

    Metadata to be associated with the model. An example is {"param_a": .2}.

    Metadata is intended for display in a model management UI and therefore all values in metadata dictionary must be a primitive Python type, such as str or int.

Returns:

A Model instance referencing a saved model in the local BentoML model store.

bentoml.diffusers.load_model(bento_model: str | Tag | bentoml.Model, device_id: str | torch.device | None = None, pipeline_class: str | type[diffusers.pipelines.DiffusionPipeline] = diffusers.DiffusionPipeline, device_map: str | dict[str, int | str | torch.device] | None = None, custom_pipeline: str | None = None, scheduler_class: type[diffusers.SchedulerMixin] | None = None, torch_dtype: str | torch.dtype | None = None, low_cpu_mem_usage: bool | None = None, enable_xformers: bool = False, enable_attention_slicing: int | str | None = None, enable_model_cpu_offload: bool | None = None, enable_sequential_cpu_offload: bool | None = None, enable_torch_compile: bool | None = None, variant: str | None = None, lora_weights: LoraOptionType | list[LoraOptionType] | None = None, textual_inversions: TextualInversionOptionType | list[TextualInversionOptionType] | None = None, load_pretrained_extra_kwargs: dict[str, t.Any] | None = None) diffusers.DiffusionPipeline[source]#

Load the Diffusion model with the given tag from the local BentoML model store and return it as a diffusers DiffusionPipeline.

Parameters:
  • bento_model – Either the tag of the model to get from the store, or a BentoML Model instance to load the model from.

  • device_id (str, optional, defaults to None) – Optional device to put the given model on. Refer to device attributes.

  • pipeline_class (type[diffusers.DiffusionPipeline], optional) – The DiffusionPipeline class used to load the saved diffusion model; defaults to diffusers.DiffusionPipeline. For more pipeline types, refer to the Pipeline Overview.

  • device_map (None | str | Dict[str, Union[int, str, torch.device]], optional) – A map that specifies where each submodule should go. For more information, refer to device_map.

  • custom_pipeline (None | str, optional) – An identifier of a custom pipeline hosted on GitHub. For a list of community-maintained custom pipelines, refer to https://github.com/huggingface/diffusers/tree/main/examples/community

  • scheduler_class (type[diffusers.SchedulerMixin], optional) – The scheduler class to be used by the DiffusionPipeline.

  • torch_dtype (str | torch.dtype, optional) – Override the default torch.dtype and load the model under this dtype.

  • low_cpu_mem_usage (bool, optional) – Speed up model loading by not initializing the weights and only loading the pre-trained weights. Defaults to True if the torch version is >= 1.9.0, otherwise False.

  • enable_xformers (bool, optional) – Use xformers optimization if it’s available. For more info, refer to https://github.com/facebookresearch/xformers

  • variant (str, optional) – If specified, load weights from the given variant filename, e.g. pytorch_model.<variant>.bin.

  • lora_weights (LoraOptionType | list[LoraOptionType], optional) – LoRA weights to be loaded. LoraOptionType can be either a string or a dictionary. When it's a string, it represents a path to the weight file. When it's a dictionary, it contains a key "model_name" pointing to a Hugging Face repository or a local directory, a key "weight_name" pointing to the weight file, and other keys that will be passed to the pipeline's load_lora_weights method.

  • textual_inversions (TextualInversionOptionType | list[TextualInversionOptionType], optional) – Textual inversions to be loaded. TextualInversionOptionType can be either a string or a dictionary. When it's a string, it represents a path to the weight file. When it's a dictionary, it contains a key "model_name" pointing to a Hugging Face repository or a local directory, a key "weight_name" pointing to the weight file, and other keys that will be passed to the pipeline's load_textual_inversion method.

  • load_pretrained_extra_kwargs (dict[str, t.Any], optional) – Extra kwargs passed to the pipeline class's from_pretrained method.

Returns:

The Diffusion model loaded as diffusers pipeline from the BentoML model store.

Example:

import bentoml

pipeline = bentoml.diffusers.load_model('my_diffusers_model:latest')
prompt = "a photo of an astronaut riding a horse"  # example prompt
pipeline(prompt)
bentoml.diffusers.get(tag_like: str | Tag) Model[source]#

Get the BentoML model with the given tag.

Parameters:

tag_like – The tag of the model to retrieve from the model store.

Returns:

A BentoML Model with the matching tag.

Return type:

Model

Example:

import bentoml
# target model must be from the BentoML model store
model = bentoml.diffusers.get("my_stable_diffusion_model")