# TFX-BSL Public Beam

## tfx_bsl.public.beam

Module-level imports for tfx_bsl.beam.

### Functions
#### RunInference
```python
RunInference(
    examples: PCollection,
    inference_spec_type: InferenceSpecType,
    load_override_fn: Optional[LoadOverrideFnType] = None,
) -> PCollection
```
Run inference with a model.

There are two types of inference you can perform with this PTransform:

- In-process inference from a SavedModel instance. Used when the `saved_model_spec` field is set in `inference_spec_type`.
- Remote inference via a service endpoint. Used when the `ai_platform_prediction_model_spec` field is set in `inference_spec_type`.
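As a point of reference, here is a minimal sketch of constructing each spec variant. It assumes the protos live in `tfx_bsl.public.proto.model_spec_pb2` (check the field names against your tfx_bsl version); every path and name below is a placeholder:

```python
from tfx_bsl.public.proto import model_spec_pb2

# In-process inference: point saved_model_spec at a SavedModel on disk.
local_spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/path/to/saved_model',   # hypothetical path
        signature_name=['serving_default'],
    )
)

# Remote inference: point ai_platform_prediction_model_spec at a deployed
# Cloud AI Platform model version.
remote_spec = model_spec_pb2.InferenceSpecType(
    ai_platform_prediction_model_spec=model_spec_pb2.AIPlatformPredictionModelSpec(
        project_id='my-gcp-project',  # hypothetical project
        model_name='my_model',        # hypothetical model
        version_name='v1',            # hypothetical version
    )
)
```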
| PARAMETER | DESCRIPTION |
| --- | --- |
| `examples` | A PCollection containing examples of the following possible kinds, each with its corresponding return type:<br>- `PCollection[Example]` -> `PCollection[PredictionLog]`: works with Classify, Regress, MultiInference, Predict, and RemotePredict.<br>**TYPE:** `PCollection` |
| `inference_spec_type` | Model inference endpoint.<br>**TYPE:** `InferenceSpecType` |
| `load_override_fn` | Optional function taking a model path and sequence of tags, and returning a tf SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded. It is up to the caller to ensure compatibility. This argument is experimental and subject to change.<br>**TYPE:** `Optional[LoadOverrideFnType]` **DEFAULT:** `None` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `PCollection` | A PCollection (possibly keyed) containing prediction logs. |
Source code in `tfx_bsl/public/beam/run_inference.py`
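A minimal end-to-end sketch of applying this transform (the TFRecord path and model path are hypothetical):

```python
import apache_beam as beam
import tensorflow as tf
from tfx_bsl.public.beam import RunInference
from tfx_bsl.public.proto import model_spec_pb2

inference_spec_type = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/path/to/saved_model'))  # hypothetical path

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Hypothetical TFRecord file of serialized tf.train.Example protos.
        | 'ReadExamples' >> beam.io.ReadFromTFRecord(
            '/path/to/examples.tfrecord',
            coder=beam.coders.ProtoCoder(tf.train.Example))
        # Each output element is a tensorflow_serving PredictionLog proto.
        | 'RunInference' >> RunInference(inference_spec_type)
        | 'LogPredictions' >> beam.Map(print)
    )
```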
### Modules

#### run_inference
Public API of batch inference.
##### Functions

###### CreateModelHandler
Creates a Beam ModelHandler based on the inference spec type.
There are two model handlers:

- In-process inference from a SavedModel instance. Used when the `saved_model_spec` field is set in `inference_spec_type`.
- Remote inference via a service endpoint. Used when the `ai_platform_prediction_model_spec` field is set in `inference_spec_type`.
Example usage:

```python
from apache_beam.ml.inference import base

tf_handler = CreateModelHandler(inference_spec_type)

# unkeyed
base.RunInference(tf_handler)

# keyed
base.RunInference(base.KeyedModelHandler(tf_handler))
```
| PARAMETER | DESCRIPTION |
| --- | --- |
| `inference_spec_type` | Model inference endpoint.<br>**TYPE:** `InferenceSpecType` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `ModelHandler` | A Beam RunInference ModelHandler for TensorFlow. |
Source code in `tfx_bsl/public/beam/run_inference.py`
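As a sketch, the returned handler plugs into Beam's generic RunInference transform like any other ModelHandler; the keyed form carries each element's key through to its prediction. The model path and input element below are placeholders:

```python
import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference import base
from tfx_bsl.public.beam.run_inference import CreateModelHandler
from tfx_bsl.public.proto import model_spec_pb2

inference_spec_type = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/path/to/saved_model'))  # hypothetical path

tf_handler = CreateModelHandler(inference_spec_type)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Hypothetical keyed input; use real features in practice.
        | 'CreateKeyedExamples' >> beam.Create([('id-1', tf.train.Example())])
        # KeyedModelHandler passes each key through alongside its prediction.
        | 'RunInference' >> base.RunInference(base.KeyedModelHandler(tf_handler))
    )
```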
###### RunInference
```python
RunInference(
    examples: PCollection,
    inference_spec_type: InferenceSpecType,
    load_override_fn: Optional[LoadOverrideFnType] = None,
) -> PCollection
```
Run inference with a model.

There are two types of inference you can perform with this PTransform:

- In-process inference from a SavedModel instance. Used when the `saved_model_spec` field is set in `inference_spec_type`.
- Remote inference via a service endpoint. Used when the `ai_platform_prediction_model_spec` field is set in `inference_spec_type`.
| PARAMETER | DESCRIPTION |
| --- | --- |
| `examples` | A PCollection containing examples of the following possible kinds, each with its corresponding return type:<br>- `PCollection[Example]` -> `PCollection[PredictionLog]`: works with Classify, Regress, MultiInference, Predict, and RemotePredict.<br>**TYPE:** `PCollection` |
| `inference_spec_type` | Model inference endpoint.<br>**TYPE:** `InferenceSpecType` |
| `load_override_fn` | Optional function taking a model path and sequence of tags, and returning a tf SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded. It is up to the caller to ensure compatibility. This argument is experimental and subject to change.<br>**TYPE:** `Optional[LoadOverrideFnType]` **DEFAULT:** `None` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `PCollection` | A PCollection (possibly keyed) containing prediction logs. |
Source code in `tfx_bsl/public/beam/run_inference.py`
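The "(possibly keyed)" in the return description mirrors the input: if the input PCollection is keyed, keys are carried through to the output. A hedged sketch, assuming keyed elements of the form `Tuple[key, Example]` are accepted (keys, examples, and the model path are placeholders):

```python
import apache_beam as beam
import tensorflow as tf
from tfx_bsl.public.beam import RunInference
from tfx_bsl.public.proto import model_spec_pb2

spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/path/to/saved_model'))  # hypothetical path

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Keyed input: each element is Tuple[key, Example]; the key is carried
        # through so each PredictionLog can be joined back to its source row.
        | 'CreateKeyed' >> beam.Create([
            ('id-1', tf.train.Example()),  # placeholder examples
            ('id-2', tf.train.Example()),
        ])
        | 'RunInference' >> RunInference(spec)
        # Output: PCollection[Tuple[key, PredictionLog]].
    )
```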
###### RunInferenceOnKeyedBatches
```python
RunInferenceOnKeyedBatches(
    examples: PCollection,
    inference_spec_type: InferenceSpecType,
    load_override_fn: Optional[LoadOverrideFnType] = None,
) -> PCollection
```
Run inference over pre-batched keyed inputs.

This API is experimental and may change in the future.

Supports the same inference specs as RunInference. Inputs must consist of a keyed list of examples, and outputs consist of a keyed list of prediction logs corresponding by index.
| PARAMETER | DESCRIPTION |
| --- | --- |
| `examples` | A PCollection of keyed, batched inputs of type Example, SequenceExample, or bytes. Each type supports inference specs corresponding to the unbatched cases described in RunInference. Supports:<br>- `PCollection[Tuple[K, List[Example]]]`<br>- `PCollection[Tuple[K, List[SequenceExample]]]`<br>- `PCollection[Tuple[K, List[Bytes]]]`<br>**TYPE:** `PCollection` |
| `inference_spec_type` | Model inference endpoint.<br>**TYPE:** `InferenceSpecType` |
| `load_override_fn` | Optional function taking a model path and sequence of tags, and returning a tf SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded. It is up to the caller to ensure compatibility. This argument is experimental and subject to change.<br>**TYPE:** `Optional[LoadOverrideFnType]` **DEFAULT:** `None` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `PCollection` | A PCollection of `Tuple[K, List[PredictionLog]]`. |
Source code in `tfx_bsl/public/beam/run_inference.py`
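A hedged sketch of the batched, keyed shape (keys, examples, and the model path are placeholders; the transform is assumed to apply like RunInference):

```python
import apache_beam as beam
import tensorflow as tf
from tfx_bsl.public.beam.run_inference import RunInferenceOnKeyedBatches
from tfx_bsl.public.proto import model_spec_pb2

spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/path/to/saved_model'))  # hypothetical path

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Pre-batched keyed input: Tuple[K, List[Example]].
        | 'CreateBatches' >> beam.Create([
            ('query-1', [tf.train.Example(), tf.train.Example()]),  # placeholders
            ('query-2', [tf.train.Example()]),
        ])
        | 'BatchedInference' >> RunInferenceOnKeyedBatches(spec)
        # Output: Tuple[K, List[PredictionLog]], aligned by index with the batch.
    )
```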
###### RunInferencePerModel
```python
RunInferencePerModel(
    examples: PCollection,
    inference_spec_types: Iterable[InferenceSpecType],
    load_override_fn: Optional[LoadOverrideFnType] = None,
) -> PCollection
```
Vectorized variant of RunInference (useful for ensembles).
| PARAMETER | DESCRIPTION |
| --- | --- |
| `examples` | A PCollection containing examples of the following possible kinds, each with its corresponding return type:<br>- `PCollection[Example]` -> `PCollection[Tuple[PredictionLog, ...]]`: works with Classify, Regress, MultiInference, Predict, and RemotePredict.<br>**TYPE:** `PCollection` |
| `inference_spec_types` | A flat iterable of model inference endpoints. Inference happens in a fused fashion (i.e., without data materialization), sequentially across models within a Beam thread (but in parallel across threads and workers).<br>**TYPE:** `Iterable[InferenceSpecType]` |
| `load_override_fn` | Optional function taking a model path and sequence of tags, and returning a tf SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded. It is up to the caller to ensure compatibility. This argument is experimental and subject to change.<br>**TYPE:** `Optional[LoadOverrideFnType]` **DEFAULT:** `None` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `PCollection` | A PCollection (possibly keyed) containing a Tuple of prediction logs. The Tuple of prediction logs is 1-1 aligned with inference_spec_types. |
Source code in `tfx_bsl/public/beam/run_inference.py`
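A hedged ensemble sketch (both model paths and the input are placeholders): each input element is scored by every spec, and the output tuple preserves spec order.

```python
import apache_beam as beam
import tensorflow as tf
from tfx_bsl.public.beam.run_inference import RunInferencePerModel
from tfx_bsl.public.proto import model_spec_pb2

# Two hypothetical specs forming an ensemble.
specs = [
    model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(model_path='/models/a')),
    model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(model_path='/models/b')),
]

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | 'CreateExamples' >> beam.Create([tf.train.Example()])  # placeholder input
        # Each output element is Tuple[PredictionLog, PredictionLog],
        # 1-1 aligned with the order of `specs`.
        | 'EnsembleInference' >> RunInferencePerModel(specs)
    )
```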
###### RunInferencePerModelOnKeyedBatches
```python
RunInferencePerModelOnKeyedBatches(
    examples: PCollection,
    inference_spec_types: Iterable[InferenceSpecType],
    load_override_fn: Optional[LoadOverrideFnType] = None,
) -> PCollection
```
Run inference over pre-batched keyed inputs on multiple models.

This API is experimental and may change in the future.

Supports the same inference specs as RunInferencePerModel. Inputs must consist of a keyed list of examples, and outputs consist of a keyed list of prediction logs corresponding by index.
| PARAMETER | DESCRIPTION |
| --- | --- |
| `examples` | A PCollection of keyed, batched inputs of type Example, SequenceExample, or bytes. Each type supports inference specs corresponding to the unbatched cases described in RunInferencePerModel. Supports:<br>- `PCollection[Tuple[K, List[Example]]]`<br>- `PCollection[Tuple[K, List[SequenceExample]]]`<br>- `PCollection[Tuple[K, List[Bytes]]]`<br>**TYPE:** `PCollection` |
| `inference_spec_types` | A flat iterable of model inference endpoints. Inference happens in a fused fashion (i.e., without data materialization), sequentially across models within a Beam thread (but in parallel across threads and workers).<br>**TYPE:** `Iterable[InferenceSpecType]` |
| `load_override_fn` | Optional function taking a model path and sequence of tags, and returning a tf SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded. It is up to the caller to ensure compatibility. This argument is experimental and subject to change.<br>**TYPE:** `Optional[LoadOverrideFnType]` **DEFAULT:** `None` |
| RETURNS | DESCRIPTION |
| --- | --- |
| `PCollection` | A PCollection containing Tuples of a key and lists of batched prediction logs from each model provided in inference_spec_types. The Tuple of batched prediction logs is 1-1 aligned with inference_spec_types. The individual prediction logs in the batch are 1-1 aligned with the rows of data in the batch key. |
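Combining both experimental variants, a hedged sketch (placeholder keys, examples, and model paths; `specs` as in the RunInferencePerModel sketch above):

```python
import apache_beam as beam
import tensorflow as tf
from tfx_bsl.public.beam.run_inference import RunInferencePerModelOnKeyedBatches
from tfx_bsl.public.proto import model_spec_pb2

specs = [
    model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(model_path='/models/a')),
    model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(model_path='/models/b')),
]

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Keyed, pre-batched input: Tuple[K, List[Example]].
        | 'CreateBatches' >> beam.Create([
            ('key-1', [tf.train.Example(), tf.train.Example()]),  # placeholders
        ])
        | 'PerModelBatched' >> RunInferencePerModelOnKeyedBatches(specs)
        # Output per element: a key paired with per-model lists of PredictionLogs;
        # the per-model Tuple aligns with `specs`, each list with the input batch.
    )
```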