TensorFlow Serving C++ API Documentation
#include <saved_model_bundle_factory.h>
Public Member Functions

  Status CreateSavedModelBundle (const string &path, std::unique_ptr< SavedModelBundle > *bundle)
  Status CreateSavedModelBundleWithMetadata (const Loader::Metadata &metadata, const string &path, std::unique_ptr< SavedModelBundle > *bundle)
  Status EstimateResourceRequirement (const string &path, ResourceAllocation *estimate) const
  const SessionBundleConfig & config () const
  SessionBundleConfig & mutable_config ()

Static Public Member Functions

  static Status Create (const SessionBundleConfig &config, std::unique_ptr< SavedModelBundleFactory > *factory)
A factory that creates SavedModelBundles from SavedModel or SessionBundle export paths.
The emitted sessions support only Run(), and although this is not enforced, clients are expected to make only non-mutating Run() calls. (If this restriction, which we've added as a safety measure, is problematic for your use case, please contact the TensorFlow Serving team to discuss disabling it.)
If the config calls for batching, the emitted sessions automatically batch Run() calls behind the scenes, using a SharedBatchScheduler owned by the factory. The 'config.num_batch_threads' threads are shared across all session instances created by this factory. However, each session has its own dedicated queue of size 'config.max_enqueued_batches'.
The factory can also estimate the resource (e.g. RAM) requirements of a SavedModelBundle based on the SavedModel (i.e. prior to loading the session).
This class is thread-safe.
Definition at line 52 of file saved_model_bundle_factory.h.
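If batching is desired, the batching parameters must be populated on the SessionBundleConfig before the factory is created. A minimal sketch, assuming the BatchingParameters message from session_bundle_config.proto with Int64Value wrapper fields (the header path, accessors, and values are illustrative assumptions, not taken from this page):

  #include "tensorflow_serving/servables/tensorflow/session_bundle_config.pb.h"

  tensorflow::serving::SessionBundleConfig config;
  auto* batching = config.mutable_batching_parameters();
  // Threads shared by all sessions emitted by this factory.
  batching->mutable_num_batch_threads()->set_value(4);
  // Per-session queue depth.
  batching->mutable_max_enqueued_batches()->set_value(100);
  batching->mutable_max_batch_size()->set_value(32);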
static Status tensorflow::serving::SavedModelBundleFactory::Create (const SessionBundleConfig &config, std::unique_ptr< SavedModelBundleFactory > *factory)
Instantiates a SavedModelBundleFactory using a config.
  config    Config with initialization options.
  factory   Newly created factory if the returned Status is OK.
Definition at line 88 of file saved_model_bundle_factory.cc.
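A minimal usage sketch (the default-constructed config and the error-handling pattern are illustrative):

  #include "tensorflow_serving/servables/tensorflow/saved_model_bundle_factory.h"

  tensorflow::serving::SessionBundleConfig config;
  std::unique_ptr<tensorflow::serving::SavedModelBundleFactory> factory;
  const tensorflow::Status status =
      tensorflow::serving::SavedModelBundleFactory::Create(config, &factory);
  if (!status.ok()) {
    // Creation failed; inspect status.ToString() for details.
  }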
Status tensorflow::serving::SavedModelBundleFactory::CreateSavedModelBundle (const string &path, std::unique_ptr< SavedModelBundle > *bundle)
Instantiates a bundle from a given export or SavedModel path.
  path     Path to the model.
  bundle   Newly created SavedModelBundle if the returned Status is OK.
Definition at line 112 of file saved_model_bundle_factory.cc.
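A sketch of creating a bundle, assuming a 'factory' produced by Create() as above; the model path is illustrative:

  std::unique_ptr<tensorflow::SavedModelBundle> bundle;
  const tensorflow::Status status =
      factory->CreateSavedModelBundle("/models/my_model/1", &bundle);
  if (status.ok()) {
    // bundle->session supports (non-mutating) Run() calls, batched
    // automatically if the factory's config enables batching.
  }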
Status tensorflow::serving::SavedModelBundleFactory::CreateSavedModelBundleWithMetadata (const Loader::Metadata &metadata, const string &path, std::unique_ptr< SavedModelBundle > *bundle)
Instantiates a bundle from a given export or SavedModel path and the given metadata.
  metadata   Metadata to be associated with the bundle.
  path       Path to the model.
  bundle     Newly created SavedModelBundle if the returned Status is OK.
Definition at line 106 of file saved_model_bundle_factory.cc.
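A sketch of attaching metadata, assuming Loader::Metadata carries a ServableId with a name and version (the field layout, model name, and version here are assumptions for illustration):

  tensorflow::serving::Loader::Metadata metadata;
  metadata.servable_id.name = "my_model";   // hypothetical servable name
  metadata.servable_id.version = 1;         // hypothetical version
  std::unique_ptr<tensorflow::SavedModelBundle> bundle;
  const tensorflow::Status status = factory->CreateSavedModelBundleWithMetadata(
      metadata, "/models/my_model/1", &bundle);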
Status tensorflow::serving::SavedModelBundleFactory::EstimateResourceRequirement (const string &path, ResourceAllocation *estimate) const
Estimates the resources a SavedModel bundle will use once loaded, from its export path.
  path       Path to the model.
  estimate   Output resource usage estimate. Different kinds of resources (e.g. CPU, RAM) may be populated.
Definition at line 100 of file saved_model_bundle_factory.cc.
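A sketch of estimating resources before loading; since ResourceAllocation is a protocol buffer, DebugString() is used here only to inspect whatever quantities were populated (the path and logging are illustrative):

  tensorflow::serving::ResourceAllocation estimate;
  const tensorflow::Status status =
      factory->EstimateResourceRequirement("/models/my_model/1", &estimate);
  if (status.ok()) {
    LOG(INFO) << "Estimated resources: " << estimate.DebugString();
  }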