TFMA Evaluators

`tensorflow_model_analysis.evaluators`

Init module for TensorFlow Model Analysis evaluators.
Functions
AnalysisTableEvaluator
```python
AnalysisTableEvaluator(
    key: str = ANALYSIS_KEY,
    run_after: str = LAST_EXTRACTOR_STAGE_NAME,
    include: Optional[Union[Iterable[str], Dict[str, Any]]] = None,
    exclude: Optional[Union[Iterable[str], Dict[str, Any]]] = None,
) -> Evaluator
```
Creates an Evaluator for returning Extracts data for analysis.
If both `include` and `exclude` are None, then `tfma.INPUT_KEY` extracts will be excluded by default.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `key` | `str` | Name to use for the key in the Evaluation output. |
| `run_after` | `str` | Extractor to run after (None means before any extractors). |
| `include` | `Optional[Union[Iterable[str], Dict[str, Any]]]` | List or map of keys to include in the output. Keys starting with '_' are automatically filtered out at write time. If a map of keys is passed, then the keys and sub-keys that exist in the map will be included in the output. An empty dict behaves as a wildcard matching all keys or the value itself. Since matching on feature values is not currently supported, an empty dict must be used to represent the leaf nodes. For example: `{'key1': {'key1-subkey': {}}, 'key2': {}}`. |
| `exclude` | `Optional[Union[Iterable[str], Dict[str, Any]]]` | List or map of keys to exclude from the output. If a map of keys is passed, then the keys and sub-keys that exist in the map will be excluded from the output. An empty dict behaves as a wildcard matching all keys or the value itself. Since matching on feature values is not currently supported, an empty dict must be used to represent the leaf nodes. For example: `{'key1': {'key1-subkey': {}}, 'key2': {}}`. |
| RETURNS | DESCRIPTION |
|---|---|
| `Evaluator` | Evaluator for collecting analysis data. The output is stored under the key 'analysis'. |
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If both `include` and `exclude` are used. |
Source code in tensorflow_model_analysis/evaluators/analysis_table_evaluator.py
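For example, a minimal usage sketch (the `include` map and the `run_model_analysis` wiring below are illustrative assumptions, not part of the generated reference):

```python
import tensorflow_model_analysis as tfma

# Keep only the 'features' extracts (and all of their sub-keys); the empty
# dict marks the leaf of the match, since matching on feature values is not
# supported.
analysis_evaluator = tfma.evaluators.AnalysisTableEvaluator(
    include={'features': {}},
)

# The evaluator can then be passed to an analysis run, e.g.:
# eval_result = tfma.run_model_analysis(
#     eval_shared_model=eval_shared_model,  # assumed to already exist
#     data_location='...',                  # path to evaluation examples
#     evaluators=[analysis_evaluator],
# )
```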
MetricsPlotsAndValidationsEvaluator
```python
MetricsPlotsAndValidationsEvaluator(
    eval_config: EvalConfig,
    eval_shared_model: Optional[MaybeMultipleEvalSharedModels] = None,
    metrics_key: str = METRICS_KEY,
    plots_key: str = PLOTS_KEY,
    attributions_key: str = ATTRIBUTIONS_KEY,
    run_after: str = SLICE_KEY_EXTRACTOR_STAGE_NAME,
    schema: Optional[Schema] = None,
    random_seed_for_testing: Optional[int] = None,
) -> Evaluator
```
Creates an Evaluator for evaluating metrics and plots.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `eval_config` | `EvalConfig` | Eval config. |
| `eval_shared_model` | `Optional[MaybeMultipleEvalSharedModels]` | Optional shared model (single-model evaluation) or list of shared models (multi-model evaluation). Only required if there are metrics to be computed in-graph using the model. |
| `metrics_key` | `str` | Name to use for the metrics key in the Evaluation output. |
| `plots_key` | `str` | Name to use for the plots key in the Evaluation output. |
| `attributions_key` | `str` | Name to use for the attributions key in the Evaluation output. |
| `run_after` | `str` | Extractor to run after (None means before any extractors). |
| `schema` | `Optional[Schema]` | A schema to use for customizing metrics and plots. |
| `random_seed_for_testing` | `Optional[int]` | Seed to use for unit testing. |
| RETURNS | DESCRIPTION |
|---|---|
| `Evaluator` | Evaluator for evaluating metrics and plots. The output will be stored under the 'metrics' and 'plots' keys. |
Source code in tensorflow_model_analysis/evaluators/metrics_plots_and_validations_evaluator.py
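A minimal construction sketch (the label key, model path, and config values below are assumptions for illustration):

```python
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # assumed label key
    slicing_specs=[tfma.SlicingSpec()],               # overall (unsliced)
)
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',     # hypothetical path
    eval_config=eval_config,
)

metrics_evaluator = tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config=eval_config,
    eval_shared_model=eval_shared_model,
)
```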
verify_evaluator
Verifies that the evaluator is matched with an extractor.
| PARAMETER | TYPE | DESCRIPTION |
|---|---|---|
| `evaluator` | `Evaluator` | Evaluator to verify. |
| `extractors` | `List[Extractor]` | Extractors to use in verification. |
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If an Extractor cannot be found for the Evaluator. |
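For example, a sketch of checking an evaluator against the default extractors (the config and model path below are hypothetical, as above):

```python
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(model_specs=[tfma.ModelSpec(label_key='label')])
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',  # hypothetical path
    eval_config=eval_config,
)

extractors = tfma.default_extractors(
    eval_config=eval_config, eval_shared_model=eval_shared_model)
evaluator = tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config=eval_config, eval_shared_model=eval_shared_model)

# Raises ValueError if no extractor matches the evaluator's run_after stage.
tfma.evaluators.verify_evaluator(evaluator, extractors)
```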