TFMA Utils
tensorflow_model_analysis.utils
Init module for TensorFlow Model Analysis utils.
Classes
CombineFnWithModels
CombineFnWithModels(model_loaders: Dict[str, ModelLoader])
Bases: CombineFn
Abstract class for CombineFns that need the shared models.
Initializes CombineFn using dict of loaders keyed by model location.
Source code in tensorflow_model_analysis/utils/model_util.py
Functions
setup
Source code in tensorflow_model_analysis/utils/model_util.py
DoFnWithModels
DoFnWithModels(model_loaders: Dict[str, ModelLoader])
Bases: DoFn
Abstract class for DoFns that need the shared models.
Initializes DoFn using dict of model loaders keyed by model location.
Source code in tensorflow_model_analysis/utils/model_util.py
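A hedged sketch of subclassing DoFnWithModels. The base class loads the shared models during Beam's setup(); the attribute name self._loaded_models (keyed the same way as the model_loaders dict passed to the constructor) is an assumption, so verify it against model_util.py in your installed version.

```python
import apache_beam as beam
from tensorflow_model_analysis.utils import model_util


class _InferenceDoFn(model_util.DoFnWithModels):
  """Hypothetical DoFn that runs inference with a shared model."""

  def process(self, element):
    # Assumed attribute, populated by DoFnWithModels.setup().
    model = next(iter(self._loaded_models.values()))
    # Whatever inference API the loaded model object exposes.
    yield model.predict(element)
```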
Functions
calculate_confidence_interval
calculate_confidence_interval(
    t_distribution_value: ValueWithTDistribution,
)
Calculates a confidence interval based on a 95% confidence level.
Source code in tensorflow_model_analysis/utils/math_util.py
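A hedged sketch of calling calculate_confidence_interval. The ValueWithTDistribution field names used below (sample_mean, sample_standard_deviation, sample_degrees_of_freedom, unsampled_value) are assumptions about the tfma types module; verify them against your installed version.

```python
from tensorflow_model_analysis import types
from tensorflow_model_analysis.utils import math_util

# Assumed field names for the NamedTuple; values are illustrative only.
value = types.ValueWithTDistribution(
    sample_mean=0.8,
    sample_standard_deviation=0.05,
    sample_degrees_of_freedom=19,
    unsampled_value=0.81,
)
# Returns the interval derived from the t-distribution at the 95% level.
print(math_util.calculate_confidence_interval(value))
```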
compound_key
Returns a compound key based on a list of keys.
PARAMETER | DESCRIPTION
---|---
keys | Keys used to make up the compound key.
separator | Separator between keys. To ensure the keys can be parsed out of any compound key created, any use of a separator within a key will be replaced by two separators.
Source code in tensorflow_model_analysis/utils/util.py
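A minimal sketch of compound_key. The default separator is not shown above, so it is passed explicitly here; the expected outputs follow the separator-doubling rule described in the table.

```python
from tensorflow_model_analysis import utils as tfma_utils

# Builds a single string key out of the parts.
print(tfma_utils.compound_key(['head1', 'probabilities'], separator='_'))
# A separator character inside a key is doubled so the parts stay parseable,
# e.g. 'a_b' and 'c' are expected to become 'a__b_c'.
print(tfma_utils.compound_key(['a_b', 'c'], separator='_'))
```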
create_keys_key
create_values_key
get_baseline_model_spec
get_baseline_model_spec(
    eval_config: EvalConfig,
) -> Optional[ModelSpec]
Returns baseline model spec.
Source code in tensorflow_model_analysis/utils/model_util.py
get_by_keys
get_by_keys(
    data: Mapping[str, Any],
    keys: Sequence[Any],
    default_value=None,
    optional: bool = False,
) -> Any
Returns value with given key(s) in (possibly multi-level) dict.
The keys represent multiple levels of indirection into the data. For example, if 3 keys are passed then the data is expected to be a dict of dict of dict. For compatibility with data that uses prefixing to separate the keys within a single dict, lookups will also be searched for under the keys joined by '/'. For example, the keys 'head1' and 'probabilities' could be stored in a single dict as 'head1/probabilities'.
PARAMETER | DESCRIPTION
---|---
data | Dict to get the value from.
keys | Sequence of keys to look up in data. None keys will be ignored.
default_value | Default value if not found.
optional | Whether the key is optional or not. If default_value is None and optional is False then a ValueError will be raised if the key is not found.
RAISES | DESCRIPTION
---|---
ValueError | If a (non-optional) key is not found.
Source code in tensorflow_model_analysis/utils/util.py
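A minimal sketch of get_by_keys, following the documented lookup behavior above.

```python
from tensorflow_model_analysis import utils as tfma_utils

nested = {'head1': {'probabilities': [0.2, 0.8]}}
flat = {'head1/probabilities': [0.2, 0.8]}

# Multi-level lookup through nested dicts.
print(tfma_utils.get_by_keys(nested, ['head1', 'probabilities']))
# The same keys also resolve against '/'-prefixed flat dicts.
print(tfma_utils.get_by_keys(flat, ['head1', 'probabilities']))
# A missing key falls back to default_value when optional=True.
print(tfma_utils.get_by_keys(nested, ['head2'], default_value=None, optional=True))
```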
get_model_spec
Returns model spec with given model name.
Source code in tensorflow_model_analysis/utils/model_util.py
get_model_type
get_model_type(
    model_spec: Optional[ModelSpec],
    model_path: Optional[str] = "",
    tags: Optional[List[str]] = None,
) -> str
Returns model type for given model spec taking into account defaults.
The defaults are chosen such that if a model_path is provided and the model can be loaded as a keras model then TF_KERAS is assumed. Next, if tags are provided and the tags contain 'eval' then TF_ESTIMATOR is assumed. Lastly, if the model spec contains an 'eval' signature TF_ESTIMATOR is assumed; otherwise TF_GENERIC is assumed.
PARAMETER | DESCRIPTION
---|---
model_spec | Model spec.
model_path | Optional model path to verify if a keras model.
tags | Optional tags to verify if 'eval' is used.
Source code in tensorflow_model_analysis/utils/model_util.py
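A hedged sketch of get_model_type following the defaults described above; the expected results in the comments are inferred from that description, not asserted.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import utils as tfma_utils

# No model path and no tags, but an 'eval' signature in the spec, so per the
# documented defaults this should resolve to the estimator model type.
print(tfma_utils.get_model_type(tfma.ModelSpec(signature_name='eval')))

# An empty spec with no path or tags should fall through to the generic type.
print(tfma_utils.get_model_type(tfma.ModelSpec()))
```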
get_non_baseline_model_specs
get_non_baseline_model_specs(
    eval_config: EvalConfig,
) -> Iterable[ModelSpec]
Returns non-baseline model specs.
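A minimal sketch covering get_baseline_model_spec and get_non_baseline_model_specs together; the is_baseline field on ModelSpec is part of the TFMA config proto.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import utils as tfma_utils

eval_config = tfma.EvalConfig(model_specs=[
    tfma.ModelSpec(name='candidate'),
    tfma.ModelSpec(name='baseline', is_baseline=True),
])
# The spec flagged as baseline is returned by get_baseline_model_spec ...
print(tfma_utils.get_baseline_model_spec(eval_config).name)
# ... and the remaining specs by get_non_baseline_model_specs.
print([s.name for s in tfma_utils.get_non_baseline_model_specs(eval_config)])
```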
has_change_threshold
has_change_threshold(eval_config: EvalConfig) -> bool
Checks whether the eval_config has any change thresholds.
PARAMETER | DESCRIPTION
---|---
eval_config | The TFMA eval_config.
RETURNS | DESCRIPTION
---|---
bool | True when there are change thresholds, otherwise False.
Source code in tensorflow_model_analysis/utils/config_util.py
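A hedged sketch of has_change_threshold. The config below uses the public tfma config protos; the threshold values are illustrative only.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import utils as tfma_utils

eval_config = tfma.EvalConfig(metrics_specs=[
    tfma.MetricsSpec(metrics=[
        tfma.MetricConfig(
            class_name='AUC',
            threshold=tfma.MetricThreshold(
                change_threshold=tfma.GenericChangeThreshold(
                    direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                    absolute={'value': 0.01}))),
    ]),
])
# A change_threshold is present, so this is expected to print True.
print(tfma_utils.has_change_threshold(eval_config))
```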
merge_extracts
Merges a list of extracts into a single extract with multidimensional data.
Running split_extracts followed by merge_extracts with default options will not reproduce the exact shape of the original extracts: arrays of shape (x, 1) will be flattened to (x,). To maintain the original shape of extract values of shape (x, 1), run with these options: split_extracts(extracts, expand_zero_dims=False) and merge_extracts(extracts, squeeze_two_dim_vector=False).
PARAMETER | DESCRIPTION
---|---
extracts | Batched TFMA Extracts.
squeeze_two_dim_vector | Determines how the function will handle arrays of shape (x, 1). If squeeze_two_dim_vector is True, the array will be squeezed to shape (x,).
RETURNS | DESCRIPTION
---|---
Extracts | A single Extracts whose values have been grouped into batches.
Source code in tensorflow_model_analysis/utils/util.py
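A minimal sketch of merge_extracts; the exact shapes of the merged values depend on the squeeze behavior described above.

```python
import numpy as np
from tensorflow_model_analysis import utils as tfma_utils

extracts = [
    {'labels': np.array([1.0]), 'predictions': np.array([0.7])},
    {'labels': np.array([0.0]), 'predictions': np.array([0.2])},
]
merged = tfma_utils.merge_extracts(extracts)
# Values from each extract grouped into one batch per key.
print(merged['labels'])

# To preserve (x, 1) shapes exactly across a split/merge round trip, pair
# split_extracts(..., expand_zero_dims=False) with
# merge_extracts(..., squeeze_two_dim_vector=False) as noted above.
```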
model_construct_fn
model_construct_fn(
    eval_saved_model_path: Optional[str] = None,
    add_metrics_callbacks: Optional[
        List[AddMetricsCallbackType]
    ] = None,
    include_default_metrics: Optional[bool] = None,
    additional_fetches: Optional[List[str]] = None,
    blacklist_feature_fetches: Optional[List[str]] = None,
    tags: Optional[List[str]] = None,
    model_type: Optional[str] = TFMA_EVAL,
) -> Callable[[], Any]
Returns function for constructing shared models.
Source code in tensorflow_model_analysis/utils/model_util.py
unique_key
Returns a unique key given a list of current keys.
If the key exists in current_keys then a new key with _1, _2, ..., etc. appended will be returned, otherwise the key will be returned as passed.
PARAMETER | DESCRIPTION
---|---
key | Desired key name.
current_keys | List of current key names.
update_keys | True to append the new key to current_keys.
Source code in tensorflow_model_analysis/utils/util.py
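A minimal sketch of unique_key based on the suffixing behavior described above.

```python
from tensorflow_model_analysis import utils as tfma_utils

current = ['loss', 'accuracy']
# No clash, so the key comes back unchanged.
print(tfma_utils.unique_key('auc', current))
# 'loss' already exists, so a suffixed variant such as 'loss_1' is returned.
print(tfma_utils.unique_key('loss', current))
# With update_keys=True the returned key is also appended to current_keys.
print(tfma_utils.unique_key('loss', current, update_keys=True))
print(current)
```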
update_eval_config_with_defaults
update_eval_config_with_defaults(
    eval_config: EvalConfig,
    maybe_add_baseline: Optional[bool] = None,
    maybe_remove_baseline: Optional[bool] = None,
    has_baseline: Optional[bool] = False,
    rubber_stamp: Optional[bool] = False,
) -> EvalConfig
Returns a new config with default settings applied.
a) Add or remove a model_spec according to "has_baseline".
b) Fix the model names (model_spec.name) to tfma.CANDIDATE_KEY and tfma.BASELINE_KEY.
c) Update the metrics_specs with the fixed model name.
PARAMETER | DESCRIPTION
---|---
eval_config | Original eval config.
maybe_add_baseline | DEPRECATED. True to add a baseline ModelSpec to the config as a copy of the candidate ModelSpec that should already be present. This is only applied if a single ModelSpec already exists in the config and that spec doesn't have a name associated with it. When applied the model specs will use the names tfma.CANDIDATE_KEY and tfma.BASELINE_KEY. Only one of maybe_add_baseline or maybe_remove_baseline should be used.
maybe_remove_baseline | DEPRECATED. True to remove a baseline ModelSpec from the config if it already exists. Removal of the baseline also removes any change thresholds. Only one of maybe_add_baseline or maybe_remove_baseline should be used.
has_baseline | True to add a baseline ModelSpec to the config as a copy of the candidate ModelSpec that should already be present. This is only applied if a single ModelSpec already exists in the config and that spec doesn't have a name associated with it. When applied the model specs will use the names tfma.CANDIDATE_KEY and tfma.BASELINE_KEY. False to remove a baseline ModelSpec from the config if it already exists. Removal of the baseline also removes any change thresholds. Only one of has_baseline or maybe_remove_baseline should be used.
rubber_stamp | True if this model is being rubber stamped. When a model is rubber stamped, diff thresholds will be ignored if an associated baseline model is not passed.
RAISES | DESCRIPTION
---|---
RuntimeError | On missing baseline model for non-rubber-stamp cases.
Source code in tensorflow_model_analysis/utils/config_util.py
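A hedged sketch of update_eval_config_with_defaults: starting from a config with a single unnamed ModelSpec and has_baseline=True, the documented behavior is to add a baseline copy and rename the specs to tfma.CANDIDATE_KEY and tfma.BASELINE_KEY.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import utils as tfma_utils

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=[tfma.MetricsSpec(
        metrics=[tfma.MetricConfig(class_name='ExampleCount')])],
)
updated = tfma_utils.update_eval_config_with_defaults(
    eval_config, has_baseline=True)
# Expected per the description above: a candidate spec plus a baseline copy.
print([(s.name, s.is_baseline) for s in updated.model_specs])
```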
verify_and_update_eval_shared_models
verify_and_update_eval_shared_models(
    eval_shared_model: Optional[
        MaybeMultipleEvalSharedModels
    ],
) -> Optional[List[EvalSharedModel]]
Verifies eval shared models and normalizes them to produce a single list.
The output is normalized such that if a list or dict contains a single entry, the model name will always be empty.
PARAMETER | DESCRIPTION
---|---
eval_shared_model | None, a single model, a list of models, or a dict of models keyed by model name.
RETURNS | DESCRIPTION
---|---
Optional[List[EvalSharedModel]] | A list of models or None.
Source code in tensorflow_model_analysis/utils/model_util.py
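A hedged sketch of verify_and_update_eval_shared_models. The saved-model path below is a placeholder; substitute a real exported model before running. default_eval_shared_model is part of the public tfma API.

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import utils as tfma_utils

shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model')  # placeholder path
models = tfma_utils.verify_and_update_eval_shared_models(shared_model)
# Per the note above, a single input is normalized to a one-element list with
# an empty model name.
print(len(models), models[0].model_name)
```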
verify_eval_config
Verifies eval config.