TFMA Writers
tensorflow_model_analysis.writers
Init module for TensorFlow Model Analysis writers.
Attributes
Writer (module attribute)
Writer = NamedTuple(
    "Writer",
    [("stage_name", str), ("ptransform", PTransform)],
)
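A writer is just this named pair: a Beam stage name plus the PTransform that performs the write. As a minimal sketch (the transform below is a hypothetical stand-in, not a real TFMA write transform):

```python
import apache_beam as beam
from tensorflow_model_analysis import writers

# Hypothetical writer used only for illustration; a real writer's
# ptransform would serialize and persist the output it receives.
debug_writer = writers.Writer(
    stage_name='WriteDebugLog',
    ptransform=beam.Map(print),
)
```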
Functions
EvalConfigWriter
EvalConfigWriter(
    output_path: str,
    eval_config: EvalConfig,
    output_file_format: str = EVAL_CONFIG_FILE_FORMAT,
    data_location: Optional[str] = None,
    data_file_format: Optional[str] = None,
    model_locations: Optional[Dict[str, str]] = None,
    filename: Optional[str] = None,
) -> Writer
Returns eval config writer.
PARAMETER | DESCRIPTION
---|---
`output_path` | Output path to write config to. TYPE: `str`
`eval_config` | EvalConfig to write. TYPE: `EvalConfig`
`output_file_format` | Output file format. Currently only 'json' is supported. TYPE: `str`
`data_location` | Optional path indicating where data is read from. This is only used for display purposes. TYPE: `Optional[str]`
`data_file_format` | Optional format of the input examples. This is only used for display purposes. TYPE: `Optional[str]`
`model_locations` | Dict of model locations keyed by model name. This is only used for display purposes. TYPE: `Optional[Dict[str, str]]`
`filename` | Name of file to store the config as. TYPE: `Optional[str]`
Source code in tensorflow_model_analysis/writers/eval_config_writer.py
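A sketch of constructing this writer; the paths, label key, and empty model name below are illustrative assumptions:

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import writers

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # assumed label key
)

config_writer = writers.EvalConfigWriter(
    output_path='/tmp/tfma_output',               # assumed output directory
    eval_config=eval_config,
    data_location='/tmp/data/examples.tfrecord',  # display-only metadata
    model_locations={'': '/tmp/model'},           # display-only metadata
)
```

The returned Writer can then be included among the writers used by the evaluation pipeline (e.g. via the writers argument of tfma.ExtractEvaluateAndWriteResults).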
MetricsPlotsAndValidationsWriter
MetricsPlotsAndValidationsWriter(
    output_paths: Dict[str, str],
    eval_config: EvalConfig,
    add_metrics_callbacks: Optional[List[AddMetricsCallbackType]] = None,
    metrics_key: str = METRICS_KEY,
    plots_key: str = PLOTS_KEY,
    attributions_key: str = ATTRIBUTIONS_KEY,
    validations_key: str = VALIDATIONS_KEY,
    output_file_format: str = _TFRECORD_FORMAT,
    rubber_stamp: Optional[bool] = False,
    stage_name: str = METRICS_PLOTS_AND_VALIDATIONS_WRITER_STAGE_NAME,
) -> Writer
Returns metrics and plots writer.
Note, sharding will be enabled by default if an output_file_format is provided. The files will be named <output_path>-SSSSS-of-NNNNN.<output_file_format>, following the standard Beam shard naming (SSSSS is the shard index and NNNNN the total shard count).
PARAMETER | DESCRIPTION
---|---
`output_paths` | Output paths keyed by output key (e.g. 'metrics', 'plots', 'validations'). TYPE: `Dict[str, str]`
`eval_config` | Eval config. TYPE: `EvalConfig`
`add_metrics_callbacks` | Optional list of metric callbacks (if used). TYPE: `Optional[List[AddMetricsCallbackType]]`
`metrics_key` | Name to use for metrics key in Evaluation output. TYPE: `str`
`plots_key` | Name to use for plots key in Evaluation output. TYPE: `str`
`attributions_key` | Name to use for attributions key in Evaluation output. TYPE: `str`
`validations_key` | Name to use for validations key in Evaluation output. TYPE: `str`
`output_file_format` | File format to use when saving files. Currently 'tfrecord' and 'parquet' are supported, and 'tfrecord' is the default. If using parquet, the output metrics and plots files will contain two columns, 'slice_key' and 'serialized_value'. The 'slice_key' column will be a structured column matching the metrics_for_slice_pb2.SliceKey proto. The 'serialized_value' column will contain a serialized MetricsForSlice or PlotsForSlice proto. The validation result file will contain a single column 'serialized_value', which will contain a single serialized ValidationResult proto. TYPE: `str`
`rubber_stamp` | True if this model is being rubber stamped. When a model is rubber stamped, diff thresholds will be ignored if an associated baseline model is not passed. TYPE: `Optional[bool]`
`stage_name` | The stage name to use when this writer is added to the Beam pipeline. TYPE: `str`
Source code in tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py
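A sketch of constructing the writer with per-output paths; the output directory is an assumption, and the dict keys mirror the default metrics/plots/validations key names:

```python
import os

import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import writers

output_dir = '/tmp/tfma_output'  # assumed output directory
eval_config = tfma.EvalConfig()  # a real config would define model_specs, etc.

metrics_writer = writers.MetricsPlotsAndValidationsWriter(
    output_paths={
        'metrics': os.path.join(output_dir, 'metrics'),
        'plots': os.path.join(output_dir, 'plots'),
        'validations': os.path.join(output_dir, 'validations'),
    },
    eval_config=eval_config,
    output_file_format='tfrecord',  # default; 'parquet' is also supported
)
```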
Write
Write(
    evaluation_or_validation: Union[Evaluation, Validation],
    key: str,
    ptransform: PTransform,
) -> Optional[PCollection]
Writes given Evaluation or Validation data using given writer PTransform.
PARAMETER | DESCRIPTION
---|---
`evaluation_or_validation` | Evaluation or Validation data. TYPE: `Union[Evaluation, Validation]`
`key` | Key for Evaluation or Validation output to write. It is valid for the key to not exist in the dict (in which case the write is a no-op). TYPE: `str`
`ptransform` | PTransform to use for writing. TYPE: `PTransform`
RAISES | DESCRIPTION
---|---
`ValueError` | If Evaluation or Validation is empty. The key does not need to exist in the Evaluation or Validation, but the dict must not be empty.
RETURNS | DESCRIPTION
---|---
`Optional[PCollection]` | The result of the underlying beam write PTransform. This makes it possible for interactive environments to execute your writer, as well as for downstream Beam stages to make use of the files that are written.
Source code in tensorflow_model_analysis/writers/writer.py
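A sketch of calling Write directly inside a Beam pipeline; the single-entry Evaluation dict and the output prefix are illustrative assumptions:

```python
import apache_beam as beam
from tensorflow_model_analysis import writers

with beam.Pipeline() as pipeline:
    # Illustrative Evaluation dict holding one pre-serialized output.
    evaluation = {
        'metrics': (
            pipeline
            | 'CreateMetrics' >> beam.Create([b'serialized-metrics-proto'])
        ),
    }
    # If 'metrics' were absent from the dict the write would be a no-op
    # (returning None); an empty dict would raise ValueError.
    _ = writers.Write(
        evaluation_or_validation=evaluation,
        key='metrics',
        ptransform=beam.io.WriteToTFRecord(
            file_path_prefix='/tmp/tfma_output/metrics'),  # assumed path
    )
```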
convert_slice_metrics_to_proto
convert_slice_metrics_to_proto(
    metrics: Tuple[SliceKeyOrCrossSliceKeyType, MetricsDict],
    add_metrics_callbacks: Optional[List[AddMetricsCallbackType]],
) -> MetricsForSlice
Converts the given slice metrics into a MetricsForSlice proto.
PARAMETER | DESCRIPTION
---|---
`metrics` | The slice metrics. TYPE: `Tuple[SliceKeyOrCrossSliceKeyType, MetricsDict]`
`add_metrics_callbacks` | A list of metric callbacks. This should be the same list as the one passed to tfma.Evaluate(). TYPE: `Optional[List[AddMetricsCallbackType]]`
RETURNS | DESCRIPTION
---|---
`MetricsForSlice` | The MetricsForSlice proto.
RAISES | DESCRIPTION
---|---
`TypeError` | If the type of the feature value in slice key cannot be recognized.
Source code in tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py
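A sketch of converting one slice's metrics dict into a MetricsForSlice proto; the slice key and metric values are made up, and plain string metric keys are assumed to be accepted here:

```python
from tensorflow_model_analysis import writers

# Illustrative (slice_key, metrics) pair for a single-feature slice.
slice_metrics = (
    (('age_group', 'adult'),),                    # assumed slice key
    {'accuracy': 0.87, 'example_count': 1000.0},  # assumed metric values
)

metrics_proto = writers.convert_slice_metrics_to_proto(
    slice_metrics, add_metrics_callbacks=None)
print(metrics_proto)
```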