packflow.backend package
Submodules
packflow.backend.base module
- class packflow.backend.base.InferenceBackend(**kwargs)[source]
Bases: ABC
Abstract base class for inference backends.
- backend_config_model
alias of BackendConfig
- abstract execute(inputs)[source]
The main execution of inference or analysis for the developed application.
This method should remain focused on passing data through the model/execution code for profiling purposes. Minimal pre- or post-processing should occur at this step unless absolutely necessary.
- Parameters:
inputs (List[Dict]) – The output of the transform_inputs method. If the transform_inputs method is not overridden, the data is formatted as records (a list of dictionaries).
- Returns:
Model Outputs
- Return type:
Any
Notes
The transform_outputs() method should handle all postprocessing including calculating metrics, converting outputs back to Python types, and other postprocessing steps. Try to keep this method focused purely on inference/analysis.
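As a sketch of this contract, a subclass might keep execute() limited to the model pass. The stand-in base class and the SumBackend example below are invented for illustration; only the abstract execute(inputs) signature comes from the documentation above.

```python
from abc import ABC, abstractmethod

# Minimal stand-in mirroring the contract described above; the real base
# class is packflow.backend.base.InferenceBackend.
class InferenceBackend(ABC):
    @abstractmethod
    def execute(self, inputs):
        """Run inference on a batch of records."""

class SumBackend(InferenceBackend):
    def execute(self, inputs):
        # Keep this method limited to the model pass; postprocessing
        # (metrics, type conversion) belongs in transform_outputs().
        return [sum(record.values()) for record in inputs]

backend = SumBackend()
print(backend.execute([{"a": 1, "b": 2}, {"a": 3, "b": 4}]))  # [3, 7]
```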
- get_metrics()[source]
Utility for collecting user-defined metrics and validating their contents
- Return type:
packflow.backend.configuration module
- pydantic model packflow.backend.configuration.BackendConfig[source]
Bases: BaseModel
See Backend Configuration for details.
JSON schema:
{ "title": "BackendConfig", "description": "See :ref:`Backend Configuration<backend-configuration>` for details.", "type": "object", "properties": { "verbose": { "default": true, "title": "Verbose", "type": "boolean" }, "input_format": { "$ref": "#/$defs/InputFormats", "default": "records" }, "rename_fields": { "additionalProperties": true, "default": {}, "title": "Rename Fields", "type": "object" }, "feature_names": { "default": [], "items": { "type": "string" }, "title": "Feature Names", "type": "array" }, "flatten_nested_inputs": { "default": false, "title": "Flatten Nested Inputs", "type": "boolean" }, "flatten_lists": { "default": false, "title": "Flatten Lists", "type": "boolean" }, "nested_field_delimiter": { "default": ".", "title": "Nested Field Delimiter", "type": "string" }, "ignore_delimiter_collisions": { "default": false, "title": "Ignore Delimiter Collisions", "type": "boolean" } }, "$defs": { "InputFormats": { "description": "See :ref:`Preprocessors<preprocessors>` for details.", "enum": [ "passthrough", "records", "numpy" ], "title": "InputFormats", "type": "string" } } }
- Fields:
  - field feature_names: List[str] = []
  - field flatten_lists: bool = False
  - field flatten_nested_inputs: bool = False
  - field ignore_delimiter_collisions: bool = False
  - field input_format: InputFormats = InputFormats.RECORDS
  - field nested_field_delimiter: str = '.'
  - field rename_fields: dict = {}
  - field verbose: bool = True
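To illustrate the rename_fields mapping, here is a hypothetical sketch (not packflow's actual implementation) of how such a mapping could be applied to a single record; the key names are invented for illustration:

```python
# Hypothetical sketch: rename keys per a rename_fields-style mapping,
# leaving unmapped keys untouched.
def rename(record: dict, rename_fields: dict) -> dict:
    return {rename_fields.get(key, key): value for key, value in record.items()}

print(rename({"f0": 30, "f1": 2}, {"f0": "age"}))  # {'age': 30, 'f1': 2}
```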
- class packflow.backend.configuration.InputFormats(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases: Enum
See Preprocessors for details.
- NUMPY = 'numpy'
- PASSTHROUGH = 'passthrough'
- RECORDS = 'records'
- packflow.backend.configuration.load_backend_configuration(backend_config_model=<class 'packflow.backend.configuration.BackendConfig'>, **backend_kwargs)[source]
Loads, resolves, and validates base configurations and overrides.
- Parameters:
backend_config_model (BackendConfig) – An instance of, or a subclass of, a BackendConfig Model to use for validation. Defaults to a base BackendConfig
**backend_kwargs – Optional keyword arguments to use as base parameters. These values are overridden by any configurations loaded from the environment configuration.
- Returns:
A validated configuration model
- Return type:
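The override precedence can be sketched as follows. This is a hypothetical illustration of the rule that keyword arguments serve as base parameters and are overridden by environment-loaded configuration; the environment variable name PACKFLOW_BACKEND_CONFIG and the JSON encoding are invented for the example.

```python
import json
import os

# Hypothetical sketch of the precedence described above: **backend_kwargs
# are base values, and environment-loaded configuration wins on conflict.
def resolve_config(env_var="PACKFLOW_BACKEND_CONFIG", **backend_kwargs):
    merged = dict(backend_kwargs)
    env_json = os.environ.get(env_var)
    if env_json:
        merged.update(json.loads(env_json))  # environment values win
    return merged

os.environ["PACKFLOW_BACKEND_CONFIG"] = '{"verbose": false}'
print(resolve_config(verbose=True, input_format="records"))
```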
packflow.backend.metrics module
- pydantic model packflow.backend.metrics.ExecutionMetrics[source]
Bases: BaseModel
JSON schema:
{ "title": "ExecutionMetrics", "type": "object", "properties": { "batch_size": { "title": "Batch Size", "type": "integer" }, "execution_times": { "$ref": "#/$defs/ExecutionTimes" }, "total_execution_time": { "default": null, "title": "Total Execution Time", "type": "number" } }, "$defs": { "ExecutionTimes": { "properties": { "preprocess": { "title": "Preprocess", "type": "number" }, "transform_inputs": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Transform Inputs" }, "execute": { "title": "Execute", "type": "number" }, "transform_outputs": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Transform Outputs" } }, "required": [ "preprocess", "execute" ], "title": "ExecutionTimes", "type": "object" } }, "required": [ "batch_size", "execution_times" ] }
- Fields:
  - field batch_size: int [Required]
  - field execution_times: ExecutionTimes [Required]
  - field total_execution_time: float = None
- Validators:
  - calculate_total_execution_time » all fields
- pydantic model packflow.backend.metrics.ExecutionTimes[source]
Bases: BaseModel
JSON schema:
{ "title": "ExecutionTimes", "type": "object", "properties": { "preprocess": { "title": "Preprocess", "type": "number" }, "transform_inputs": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Transform Inputs" }, "execute": { "title": "Execute", "type": "number" }, "transform_outputs": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Transform Outputs" } }, "required": [ "preprocess", "execute" ] }
- Fields:
  - field execute: float [Required]
  - field preprocess: float [Required]
  - field transform_inputs: Optional[float] = None
  - field transform_outputs: Optional[float] = None
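As a sketch of how these models fit together, a plain-dataclass stand-in (the real classes are pydantic models) can show what a total-time calculation like the calculate_total_execution_time validator plausibly computes; the exact validator logic is an assumption here.

```python
from dataclasses import dataclass
from typing import Optional

# Plain-dataclass stand-in for the pydantic ExecutionTimes model above.
@dataclass
class ExecutionTimes:
    preprocess: float
    execute: float
    transform_inputs: Optional[float] = None
    transform_outputs: Optional[float] = None

def total_execution_time(times: ExecutionTimes) -> float:
    # Assumed behavior: sum all recorded stage times, skipping the
    # optional stages that were never measured.
    stages = (times.preprocess, times.transform_inputs,
              times.execute, times.transform_outputs)
    return sum(t for t in stages if t is not None)

print(total_execution_time(ExecutionTimes(preprocess=0.1, execute=0.5)))
```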
packflow.backend.preprocessors module
- class packflow.backend.preprocessors.NumpyPreprocessor(config)[source]
Bases: Preprocessor
Converts input records to a numpy array.
- class packflow.backend.preprocessors.PassthroughPreprocessor(config)[source]
Bases: Preprocessor
Passthrough preprocessor that does nothing.
- class packflow.backend.preprocessors.Preprocessor(config)[source]
Bases: ABC
Base class for Packflow preprocessors.
- class packflow.backend.preprocessors.RecordsPreprocessor(config)[source]
Bases: Preprocessor
Records preprocessor that is highly optimized for:
- filtering fields
- ensuring proper order of keys
- flattening nested fields/lists
- process(raw_inputs)[source]
Process an input batch and make transformations as necessary.
This preprocessor focuses on three main transformations:
- Renaming input fields
- Filtering to only required fields
- Flattening objects (dictionaries and/or lists)
- Parameters:
raw_inputs (list[dict]) – Raw input records to transform
- Returns:
Transformed records
- Return type:
list[dict]
Notes
When accessing nested fields via delimiter notation (e.g., “a.b.c”) in feature_names or rename_fields, missing keys will be silently skipped and not included in the output.
If flatten_nested_inputs=False, the preprocessor will never flatten the input, preserving all keys exactly as they appear (including keys containing delimiters). Nested path access will be done via direct traversal instead of flattening.
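The flattening behavior can be sketched as follows; this is an illustrative reimplementation assuming the default '.' delimiter, not the optimized packflow code:

```python
def flatten_record(record: dict, delimiter: str = ".") -> dict:
    # Recursively flatten nested dictionaries into delimiter-joined keys,
    # approximating the flatten_nested_inputs behavior described above.
    flat = {}
    for key, value in record.items():
        if isinstance(value, dict):
            for sub_key, sub_value in flatten_record(value, delimiter).items():
                flat[f"{key}{delimiter}{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat

print(flatten_record({"a": {"b": {"c": 1}}, "d": 2}))  # {'a.b.c': 1, 'd': 2}
```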
- resolve()[source]
Resolve the config to ensure that required fields are satisfied.
- Return type:
None
Notes
This preprocessor may default to a passthrough when certain config fields are unset. For example, if the config specifies no fields to rename, no filtering to be done, and no flattening, then the run() method will default to passing data through for performance.
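The passthrough decision can be sketched as a simple predicate over the relevant config fields; the function below is a hypothetical illustration, not packflow's resolve() implementation.

```python
# Hypothetical sketch of the passthrough check described above: when no
# renames, no field filtering, and no flattening are configured, the
# preprocessor can skip all work and pass batches through unchanged.
def needs_processing(rename_fields: dict, feature_names: list,
                     flatten_nested_inputs: bool, flatten_lists: bool) -> bool:
    return bool(rename_fields or feature_names
                or flatten_nested_inputs or flatten_lists)

print(needs_processing({}, [], False, False))       # False -> passthrough
print(needs_processing({}, ["age"], False, False))  # True  -> full processing
```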
- packflow.backend.preprocessors.get_preprocessor(config)[source]
Instantiate a preprocessor based on the provided BackendConfig.
- Parameters:
config (BackendConfig) – The packflow BackendConfig for the inference backend.
- Return type:
- Raises:
Notes
This is a general factory function for simpler loading of preprocessor objects. All subclasses of the Preprocessor object follow the same API and can be executed generically.
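A factory like this can be sketched as a registry keyed on the InputFormats enum. This stand-in is invented for illustration: the real get_preprocessor takes a BackendConfig, dispatches on config.input_format, and returns a Preprocessor instance rather than a class name.

```python
from enum import Enum

# Stand-in copy of the documented enum; values match the ones listed above.
class InputFormats(Enum):
    PASSTHROUGH = "passthrough"
    RECORDS = "records"
    NUMPY = "numpy"

# Hypothetical registry mapping each format to its preprocessor's name.
PREPROCESSOR_REGISTRY = {
    InputFormats.PASSTHROUGH: "PassthroughPreprocessor",
    InputFormats.RECORDS: "RecordsPreprocessor",
    InputFormats.NUMPY: "NumpyPreprocessor",
}

def get_preprocessor_name(input_format: str) -> str:
    # InputFormats(...) raises ValueError for unknown format strings, one
    # plausible failure mode a factory like this would surface.
    return PREPROCESSOR_REGISTRY[InputFormats(input_format)]

print(get_preprocessor_name("records"))  # RecordsPreprocessor
```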