CachedModel
Model Examples
Examples of using the Model Class are in the Examples section at the bottom of this page. AWS Model setup and deployment are quite complicated to do manually, but the Workbench Model Class makes it a breeze!
CachedModel: Caches the method results for Workbench Models
CachedModel
Bases: CachedArtifactMixin, ModelCore
Note: Cached method values may lag underlying Model changes.
Common Usage
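A minimal sketch of common usage, reusing the abalone-regression model from the Examples section below:

```python
from workbench.cached.cached_model import CachedModel

# Construct a CachedModel and pull its cached details and health check
model = CachedModel("abalone-regression")
print(model.details())
print(model.health_check())
```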
__init__(uuid)

Initialize the CachedModel with the model's uuid (its Workbench model name, e.g. "abalone-regression").
confusion_matrix(capture_uuid='latest')
Retrieve the confusion matrix for the model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| capture_uuid | str | Specific capture_uuid (default: latest) | 'latest' |

Returns:

| Type | Description |
|---|---|
| Union[DataFrame, None] | pd.DataFrame: DataFrame of the Confusion Matrix (might be None) |
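A minimal sketch of pulling a confusion matrix; confusion matrices only apply to classifiers, and the model name "wine-classification" is illustrative:

```python
from workbench.cached.cached_model import CachedModel

# Grab a classification model ("wine-classification" is a placeholder name)
model = CachedModel("wine-classification")

# Defaults to the latest capture; the result may be None
cm_df = model.confusion_matrix()
if cm_df is not None:
    print(cm_df)
```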
details(**kwargs)
Retrieve the CachedModel Details.
Returns:

| Name | Type | Description |
|---|---|---|
| dict | dict | A dictionary of details about the CachedModel |
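A quick sketch of pulling model details:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
details = model.details()      # a plain dict
print(sorted(details.keys()))
```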
get_endpoint_inference_path()
Retrieve the Endpoint Inference Path.
Returns:

| Name | Type | Description |
|---|---|---|
| str | Union[str, None] | The Endpoint Inference Path |
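A short sketch; note that the return value may be None:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
path = model.get_endpoint_inference_path()
print(path if path is not None else "No endpoint inference path")
```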
get_inference_metrics(capture_uuid='latest')
Retrieve the captured inference metrics for this model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| capture_uuid | str | Specific capture_uuid (default: latest) | 'latest' |

Returns:

| Type | Description |
|---|---|
| Union[DataFrame, None] | pd.DataFrame: DataFrame of the Captured Metrics (might be None) |
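A minimal sketch of pulling captured metrics; the result may be None if nothing was captured for the requested capture_uuid:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
metrics_df = model.get_inference_metrics()  # 'latest' capture by default
if metrics_df is not None:
    print(metrics_df)
```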
get_inference_predictions(capture_uuid='auto_inference')
Retrieve the captured prediction results for this model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| capture_uuid | str | Specific capture_uuid (default: auto_inference) | 'auto_inference' |

Returns:

| Type | Description |
|---|---|
| Union[DataFrame, None] | pd.DataFrame: DataFrame of the Captured Predictions (might be None) |
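A short sketch requesting a specific capture; the 'model_training' capture_uuid comes from the list_inference_runs() output in the Examples section below:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
preds_df = model.get_inference_predictions("model_training")
if preds_df is not None:
    print(preds_df.head())
```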
health_check(**kwargs)
Retrieve the CachedModel Health Check.
Returns:

| Name | Type | Description |
|---|---|---|
| dict | dict | A dictionary of health check details for the CachedModel |
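A quick sketch of running the cached health check:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
health = model.health_check()  # a plain dict of health check details
print(health)
```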
list_inference_runs()
List the inference runs that have been captured for this model.

Returns:

| Type | Description |
|---|---|
| list[str] | list[str]: List of Inference Runs |
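A one-line sketch (the Pull Inference Run example below shows typical output):

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
print(model.list_inference_runs())  # e.g. ['auto_inference', 'model_training']
```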
summary(**kwargs)
Retrieve the CachedModel Details.
Returns:

| Name | Type | Description |
|---|---|---|
| dict | dict | A dictionary of details about the CachedModel |
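A quick sketch; like details(), summary() returns a plain dict:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
print(model.summary())
```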
workbench_meta()
Retrieve the Enumerated Model Type (REGRESSOR, CLASSIFIER, etc.).

Returns:

| Name | Type | Description |
|---|---|---|
| str | Union[str, None] | The Enumerated Model Type |
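A short sketch; per the table above, the return value may be None:

```python
from workbench.cached.cached_model import CachedModel

model = CachedModel("abalone-regression")
model_type = model.workbench_meta()
print(model_type)  # e.g. "REGRESSOR" (may be None)
```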
Examples
All of the Workbench Examples are in the Workbench Repository under the examples/ directory. For a full code listing of any example, please visit our Workbench Examples page.
Pull Inference Run
```python
from workbench.cached.cached_model import CachedModel

# Grab a Model
model = CachedModel("abalone-regression")

# List the inference runs
model.list_inference_runs()
['auto_inference', 'model_training']

# Grab specific inference results
model.get_inference_predictions("auto_inference")
     class_number_of_rings  prediction    id
0                       16   10.516158     7
1                        9    9.031365     8
..                     ...         ...   ...
831                      8    7.693689  4158
832                      9    7.542521  4167
```