Pandas Dataframe Algorithms
Pandas Dataframes
Pandas dataframes will not scale as well as our Spark and SQL algorithms, but for moderately sized data these algorithms provide some nice functionality.
Pandas Dataframe Algorithms
Workbench has a growing set of algorithms and data processing tools for Pandas Dataframes. In general, these algorithms take a dataframe as input and return a dataframe with additional columns.
FeatureSpaceProximity
Bases: Proximity
Source code in src/workbench/algorithms/dataframe/feature_space_proximity.py
__init__(df, id_column, features, target=None, n_neighbors=10)
Initialize the FeatureSpaceProximity class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | DataFrame containing feature data. | *required* |
| `id_column` | `str` | Name of the column used as an identifier. | *required* |
| `features` | `List[str]` | List of feature column names to be used for neighbor computations. | *required* |
| `target` | `str` | Optional name of the target column. | `None` |
| `n_neighbors` | `int` | Number of neighbors to compute. | `10` |
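A minimal sketch of the kind of feature-space neighbor lookup this class provides, using scikit-learn's `NearestNeighbors` directly. The toy data, column names, and use of sklearn here are illustrative assumptions, not the class internals:

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Toy feature data (illustrative only)
df = pd.DataFrame({
    "id": ["a", "b", "c", "d"],
    "feat_1": [0.0, 0.1, 5.0, 5.1],
    "feat_2": [1.0, 1.1, 9.0, 9.2],
})
features = ["feat_1", "feat_2"]

# Fit a k-NN model on the feature columns (k=2: self + 1 neighbor)
knn = NearestNeighbors(n_neighbors=2).fit(df[features])
distances, indices = knn.kneighbors(df[features])

# For each row, report its nearest non-self neighbor and the distance to it
df["neighbor_id"] = df["id"].values[indices[:, 1]]
df["neighbor_distance"] = distances[:, 1]
print(df[["id", "neighbor_id", "neighbor_distance"]])
```

FeatureSpaceProximity wraps this kind of computation and hands you back a dataframe, so you don't manage the index bookkeeping yourself.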
Source code in src/workbench/algorithms/dataframe/feature_space_proximity.py
from_model(model)
classmethod
Create a FeatureSpaceProximity instance from a Workbench model object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | A Workbench model object. | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `FeatureSpaceProximity` | `FeatureSpaceProximity` | A new instance of the FeatureSpaceProximity class. |
Source code in src/workbench/algorithms/dataframe/feature_space_proximity.py
FingerprintProximity
Bases: Proximity
Source code in src/workbench/algorithms/dataframe/fingerprint_proximity.py
__init__(df, fingerprint_column, id_column, n_neighbors=10)
Initialize the FingerprintProximity class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | DataFrame containing fingerprints and other features. | *required* |
| `fingerprint_column` | `str` | Name of the column containing fingerprints. | *required* |
| `id_column` | `Union[int, str]` | Name of the column used as an identifier. | *required* |
| `n_neighbors` | `int` | Number of neighbors to compute. | `10` |
Source code in src/workbench/algorithms/dataframe/fingerprint_proximity.py
all_neighbors(include_self=False)
Compute nearest neighbors for all rows in the dataset.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `include_self` | `bool` | Whether to include self-loops in the results. | `False` |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | pd.DataFrame: A DataFrame of neighbors and their Tanimoto similarities. |
Source code in src/workbench/algorithms/dataframe/fingerprint_proximity.py
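To illustrate the Tanimoto similarity that `all_neighbors` reports, here is a small self-contained sketch computing pairwise similarities on toy binary fingerprints with NumPy. The fingerprint encoding and column names are assumptions for illustration, not the class's actual internals:

```python
import numpy as np
import pandas as pd

def tanimoto(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Tanimoto similarity between binary fingerprints: |A & B| / |A | B|."""
    union = np.sum(fp_a | fp_b)
    return float(np.sum(fp_a & fp_b) / union) if union else 0.0

# Toy binary fingerprints (real fingerprints are typically 1024+ bits)
df = pd.DataFrame({
    "id": ["mol_1", "mol_2", "mol_3"],
    "fingerprint": [
        np.array([1, 1, 0, 1, 0, 0, 1, 0]),
        np.array([1, 1, 0, 1, 0, 1, 1, 0]),
        np.array([0, 0, 1, 0, 1, 1, 0, 1]),
    ],
})

# Pairwise similarities for every (query, neighbor) pair, excluding self-loops
rows = []
for i, qrow in df.iterrows():
    for j, nrow in df.iterrows():
        if i != j:
            rows.append({
                "id": qrow["id"],
                "neighbor_id": nrow["id"],
                "similarity": tanimoto(qrow["fingerprint"], nrow["fingerprint"]),
            })
neighbors_df = pd.DataFrame(rows)
print(neighbors_df)
```

A similarity threshold (like the `similarity` argument of `neighbors` below) then amounts to a simple filter, e.g. `neighbors_df[neighbors_df["similarity"] >= 0.5]`.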
get_edge_weight(row)
neighbors(query_id, similarity=None, include_self=False)
Return neighbors of the given query ID, either by fixed neighbors or above a similarity threshold.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `query_id` | `Union[int, str]` | The ID of the query point. | *required* |
| `similarity` | `float` | Optional similarity threshold above which neighbors are to be included. | `None` |
| `include_self` | `bool` | Whether to include the query ID itself in the neighbor results. | `False` |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | pd.DataFrame: Filtered DataFrame that includes the query ID, its neighbors, and their similarities. |
Source code in src/workbench/algorithms/dataframe/fingerprint_proximity.py
ResidualsCalculator
Bases: BaseEstimator
, TransformerMixin
A custom transformer for calculating residuals using cross-validation or an endpoint.
This transformer performs K-Fold cross-validation (if no endpoint is provided), or it uses the endpoint to generate predictions and compute residuals. It adds 'prediction', 'residuals', 'residuals_abs', 'prediction_100', 'residuals_100', and 'residuals_100_abs' columns to the input DataFrame.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model_class` | `Union[RegressorMixin, XGBRegressor]` | The machine learning model class used for predictions. |
| `n_splits` | `int` | Number of splits for cross-validation. |
| `random_state` | `int` | Random state for reproducibility. |
| `endpoint` | `Optional` | The Workbench endpoint object for running inference, if provided. |
Source code in src/workbench/algorithms/dataframe/residuals_calculator.py
__init__(endpoint=None, reference_model_class=XGBRegressor)
Initializes the ResidualsCalculator with the specified parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `endpoint` | `Optional` | A Workbench endpoint object to run inference, if available. | `None` |
| `reference_model_class` | `Union[RegressorMixin, XGBRegressor]` | The reference model class for predictions. | `XGBRegressor` |
Source code in src/workbench/algorithms/dataframe/residuals_calculator.py
fit(X, y)
Fits the model. If no endpoint is provided, fitting involves storing the input data and initializing a reference model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `DataFrame` | The input features. | *required* |
| `y` | `Series` | The target variable. | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `self` | `BaseEstimator` | Returns an instance of self. |
Source code in src/workbench/algorithms/dataframe/residuals_calculator.py
transform(X)
Transforms the input DataFrame by adding 'prediction', 'residuals', 'residuals_abs', 'prediction_100', 'residuals_100', and 'residuals_100_abs' columns.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `X` | `DataFrame` | The input features. | *required* |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | pd.DataFrame: The transformed DataFrame with additional columns. |
Source code in src/workbench/algorithms/dataframe/residuals_calculator.py
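When no endpoint is supplied, residuals come from out-of-fold cross-validated predictions. A rough sketch of that idea using scikit-learn's `cross_val_predict`, with a plain `LinearRegression` standing in for the `XGBRegressor` default; the synthetic data and column construction here are illustrative, not the class's exact implementation:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Synthetic regression data (illustrative only)
rng = np.random.default_rng(42)
X = pd.DataFrame({"feat": rng.uniform(0, 10, 100)})
y = 3.0 * X["feat"] + rng.normal(0, 1, 100)

# Out-of-fold predictions: each row is predicted by a model that never saw it
preds = cross_val_predict(LinearRegression(), X, y, cv=5)

result = X.copy()
result["prediction"] = preds
result["residuals"] = y - preds
result["residuals_abs"] = result["residuals"].abs()

# The *_100 variants come from a model fit on 100% of the data
full_model = LinearRegression().fit(X, y)
result["prediction_100"] = full_model.predict(X)
result["residuals_100"] = y - result["prediction_100"]
result["residuals_100_abs"] = result["residuals_100"].abs()
print(result.head())
```

Comparing `residuals` (cross-validated) against `residuals_100` (in-sample) gives a quick read on how much a model's apparent fit depends on having seen the data.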
DimensionalityReduction: Perform Dimensionality Reduction on a DataFrame
DimensionalityReduction
Source code in src/workbench/algorithms/dataframe/dimensionality_reduction.py
__init__()
DimensionalityReduction: Perform Dimensionality Reduction on a DataFrame
fit_transform(df, features=None, projection='TSNE')
Fit and Transform the DataFrame.

Args:
    df: Pandas DataFrame
    features: List of feature column names (default: None)
    projection: The projection model to use (TSNE, MDS, or PCA; default: TSNE)

Returns:
    Pandas DataFrame with new columns x and y
Source code in src/workbench/algorithms/dataframe/dimensionality_reduction.py
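The effect of `fit_transform` can be sketched with scikit-learn's `PCA` directly: project the feature columns down to two components and attach them as new `x` and `y` columns. This is a simplified stand-in for the class (which also supports TSNE and MDS), with made-up data and column names:

```python
import pandas as pd
from sklearn.decomposition import PCA

# Toy data with three feature columns (illustrative only)
df = pd.DataFrame({
    "id": range(5),
    "f1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "f2": [2.1, 3.9, 6.2, 8.0, 9.8],
    "f3": [0.5, 0.4, 0.6, 0.5, 0.4],
})
features = ["f1", "f2", "f3"]

# Project the feature columns onto 2 components -> new x and y columns
coords = PCA(n_components=2).fit_transform(df[features])
df["x"], df["y"] = coords[:, 0], coords[:, 1]
print(df[["id", "x", "y"]])
```

The resulting `x`/`y` columns are what you would feed into a 2D scatter plot of the dataset.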
resolve_coincident_points(df)
staticmethod
Resolve coincident points in a DataFrame.

Args:
    df (pd.DataFrame): The DataFrame to resolve coincident points in

Returns:
    pd.DataFrame: The DataFrame with resolved coincident points
Source code in src/workbench/algorithms/dataframe/dimensionality_reduction.py
test()
Test for the Dimensionality Reduction Class
Source code in src/workbench/algorithms/dataframe/dimensionality_reduction.py
Questions?
The SuperCowPowers team is happy to answer any questions you may have about AWS and Workbench. Please contact us at workbench@supercowpowers.com or chat us up on Discord.