# Pandas Dataframe Algorithms

## Pandas Dataframes

Pandas dataframes will not scale as well as our Spark and SQL algorithms, but for moderately sized data these algorithms provide some nice functionality.

## Pandas Dataframe Algorithms

Workbench has a growing set of algorithms and data processing tools for Pandas DataFrames. In general, these algorithms take a dataframe as input and return a dataframe with additional columns.
### FeatureSpaceProximity

Bases: `Proximity`

Proximity computations for numeric feature spaces using Euclidean distance.

Source code in `src/workbench/algorithms/dataframe/feature_space_proximity.py`
#### `__init__(df, id_column, features, target=None, include_all_columns=False)`

Initialize the FeatureSpaceProximity class.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | DataFrame containing data for neighbor computations. | *required* |
| `id_column` | `str` | Name of the column used as the identifier. | *required* |
| `features` | `List[str]` | List of feature column names to be used for neighbor computations. | *required* |
| `target` | `Optional[str]` | Name of the target column. Defaults to None. | `None` |
| `include_all_columns` | `bool` | Include all DataFrame columns in neighbor results. Defaults to False. | `False` |

Source code in `src/workbench/algorithms/dataframe/feature_space_proximity.py`
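FeatureSpaceProximity's neighbor lookups boil down to Euclidean distance over the feature columns. Here is a minimal standalone sketch of that computation in plain pandas/numpy (the `euclidean_neighbors` helper and the toy column names are illustrative, not the Workbench API):

```python
import numpy as np
import pandas as pd

# Hypothetical toy data; "id", "feat_1", and "feat_2" are illustrative names
df = pd.DataFrame({
    "id": ["a", "b", "c", "d"],
    "feat_1": [0.0, 0.1, 5.0, 5.1],
    "feat_2": [0.0, 0.0, 5.0, 5.0],
})

def euclidean_neighbors(df: pd.DataFrame, query_id: str, features: list, n_neighbors: int = 2) -> pd.DataFrame:
    """Return the n_neighbors rows closest (by Euclidean distance) to query_id."""
    X = df[features].to_numpy(dtype=float)
    q = X[df.index[df["id"] == query_id][0]]
    out = df.copy()
    out["distance"] = np.linalg.norm(X - q, axis=1)
    # Exclude the query row itself, then take the closest n_neighbors
    out = out[out["id"] != query_id].nsmallest(n_neighbors, "distance")
    return out.reset_index(drop=True)

print(euclidean_neighbors(df, "a", ["feat_1", "feat_2"], n_neighbors=1)["id"].tolist())  # ['b']
```

The real class precomputes this structure once at construction time, which is why the feature list is passed to `__init__` rather than per query.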
### FingerprintProximity

Bases: `Proximity`

Proximity computations for binary fingerprints using Tanimoto similarity.

Note: Tanimoto similarity is equivalent to Jaccard similarity for binary vectors: Tanimoto(A, B) = |A ∩ B| / |A ∪ B|

Source code in `src/workbench/algorithms/dataframe/fingerprint_proximity.py`
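The Tanimoto formula above is easy to verify in a few lines of numpy (a standalone sketch, not the Workbench implementation):

```python
import numpy as np

def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto (Jaccard) similarity between two binary bit vectors: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    # Two all-zero vectors are conventionally treated as identical here
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

a = np.array([1, 1, 0, 1])
b = np.array([1, 0, 0, 1])
print(tanimoto(a, b))  # 2 shared bits / 3 set bits total = 0.666...
```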
#### `__init__(df, id_column, fingerprint_column=None, target=None, include_all_columns=False, radius=2, n_bits=1024, counts=False)`

Initialize the FingerprintProximity class for binary fingerprint similarity.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | DataFrame containing fingerprints or SMILES. | *required* |
| `id_column` | `str` | Name of the column used as an identifier. | *required* |
| `fingerprint_column` | `Optional[str]` | Name of the column containing fingerprints (bit strings). If None, looks for an existing "fingerprint" column or computes from SMILES. | `None` |
| `target` | `Optional[str]` | Name of the target column. Defaults to None. | `None` |
| `include_all_columns` | `bool` | Include all DataFrame columns in neighbor results. Defaults to False. | `False` |
| `radius` | `int` | Radius for Morgan fingerprint computation. Defaults to 2. | `2` |
| `n_bits` | `int` | Number of bits for the fingerprint. Defaults to 1024. | `1024` |
| `counts` | `bool` | Whether to use count simulation. Defaults to False. | `False` |

Source code in `src/workbench/algorithms/dataframe/fingerprint_proximity.py`
#### `isolated(top_percent=1.0)`

Find isolated data points based on Tanimoto similarity to their nearest neighbor.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `top_percent` | `float` | Percentage of most isolated data points to return (e.g., 1.0 returns the top 1%). | `1.0` |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame of observations with the lowest Tanimoto similarity, sorted ascending. |

Source code in `src/workbench/algorithms/dataframe/fingerprint_proximity.py`
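The idea behind `isolated()` can be sketched standalone: score each row by its highest Tanimoto similarity to any other row, then return the lowest-scoring fraction, sorted ascending. This sketch assumes fingerprints as rows of a binary numpy array and is illustrative, not the Workbench implementation:

```python
import numpy as np
import pandas as pd

def isolated(fps: np.ndarray, ids: list, top_percent: float = 1.0) -> pd.DataFrame:
    """Rows whose nearest-neighbor Tanimoto similarity falls in the lowest top_percent."""
    fps = fps.astype(bool)
    # Pairwise Tanimoto via broadcast intersection/union counts (O(n^2 * bits))
    inter = (fps[:, None, :] & fps[None, :, :]).sum(axis=2)
    union = (fps[:, None, :] | fps[None, :, :]).sum(axis=2)
    sim = np.where(union > 0, inter / np.maximum(union, 1), 1.0)
    np.fill_diagonal(sim, -1.0)            # ignore self-similarity
    nn_sim = sim.max(axis=1)               # best similarity to any other row
    out = pd.DataFrame({"id": ids, "nn_similarity": nn_sim}).sort_values("nn_similarity")
    n = max(1, int(round(len(out) * top_percent / 100.0)))
    return out.head(n).reset_index(drop=True)

fps = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 1]])
print(isolated(fps, ["x", "y", "z"], top_percent=34.0)["id"].tolist())  # ['z']
```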
#### `neighbors(id_or_ids, n_neighbors=5, min_similarity=None, include_self=True)`

Return neighbors for ID(s) from the existing dataset.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `id_or_ids` | `Union[str, int, List[Union[str, int]]]` | Single ID or list of IDs to look up. | *required* |
| `n_neighbors` | `Optional[int]` | Number of neighbors to return (ignored if min_similarity is set). Defaults to 5. | `5` |
| `min_similarity` | `Optional[float]` | If provided, find all neighbors with Tanimoto similarity >= this value (0-1). | `None` |
| `include_self` | `bool` | Whether to include self in the results. Defaults to True. | `True` |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame containing neighbors with Tanimoto similarity scores. |

Source code in `src/workbench/algorithms/dataframe/fingerprint_proximity.py`
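A standalone sketch of the neighbor logic: rank rows by Tanimoto similarity to a query fingerprint, then either take the top `n_neighbors` or keep everything at or above `min_similarity` (the `tanimoto_neighbors` helper is illustrative, not the Workbench API):

```python
import numpy as np
import pandas as pd

def tanimoto_neighbors(fps, ids, query_idx, n_neighbors=5, min_similarity=None, include_self=True):
    """Rank all rows by Tanimoto similarity to the row at query_idx."""
    fps = np.asarray(fps, dtype=bool)
    q = fps[query_idx]
    inter = (fps & q).sum(axis=1)
    union = (fps | q).sum(axis=1)
    sim = np.where(union > 0, inter / np.maximum(union, 1), 1.0)
    out = pd.DataFrame({"id": ids, "similarity": sim})
    if not include_self:
        out = out.drop(index=query_idx)
    out = out.sort_values("similarity", ascending=False)
    if min_similarity is not None:
        # Threshold mode: n_neighbors is ignored, mirroring the documented behavior
        return out[out["similarity"] >= min_similarity].reset_index(drop=True)
    return out.head(n_neighbors).reset_index(drop=True)

fps = [[1, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 0]]
print(tanimoto_neighbors(fps, ["a", "b", "c"], 0, n_neighbors=2)["id"].tolist())  # ['a', 'b']
```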
#### `proximity_stats()`

Return distribution statistics for nearest-neighbor Tanimoto similarity.

Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with similarity distribution statistics (count, mean, std, percentiles). |

Source code in `src/workbench/algorithms/dataframe/fingerprint_proximity.py`
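The statistics named here (count, mean, std, percentiles) are exactly what pandas' `describe` produces, so the shape of the result can be sketched from a hypothetical series of nearest-neighbor similarities (illustrative values, not Workbench output):

```python
import pandas as pd

# Hypothetical nearest-neighbor Tanimoto similarity values, one per observation
nn_sim = pd.Series([0.9, 0.85, 0.2, 0.95, 0.6], name="nn_similarity")

# count, mean, std, min/max, and percentiles: the statistics proximity_stats() reports
stats = nn_sim.describe(percentiles=[0.25, 0.5, 0.75]).to_frame()
print(stats)
```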
### Projection2D

Perform dimensionality reduction on a DataFrame using TSNE, MDS, PCA, or UMAP.

Source code in `src/workbench/algorithms/dataframe/projection_2d.py`
#### `__init__()`

#### `fit_transform(input_df, features=None, feature_matrix=None, metric='euclidean', projection='UMAP')`

Fit and transform a DataFrame using the selected dimensionality reduction method.

This method creates a copy of the input DataFrame, normalizes and projects the specified features, and returns a new DataFrame with added 'x' and 'y' columns containing the projected 2D coordinates.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_df` | `DataFrame` | The DataFrame containing features to project. | *required* |
| `features` | `list` | List of feature column names. If None, numeric columns are auto-selected. | `None` |
| `feature_matrix` | `ndarray` | Pre-computed feature matrix. If provided, features is ignored and no scaling is applied (the caller is responsible for appropriate preprocessing). | `None` |
| `metric` | `str` | Distance metric for UMAP (e.g., 'euclidean', 'jaccard'). Defaults to 'euclidean'. | `'euclidean'` |
| `projection` | `str` | The projection to use ('UMAP', 'TSNE', 'MDS', or 'PCA'). Defaults to 'UMAP'. | `'UMAP'` |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | A new DataFrame (a copy of input_df) with added 'x' and 'y' columns. |

Source code in `src/workbench/algorithms/dataframe/projection_2d.py`
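Of the listed projections, PCA is the easiest to sketch without extra dependencies: standardize the features, then project onto the top two principal components via SVD. This `pca_2d` helper mirrors the copy-and-add-'x'/'y' contract described above but is an illustrative sketch, not the Workbench implementation:

```python
import numpy as np
import pandas as pd

def pca_2d(input_df: pd.DataFrame, features: list) -> pd.DataFrame:
    """Return a copy of input_df with 'x' and 'y' columns from a 2-component PCA."""
    X = input_df[features].to_numpy(dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize (zero mean, unit variance)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)    # rows of Vt are the principal components
    coords = X @ Vt[:2].T                               # project onto the top two PCs
    out = input_df.copy()
    out["x"], out["y"] = coords[:, 0], coords[:, 1]
    return out

df = pd.DataFrame({"f1": [1.0, 2.0, 3.0, 4.0],
                   "f2": [2.1, 3.9, 6.2, 8.0],
                   "f3": [0.5, 0.4, 0.6, 0.5]})
projected = pca_2d(df, ["f1", "f2", "f3"])
print(projected[["x", "y"]].shape)  # (4, 2)
```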
#### `resolve_coincident_points(df)` (staticmethod)

Resolve coincident points using random jitter.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `df` | `DataFrame` | DataFrame with x and y coordinates. | *required* |

Returns:

| Type | Description |
|---|---|
| `DataFrame` | DataFrame with resolved coincident points. |

Source code in `src/workbench/algorithms/dataframe/projection_2d.py`
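A sketch of coincident-point resolution: find rows whose (x, y) duplicates an earlier row and nudge them with small random jitter so overlapping points become visible on a plot (illustrative, not the Workbench implementation; the jitter scale and seed are assumptions):

```python
import numpy as np
import pandas as pd

def resolve_coincident_points(df: pd.DataFrame, jitter_scale: float = 0.01, seed: int = 42) -> pd.DataFrame:
    """Add small Gaussian jitter to rows whose (x, y) duplicates an earlier row."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    dup = out.duplicated(subset=["x", "y"], keep="first")  # first occurrence stays put
    n = int(dup.sum())
    if n:
        out.loc[dup, "x"] += rng.normal(0, jitter_scale, n)
        out.loc[dup, "y"] += rng.normal(0, jitter_scale, n)
    return out

df = pd.DataFrame({"x": [1.0, 1.0, 2.0], "y": [3.0, 3.0, 4.0]})
resolved = resolve_coincident_points(df)
print(resolved.duplicated(subset=["x", "y"]).sum())  # 0
```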
### Questions?

The SuperCowPowers team is happy to answer any questions you may have about AWS and Workbench. Please contact us at workbench@supercowpowers.com or chat with us on Discord.