hyperoptim
Functions:

- main – Command-line interface for Optuna hyperparameter optimization.
- objective – Optuna objective function for multi-objective neural-network architecture search.
main
main()
Command-line interface for Optuna hyperparameter optimization.
This function parses command-line arguments, creates (or resumes) an Optuna study, builds the training and validation datasets, and runs multi-objective optimization over neural-network depth and width. The optimization minimizes both validation loss and model size (parameter count) using optuna.samplers.NSGAIISampler. Study results and per-trial artifacts (TensorBoard logs, CSV histories, and model checkpoints) are persisted to the configured Optuna storage and project output directories.
Command Line Parameters
study_name : str
    Name of the Optuna study. This is used to namespace results and artifacts in the configured Optuna storage and project output directories.
--resume_study : bool, optional
    If set, the script attempts to load an existing study with the same name and resume optimization from there. If not set and a study with the same name already exists, the script exits with an error to prevent accidental overwriting.
--storage : str, optional
    Optuna storage URL (for example, sqlite:///example.db). Default is sqlite:///optuna.db.
--n_trials : int, optional
    Number of trials to run. Default is 1000. The number of trials is also capped at 1000 to prevent excessively long runs.
--rfp_only : bool, optional
    If set, the training and validation datasets are filtered to include only RFP equilibria. Default is False (i.e., use the full dataset).
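The documented command-line surface can be sketched with argparse. The flag names and defaults follow the description above, but the parser wiring is an assumption, not the real script:

```python
import argparse

# Sketch of the CLI documented above (illustrative, not the actual source).
parser = argparse.ArgumentParser(description="Optuna hyperparameter optimization")
parser.add_argument("study_name", type=str, help="Name of the Optuna study")
parser.add_argument("--resume_study", action="store_true",
                    help="Resume an existing study with the same name")
parser.add_argument("--storage", type=str, default="sqlite:///optuna.db",
                    help="Optuna storage URL")
parser.add_argument("--n_trials", type=int, default=1000,
                    help="Number of trials to run (capped at 1000)")
parser.add_argument("--rfp_only", action="store_true",
                    help="Use only RFP equilibria")

args = parser.parse_args(["demo_study", "--n_trials", "50"])
n_trials = min(args.n_trials, 1000)  # cap to prevent excessively long runs
```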
Notes
- GPU memory growth is enabled (if GPUs are available) via tf.config.experimental.set_memory_growth to reduce OOM issues.
- When --resume_study is used, the remaining number of trials is reduced by the number of already completed trials.
- A first trial with default parameters is enqueued for new studies to provide a known baseline configuration.
Source code in src/fpga_profile_reco/core/hyperoptim.py
objective
objective(trial: Trial, train_ds: Dataset, val_ds: Dataset, study_name: str) -> tuple[float, int]
Optuna objective function for multi-objective neural-network architecture search.
This objective builds a fpga_profile_reco.core.models.HardNN model from hyperparameters suggested by Optuna, trains it on the provided datasets, and returns a pair of objectives:

1. Best validation loss (minimize).
2. Model parameter count (minimize).
The function also attaches additional metrics to the trial via optuna.trial.Trial.set_user_attr for later analysis and writes TensorBoard logs, a CSV history, and a best-checkpoint model for each trial.
Parameters:

- trial (Trial) – Current Optuna trial used to sample hyperparameters and record results.
- train_ds (Dataset) – Training dataset.
- val_ds (Dataset) – Validation dataset.
- study_name (str) – Name of the Optuna study, used to namespace output directories for logs and checkpoints.
Returns:

- objectives (tuple[float, int]) – A 2-tuple (best_val_loss, n_params) suitable for a study created with directions=["minimize", "minimize"].
Notes
- Duplicate hyperparameter configurations are detected by comparing trial.params to the params of completed trials; duplicates are pruned by raising optuna.exceptions.TrialPruned.
- This code compiles the model with an optimizer only; the project uses a custom training loop inside the model to handle losses/metrics.
- Per-trial artifacts are written under (subpaths may vary by config):
  - cfg.TENSORBOARD_LOGS_DIR / "optuna" / study_name / <trial_id>
  - cfg.HISTORY_DIR / "optuna" / study_name
  - cfg.MODELS_DIR / "optuna" / study_name
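A sketch of how these artifact paths compose with pathlib; the concrete cfg directory values and the trial id are placeholders, not the project's configuration:

```python
from pathlib import Path

# Placeholder stand-ins for the project's cfg.* directory settings.
TENSORBOARD_LOGS_DIR = Path("logs/tensorboard")
HISTORY_DIR = Path("logs/history")
MODELS_DIR = Path("models")

study_name, trial_id = "demo_study", "3"  # illustrative values

# Per-trial artifact locations, mirroring the note above.
tb_dir = TENSORBOARD_LOGS_DIR / "optuna" / study_name / trial_id
history_dir = HISTORY_DIR / "optuna" / study_name
models_dir = MODELS_DIR / "optuna" / study_name
```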
Source code in src/fpga_profile_reco/core/hyperoptim.py