models

Classes:

  • DeltaAModel

    Predict delta_a using a Horner-evaluated total-degree polynomial.

  • HardNN

    Floating-point neural network with physics-informed initial conditions.

  • QHardNN

    Quantized neural network variant of HardNN using HGQ QDense layers.

DeltaAModel

DeltaAModel(model_path: Path)

Predict delta_a using a Horner-evaluated total-degree polynomial.

The model is loaded from a NumPy .npz file containing the polynomial coefficients and metadata. Inputs must be provided in physical units (i.e., not scaled), matching the units used during training/fitting.

Parameters:

  • model_path

    (Path) –

    Path to the .npz file containing the model parameters. Expected keys are "C" (coefficients), "intercept", and "degree".

Attributes:

  • C (ndarray) –

    Polynomial coefficient tensor with float64 dtype and contiguous memory layout.

  • intercept (float) –

    Additive intercept of the polynomial.

  • degree (int) –

    Total degree of the polynomial.

Notes

On initialization, Numba kernels are "warmed up" (JIT-compiled) using representative scalar and vector inputs to reduce first-call latency.
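The Horner-evaluated total-degree scheme can be sketched in plain NumPy. This is an illustrative reimplementation, not the project's kernel: the actual evaluation lives in the Numba-compiled predict_delta_a_single / predict_delta_a_batch functions, and the (a, t, h) argument names below are hypothetical stand-ins for the three inputs.

```python
import numpy as np

def horner_total_degree(x, y, z, degree, C, intercept):
    """Evaluate intercept + sum_{i+j+k <= degree} C[i, j, k] * x^i * y^j * z^k
    using nested Horner recurrences, one per variable."""
    acc_x = 0.0
    for i in range(degree, -1, -1):                  # Horner in x
        acc_y = 0.0
        for j in range(degree - i, -1, -1):          # Horner in y over remaining degree
            acc_z = 0.0
            for k in range(degree - i - j, -1, -1):  # Horner in z
                acc_z = acc_z * z + C[i, j, k]
            acc_y = acc_y * y + acc_z
        acc_x = acc_x * x + acc_y
    return acc_x + intercept
```

Only the upper simplex i + j + k <= degree of C is ever read, which matches a total-degree (rather than tensor-product) polynomial basis.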

Source code in src/fpga_profile_reco/core/models.py
def __init__(self, model_path: Path):
    data = np.load(model_path)
    self.C = np.ascontiguousarray(data["C"], dtype=np.float64)
    self.intercept = float(data["intercept"])
    self.degree = int(data["degree"])

    # Warm-up compile with representative dtypes/shapes
    predict_delta_a_single(0.0, 0.0, 0.0, self.degree, self.C, self.intercept)

    a = np.ascontiguousarray(np.zeros(16, dtype=np.float64))
    t = np.ascontiguousarray(np.zeros(16, dtype=np.float64))
    h = np.ascontiguousarray(np.zeros(16, dtype=np.float64))
    out = np.ascontiguousarray(np.zeros(16, dtype=np.float64))

    predict_delta_a_batch(a, t, h, out, self.degree, self.C, self.intercept)
    predict_delta_a_parallel(a, t, h, out, self.degree, self.C, self.intercept)

HardNN

HardNN(architecture: dict, *, R0=R0, RA=RA, RMIN=RMIN, RMAX=RMAX, ALPHA_MAX=ALPHA_MAX, ALPHA_MIN=ALPHA_MIN, DELTA_H_MAX=DELTA_H_MAX, DELTA_H_MIN=DELTA_H_MIN, THETA_0_MAX=THETA_0_MAX, THETA_0_MIN=THETA_0_MIN, **kwargs)

Bases: Model

Floating-point neural network with physics-informed initial conditions.

The model predicts a 6-dimensional state vector and enforces the initial condition (IC) structure by combining a learned residual with an analytic IC term.

Inputs are expected to be scaled features with shape (batch, 5) and columns [r, alpha, theta_0, delta_h, delta_a]. Only the first four columns are passed through the neural network; delta_a is used only inside the IC computation.
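The column handling and hard-IC structure described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: residual_fn, ic_fn, and the vanishing factor (r - r0_scaled) are hypothetical stand-ins; the documentation only guarantees that a learned residual is combined with an analytic IC term, not this exact gating.

```python
import numpy as np

def hard_ic_forward(x, residual_fn, ic_fn, r0_scaled=0.0):
    """Sketch of a hard-constraint forward pass: the learned residual is
    multiplied by a factor that vanishes at the initial radius, so the
    analytic IC holds exactly there. The factor (r - r0_scaled) is one
    common choice, not necessarily the one used in HardNN."""
    nn_in = x[:, :4]               # [r, alpha, theta_0, delta_h] feed the network
    r = x[:, 0:1]                  # scaled radius column
    residual = residual_fn(nn_in)  # learned residual, shape (batch, 6)
    ic = ic_fn(x)                  # analytic IC term; uses delta_a (column 4)
    return ic + (r - r0_scaled) * residual
```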

Parameters:

  • architecture

    (dict) –

    Network architecture specification. Expected keys:

      - "units": list of integers, hidden layer sizes
      - "activation": activation for hidden layers
      - "output_size": integer, output dimension (expected 6)
      - "output_activation": activation for output layer

  • R0

    (float, default: R0 ) –

    Physical constants used by IC/observable computations.

  • RA

    (float, default: RA ) –

    Physical constants used by IC/observable computations.

  • RMIN

    (float, default: RMIN ) –

    Physical constants used by IC/observable computations.

  • RMAX

    (float, default: RMAX ) –

    Physical constants used by IC/observable computations.

  • ALPHA_MAX

    (float, default: ALPHA_MAX ) –

    Bounds used to map scaled alpha back to physical units.

  • ALPHA_MIN

    (float, default: ALPHA_MIN ) –

    Bounds used to map scaled alpha back to physical units.

  • DELTA_H_MAX

    (float, default: DELTA_H_MAX ) –

    Bounds used to map scaled delta_h back to physical units.

  • DELTA_H_MIN

    (float, default: DELTA_H_MIN ) –

    Bounds used to map scaled delta_h back to physical units.

  • THETA_0_MAX

    (float, default: THETA_0_MAX ) –

    Bounds used to map scaled theta_0 back to physical units.

  • THETA_0_MIN

    (float, default: THETA_0_MIN ) –

    Bounds used to map scaled theta_0 back to physical units.

  • **kwargs

    Forwarded to keras.Model.
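A minimal architecture dict satisfying the keys listed above might look like this; the layer widths and activations are illustrative, not the project's defaults.

```python
# Illustrative architecture dict with the documented keys.
architecture = {
    "units": [32, 32, 16],          # hidden layer sizes
    "activation": "relu",           # hidden-layer activation
    "output_size": 6,               # 6-dimensional state vector
    "output_activation": "linear",  # output-layer activation
}
```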

Attributes:

  • hidden_layers (list of keras.layers.Layer) –

    Hidden dense layers as specified by architecture["units"].

  • output_layer (Layer) –

    Final dense layer producing the per-radius residual output.

  • loss_tracker, obs_loss_tracker (Mean) –

    Training metrics tracked during train_step.

  • val_loss_tracker, val_obs_loss_tracker (Mean) –

    Validation metrics tracked during test_step.

Source code in src/fpga_profile_reco/core/models.py
def __init__(self, architecture: dict,
             *,
             R0=R0, RA=RA, RMIN=RMIN, RMAX=RMAX,
             ALPHA_MAX=ALPHA_MAX, ALPHA_MIN=ALPHA_MIN,
             DELTA_H_MAX=DELTA_H_MAX, DELTA_H_MIN=DELTA_H_MIN,
             THETA_0_MAX=THETA_0_MAX, THETA_0_MIN=THETA_0_MIN,
             **kwargs):
    super().__init__(**kwargs)

    self.architecture = architecture

    # store physical constants as attributes for use in the rhs and ic functions
    self.R0 = R0
    self.RA = RA
    self.RMIN = RMIN
    self.RMAX = RMAX
    self.ALPHA_MAX = ALPHA_MAX
    self.ALPHA_MIN = ALPHA_MIN
    self.DELTA_H_MAX = DELTA_H_MAX
    self.DELTA_H_MIN = DELTA_H_MIN
    self.THETA_0_MAX = THETA_0_MAX
    self.THETA_0_MIN = THETA_0_MIN

    # instantiate layers
    self.hidden_layers = []
    for units in self.architecture['units']:
        self.hidden_layers.append(keras.layers.Dense(units, activation=self.architecture['activation']))
    self.output_layer = keras.layers.Dense(self.architecture['output_size'], activation=self.architecture['output_activation'])

    self.loss_tracker = keras.metrics.Mean(name="loss")
    self.obs_loss_tracker = keras.metrics.Mean(name="obs_loss")

    self.val_loss_tracker = keras.metrics.Mean(name="val_loss")
    self.val_obs_loss_tracker = keras.metrics.Mean(name="val_obs_loss")

QHardNN

QHardNN(architecture: dict, quantization: dict, *, R0=R0, RA=RA, RMIN=RMIN, RMAX=RMAX, ALPHA_MAX=ALPHA_MAX, ALPHA_MIN=ALPHA_MIN, DELTA_H_MAX=DELTA_H_MAX, DELTA_H_MIN=DELTA_H_MIN, THETA_0_MAX=THETA_0_MAX, THETA_0_MIN=THETA_0_MIN, **kwargs)

Bases: Model

Quantized neural network variant of HardNN using HGQ QDense layers.

This model mirrors the structure of HardNN but uses quantized dense layers (HGQ) for FPGA-/hardware-oriented deployment. Quantization configs are supplied via the quantization dictionary.

Inputs are expected to be scaled features with shape (batch, 5) and columns [r, alpha, theta_0, delta_h, delta_a]. Only the first four columns are passed through the network; delta_a is used only in ICs.

Parameters:

  • architecture

    (dict) –

    Network architecture specification (see HardNN).

  • quantization

    (dict) –

    Quantization configuration for HGQ layers. Expected keys typically include w_config, b_config, d_config_input, d_config, and last-layer variants (e.g. w_config_last).

  • R0

    (float, default: R0 ) –

    Physical constants used by IC/observable computations.

  • RA

    (float, default: RA ) –

    Physical constants used by IC/observable computations.

  • RMIN

    (float, default: RMIN ) –

    Physical constants used by IC/observable computations.

  • RMAX

    (float, default: RMAX ) –

    Physical constants used by IC/observable computations.

  • ALPHA_MAX

    (float, default: ALPHA_MAX ) –

    Bounds used to map scaled alpha back to physical units.

  • ALPHA_MIN

    (float, default: ALPHA_MIN ) –

    Bounds used to map scaled alpha back to physical units.

  • DELTA_H_MAX

    (float, default: DELTA_H_MAX ) –

    Bounds used to map scaled delta_h back to physical units.

  • DELTA_H_MIN

    (float, default: DELTA_H_MIN ) –

    Bounds used to map scaled delta_h back to physical units.

  • THETA_0_MAX

    (float, default: THETA_0_MAX ) –

    Bounds used to map scaled theta_0 back to physical units.

  • THETA_0_MIN

    (float, default: THETA_0_MIN ) –

    Bounds used to map scaled theta_0 back to physical units.

  • **kwargs

    Forwarded to keras.Model.

Attributes:

  • hidden_layers (list of hgq.layers.QDense) –

    Quantized hidden layers. The first layer may use a distinct input quantizer configuration.

  • output_layer (QDense) –

    Quantized output layer, typically with output quantization enabled.

  • loss_tracker, obs_loss_tracker (Mean) –

    Training metrics tracked during train_step.

  • val_loss_tracker, val_obs_loss_tracker (Mean) –

    Validation metrics tracked during test_step.

Notes

The serialization helpers _ser/_deser are used so that HGQ quantization objects can be saved/restored via Keras configs.
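Based on the key names read in __init__ below, the quantization dict has the following shape. The values here are placeholders; the real entries are HGQ quantizer config objects, whose construction depends on HGQ's API and is not shown in this documentation.

```python
# Key skeleton for the quantization dict consumed by QHardNN.__init__.
quantization = {
    "w_config": None,        # kernel quantizer, hidden layers
    "b_config": None,        # bias quantizer, hidden layers
    "d_config_input": None,  # input quantizer, first hidden layer only
    "d_config": None,        # data quantizer, later hidden layers
    "w_config_last": None,   # kernel quantizer, output layer
    "b_config_last": None,   # bias quantizer, output layer
    "d_config_last": None,   # data and output quantizer, output layer
}
```

Note that the first hidden layer uses d_config_input while the remaining hidden layers use d_config, matching the branch on i == 0 in the source below.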

Source code in src/fpga_profile_reco/core/models.py
def __init__(self, architecture: dict, quantization: dict,
             *,
             R0=R0, RA=RA, RMIN=RMIN, RMAX=RMAX,
             ALPHA_MAX=ALPHA_MAX, ALPHA_MIN=ALPHA_MIN,
             DELTA_H_MAX=DELTA_H_MAX, DELTA_H_MIN=DELTA_H_MIN,
             THETA_0_MAX=THETA_0_MAX, THETA_0_MIN=THETA_0_MIN,
             **kwargs):
    super().__init__(**kwargs)

    self.architecture = architecture
    self.quantization = quantization

    # store physical constants as attributes for use in the rhs and ic functions
    self.R0 = R0
    self.RA = RA
    self.RMIN = RMIN
    self.RMAX = RMAX
    self.ALPHA_MAX = ALPHA_MAX
    self.ALPHA_MIN = ALPHA_MIN
    self.DELTA_H_MAX = DELTA_H_MAX
    self.DELTA_H_MIN = DELTA_H_MIN
    self.THETA_0_MAX = THETA_0_MAX
    self.THETA_0_MIN = THETA_0_MIN

    # instantiate layers
    self.hidden_layers = []
    for i, units in enumerate(self.architecture['units']):
        if i == 0:
            self.hidden_layers.append(hgq.layers.QDense(units=units,
                                                        activation=architecture['activation'],
                                                        kq_conf=quantization['w_config'],
                                                        bq_conf=quantization['b_config'],
                                                        iq_conf=quantization['d_config_input'],
                                                        enable_ebops=True,
                                                        )
                                    )
        else:
            self.hidden_layers.append(hgq.layers.QDense(units=units,
                                                        activation=architecture['activation'],
                                                        kq_conf=quantization['w_config'],
                                                        bq_conf=quantization['b_config'],
                                                        iq_conf=quantization['d_config'],
                                                        enable_ebops=True,
                                                        )
                                    )

    # last layer, enforce output quantization 
    self.output_layer = hgq.layers.QDense(self.architecture['output_size'],
                                          activation=self.architecture['output_activation'],
                                          kq_conf=quantization['w_config_last'],
                                          bq_conf=quantization['b_config_last'],
                                          iq_conf=quantization['d_config_last'],
                                          enable_ebops=True,
                                          enable_oq=True,
                                          oq_conf=quantization['d_config_last'],
                                        )

    self.loss_tracker = keras.metrics.Mean(name="loss")
    self.obs_loss_tracker = keras.metrics.Mean(name="obs_loss")

    self.val_loss_tracker = keras.metrics.Mean(name="val_loss")
    self.val_obs_loss_tracker = keras.metrics.Mean(name="val_obs_loss")