neurophox.ml package

neurophox.ml.linear module

class neurophox.ml.linear.LinearMultiModelRunner(experiment_name, layer_names, layers, optimizer, batch_size, iterations_per_epoch=50, iterations_per_tb_update=5, logdir=None, train_on_test=False, store_params=True)[source]

Bases: object

Complex mean squared error linear optimization experiment that runs and tracks multiple model optimizations in parallel.

Parameters
  • experiment_name (str) – Name of the experiment

  • layer_names (List[str]) – List of layer names

  • layers (List[MeshLayer]) – List of transformer layers

  • optimizer (Union[OptimizerV2, List[OptimizerV2]]) – Optimizer for all layers or list of optimizers for each layer

  • batch_size (int) – Batch size for the optimization

  • iterations_per_epoch (int) – Iterations per epoch

  • iterations_per_tb_update (int) – Iterations per TensorBoard update

  • logdir (Optional[str]) – Logging directory for TensorBoard to track the losses of each layer (defaults to None for no logging)

  • train_on_test (bool) – Use the same set for training and testing

  • store_params (bool) – Store parameters during training for later visualization
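
A construction sketch is shown below, assuming the RM rectangular mesh layer from neurophox.tensorflow (the layer class and import path may differ in your installation) and a random target unitary drawn with scipy; the hyperparameter values are arbitrary:

    import tensorflow as tf
    from scipy.stats import unitary_group

    from neurophox.ml.linear import LinearMultiModelRunner
    from neurophox.tensorflow import RM  # assumed import path for the rectangular mesh layer

    units = 8
    target_unitary = unitary_group.rvs(units)  # random N x N unitary to learn

    runner = LinearMultiModelRunner(
        experiment_name="rm-experiment",
        layer_names=["rm_1", "rm_2"],
        layers=[RM(units), RM(units)],
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0025),
        batch_size=256,
        logdir=None  # set to a directory path to enable TensorBoard logging
    )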

iterate(target_unitary, cost_fn=<function complex_mse>)[source]

Run a gradient update toward the target unitary \(U\).

Parameters
  • target_unitary (ndarray) – Target unitary, \(U\).

  • cost_fn (Callable) – Cost function for the linear model (defaults to complex mean squared error)
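
A minimal sketch of a manual optimization loop using iterate, assuming runner and target_unitary from the construction example above; my_cost_fn stands in for any callable with the same (y_true, y_pred) signature as complex_mse:

    # 100 gradient updates with the default complex mean squared error cost.
    for _ in range(100):
        runner.iterate(target_unitary)

    # A custom cost function can be supplied instead:
    # runner.iterate(target_unitary, cost_fn=my_cost_fn)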

run(num_epochs, target_unitary, pbar=None)[source]

Run the optimization for num_epochs epochs toward the target unitary \(U\).

Parameters
  • num_epochs (int) – Number of epochs (defined in terms of iterations_per_epoch)

  • target_unitary (ndarray) – Target unitary, \(U\).

  • pbar (Optional[Callable]) – Progress bar (tqdm recommended)
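
A usage sketch for run and save, continuing the example above; tqdm is optional and the pickle filename is arbitrary:

    from tqdm import tqdm

    # Each epoch runs iterations_per_epoch gradient updates toward the target.
    runner.run(num_epochs=200, target_unitary=target_unitary, pbar=tqdm)

    # Persist losses (and stored parameters, if store_params=True) to a pickle file.
    runner.save("rm-experiment.p")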

save(savepath)[source]

Save the results of the multi-model runner to a pickle file.

Parameters

savepath (str) – Path to save results.

update_tensorboard(name)[source]

Update TensorBoard variables.

Parameters

name (str) – Name of the layer whose variables are updated

neurophox.ml.linear.complex_mse(y_true, y_pred)[source]
Parameters
  • y_true (Tensor) – The true labels, \(V \in \mathbb{C}^{B \times N}\)

  • y_pred (Tensor) – The predicted labels, \(\widehat{V} \in \mathbb{C}^{B \times N}\)

Returns

The complex mean squared error \(\boldsymbol{e} \in \mathbb{R}^B\), where given example \(\widehat{V}_i \in \mathbb{C}^N\), we have \(e_i = \frac{\|V_i - \widehat{V}_i\|^2}{N}\).
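
A minimal sketch of this formula (not the library's implementation), averaging the squared error magnitude over the \(N\) dimensions of each of the \(B\) examples:

    import tensorflow as tf

    def complex_mse_sketch(y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
        # e_i = ||V_i - V_hat_i||^2 / N for each example i in the batch
        squared_error = tf.abs(y_true - y_pred) ** 2   # real-valued, shape (B, N)
        return tf.reduce_mean(squared_error, axis=-1)  # shape (B,)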

neurophox.ml.linear.generate_keras_batch(units, target_unitary, batch_size)[source]
neurophox.ml.linear.normalized_fidelity(u, u_hat)[source]
Parameters
  • u (Tensor) – The true (target) unitary, \(U \in \mathrm{U}(N)\)

  • u_hat (Tensor) – The estimate of the unitary, \(\widehat{U}\) (not necessarily unitary itself)

Returns

The normalized fidelity, which is independent of the overall norm of \(\widehat{U}\).
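
A usage sketch comparing a random target unitary to a slightly perturbed estimate; the perturbation scale is arbitrary:

    import numpy as np
    import tensorflow as tf
    from scipy.stats import unitary_group

    from neurophox.ml.linear import normalized_fidelity

    units = 8
    u = unitary_group.rvs(units)
    u_hat = u + 0.01 * (np.random.randn(units, units) + 1j * np.random.randn(units, units))

    fidelity = normalized_fidelity(tf.constant(u), tf.constant(u_hat))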

neurophox.ml.nonlinearities module

neurophox.ml.nonlinearities.cnorm(inputs)[source]
Parameters

inputs (Tensor) – The input tensor, \(V\).

Return type

Tensor

Returns

The elementwise absolute value of the input, \(f(V) = |V|\).

neurophox.ml.nonlinearities.cnormsq(inputs)[source]
Parameters

inputs (Tensor) – The input tensor, \(V\).

Return type

Tensor

Returns

The elementwise squared absolute value of the input, \(f(V) = |V|^2\).
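
A usage sketch applying both nonlinearities to a random complex-valued batch of inputs:

    import numpy as np
    import tensorflow as tf

    from neurophox.ml.nonlinearities import cnorm, cnormsq

    # Random complex input V of shape (batch, units).
    v = tf.constant(np.random.randn(4, 8) + 1j * np.random.randn(4, 8))

    magnitudes = cnorm(v)     # elementwise |V|
    intensities = cnormsq(v)  # elementwise |V|^2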