Quick quantum neural nets

Neural networks

Quantum superposition speeds up the training of binary neural networks and ensures that the resulting parameters are globally optimal.

Quantum speed-up in global optimization of binary neural nets

The performance of a neural network (NN) for a given task is largely determined by the calibration of the network parameters. Yet it has been shown that this calibration, also referred to as training, is generally NP-complete. This includes networks with binary weights, an important class of networks due to their practical hardware implementations. We therefore suggest an alternative approach to training binary NNs that utilizes a quantum superposition of weight configurations. We show that quantum training guarantees, with high probability, convergence to the globally optimal set of network parameters. This resolves two prominent issues of classical training: (1) the vanishing-gradient problem and (2) common convergence to sub-optimal network parameters. We prove that a solution is found after approximately $4n^{2}\log\!\left(\frac{n}{\delta}\right)\sqrt{\tilde{N}}$ calls to a comparing oracle, where δ is a precision parameter, n is the number of training inputs, and $\tilde{N}$ is the number of weight configurations. We give the explicit algorithm and implement it in numerical simulations.
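To illustrate the role of the comparing oracle, the following is a minimal classical sketch, not the paper's algorithm: it assumes a toy binary perceptron (the network, loss function, and the names comparing_oracle, loss are illustrative choices) and emulates the search for the globally optimal weight configuration by repeatedly asking the oracle for any configuration better than the current best. A quantum implementation would replace the inner random scan with amplitude amplification over the superposition of weight configurations, which is where the $\sqrt{\tilde{N}}$ factor in the oracle-call bound would come from.

```python
import itertools
import numpy as np

# Hypothetical toy setup: a binary perceptron with weights in {-1, +1}.
# n training inputs, N_tilde = 2**d weight configurations.
rng = np.random.default_rng(0)
d = 8                       # number of binary weights
n = 32                      # number of training inputs
X = rng.choice([-1, 1], size=(n, d))
w_true = rng.choice([-1, 1], size=d)
y = np.where(X @ w_true >= 0, 1, -1)   # labels from a hidden weight vector

def loss(w):
    """Number of misclassified training inputs for weight configuration w."""
    return int(np.sum(np.where(X @ w >= 0, 1, -1) != y))

def comparing_oracle(w, threshold):
    """Marks configurations whose loss beats the current best (threshold)."""
    return loss(w) < threshold

# Classical emulation of a minimum-finding loop over all weight configurations:
# in each round, search for any configuration the oracle marks as better.
# A quantum version would amplitude-amplify the marked configurations instead
# of scanning them one by one.
configs = [np.array(c) for c in itertools.product([-1, 1], repeat=d)]
best = configs[rng.integers(len(configs))]
best_loss = loss(best)
oracle_calls = 0

improved = True
while improved:
    improved = False
    for idx in rng.permutation(len(configs)):   # classical stand-in for quantum search
        oracle_calls += 1
        if comparing_oracle(configs[idx], best_loss):
            best, best_loss = configs[idx], loss(configs[idx])
            improved = True
            break

print(f"global minimum loss = {best_loss}, "
      f"oracle calls (classical emulation) = {oracle_calls}")
```

In this classical emulation the number of oracle calls scales with $\tilde{N}$ itself; the quoted quantum bound replaces that dependence with $\sqrt{\tilde{N}}$, which is the advertised speed-up.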