Towards Real-World Quantum Machine Learning
Publisher
Université d'Ottawa / University of Ottawa
Abstract
Quantum machine learning (QML) promises new representational and computational capabilities, yet practical deployment on near-term hardware is hampered by resource overheads, depth constraints, and fragile trainability. This thesis advances resource-aware QML by proposing architectures and kernels that retain expressivity while sharply reducing qubit counts, circuit depth, and entangling-gate budgets. The work is presented in an article-based format with three core contributions.
First, I introduce a Coherent Feed-Forward Quantum Neural Network (CFF-QNN) that preserves quantum coherence across all layers, mirrors the flexibility of classical feed-forward networks (adjustable hidden layers and nodes), and decouples the qubit requirement from the input feature dimension. Compared to prevailing QNN baselines, the CFF-QNN reduces both circuit depth and CNOT count by over 50% while achieving strong performance on standard benchmarks (e.g., 91% accuracy on the Wisconsin breast cancer dataset and 85% on a credit-card fraud dataset). This contribution is accompanied by an international patent filing (WO2025050205A1).
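As a concrete illustration of this design, the sketch below shows a fully coherent layered circuit in PennyLane; the library choice, gate set, layer structure, and parameter shapes are all my assumptions, not the patented CFF-QNN. Features are angle-encoded by data re-uploading, so the register width is fixed independently of the feature count, and the only measurement occurs at the very end.

```python
# Minimal sketch of a fully coherent layered QNN (illustrative, not the
# patented CFF-QNN). Features are angle-encoded via data re-uploading, so
# the qubit count is set by the chosen width, not the input dimension.
import pennylane as qml
import numpy as np

n_qubits = 4   # network "width": hypothetical choice, independent of feature count
n_layers = 2   # analogue of hidden layers

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def cff_qnn(features, weights):
    for layer in range(n_layers):
        # Re-upload the input vector onto the fixed-size register.
        for i, x in enumerate(features):
            qml.RY(x, wires=i % n_qubits)
        # Trainable single-qubit rotations plus a sparse CNOT chain,
        # keeping the two-qubit gate budget low.
        for w in range(n_qubits):
            qml.Rot(*weights[layer, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # Single terminal measurement: coherence is preserved throughout.
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, size=(n_layers, n_qubits, 3))
features = np.random.uniform(0, np.pi, size=6)  # 6 features on only 4 qubits
print(cff_qnn(features, weights))
```

The single terminal measurement is what "coherence across all layers" amounts to here: no mid-circuit measurement or classical feed-forward interrupts the state between layers.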
Second, I develop a resource-efficient quantum kernel that enables high-dimensional embeddings with substantially fewer qubits and entangling gates, the latter scaling linearly in the number of qubits. Empirically, the kernel delivers performance competitive with or superior to widely used classical kernels and popular quantum feature maps (e.g., on the Parkinson's disease dataset), and noisy simulations together with small-scale runs on superconducting hardware indicate suitability for near-term devices. This contribution is covered by a companion patent filing (WO2025073041A1).
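The following sketch shows the generic fidelity-kernel pattern this contribution builds on, again in PennyLane and with an illustrative stand-in feature map; the nearest-neighbour entangler chain and gate choices are assumptions, not the patented map.

```python
# Illustrative fidelity-kernel sketch: k(x1, x2) = |<phi(x2)|phi(x1)>|^2,
# estimated as the probability of returning to |0...0> after applying the
# feature map followed by its adjoint. The map below is a stand-in.
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def feature_map(x):
    for i in range(n_qubits):
        qml.RY(x[i % len(x)], wires=i)
    # Nearest-neighbour chain: entangling gates grow linearly with qubits.
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])

@qml.qnode(dev)
def overlap(x1, x2):
    feature_map(x1)
    qml.adjoint(feature_map)(x2)
    return qml.probs(wires=range(n_qubits))

def kernel(x1, x2):
    return overlap(x1, x2)[0]  # probability of the all-zeros outcome

x1 = np.array([0.1, 0.5, 0.9])
x2 = np.array([0.2, 0.4, 0.8])
print(kernel(x1, x2))  # one Gram-matrix entry, usable with
                       # sklearn.svm.SVC(kernel="precomputed")
```

The nearest-neighbour CNOT chain is what yields an entangling-gate count linear in the number of qubits; a fully connected feature map such as Qiskit's ZZFeatureMap with full entanglement scales quadratically instead.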
Third, I propose a quantum reservoir computing (QRC) scheme that reuses a fixed quantum feature-map circuit as the reservoir while injecting temporal memory via an explicit feedback loop. The register is recycled across time, so quantum resources remain constant, no quantum parameters are trained, and only a lightweight classical readout is fitted. Experiments on chaotic time series (e.g., Mackey-Glass) show competitive predictive accuracy with strict resource efficiency, and illuminate how feedback delay and entangling structure affect memory and error.
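A schematic of the feedback loop, under the same PennyLane assumption and with a sine wave standing in for Mackey-Glass, might look as follows; the injection scheme, one-step delay, and least-squares readout are illustrative choices, not the thesis's exact protocol.

```python
# Schematic quantum-reservoir loop: a fixed, untrained circuit is re-run at
# each time step, and its measured expectations are fed back (delay 1) with
# the next input. Only the linear readout W is fitted. All details are
# illustrative assumptions.
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def reservoir(u_t, fb):
    # Inject the current input plus the fed-back reservoir signal;
    # the circuit has no trainable quantum parameters.
    for i in range(n_qubits):
        qml.RY(u_t + fb[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

u = np.sin(np.linspace(0, 8 * np.pi, 200))  # stand-in signal, not Mackey-Glass
fb = np.zeros(n_qubits)
states = []
for u_t in u:
    fb = np.array(reservoir(u_t, fb))       # register recycled: feedback carries memory
    states.append(fb)

X = np.array(states[:-1])                   # reservoir state at time t
y = u[1:]                                   # one-step-ahead prediction target
W, *_ = np.linalg.lstsq(X, y, rcond=None)   # lightweight classical readout
print("train MSE:", float(np.mean((X @ W - y) ** 2)))
```

Note that only `W` is fitted; the circuit carries no trained quantum parameters, and the same small register is reused at every time step, which is the constant-resource property described above.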
Collectively, these results chart a path toward practical QML: coherent architectures and kernels that are trainable, scalable in input dimension, frugal in quantum resources, and viable on noisy intermediate-scale quantum (NISQ) devices, while providing design guidelines for future, larger-scale implementations.
Keywords
Quantum Machine Learning, Quantum AI, Quantum Computing, Machine Learning, Quantum Information Science
