Applications of Technology:
- Quantum validation systems
- Quantum computing research
- Quantum hardware manufacturers
Benefits:
- Low latency, low error system
- System fully integrated into control hardware
- N-state discrimination
- Facilitates mid-circuit measurements
Background: Current methods perform quantum state discrimination in software on a host computer, which introduces error and a high latency overhead, hindering extensibility. Addressing this bottleneck is critical to meeting the growing demand for efficient quantum computing.
Technology Overview:
This technology enables real-time quantum state discrimination on field-programmable gate array (FPGA) hardware by integrating a multi-layer neural network onto an RFSoC platform. Processing readout data directly on the control hardware eliminates the latency and errors associated with transferring quantum data to a host computer, achieving precise and rapid computation.
Running machine learning inference directly on the FPGA provides feedback and verification of quantum states, offering a robust, scalable solution for real-time quantum computing applications that surpasses existing methods limited by latency and state discrimination capability.
The key features of this technology include:
- In-situ neural network: The system makes use of a multi-layer feed-forward neural network on the FPGA platform to provide feedback and verification of quantum states. Each layer’s operations have been tuned for FPGA integration.
- Increased state discrimination capability: The system supports N-state configurations rather than only simple two-state qubit configurations. Performing discrimination within the control hardware eliminates the need to transfer data to a host computer and facilitates mid-circuit measurements, which are needed for advanced quantum algorithm development and error correction.
- Efficient normalization and quantization: Input data from qubit readouts are converted on the FPGA to the standard fixed-point representation needed for neural network computation, allowing the entire process to remain on-chip with minimal latency and error.
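To illustrate the idea behind the features above, the following is a minimal software sketch of fixed-point, feed-forward N-state discrimination of an integrated (I, Q) readout pair. The Q4.12 fixed-point format, layer sizes, and weights are illustrative assumptions, not details from the actual system; on the FPGA these operations would be implemented in hardware logic rather than NumPy.

```python
import numpy as np

# Hypothetical Q4.12 fixed-point format; the actual bit widths used on
# the RFSoC are not specified in the source.
FRAC_BITS = 12

def to_fixed(x):
    """Quantize floating-point values to signed fixed-point integers."""
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int64)

def fixed_matvec(W_fx, x_fx):
    """Fixed-point matrix-vector product with a rescaling right shift."""
    return (W_fx @ x_fx) >> FRAC_BITS

def discriminate(iq, layers):
    """Run a feed-forward network on a normalized (I, Q) readout pair.

    layers: list of (W_fx, b_fx) fixed-point weight/bias pairs.
    Returns the index of the most likely state (0..N-1).
    """
    a = to_fixed(iq)                      # normalize/quantize the input
    for i, (W_fx, b_fx) in enumerate(layers):
        a = fixed_matvec(W_fx, a) + b_fx
        if i < len(layers) - 1:           # ReLU on hidden layers only
            a = np.maximum(a, 0)
    return int(np.argmax(a))              # discriminated state label

# Toy 3-state (qutrit) example with illustrative, untrained weights.
rng = np.random.default_rng(0)
layers = [
    (to_fixed(rng.normal(size=(8, 2))), to_fixed(rng.normal(size=8))),
    (to_fixed(rng.normal(size=(3, 8))), to_fixed(rng.normal(size=3))),
]
state = discriminate([0.4, -0.7], layers)  # one of states 0, 1, 2
```

Because every step uses integer arithmetic and shifts, the whole pipeline maps naturally onto FPGA logic, which is what lets the system avoid round trips to a host computer.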
Development Stage: Engineering/pilot scale, with system validation in relevant environment (TRL 6)
Principal Investigators: Yilun Xu, Neel Vora, Gang Huang, Phuc Nguyen
IP Status: Patent pending.
Opportunities: Available for licensing or collaborative research.