Quantum Feature Detector Demonstration Notebooks

The notebooks on this page provide an introduction to how Quantum Feature Detectors might be used. To run the notebooks locally, or on your own QMI, you will need to install QxQFD and its requirements, as outlined here, as well as Jupyter Notebook.

Note

Please be mindful that applying the quantum transformations can take some time. Certain cells in these notebooks, where the quantum model is transforming data for training or prediction, can take a few minutes to complete. This is due to the volume of quantum circuits being run; as quantum hardware develops further, these times will decrease.

Notebook 1: The Basics

Download this notebook here: demo/qxqfd_1_basics.ipynb

This notebook demonstrates the basics of using QxBranch’s Quantum Feature Detector library (QxQFD) on a simple problem. We will first see how a classical linear model performs on a simple prediction problem, and then compare this to the accuracy of results using two different quantum feature detectors.

The full documentation for QxQFD, including its requirements and where to download it, is available on QxBranch’s website.

The first step is to import the packages we will need, including generic Python packages as well as QxQFD itself.

import random
import math
import numpy as np
import pylab as plt

from sklearn import linear_model
from sklearn import metrics

import qxbranch.quantum_feature_detector as qxqfd

For this example, we will use a simple toy problem, classifying which of two synthetic distributions a data point belongs to. The following code generates and plots this data as a blue set and a red set:

def generate_train_and_test_circles_data(num_training_samples, num_testing_samples):
    """Generate a set of x,y data points assigned to one of two sets"""
    train_data, train_labels = generate_data_and_labels_for_circle_data(num_training_samples)
    test_data, test_labels = generate_data_and_labels_for_circle_data(num_testing_samples)
    return train_data, train_labels, test_data, test_labels

RED = 1
BLUE = 2

def generate_data_and_labels_for_circle_data(num_samples):
    data = []
    labels = []
    for i in range(num_samples):
        random_angle = random.random() * 2 * math.pi
        if random.random() > 0.5:
            # Assign the point to the red set
            radius = 2
            sigma = 0.1
            labels.append(RED)
        else:
            # Assign the point to the blue set
            radius = 1
            sigma = 0.05
            labels.append(BLUE)
        x = math.cos(random_angle) * radius + np.random.normal(0, sigma)
        y = math.sin(random_angle) * radius + np.random.normal(0, sigma)
        data.append([x, y])
    return data, labels


train_data, train_labels, test_data, test_labels = generate_train_and_test_circles_data(150, 150)
plt.figure(figsize = (5, 5))
# Plot the data points, colouring one set red and the other blue
plt.plot([train_data[i][0] for i in range(len(train_data)) if train_labels[i] == RED],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == RED], 'sr',
         [train_data[i][0] for i in range(len(train_data)) if train_labels[i] == BLUE],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == BLUE], 'sb')
[Figure notebook_1_figure_1.png: scatter plot of the generated training data, with one set in red and the other in blue]

Now, let’s train a logistic regression model on this binary classification problem:

# Train multi-class logistic regression model
lr = linear_model.LogisticRegression(solver='lbfgs')
lr.fit(train_data, train_labels)
print("Logistic regression Test Accuracy :: ", metrics.accuracy_score(test_labels, lr.predict(test_data)))
Logistic regression Test Accuracy ::  0.5333333333333333

That’s not much better than random guessing. Let’s introduce some quantum feature detectors (QFDs).

QFDs operate as complex, nonlinear functions where classical information is encoded into a quantum circuit, the circuit is run, and the measured results are decoded back to classical information. Using the features generated by transforming our training data using these quantum circuits is expected to help increase accuracy.

Let’s see how the classification performs when we build the same kind of logistic regression model on top of the QFD-transformed data.

# Generate 10 quantum feature detectors with random 4-qubit, 10-gate circuits
qubits = 4
gate_count = 10
data_size = 2
qfd_count = 10
circuit_generator = qxqfd.pyquil.CircuitGenerator(qubit_count=qubits, gate_count=gate_count, random_instance=random.Random(1))
qfds = [qxqfd.QuantumFeatureDetector(circuit=circuit) for circuit in circuit_generator.generate_multiple(qfd_count)]

# Build a model to use these QFDs, using the quantum kitchen sinks encoder & decoder included in QxQFD
model = qxqfd.QuantumModel(quantum_feature_detectors=qfds,
                           encoder=qxqfd.pyquil.EncoderQuantumKitchenSinks(qubit_count=qubits, data_size=data_size),
                           decoder=qxqfd.pyquil.DecoderMostCommon())

The next cell runs the quantum circuits we’ve generated on the Forest SDK QVM. If you’re running this on your local machine, please ensure you’ve started the QVM with the “qvm -S” command. On a Forest QMI, there should be no need to manually activate the QVM.

Please be mindful that the process of running the data through the quantum circuits during training and testing can take a few minutes.

# Train the model on the quantum-transformed data. By default, the underlying classical model is a logistic regression
model.train(training_data=train_data, data_labels=train_labels)
# Score the accuracy of the model on our previously-generated test data
metrics.accuracy_score(test_labels, model.predict(test_data))
0.9333333333333333

A much better result. This is because the QFDs are able to capture non-linear features. There are other methods of performing non-linear feature detection, and it remains an open research question whether quantum methods such as this provide advantages over classical methods.
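
For comparison, here is a minimal, purely classical sketch of one such non-linear method (not part of QxQFD, and hand-tailored to this particular dataset): append the squared radius of each point as an extra feature and refit the same kind of logistic regression.

# Classical comparison (illustrative only): add the squared radius as a hand-crafted non-linear feature
radial_train_data = [[x, y, x**2 + y**2] for x, y in train_data]
radial_test_data = [[x, y, x**2 + y**2] for x, y in test_data]
lr_radial = linear_model.LogisticRegression(solver='lbfgs')
lr_radial.fit(radial_train_data, train_labels)
print("Radial-feature logistic regression Test Accuracy :: ",
      metrics.accuracy_score(test_labels, lr_radial.predict(radial_test_data)))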

Notebook 2: Customization

Download this notebook here: demo/qxqfd_2_customization.ipynb

This notebook builds on the concepts introduced in the first notebook, introducing how to customize QxBranch’s Quantum Feature Detector library (QxQFD). This includes changing the gate set used to generate circuits, using different encoders and decoders, or even writing your own circuits, encoders and decoders.

We’ll start out by importing everything we need:

import random
import math
import numpy as np
import pylab as plt
from sklearn import metrics

from pyquil.quil import Program
import pyquil.gates

import qxbranch.quantum_feature_detector as qxqfd

In QxQFD, we make it easy for you to build quantum feature detectors (QFDs) by providing a CircuitGenerator class that builds random quantum circuits for Pyquil. By default, the CircuitGenerator uses the following gate sets to generate its circuits:

One-qubit gates: ('X', 'Y', 'Z', 'H', 'S', 'T', 'RX', 'RY', 'RZ')

Two-qubit gates: ('CZ', 'CNOT')

Perhaps we’d like to extend the generator to include the CPHASE gate, or to avoid using the X, Y, and Z gates. We can simply provide sequences of valid Pyquil gate names to the CircuitGenerator class on instantiation:

custom_circuit_generator = qxqfd.pyquil.CircuitGenerator(qubit_count=4, gate_count=10,
                                                         one_qubit_gate_types=('H', 'S', 'T', 'RX', 'RY', 'RZ'),
                                                         two_qubit_gate_types=('CZ', 'CNOT', 'CPHASE'))

So long as the provided strings match up to the names of gates in the pyquil.gates library, the CircuitGenerator will be able to use those gates, including the generation of random angles for parameterized gates. It is worth noting that three-qubit gates are not currently supported by the generator.
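
For instance, a generator restricted to parameterized rotation gates (an arbitrary configuration, shown purely for illustration) could be constructed like this:

# Only parameterized gates here, so the generator will pick random angles for every gate
rotation_only_generator = qxqfd.pyquil.CircuitGenerator(qubit_count=4, gate_count=10,
                                                        one_qubit_gate_types=('RX', 'RY', 'RZ'),
                                                        two_qubit_gate_types=('CZ',))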

Random gates are of course only a part of the story. As the field of quantum machine learning advances, we may find that certain circuits perform better for certain datasets, or to detect certain features. QxQFD supports the inclusion of your own circuits as quantum feature detectors.

As an example, let’s say we want to use three-qubit gates (such as CSWAP and CCNOT), which are not currently supported in the CircuitGenerator. To use three-qubit gates in a QFD circuit, we can define our own circuit and use it in a QFD. For this simple example, let’s define a 4-qubit circuit that applies a Hadamard transformation to each of the qubits, and then a couple of CCNOT gates.

To use this circuit in a QFD, we just define it as we would any pyquil.quil.Program, then wrap it in a QxQFD Pyquil Circuit:

# First, define and build our pyquil quantum program
quantum_program = Program()
for qubit_index in range(4):
    # Hadamard each of the qubits
    quantum_program.inst(pyquil.gates.H(qubit_index))
# Apply a couple of CCNOT gates
quantum_program.inst(pyquil.gates.CCNOT(0, 1, 2))
quantum_program.inst(pyquil.gates.CCNOT(1, 2, 3))

# Wrap the pyquil Program in a qxqfd.pyquil.Circuit class for use in a QuantumFeatureDetector
ccnot_circuit = qxqfd.pyquil.Circuit(quantum_transforms=quantum_program)
ccnot_qfd = qxqfd.QuantumFeatureDetector(circuit=ccnot_circuit)

Next, let’s have a look at encoder and decoder methods, and how to write your own.

Encoding and decoding are critically important to the performance of your QFDs. Encoders are used to take the classical data and program it onto the quantum circuit of your QFD, while decoders perform classical post-processing to interpret the circuit measurement results as classically usable information. There are a number of these included in QxQFD, described in detail in the QxQFD documentation.

As an example, though, the EncoderThreshold encodes each classical data value onto a qubit as either the |0⟩ or |1⟩ state, based on whether the value is above a given threshold.

The DecoderMostCommon decodes classical data by measuring the qubits after the QFD circuit has executed, and then if multiple runs of the circuit were performed, it selects the mostly commonly measured set of qubit results and returns that as a binary bit string.

These are just a few examples of how you might encode or decode the data, out of infinitely many possible approaches. QxQFD provides a simple interface for writing your own encoders and decoders. We’ll start with a custom Encoder that takes the classical data and uses it directly to set RX rotation angles on the qubits. Here’s how to implement that:

# Firstly, we define our class, being sure to inherit qxqfd.pyquil.Encoder as our super class
class EncoderXRotational(qxqfd.pyquil.Encoder):

    # Next we define our __init__, being sure to include the qubit_count arg that is required by
    # qxqfd.pyquil.Encoder
    def __init__(self, qubit_count):
        # In the __init__, more complicated encoders might have more logic, but for this one we just
        # need to call super
        super(EncoderXRotational, self).__init__(qubit_count=qubit_count)

    def encode_qubits(self, input_data, available_qubits):
        # This is the method used by the QFD to encode classical data. It takes in the vector of data,
        # and produces a pyquil Program that encodes that data onto a circuit

        # available_qubits is a list of the qubit indices provided by the Forest device being used to
        # run the circuit. It is primarily used to support the hardware QPU lattices, where some qubits
        # may be unavailable

        quantum_program = Program()

        # As this encoder uses a qubit for each element of data, this for-loop definition matches each
        # data element to the index of a qubit on the device lattice
        for qubit_index, data_point in zip(available_qubits, input_data):

            # For simplicity, let's assume that all the input data fits between -2*pi and +2*pi
            # For each data element, we apply a rotation gate to a qubit that rotates the qubit by
            # the data's value about the X-axis
            quantum_program.inst(pyquil.gates.RX(angle=data_point, qubit=qubit_index))

        # Finally, we return the resulting encoder program
        return quantum_program
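
As a quick sanity check (purely illustrative; the data values and qubit indices below are arbitrary), we can call the new encoder directly and print the pyquil program it produces:

# Encode a small example data vector onto qubits 0-3 and inspect the resulting Quil
example_encoder = EncoderXRotational(qubit_count=4)
print(example_encoder.encode_qubits(input_data=[0.5, -1.2, 2.0, 0.0], available_qubits=[0, 1, 2, 3]))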

Now that we have our custom encoder, let’s look at writing a decoder to go with it. For this decoder, let’s interpret the measured 0 and 1 qubit values as a binary number:

# Once again, we define our class, being sure to inherit qxqfd.pyquil.Decoder as our super class
class DecoderBinary(qxqfd.pyquil.Decoder):
    def __init__(self):
        # The qxqfd.pyquil.Decoder class doesn't have any arguments we need to pass through
        super(DecoderBinary, self).__init__()

    def decode_measurement(self, measurement_results):
        # This is the method used by the QFD to decode pyquil circuit measurement results. It takes
        # in the vector of measured qubit values, and returns something usable classically (if you
        # want to use the raw measurement values, try the base Decoder)

        # The results returned by pyquil circuits are dictionaries where the qubit indices are the
        # keys, and their values are the measured values. We'll want to convert this to a simple
        # list, sorted by qubit index
        qubit_indices = list(measurement_results.keys())
        decoded_result = [measurement_results[q][0] for q in sorted(qubit_indices)]

        # Use bit shifting to convert the list of binary values (1s and 0s) to an integer
        result = 0
        for bit in decoded_result:
            result = (result << 1) | bit
        return result
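
As another illustrative check (the measurement dictionary below is fabricated to match the format described in the comments above, not real device output), we can decode a sample result by hand:

example_decoder = DecoderBinary()
# Qubits 0-3 measured as 1, 0, 1, 1 are read as the binary number 1011, i.e. 11
print(example_decoder.decode_measurement({0: [1], 1: [0], 2: [1], 3: [1]}))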

Let’s try this QFD on example data generated as in the previous notebook:

def generate_train_and_test_circles_data(num_training_samples, num_testing_samples):
    """Generate a set of x,y data points assigned to one of two sets"""
    train_data, train_labels = generate_data_and_labels_for_circle_data(num_training_samples)
    test_data, test_labels = generate_data_and_labels_for_circle_data(num_testing_samples)
    return train_data, train_labels, test_data, test_labels

RED = 1
BLUE = 2

def generate_data_and_labels_for_circle_data(num_samples):
    data = []
    labels = []
    for i in range(num_samples):
        random_angle = random.random() * 2 * math.pi
        if random.random() > 0.5:
            # Assign the point to the red set
            radius = 2
            sigma = 0.1
            labels.append(RED)
        else:
            # Assign the point to the blue set
            radius = 1
            sigma = 0.05
            labels.append(BLUE)
        x = math.cos(random_angle) * radius + np.random.normal(0, sigma)
        y = math.sin(random_angle) * radius + np.random.normal(0, sigma)
        data.append([x, y])
    return data, labels


train_data, train_labels, test_data, test_labels = generate_train_and_test_circles_data(150, 150)
plt.figure(figsize = (5, 5))
# Plot the data points, colouring one set red and the other blue
plt.plot([train_data[i][0] for i in range(len(train_data)) if train_labels[i] == RED],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == RED], 'sr',
         [train_data[i][0] for i in range(len(train_data)) if train_labels[i] == BLUE],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == BLUE], 'sb')
[Figure notebook_2_figure_1.png: scatter plot of the generated training data, with one set in red and the other in blue]

We’ll build a QuantumModel using the encoder and decoder we’ve defined, generate some circuits using our customized gate set, and include our circuit with CCNOT gates. Then we’ll train and test our model, bearing in mind that all the customization examples in this notebook are arbitrary.

# Generate some QFDs using our custom gate sets
qfds = [qxqfd.QuantumFeatureDetector(circuit=circuit) for circuit in custom_circuit_generator.generate_multiple(4)]

# Add in our CCNOT circuit
qfds.append(ccnot_qfd)

# Build a model with our custom Encoder and Decoder
quantum_model = qxqfd.QuantumModel(quantum_feature_detectors=qfds,
                                   encoder=EncoderXRotational(qubit_count=4),
                                   decoder=DecoderBinary())

# Finally, let's score our QFD model
quantum_model.train(training_data=train_data, data_labels=train_labels)
metrics.accuracy_score(test_labels, quantum_model.predict(test_data))
0.7133333333333334

Not bad for a few QFDs and an arbitrary encoder and decoder. As the field of quantum machine learning develops, we expect efficient and accurate encoding and decoding methods to be designed and tailored to each application.

Notebook 3: Advanced Use

Download this notebook here: demo/qxqfd_3_advanced.ipynb

This notebook looks at how the QxBranch Quantum Feature Detector library (QxQFD) works on a real data science image classification problem. It builds on the concepts presented in the first and second notebooks, exploring how to classify images in the CIFAR-10 dataset using quantum feature detectors (QFDs), and explores their performance under various conditions.

We’ll start out by importing everything we need:

import numpy as np
import pickle
import pylab as plt
from sklearn import linear_model
from sklearn import metrics

import qxbranch.quantum_feature_detector as qxqfd

For these experiments, we’ll be using batch 1 of the CIFAR-10 dataset. This data may have been included with this notebook; if not, extracting the dataset from the Python tarball into a data directory co-located with this notebook should be sufficient to run the experiment.
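
If you need to fetch the dataset yourself, a sketch along the following lines should work (this assumes the standard CIFAR-10 download location; adjust the URL and paths to suit your environment):

import os
import tarfile
import urllib.request

# Download and extract the CIFAR-10 Python tarball into ./data if it isn't already there
cifar_url = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
if not os.path.exists('data/cifar-10-batches-py'):
    os.makedirs('data', exist_ok=True)
    urllib.request.urlretrieve(cifar_url, 'data/cifar-10-python.tar.gz')
    with tarfile.open('data/cifar-10-python.tar.gz', 'r:gz') as tar:
        tar.extractall(path='data')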

# If you store the data elsewhere, this filename can be changed
filename = 'data/cifar-10-batches-py/data_batch_1'
with open(filename, 'rb') as fo:
    cifar_10_batch_1 = pickle.load(fo, encoding='bytes')

Let’s start out by exploring the data:

# 1. Check out the fields of this data object to get data and labels fields
print(cifar_10_batch_1.keys())
dict_keys([b'batch_label', b'labels', b'data', b'filenames'])
# 2. Determine the overall dimensions of the data field
image_data = cifar_10_batch_1[b'data']
print(image_data.shape)
(10000, 3072)
# 3. Make sure there are 10 object classes in the labels field, and visualize the number of each
labels = cifar_10_batch_1[b'labels']
_ = plt.hist(labels, rwidth=0.9)
[Figure notebook_3_figure_1.png: histogram of the counts of each label]
# 4. Plot some example pictures to validate the data matches the labels

# Reshape the data into something we can plot
plottable_images = image_data.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("uint8")
plt.figure(figsize=(12, 12))

# We'll look at the first 12 images
image_id_range = range(12)
for image_id in image_id_range:
    plt.subplot(3, 4, image_id + 1)
    plt.title("Label = " + str(labels[image_id]))
    plt.imshow(np.reshape(plottable_images[image_id], (32, 32, 3)))

# Validate that the 2 cars and 2 dump trucks have the same labels (1 & 9, respectively)
[Figure notebook_3_figure_2.png: grid of 12 example CIFAR-10 images with their labels]

Now that we have our data and labels, let’s do some simple modeling. For these experiments, we’ll just use a subset of the data, and classify images of cats (3) & dogs (5).

Once you’re done working through this tutorial, you can edit the selected_labels in the method call to look at more labels, such as birds (2) or airplanes (0).
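
For reference, the CIFAR-10 label indices map to class names as follows (this is the dataset's standard ordering):

# Standard CIFAR-10 class names, indexed by label
cifar_10_class_names = {0: 'airplane', 1: 'automobile', 2: 'bird', 3: 'cat', 4: 'deer',
                        5: 'dog', 6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}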

def select_images_based_on_labels(image_data, labels, selected_labels, num_train, num_test):
    """Method that selects a subset of images from the CIFAR-10 dataset with the chosen labels"""
    current_index = 0
    train_data = []
    test_data = []
    train_labels = []
    test_labels = []
    while len(train_data) < num_train:
        if labels[current_index] in selected_labels:
            train_data.append(image_data[current_index])
            train_labels.append(labels[current_index])
        current_index = current_index + 1
    while len(test_data) < num_test:
        if labels[current_index] in selected_labels:
            test_data.append(image_data[current_index])
            test_labels.append(labels[current_index])
        current_index = current_index + 1
    # Return the training & test data sets with their labels
    return train_data, train_labels, test_data, test_labels

train_data, train_labels, test_data, test_labels = select_images_based_on_labels(image_data,
                                                                                 labels,
                                                                                 selected_labels=[3, 5], # cats & dogs
                                                                                 num_train=100,
                                                                                 num_test=100)

First up, we’ll see how a simple logistic regression linear model performs at classifying the data.

lr = linear_model.LogisticRegression(solver='lbfgs')
lr.fit(train_data, train_labels)
print("Logistic regression test accuracy :: ", metrics.accuracy_score(test_labels, lr.predict(test_data)))
Logistic regression test accuracy ::  0.57
/home/qxsde/.local/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:757: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
  "of iterations.", ConvergenceWarning)

The model’s performance was okay, but the solver didn’t converge.
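
If you’d like to address that warning, one possible tweak (an aside, not part of the original experiment; the rest of this notebook continues with the unscaled data) is to rescale the pixel values and allow the solver more iterations:

# Illustrative only: rescale pixels to [0, 1] and raise the lbfgs iteration limit
lr_scaled = linear_model.LogisticRegression(solver='lbfgs', max_iter=1000)
lr_scaled.fit(np.asarray(train_data) / 255.0, train_labels)
print("Scaled logistic regression test accuracy :: ",
      metrics.accuracy_score(test_labels, lr_scaled.predict(np.asarray(test_data) / 255.0)))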

Let’s see how this model performs when we use a single quantum feature detector (QFD) on the data before the logistic regression model.

# Generate 1 QFD with a random 10-qubit, 20-gate circuit
qubits = 10
gate_count = 20
data_size = 3072
non_zero_count = 1500
qfd_count = 1
circuit_generator = qxqfd.pyquil.CircuitGenerator(qubit_count=qubits, gate_count=gate_count)
circuit = circuit_generator.generate()

qks_encoder = qxqfd.pyquil.EncoderQuantumKitchenSinks(qubit_count=qubits, data_size=data_size, non_zero_count=non_zero_count)
qks_decoder = qxqfd.pyquil.DecoderMostCommon()

# Build a model using this QFD, with the quantum kitchen sink encoding and decoding framework.
quantum_model = qxqfd.QuantumModel(quantum_feature_detectors=(qxqfd.QuantumFeatureDetector(circuit=circuit),),
                                   encoder=qks_encoder, decoder=qks_decoder)

Now let’s see how our quantum model performs with just this QFD:

quantum_model.train(training_data=train_data, data_labels=train_labels)
print("Quantum model test accuracy :: ", metrics.accuracy_score(test_labels, quantum_model.predict(test_data)))
Quantum model test accuracy ::  0.49

A drop in performance is not surprising when we only have a single QFD.

Let’s see what happens to the test accuracy as we increase the number of QFDs in the model up to 40.

The code in this cell also demonstrates how to use the QFDs in the quantum model to transform the data and then manipulate the transformed data outside the model. This is useful while quantum hardware and simulators are still maturing, as retraining a full quantum model for every QFD count up to 40 would be very time consuming.

The code presented here applies all 40 QFDs to the training and test datasets first, then takes the subset of data transformed by the desired number of QFDs and trains and scores a regression model on that subset.

qfd_count = 40
qfds = [qxqfd.QuantumFeatureDetector(circuit=circuit) for circuit in circuit_generator.generate_multiple(qfd_count)]
quantum_model = qxqfd.QuantumModel(quantum_feature_detectors=qfds, encoder=qks_encoder, decoder=qks_decoder)
# Apply the QFDs to the training and test datasets
transformed_training_data = quantum_model.quantum_transform_data(train_data)
transformed_testing_data = quantum_model.quantum_transform_data(test_data)

regression_model = linear_model.LogisticRegression(solver='lbfgs')

accuracy = []
for number_qfds in range(1, qfd_count):
    # These lines of code select the subsets of data transformed by varying numbers of QFDs
    modified_training_data = [x[0:(number_qfds+1)*10] for x in transformed_training_data]
    modified_testing_data = [x[0:(number_qfds+1)*10] for x in transformed_testing_data]
    regression_model.fit(modified_training_data, train_labels)
    accuracy.append(metrics.accuracy_score(test_labels, regression_model.predict(modified_testing_data)))

# Plot accuracy against the number of QFDs
plt.plot(accuracy)
plt.xlabel("Number of QFDs")
plt.ylabel("Test set accuracy")
[Figure notebook_3_figure_3.png: test set accuracy vs. number of QFDs]

We observe that performance improves marginally as more QFDs are added, so adding QFDs can increase test set accuracy. We can also test how varying the amount of training data influences test accuracy.

# Increase the number of training images from 100 to 1000
train_data, train_labels, test_data, test_labels = select_images_based_on_labels(image_data,
                                                                                 labels,
                                                                                 selected_labels=[3, 5],
                                                                                 num_train=1000,
                                                                                 num_test=100)

# Let's build a quantum model using only 5 of the QFDs from the last step
quantum_model = qxqfd.QuantumModel(quantum_feature_detectors=qfds[0:5], encoder=qks_encoder, decoder=qks_decoder)

# Apply the subset of QFDs to the training and test datasets
transformed_training_data = quantum_model.quantum_transform_data(train_data)
transformed_testing_data = quantum_model.quantum_transform_data(test_data)

accuracy = []
for number_training_data in range(100):
    # These lines of code train regression models on subsets of the transformed training images
    regression_model.fit(transformed_training_data[0:(number_training_data+1)*10],
                         train_labels[0:(number_training_data+1)*10])
    accuracy.append(metrics.accuracy_score(test_labels, regression_model.predict(transformed_testing_data)))

# Plot accuracy vs the number of training images
plt.plot(np.linspace(10, 1000, 100), accuracy)
plt.xlabel("Number of training images")
plt.ylabel("Test set accuracy")
[Figure notebook_3_figure_4.png: test set accuracy vs. number of training images]

We can also experiment with other ways of processing the data classically. Perhaps we want to break each image into 4 different “chunks” and then run the QFDs on each subsection of the image; this preserves some of the spatial structure of the images.

# These helper functions break an image apart into subsections.

def get_transformed_split_data(data, quantum_model):
    """This method splits up the CIFAR-10 data and transforms it using a quantum model"""
    # Split the image into four quadrants
    image = data.reshape(1, 3, 32, 32).transpose(0, 2, 3, 1).astype("uint8")[0]
    split_data = split_image(image)
    chunked_data = []
    chunk_dimension = split_data[0].shape[0] * split_data[0].shape[1] * split_data[0].shape[2]
    for chunk in split_data:
        reshaped_chunk = np.reshape(chunk, chunk_dimension)
        chunked_data.append(reshaped_chunk)

    # Apply the quantum model to each chunk
    transformed_data = quantum_model.quantum_transform_data(chunked_data)
    # Flatten and return the transformed data
    return flatten(transformed_data)


def flatten(list_of_lists):
    return [item for sublist in list_of_lists for item in sublist]


def split_image(image):
    M = image.shape[0]//2
    N = image.shape[1]//2
    return [image[x:x + M, y:y + N] for x in range(0, image.shape[0], M) for y in range(0, image.shape[1], N)]


train_data, train_labels, test_data, test_labels = select_images_based_on_labels(image_data,
                                                                                 labels,
                                                                                 selected_labels=[3, 5],
                                                                                 num_train=200,
                                                                                 num_test=100)

# For this model, we need to change the data size being fed into the encoder - we can use the same QFDs from before
data_size = int(3072/4)
non_zero_count = int(data_size/2)
qks_encoder = qxqfd.pyquil.EncoderQuantumKitchenSinks(qubit_count=qubits, data_size=data_size, non_zero_count=non_zero_count)
quantum_model = qxqfd.QuantumModel(quantum_feature_detectors=qfds[0:10], encoder=qks_encoder, decoder=qks_decoder)


# Apply the subset of QFDs to the training and test datasets
transformed_training_data = [get_transformed_split_data(x, quantum_model) for x in train_data]
transformed_testing_data = [get_transformed_split_data(x, quantum_model) for x in test_data]

# Train and score the model
regression_model.fit(transformed_training_data, train_labels)
print("Quantum model test accuracy :: ", metrics.accuracy_score(test_labels,
                                                                regression_model.predict(transformed_testing_data)))
Quantum model test accuracy ::  0.54

Finally, we can also combine our QFD-transformed features with our original dataset and train a regression model on both, letting the classifier draw on the quantum-derived features and the raw pixel values at the same time.

# Take the original training data and zip it together with the transformed data
train_quantum_classical_data = [list(x[0]) + list(x[1][0:100]) for x in zip(transformed_training_data, train_data)]
test_quantum_classical_data = [list(x[0]) + list(x[1][0:100]) for x in zip(transformed_testing_data, test_data)]

# Train and score the model
regression_model.fit(train_quantum_classical_data, train_labels)
print("Quantum model test accuracy :: ", metrics.accuracy_score(test_labels,
                                                                regression_model.predict(test_quantum_classical_data)))
Quantum model test accuracy ::  0.54

These results are not very competitive with modern classical computing hardware, but we’ve only been working with 10 qubits and 20 gate transformations, which is a very small scale. As quantum computing hardware matures and larger scale hardware becomes available, we expect to see these results improve. Can you tweak this model to get better results?

There are no constraints on how these QFDs may be used in practice, and this library is intended as a first step to enabling experimentation with these constructs. We hope you enjoy using the quantum feature detector library, and find it to be an interesting and useful tool for exploring the possibilities of quantum machine learning.