Working with QxBranch’s Quantum Feature Detector Library

In a recent blog post, we announced the release of QxBranch’s Quantum Feature Detector library (QxQFD) on the Rigetti Quantum Cloud Services (QCS) platform. In this post, we will work through an example of how to use the library to perform a simple classification task.

If you’d like to work through this example yourself, or experiment with the code to try to get a better result, you can download QxQFD from PyPI here, and the example as an IPython notebook here.

The first step is to import the packages we will need, including generic Python packages as well as QxQFD itself.

import random
import math
import numpy as np
import pylab as plt

from sklearn import linear_model
from sklearn import metrics

import qxbranch.quantum_feature_detector as qxqfd

For this example, we will use a simple toy problem, classifying which of two synthetic distributions a data point belongs to. The following code generates and plots this data as a blue set and a red set:

def generate_train_and_test_circles_data(num_training_samples, num_testing_samples):
    """Generate a set of x,y data points assigned to one of two sets"""
    train_data, train_labels = generate_data_and_labels_for_circle_data(num_training_samples)
    test_data, test_labels = generate_data_and_labels_for_circle_data(num_testing_samples)
    return train_data, train_labels, test_data, test_labels

RED = 1
BLUE = 2

def generate_data_and_labels_for_circle_data(num_samples):
    data = []
    labels = []
    for i in range(num_samples):
        random_angle = random.random() * 2 * math.pi
        if random.random() > 0.5:
            # Assign the point to the red set
            radius = 2
            sigma = 0.1
            labels.append(RED)
        else:
            # Assign the point to the blue set
            radius = 1
            sigma = 0.05
            labels.append(BLUE)
        x = math.cos(random_angle) * radius + np.random.normal(0, sigma)
        y = math.sin(random_angle) * radius + np.random.normal(0, sigma)
        data.append([x, y])
    return data, labels


train_data, train_labels, test_data, test_labels = generate_train_and_test_circles_data(150, 150)
plt.figure(figsize=(5, 5))
# Plot the data points, colouring one set red and the other blue
plt.plot([train_data[i][0] for i in range(len(train_data)) if train_labels[i] == RED],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == RED], 'sr',
         [train_data[i][0] for i in range(len(train_data)) if train_labels[i] == BLUE],
         [train_data[i][1] for i in range(len(train_data)) if train_labels[i] == BLUE], 'sb')

Now, let’s train a logistic regression model on this binary classification problem:

# Train a logistic regression model on the raw (x, y) data
lr = linear_model.LogisticRegression(solver='lbfgs')
lr.fit(train_data, train_labels)
print("Logistic regression Test Accuracy :: ", metrics.accuracy_score(test_labels, lr.predict(test_data)))
Logistic regression Test Accuracy ::  0.5333333333333333

That’s not much better than random guessing: the two classes form concentric rings, so no linear decision boundary in the raw (x, y) coordinates can separate them. Let’s introduce some quantum feature detectors (QFDs).

QFDs act as complex, nonlinear functions: classical data is encoded into a quantum circuit, the circuit is run, and the measurement results are decoded back into classical information. Training on the features produced by passing our data through these quantum circuits is expected to improve accuracy.
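To make this encode, run, decode pattern concrete, here is a purely classical sketch of the same idea, with a random cosine feature standing in for the quantum circuit. Nothing below uses QxQFD or pyQuil; the names random_weights, random_offsets, and classical_feature_detector are illustrative only.

rng = np.random.RandomState(1)
random_weights = rng.normal(size=(10, 2))            # one random linear projection per stand-in detector
random_offsets = rng.uniform(0, 2 * np.pi, size=10)

def classical_feature_detector(point):
    """Encode a 2-d point as angles, then apply a fixed nonlinearity in place of a quantum circuit."""
    encoded = random_weights @ np.asarray(point) + random_offsets
    return np.cos(encoded)  # "measure and decode" back to classical feature values

transformed_train = np.array([classical_feature_detector(p) for p in train_data])

In QxQFD the nonlinearity is instead a randomly generated quantum circuit: roughly speaking, the encoder maps each data point into the circuit, the circuit is executed, and the decoder turns the measurement results into feature values.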

Let’s see how this model performs when we build the same logistic regression model on top of the QFD transformed data.

# Generate 10 quantum feature detectors with random 4-qubit, 10-gate circuits
qubits = 4
gate_count = 10
data_size = 2  # the data points generated above have two components (x, y)
qfd_count = 10
circuit_generator = qxqfd.pyquil.CircuitGenerator(qubit_count=qubits, gate_count=gate_count, 
                                                  random_instance=random.Random(1))
qfds = [qxqfd.QuantumFeatureDetector(circuit=circuit) for circuit in circuit_generator.generate_multiple(qfd_count)]

# Build a model using these QFDs, with the quantum kitchen sinks encoder & decoder included in QxQFD
model = qxqfd.QuantumModel(quantum_feature_detectors=qfds,
                           encoder=qxqfd.pyquil.EncoderQuantumKitchenSinks(qubit_count=qubits, data_size=data_size),
                           decoder=qxqfd.pyquil.DecoderMostCommon())

The next cell runs the quantum circuits we’ve generated on the Forest SDK QVM. If you’re running this on your local machine, make sure you’ve started the QVM with the “qvm -S” command. On a Forest QMI, there is no need to start the QVM manually.

Be aware that running the data through the quantum circuits during training and testing can take a few minutes.

# Train the model on the quantum-transformed data. 
# By default, the underlying classical model is a logistic regression
model.train(training_data=train_data, data_labels=train_labels)
# Score the accuracy of the model on our previously-generated test data
metrics.accuracy_score(test_labels, model.predict(test_data))
0.9333333333333333

A much better result. This is because the QFDs are able to capture nonlinear features. There are other methods of performing nonlinear feature detection, and it remains an open research question whether quantum methods such as this one provide advantages over classical methods.
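If you’d like a classical point of comparison on the same data, the sketch below fits the same logistic regression on random Fourier features from scikit-learn’s RBFSampler. This baseline is not part of QxQFD or the original example, and the gamma and n_components values are arbitrary choices.

from sklearn.kernel_approximation import RBFSampler

# Map the 2-d points into a random Fourier feature space (a classical nonlinear transform)
rbf = RBFSampler(gamma=1.0, n_components=100, random_state=1)
rbf_train = rbf.fit_transform(train_data)
rbf_test = rbf.transform(test_data)

# Fit the same linear classifier on the transformed data
rbf_lr = linear_model.LogisticRegression(solver='lbfgs')
rbf_lr.fit(rbf_train, train_labels)
print("RBF random-feature Test Accuracy :: ", metrics.accuracy_score(test_labels, rbf_lr.predict(rbf_test)))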

If you’re interested in learning more about QxBranch’s Quantum Feature Detector library, or downloading and using it yourself, check out the documentation, including a getting started section, and a collection of example notebooks, such as this one.
