General comment: This manuscript applies the quantum annealing for machine learning with zooming (QAML-Z) algorithm to the search for top squarks at the LHC. The authors' main contribution appears to be the addition of a preprocessing step of the input variables with principal component analysis (PCA), which improves the performance of the algorithm. In particular, there is a possible improvement over the classical algorithm used in a CMS search: a boosted decision tree (BDT). While the analysis presented here is sound, I have some major comments. My two main comments are (1) to expand the description of and comparison to the classical ML algorithm (in terms of performance and time to solution) and (2) to clarify the novel contributions of this work relative to the previously published literature on QAML and QAML-Z methods. Below, please find my line-by-line comments and questions.
General reply
Dear Editor, we would like to thank the referee for his/her comments and questions, as these have helped us to improve the quality of the paper.
Concerning comment (1), please see our detailed replies to questions Q1 and Q9 below, where the classical ML algorithm (BDT) and its comparison to the quantum approach are discussed.
Concerning comment (2), there are four contributions which are novel to varying degrees, and it is true that some of them were not made clear enough in the first version of the paper:
Physics comments:
Q1 (line 79): How is the BDT trained? Is it optimal? Could a DNN or other classical ML algorithm achieve better performance? Given that one of the main claims of the paper is that there may be an improvement for the quantum algorithm compared to the classical ML algorithm, it would be good to better understand the specifics of the classical ML algorithm used.
The BDT has been trained with the TMVA package, a standard multivariate analysis package in ROOT which is widely used in HEP. The number of trees NT is 400. The maximal depth of the trees MD is 3. The minimal node size MN, i.e. the percentage of signal or background events below which a node is no longer split, is 2.5%; MN thus acts as a stopping condition of the training. Finally, and as mentioned in the paper, the data is diagonalized. These internal parameters of the BDT (NT, MD, MN, diagonalization or not) have been varied, a new BDT trained each time, and its performance assessed via the FOM maximization, all this while ensuring that there is no overtraining. The chosen parameters correspond to the best performance obtained with a trustworthy BDT training (no overtraining).
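For concreteness, the booking of such a BDT could look as follows in TMVA; this is a minimal PyROOT sketch, where the input file, tree names and variables are placeholders, and only the quoted hyper-parameters (NT=400, MD=3, MN=2.5%) are taken from the reply above:

```python
import ROOT

# Minimal sketch of booking the BDT described above in TMVA (PyROOT).
out = ROOT.TFile.Open("TMVA_BDT.root", "RECREATE")
factory = ROOT.TMVA.Factory("stopVsSM", out, "!V:AnalysisType=Classification")
loader = ROOT.TMVA.DataLoader("dataset")

for var in ["met", "ht"]:                  # hypothetical input variables
    loader.AddVariable(var, "F")

infile = ROOT.TFile.Open("events.root")    # placeholder input sample
loader.AddSignalTree(infile.Get("sig"), 1.0)
loader.AddBackgroundTree(infile.Get("bkg"), 1.0)
loader.PrepareTrainingAndTestTree(ROOT.TCut(""), "SplitMode=Random")

# NTrees=400 (NT), MaxDepth=3 (MD), MinNodeSize=2.5% (MN);
# VarTransform=D applies the decorrelation ("diagonalization").
factory.BookMethod(loader, ROOT.TMVA.Types.kBDT, "BDT",
                   "NTrees=400:MaxDepth=3:MinNodeSize=2.5%:VarTransform=D")

factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
out.Close()
```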
For the choice of input variables for the BDT: as briefly explained in lines 80-84 of version 1, a new variable v is added to an already existing set of variables S, a new BDT is trained, and the FOM is maximized versus the output of the BDT. If the maximal FOM reached for the set S+v is higher than that reached for S, v is incorporated as an input variable; if it is merely compatible with that of S, it is not. This procedure is repeated until no new variable is at disposal (a schematic of this loop is sketched below). In view of this approach, we are confident that the BDT is optimal.
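Schematically, the selection loop can be written as below; `train_and_max_fom` and `compatible` are placeholders for the actual BDT training plus FOM scan over its output and for the statistical compatibility check, respectively:

```python
def forward_select(candidates, train_and_max_fom, compatible):
    """Sketch of the iterative variable selection described above.

    train_and_max_fom(variables): trains a BDT on `variables` and returns
        the maximal FOM over a scan of its output (placeholder).
    compatible(a, b): True if the two FOM values are statistically
        compatible (placeholder).
    """
    selected = [candidates[0]]              # seed with a first variable
    best_fom = train_and_max_fom(selected)
    for v in candidates[1:]:
        fom = train_and_max_fom(selected + [v])
        # Keep v only if it yields a genuinely higher maximal FOM.
        if fom > best_fom and not compatible(fom, best_fom):
            selected.append(v)
            best_fom = fom
    return selected
```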
After the study of the BDT, we purposefully took the same input variables to train a DNN, varying its internal parameters, retraining a new DNN each time, and assessing its performance via the FOM maximization. The question addressed here was whether, for a given classification problem (here stop versus SM) and with the same set of input variables as for the BDT, a DNN architecture can achieve a better result than a BDT. Among all the options explored (more than 30), covering different numbers of nodes and hidden layers, batch sizes, numbers of epochs, learning rates, weight initializers and activation functions, we did not observe an improvement in performance of a DNN over the BDT; an illustrative configuration from this scan is sketched below.
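As an illustration, one point of such a scan could be built as follows (a hedged Keras sketch; the layer widths, initializer and learning rate shown are examples, not the configurations actually tested):

```python
import tensorflow as tf

def build_dnn(n_inputs, hidden=(64, 32), activation="relu",
              initializer="glorot_uniform", learning_rate=1e-3):
    """One configuration of the DNN hyper-parameter scan (illustrative)."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(n_inputs,)))
    for width in hidden:
        model.add(tf.keras.layers.Dense(width, activation=activation,
                                        kernel_initializer=initializer))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Each configuration is then trained (with varied batch size and number
# of epochs) and scored via the same FOM maximization as for the BDT.
```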
Q2 (line 91): How big of an impact does the choice of f = 20% play? Do the optimal solutions vary depending on this choice? Similarly, does assuming no systematic uncertainty on the signal play a role (I assume it's an even smaller effect than the background systematic uncertainty)?
It should first be stressed that extreme values of f do not correspond to any realistic analysis in HEP: no measurement of the SM background comes without systematic uncertainty (f = 0), nor do we encounter f = 100%, which would correspond to a background prediction that is totally out of control, making the corresponding search not worth pursuing. Values of f between 15% and 40%, which correspond to realistic precisions of background predictions, have been tested. They mainly result in the FOM (1) being maximized at a different value of the BDT output, and (2) having a different maximal value. However, the actual choice of the input variables, which is what we want to determine with the FOM maximization, does not change for values of f in this range. Please note that a systematic uncertainty on the signal is not considered in similar metrics, as most discoveries are foremost limited by the statistical uncertainty on the signal. Indeed, assuming some systematic uncertainty on the signal has no significant effect on the outcome of the FOM maximization.
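For reference, a standard form of such a figure of merit, for S signal and B background events and a relative background systematic uncertainty f (we quote a common choice here purely for illustration; the exact definition used is the one given in the paper), is

\[
\mathrm{FOM} = \frac{S}{\sqrt{B + (f\,B)^2}}\,,
\]

so that f = 0 recovers the purely statistical \( S/\sqrt{B} \), while larger f penalizes regions dominated by the background systematic uncertainty.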
Q3 (lines 95-98): Maybe point to a reference that shows why maximizing the FOM is a good idea.
This is done in the second version, thank you.
Q4 (line 122): Could a quick summary of the weak classifier construction be given? In particular, the equation shown in the Methods section of [3] doesn't seem to give binary values as this paper claims.
For a variable i, a weak classifier chi_i is built from different percentiles of the signal and background distributions; the corresponding details are given in the supplementary material of ref. [3]. For a given event of index tau, it is indeed the binary value sgn(chi_i(tau)) = ±1 which is taken; this value is then divided by the total number N of weak classifiers. The text has been clarified in this regard, many thanks.
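In formulae, and following the description above, the contribution of weak classifier i to the strong classifier for an event \( x_\tau \) reads

\[
c_i(x_\tau) = \frac{1}{N}\,\mathrm{sgn}\!\left(\chi_i(x_\tau)\right) \in \left\{-\tfrac{1}{N},\, +\tfrac{1}{N}\right\},
\]

i.e. only the sign of \( \chi_i \) is used, normalized by the total number N of weak classifiers.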
Q5 (line 210): Could you report how many events are used for training/testing/validation, for both signal and background?
As mentioned in line 266, N(Train) = 50×10³ events. Lines 200-203 mention that the training and testing samples are of equal size, so N(Test) = 50×10³. Each of these samples includes 19×10³ signal events and 31×10³ background events. These points have been clarified in the text. Finally, N(Assess) is 196×10³ events for background (i.e. all the rest of the background sample) and 7×10³ for signal, as we assess the performance on a single signal point, as mentioned in lines 209-210; this has also been clarified in the text.
Q6 (line 216): Is it possible to run the algorithm on a D-Wave Advantage machine with a Pegasus graph? Or discuss the gains possible by doing that?
We do not have access to the Pegasus version of D-Wave graphs yet, access to the latest hardware being more difficult to obtain. However, we indeed hope to get access to this machine, our plan being to run the different options of Table III so as to obtain a systematic comparison of the same settings, input variables, etc. across two different machines. We kindly refer the referee to lines 366-375 for the discussion of the possible gains.
Q7 (line 237): How are the cutoff C and the variable-fixing scheme related? If you use both, does it effectively remove more variables?
The cutoff is the percentile pruning of the J_{ij} matrix, which allows the classification problem to be implemented on hardware with a limited number of connected qubits; as such, it is part of the embedding. The variable-fixing scheme is a classical polynomial-time procedure that fixes a portion of the input variables to values that have a high probability of being optimal. If used, variable fixing finds a fraction of the solution spins via a classical approach; the quantum annealing is then applied only to the non-fixed variables, i.e. it is used less (hence the remark in lines 341-343). As such, these two schemes are not related, even though they both contribute to the embedding by decreasing the size (number of terms) of the Ising Hamiltonian to embed.
When both are used, the variable-fixing algorithm is run after the pruning of the J_{ij} coefficients (cutoff). Therefore, the more coefficients we remove, the easier it is for the algorithm to find classical solutions for some of the variables (variable fixing); a sketch of the pruning step is given below.
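A minimal sketch of the cutoff step, assuming for illustration that J is stored as a dense numpy array and that C is the percentile below which couplings are dropped (the variable fixing would then run on the pruned problem):

```python
import numpy as np

def prune_couplings(J, C):
    """Percentile pruning of the J_ij matrix (sketch).

    Couplings whose magnitude falls below the C-th percentile of all
    non-zero |J_ij| are set to zero, reducing the number of terms of
    the Ising Hamiltonian that must be embedded on the hardware graph.
    """
    magnitudes = np.abs(J[J != 0.0])
    threshold = np.percentile(magnitudes, C)
    return np.where(np.abs(J) >= threshold, J, 0.0)
```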
Q8 (line 237): Could a citation or brief description of the variable-fixing scheme be added?
The description provided above has been added to the text.
Q9 (line 240): Could a full comparison be made to classical ML and/or classical simulated annealing?
The paper is built to provide as solid a comparison as possible between QAML-Z and a classical ML approach (here a BDT) for classification: same problem (stop versus SM background), same input variables, same preselection, and diagonalization of the data applied to both. We are confident that the comparison of the performances of quantum annealing (different settings and input variables) with the BDT provided in Table III is the most complete we can provide. As mentioned above, we have also tested a DNN and observed that its performance is not better than that of the BDT.
The purpose of this paper is to test whether a quantum-based approach can outperform well-tested classical ML tools in a classification problem of a HEP search. We built our approach on the experience gained from the first QAML-Z paper [4] in this respect (where it was observed that classical simulated annealing has a performance very similar to that of the QAML-Z algorithm), and on the usual methodology for this type of search. We therefore concentrated our effort on the comparison between a genuinely quantum approach and a classical ML tool.
Q10 (line 253): Is there a justification that 10 times is enough?
For the same quantum annealing setting, the same set of input variables and the same events, we determined the standard deviation sigma with 5, 10 and 20 runs. We observed that the value of sigma determined with 5 runs was already very close to the one obtained with 10 or 20. Being conservative, we kept 10 as the final number of annealing runs used to determine sigma; the stability check is sketched below.
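A minimal sketch of this stability check, assuming the per-run performance values are collected in a list `run_scores` (a placeholder name):

```python
import numpy as np

def sigma_vs_nruns(run_scores, n_values=(5, 10, 20)):
    """Standard deviation of the annealing performance over the first
    n runs, for each n in n_values; used to check that sigma has
    stabilized as the number of runs grows (sketch)."""
    return {n: float(np.std(run_scores[:n], ddof=1)) for n in n_values}
```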
Q11 (line 342): It is my understanding that the variable-fixing scheme is necessary to put it on a physical quantum computer with limited qubits. Is that correct? So is the point of this statement that once more qubits are available, the performance will improve?
It is the embedding scheme used by D-Wave which is foremost necessary to implement the classification problem on the physical quantum computer, namely to embed the Ising Hamiltonian on a graph; we point the referee to what is explained in lines 217-219. As explained in the reply to question Q7, variable fixing also helps to implement the Hamiltonian on the quantum computer. It is indeed true that once more qubits are available, a larger number of qubits will be connected: this will allow a larger number/fraction of the J_{ij} terms to be used, i.e. the J_{ij} matrix to be pruned less by using lower values of C, as mentioned in lines 366-372. Another consequence of a larger number of connected qubits is that the chains will be more stable, so less information will be lost through broken chains (please refer to lines 373-375).
About line 342: the variables which are fixed by variable fixing do not enter the quantum annealing process; in those cases, the classification uses a smaller fraction of the qubits, and the quantum annealing is therefore put to less use. With this statement, we want to stress that the best result is achieved with the full use of the quantum annealing.
Q12 (line 399): I was under the impression that the BDT used was the same as, or similar to, the one from the CMS publication [5]. However, clearly, the data used here is based on Delphes simulation. So, presumably, the BDT was retrained on this more simplified dataset?
Indeed: in order to make the comparison with the results of quantum annealing as valid as possible, a BDT was retrained on the Delphes simulation. It should be noted that the performance of the BDT (for the same signal) is compatible between this new simulation and the full simulation of the CMS detector.
Text comments:
Line 200: Usually in ML, the convention of "training" (used in training), "validation" (used to validate / select the "best" model), and "testing" (held out for final performance checks) datasets is used.
We acknowledge the usual convention. However, since the "QA", "Train" and "Test" samples are unequivocally defined and better correspond to the needs of this work (i.e. three orthogonal samples, with one specifically sent to the quantum annealing algorithm), we prefer to keep this notation.