Review DisDock


Recently, Menghan Lin, Keqiao Li, Yuan Zhang, Feng Pan, Wei Wu, and Jinfeng Zhang at Florida State University reported a new tool called DisDock for predicting where metal ions bind to proteins.

The paper is titled *DisDock: A Deep Learning Method for Metal Ion-Protein Redocking* (https://doi.org/10.1101/2023.12.07.570531).

The model uses a U-Net architecture and predicts a distance matrix locating a metal ion bound to the protein. Image CC BY 4.0, Lin et al., bioRxiv preprint.

Here is an unsolicited review of this work:

To claim that their method is physics-driven, the authors should show that the distance features learned by the model actually emulate physical terms such as Coulombic interactions. The analogy between predicted distances and physical distances alone is not enough. A physics-driven method would also provide some form of binding energy. Since the output here is simply a distance matrix, I don't think it is fair to call this a physics-driven method.
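
For concreteness, the kind of physical term I have in mind is the Coulombic pair energy, which maps an ion-atom distance $r_{ij}$ to an energy; a bare distance matrix provides no such mapping:

$$E_{ij} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_i q_j}{r_{ij}}$$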

The model also lacks a way to indicate confidence, or a "binding energy" if you will. What happens if I run the prediction on a pocket that does not contain a metal site? The model would still place the ion somewhere, no?

The authors should explain how DisDock has the potential to accommodate the flexibility of both ligands and proteins. In l.47 or l.76 the authors state that rigid protein structures are used.

Table 1 is confusing. Do the percentiles refer to the mean distance between the predicted and the experimental position? This is only mentioned in the text, not in the caption. Is the 25% column the best predictions or the worst ones? This is not clear. The authors also justify not comparing against Metal3D because it was trained only on zinc, yet they do compare their method against the predictor by Wang et al., which was trained only on copper. It was also shown that Metal3D performs well for 10 of the 16 metals in the DisDock training set even though it was trained only on zinc.
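
To illustrate the ambiguity, here is a minimal sketch of how I read the metric. The error values are made up, and whether Table 1 reports exactly this is what the caption should clarify:

```python
import numpy as np

# Made-up per-structure errors: Euclidean distance (in Å) between the
# predicted and the experimental metal position, one value per test case.
errors = np.array([0.4, 0.7, 1.1, 1.8, 2.5, 3.9, 6.2])

# My reading of Table 1: the reported values are percentiles of this error
# distribution, so the 25% column would be the better quartile (smaller
# errors). This is exactly the convention the caption should spell out.
for q in (25, 50, 75):
    print(f"{q}th percentile error: {np.percentile(errors, q):.2f} Å")
```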

The authors should also provide a segmented, per-metal analysis of the performance of their method in the main text of the paper. I don't think it makes sense to train the method on 5 Cd sites while having zero examples of this metal in the test set. In that case, the metal should be excluded from training altogether.

For inference, there seems to be some divergence in where the metal is placed depending on the input search region. The authors should quantify this and, based on this analysis, recommend how many runs from different starting locations should be performed. Otherwise they cannot claim, as in l.201, that the performance is consistent irrespective of the chosen initial location. In Figure S1 they only analyze the dependence on the starting distance, but the choice among equidistant starting points might also have an influence.
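
The analysis I have in mind is simple. Here is a sketch, where `predict_position` is a hypothetical stand-in for the actual DisDock inference call, not its real interface:

```python
import numpy as np

def predict_position(start_xyz: np.ndarray) -> np.ndarray:
    # Mock stand-in for the DisDock inference call (hypothetical interface):
    # pretend the model converges near a true site with some residual
    # dependence on the starting point. Replace with the real predictor.
    true_site = np.array([10.0, 5.0, 3.0])
    return true_site + 0.3 * np.tanh(start_xyz - true_site)

def starting_point_spread(center, radius, n_starts=8, seed=0):
    """Place n_starts points uniformly on a sphere of the given radius
    around the search-region center, predict from each, and report the
    maximum pairwise distance between the predicted metal positions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_starts, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    preds = np.array([predict_position(center + radius * d) for d in dirs])
    diffs = preds[:, None, :] - preds[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).max())

# The spread should stay well below the coordination-distance scale (~2 Å)
# for the "consistent irrespective of starting location" claim to hold.
print(f"spread at 5 Å: {starting_point_spread(np.array([10.0, 5.0, 3.0]), 5.0):.2f} Å")
```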

For BioMetAll, the authors should clearly detail in the methods section which parameters were used to compute the results and what serves as the reference (any probe, or only the cluster centers).

It is also not correct that Metal3D takes the entire protein as input. Metal3D operates on residue-centered voxel grids, which can be aggregated to compute a prediction for the whole protein, but it is also possible to compute the binding probability around a specific residue only.
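
Schematically, the aggregation works like this. This is a sketch of the idea, not the actual Metal3D code, and the grid shapes and offsets are illustrative:

```python
import numpy as np

def aggregate(residue_grids, global_shape):
    """Accumulate residue-centered probability grids into a global map,
    averaging voxels where neighboring grids overlap.

    residue_grids: list of ((i, j, k), grid) pairs, where (i, j, k) is the
    integer corner of the residue grid within the global frame."""
    total = np.zeros(global_shape)
    counts = np.zeros(global_shape)
    for (i, j, k), grid in residue_grids:
        sx, sy, sz = grid.shape
        total[i:i + sx, j:j + sy, k:k + sz] += grid
        counts[i:i + sx, j:j + sy, k:k + sz] += 1
    # Voxels covered by no residue grid stay at probability 0.
    return np.divide(total, counts, out=np.zeros_like(total), where=counts > 0)

# Two overlapping 4x4x4 residue grids inside a 10x10x10 global map.
grids = [((0, 0, 0), np.full((4, 4, 4), 0.2)),
         ((2, 2, 2), np.full((4, 4, 4), 0.8))]
global_map = aggregate(grids, (10, 10, 10))
print(global_map[3, 3, 3])  # overlap voxel, averaged to 0.5
```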

The authors should also clarify the code and data availability.

Disclaimer: I am one of the authors of Metal3D (Simon Duerr).

This review is licensed under CC BY 4.0.