
The foundations of macromolecular structure refinement

Techniques for macromolecular refinement continue to advance apace. The quality of data is improving with easier access to synchrotron sources, computing power is increasing while the cost is declining, and new algorithms are helping to put the calculations on a firmer statistical basis. In the past six months, three papers have appeared in Acta Cryst. A and D discussing the role of the normal matrix, the second-derivative matrix of the refinement target function.

The first problem to address is simply one of size: the normal matrix is square, with one row and column per parameter, so its number of elements is the square of the number of parameters, and the time taken to generate it by classical techniques is proportional to the square of the number of atoms multiplied by the number of observations. The papers by Tronrud ["Efficient calculation of normal matrix in least-squares refinement of macromolecular structures", Acta Cryst. A55 (1999), 700-703]; Murshudov, Vagin, Lebedev, Wilson and Dodson ["Efficient anisotropic refinement of macromolecular structures using FFT", Acta Cryst. D55 (1999), 247-255]; and Templeton ["Faster calculation of the full matrix for least-squares refinement", Acta Cryst. A55 (1999), 695-699] all discuss ways of speeding up the calculations. These methods can be traced back first to Cruickshank, who described an approach to refinement using Fourier methods in the 1950s [Acta Cryst. 5 (1952), 511-518; Acta Cryst. 9 (1956), 747-753], and then to Agarwal, who showed how to exploit fast Fourier transforms (FFTs) [Acta Cryst. A34 (1978), 791-809]. Agarwal outlined a method for the fast calculation of second-derivative matrices, which Tronrud extends and Murshudov et al. generalize. Templeton develops another aspect of Agarwal's results: he makes approximations by replacing summations over reciprocal space with integrals, assuming that the reciprocal-space sampling is dense enough to justify this.
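The size problem can be made concrete with a toy numerical sketch (invented dimensions and a random Jacobian, not the FFT-based methods these papers describe): for a least-squares target, the normal matrix is J^T J, where J is the Jacobian of the calculated observations with respect to the parameters, and both its storage and its classical construction cost scale badly with the parameter count.

```python
import numpy as np

# Toy least-squares setup -- illustrative only. In real refinement the
# Jacobian holds structure-factor derivatives; here it is random.
rng = np.random.default_rng(0)
n_obs, n_params = 200, 30
J = rng.normal(size=(n_obs, n_params))  # d(calc. observation)/d(parameter)

# Classical construction of the normal matrix: A = J^T J.
# Storage ~ n_params^2 elements; cost ~ n_obs * n_params^2 operations --
# this is the quantity that becomes prohibitive for macromolecules.
A = J.T @ J

assert A.shape == (n_params, n_params)
assert np.allclose(A, A.T)  # the normal matrix is symmetric
```

For a protein with tens of thousands of parameters, this dense symmetric array alone runs to billions of elements, which is why the FFT-based shortcuts matter.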

Most experience in exploiting the full normal matrix is in the field of small-molecule crystallography; here, it is feasible to generate the derivatives analytically, and to perform the inversion. When there is a high ratio of observations to parameters, its use has been shown to speed up the rate of convergence, and to reduce the likelihood of reaching a false minimum. When the refinement has converged, the inverse of the normal matrix reveals both the precision of the model parameters and the correlations between them. The eigenvectors and eigenvalues of the normal matrix provide information about parameters or parameter combinations that are not determined by the original data.
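These uses of the inverse can be illustrated with a small, hypothetical linear model (random data, nothing crystallographic): the inverse of the normal matrix supplies parameter variances and correlations, while a near-zero eigenvalue flags a combination of parameters the data cannot determine.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_params = 100, 4
J = rng.normal(size=(n_obs, n_params))
J[:, 3] = J[:, 2]            # make two parameters perfectly correlated
A = J.T @ J                  # normal matrix (singular by construction)

# Eigenvalues expose undetermined parameter combinations: the duplicated
# column produces a (near-)zero eigenvalue, and the matching eigenvector
# identifies the combination (here p2 - p3) the data cannot distinguish.
evals, evecs = np.linalg.eigh(A)      # eigenvalues in ascending order
assert evals[0] < 1e-8 * evals[-1]

# With the degenerate parameter removed, the inverse of the normal matrix
# gives variances (diagonal) and correlations (off-diagonal), up to the
# overall variance of the observations.
A_ok = J[:, :3].T @ J[:, :3]
cov = np.linalg.inv(A_ok)
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
assert np.allclose(np.diag(corr), 1.0)
```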

For large macromolecules, even if the full normal matrix is generated, inverting such a large array poses considerable problems. However, many numerical tools developed within a variety of disciplines now address such problems. The stability of the inversion procedure can be analysed using eigenvalues and eigenvectors, and this was discussed by Cowtan and Ten Eyck at the 18th European Crystallographic Meeting in Prague, Czech Republic, in August 1998. They have performed such an eigensystem analysis on the normal matrices resulting from the least-squares refinement of a small metalloprotein using two datasets and models determined at different resolutions (all performed by classical techniques using the program SHELXL at the San Diego Supercomputer Center) [Sheldrick (1995), SHELXL93, a Program for the Refinement of Crystal Structures from Diffraction Data. Institut für Anorganische Chemie, Göttingen, Germany]. As protein refinement is usually underdetermined without the application of geometric restraints, and these contributions are routinely included in the minimization residual, they have repeated their analysis including the contributions from such restraints. They show that the eigenvalue spectra reveal considerable information about the conditioning of the problem as the resolution varies. In the case of a restrained refinement, the spectra also provide information about the impact of the various restraints on the refinement.
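A much-simplified sketch shows how a restraint reshapes the eigenvalue spectrum (toy Jacobians and an arbitrary weight, not Cowtan and Ten Eyck's actual analysis): a restraint contributes its own derivative rows, adding w * R^T R to the normal matrix and lifting an eigenvalue that the diffraction data alone leave at zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_params = 100, 6
J = rng.normal(size=(n_obs, n_params))
J[:, 5] = J[:, 4]                    # a combination the "X-ray" data leave free
A_xray = J.T @ J                     # singular: one zero eigenvalue

# A hypothetical geometric restraint coupling parameters 4 and 5 (think
# of a bond length linking two atoms) contributes a Jacobian row R; with
# weight w it adds w * R^T R to the normal matrix.
R = np.zeros((1, n_params))
R[0, 4], R[0, 5] = 1.0, -1.0
w = 10.0
A_restrained = A_xray + w * (R.T @ R)

# The zero eigenvalue of the unrestrained matrix is lifted by the
# restraint: the restrained problem is better conditioned.
ev_xray = np.linalg.eigvalsh(A_xray)
ev_rest = np.linalg.eigvalsh(A_restrained)
assert ev_xray[0] < 1e-8 * ev_xray[-1]
assert ev_rest[0] > 1e-6
```

Comparing the two spectra in this way is the essence of asking how a given restraint conditions the refinement.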

The established procedure used in macromolecular refinement programs such as SFALL/PROLSQ, XPLOR, and REFMAC has been to generate only the diagonal elements of the normal matrix for the X-ray data. Although this limits the rate of convergence, it does mean that each cycle is completed quickly because structure factors and gradients can both be generated using FFTs. All such programs incorporate the restraint derivative terms into the matrix, with some rather arbitrary weighting relative to the X-ray elements. The derivatives for these contributions can be generated simply, and provide off-diagonal terms for linked atoms only. Templeton's approximations for the maximum contribution for the X-ray off-diagonal terms show that these fall off rapidly with the distance between the atom pairs. Hence the geometric restraint contributions may well swamp any X-ray contribution to off-diagonal elements, especially when the resolution of the X-ray data is limited. But as Cowtan and Ten Eyck show, there are special situations where unexpected interactions link distant atoms, and these can degrade the course of refinement.
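The diagonal-only strategy amounts to a Jacobi-scaled descent step each cycle, sketched here on a toy linear problem (random data and invented sizes; the real programs obtain the gradient and the diagonal elements via FFTs rather than from an explicit Jacobian):

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_params = 200, 5
J = rng.normal(size=(n_obs, n_params))
y = J @ np.ones(n_params)            # synthetic observations; true parameters = 1

p = np.zeros(n_params)               # starting model
for _ in range(200):                 # refinement cycles
    r = y - J @ p                    # residuals
    g = J.T @ r                      # descent direction (J^T times residuals)
    d = np.einsum('ij,ij->j', J, J)  # diagonal of J^T J only -- cheap per cycle
    p = p + g / d                    # diagonal-matrix (Jacobi) shift

# Each cycle is fast, but many cycles are needed: convergence is slower
# than a full-matrix step, which is the trade-off described above.
assert np.allclose(p, np.ones(n_params), atol=1e-6)
```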

The FFT techniques for estimating the X-ray off-diagonal elements will make it possible to derive a much improved form for the full normal matrix in a realistic time, and we can look forward to further analyses, and a better understanding of the proper parameterization and accuracy of macromolecular structures.

Eleanor Dodson, U. of York, UK