Researchers at Edinburgh University believe Alan Turing's mathematical techniques could be used to help measure the effectiveness of existing diagnostic tools.
Currently, the accuracy of diagnostic tests is assessed using statistical techniques developed in the 1980s, which cannot gauge how useful a test could be in determining an individual's risk of developing a disease.
But now specialists at Edinburgh University’s Usher Institute of Population Health Sciences and Informatics believe that Turing’s methods could improve these.
Working at Bletchley Park in 1941, Turing came up with the method used to break the German forces’ Enigma code.
His approach investigated the distribution of so-called weights of evidence – which establish the likely outcomes in a given situation – to help him decide the best strategy for cracking Enigma.
Applying the same principle could potentially aid the development of personalised treatments, according to a study published in Statistical Methods in Medical Research.
Turing worked out how the weight of evidence was expected to vary over repeated experiments, with these ideas developed further in 1968 and published by his former assistant Jack Good.
Professor Paul McKeigue, of the university’s Usher Institute of Population Health Sciences and Informatics, said the same principle of how the weight of evidence varies can be applied to evaluate the diagnostic tests used for personalised treatments.
In this way, the performance of a test can be quantified.
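The weight of evidence described above can be illustrated with a minimal sketch. The numbers below are hypothetical, not from the study: they assume a binary diagnostic test with an invented sensitivity of 0.9 and false-positive rate of 0.1. The weight of evidence for a result is the log of the likelihood ratio (Turing measured it in bans, i.e. base-10 logs), and its expected value over repeated experiments, as Turing and Good studied, is what summarises how informative the test is on average.

```python
import math

def weight_of_evidence(p_result_given_disease, p_result_given_healthy):
    # Weight of evidence for one observed result, in bans (log base 10),
    # the unit Turing used at Bletchley Park.
    return math.log10(p_result_given_disease / p_result_given_healthy)

def expected_weight_of_evidence(p_dist, q_dist):
    # Expected weight of evidence over repeated experiments when the
    # disease hypothesis is true: sum over outcomes of p * log10(p / q).
    # (This is the Kullback-Leibler divergence, expressed in bans.)
    return sum(p * math.log10(p / q) for p, q in zip(p_dist, q_dist))

# Hypothetical binary test, assumed for illustration only:
# outcome order is [positive result, negative result].
p = [0.9, 0.1]  # P(result | disease):  sensitivity 0.9
q = [0.1, 0.9]  # P(result | healthy):  false-positive rate 0.1

w_positive = weight_of_evidence(0.9, 0.1)      # evidence from one positive result
expected_w = expected_weight_of_evidence(p, q) # average evidence per test
```

Here a positive result contributes roughly one ban of evidence for disease, while the expected weight of evidence is somewhat lower because occasional negative results pull in the other direction; quantifying that average is, in spirit, how such a test's performance could be scored.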