The stethoscope, introduced more than two centuries ago, remains a tool that provides potentially valuable information during one of the most common medical examinations. However, the biggest drawback of auscultation is its subjectivity: it depends mainly on the doctor's experience and ability to perceive and distinguish pathological signals. Much research has shown very low efficiency of doctors in this area.
Moreover, most physicians are aware of this problem and need a supporting device. Therefore, we have developed Artificial Intelligence (AI) algorithms that recognise pathological sounds (wheezes, rhonchi, fine and coarse crackles). Here we present a comparison of the performance of physicians and AI in detecting those sounds.
A database of more than 10 000 recordings described by a consilium of specialists (pulmonologists and acousticians) was used for AI training. Then another set of more than 500 real auscultatory sounds was used to compare the efficiency of the AI with that of a group of doctors. The standard F1-score was used for evaluation, because it considers both precision and recall. For each phenomenon, the AI's result is higher than the doctors', with an average advantage of 8.4 percentage points, reaching 13.5 p.p. for fine crackles.
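To make the evaluation metric concrete, the sketch below shows how an F1-score can be computed for one phenomenon as the harmonic mean of precision and recall. The labels are hypothetical binary detections invented for illustration; this is not the study's actual evaluation code.

```python
# Minimal F1-score sketch: harmonic mean of precision and recall.
# y_true / y_pred are hypothetical per-recording binary labels
# (1 = phenomenon present, e.g. a wheeze), invented for illustration.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: five recordings, ground truth vs. detections
truth = [1, 0, 1, 1, 0]
detections = [1, 0, 0, 1, 1]
print(round(f1_score(truth, detections), 3))  # → 0.667
```

Because the harmonic mean penalises imbalance, a detector cannot achieve a high F1-score by maximising only precision (few false alarms) or only recall (few missed sounds).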
The results suggest that implementing AI can significantly improve the efficiency of auscultation in everyday practice, making it more objective and minimising errors. The solution is now being tested with a group of hospitals and medical providers, where it is proving its efficiency and usability, making this examination faster and more reliable.
Tomasz Grzywalski, Marcin Szajek, Honorata Hafke-Dys, Anna Bręborowicz, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo