Wednesday, March 21, 2018

EAU 2018: Smart Software Can Diagnose Prostate Cancer As Well As a Pathologist

From PracticeUpdate.
----------------------------------------------
March 20, 2018—Copenhagen, Denmark—A learning artificial intelligence system has been developed that can diagnose and identify cancerous prostate samples as accurately as a pathologist.

The system holds the possibility of streamlining and eliminating variation in cancer diagnosis. It may also help overcome local shortages of trained pathologists. In the longer term, it may lead to automated or partially automated prostate cancer diagnoses.

The results of an evaluation of the system were reported at the 33rd Annual Congress of the European Association of Urology, held March 16 to 20.

Hongqian Guo, MD, of Nanjing University, China, explained that prostate cancer is the most common male cancer, with approximately 1.1 million annual diagnoses worldwide (about 4 times the male population of Copenhagen). Confirmation of the diagnosis normally requires a biopsy sample, which is then examined by a pathologist.

The artificial intelligence learning system has shown levels of accuracy similar to those of a pathologist. In addition, the software may be able to classify a cancer's level of malignancy accurately, eliminating the variability of human diagnosis.

According to Dr. Guo, the software will not replace a human pathologist because an experienced pathologist is still needed to take responsibility for the final diagnosis. The software will help pathologists make faster and better diagnoses, and eliminate judgment variation that may enter human evaluations.

Dr. Guo’s group took 918 prostate whole mount pathology section samples from 283 patients and ran them through the analysis system, with the software gradually learning and improving diagnosis. These pathology images were subdivided into 40,000 smaller samples; 30,000 samples were used to “train” the software, and the remaining 10,000 to test accuracy.
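The training protocol described above—40,000 image patches, 30,000 used for training and 10,000 held out for testing against the pathologist's labels—can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the patch labels are randomly generated placeholders, and the "model" here simply echoes the gold-standard label to show how agreement would be scored.

```python
import random

# Hypothetical stand-in for the study's 40,000 pathology image patches.
# Real patches would carry image data; here each has only a placeholder label.
random.seed(0)
patches = [{"id": i, "label": random.choice([0, 1])} for i in range(40_000)]

# Shuffle, then split 30,000 for training and 10,000 for testing,
# mirroring the proportions reported in the study.
random.shuffle(patches)
train, test = patches[:30_000], patches[30_000:]

def agreement(model_calls, gold_labels):
    """Fraction of test patches where the model's call matches the
    pathologist's label (the study's gold standard)."""
    matches = sum(m == g for m, g in zip(model_calls, gold_labels))
    return matches / len(gold_labels)

# Toy "model": echoes the gold label, so agreement is trivially 1.0.
gold = [p["label"] for p in test]
calls = list(gold)
print(f"train={len(train)}, test={len(test)}, "
      f"agreement={agreement(calls, gold):.4f}")
```

The reported 99.38% figure corresponds to this agreement metric computed over the 10,000 held-out patches, with the pathologist's diagnosis as the reference.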

Results showed that the system's diagnosis agreed with the human pathologist's (used as the gold standard) in 99.38% of cases, making it effectively as accurate as a human pathologist. The AI was also able to distinguish different Gleason grades in pathology sections.

To date, 10 whole mount prostate pathology sections have been tested, with the AI assigning Gleason grades similar to those of the human pathologists. The group has not yet begun testing the system with human patients.

Dr. Guo noted that the system was designed to gradually learn and improve its sample interpretation, with results demonstrating that AI-reported diagnoses were comparable with those of a pathologist.

Dr. Guo also noted that the system could accurately classify malignant prostate cancer. He believes this is the first automated work that can provide accurate reporting and diagnosis of prostate cancer, which may offer quicker processing and increased diagnosis consistency across pathologists, hospitals, and countries.

Dr. Guo added that with the advancement of artificial intelligence as evidenced by facial recognition in smartphones and driverless cars, it is important for cancer detection and diagnosis to use this technology as well.

Rodolfo Montironi, MD, of the Polytechnic University of the Marche, Ancona, Italy, noted that this study demonstrates how artificial intelligence can be incorporated into clinical practice, which may be very useful in areas lacking trained pathologists.

He also noted that although use of the system will lead to decreased reliance on human expertise, it is critical for final decisions on treatment to remain with a trained pathologist. Most importantly, he continued, the highest standard of patient care must be ensured. Dr. Montironi predicted that the future will be interesting.

The software was developed in conjunction with Nanjing Innovative Data Technologies, Inc., which did not fund the study. The system is so new that no information is available on cost or implementation.

Dr. Guo noted limitations to the work: more samples were Gleason grades 3 and 4 than other grades, which may have influenced the AI's calculations. The team is seeking suitably objective standards to allow direct comparison of AI-assigned Gleason grades with human grading.
