In what's expected to soon be commonplace, artificial intelligence is being harnessed to pick up signs of cancer more accurately than the trained human eye. This latest AI model has a near 100% success rate and serves as a clear sign of things to come.
I am able to identify 100% of cancer: just say “It is cancer” to each picture.
The article doesn't mention any metrics other than detection rate. What about recall, etc.? Without them, this news is basically worthless.
Edit: I stand corrected, see the comments below. While the article still lacks important context, accuracy is well defined for this topic.
Accuracy in a classification context is defined as (N correct classifications / total classifications). So classifying everything as cancer would, in a balanced dataset, give you ~50% accuracy.
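To make that concrete, here's a rough sketch (using scikit-learn and a made-up balanced dataset, nothing from the article) of what the "just say it's cancer" classifier actually scores:

```python
# Hypothetical "always cancer" classifier on a balanced toy dataset
from sklearn.metrics import accuracy_score, recall_score, precision_score

y_true = [1] * 50 + [0] * 50   # 50 cancer (1) and 50 healthy (0) samples
y_pred = [1] * 100             # say "it is cancer" for every picture

print(accuracy_score(y_true, y_pred))   # 0.5 -> ~50% accuracy on a balanced set
print(recall_score(y_true, y_pred))     # 1.0 -> catches every real cancer
print(precision_score(y_true, y_pred))  # 0.5 -> but half the alarms are false
```

So a 100%-recall classifier can still be useless, which is why accuracy (and precision) matter here.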
This article is indeed badly written PR fluff. I linked the paper in a sister comment. Both the confusion matrix and the ROC curve look phenomenal. Train/test/validation split seems fine too, as do the training diagnostics, so I’m optimistic that it isn’t a case of overfitting.
Of course, third-party replication would be welcome, and I can't speak to the medical relevance of the dataset. But the computer-vision side of things seems well executed.
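For what it's worth, here's a minimal sketch of the kind of held-out evaluation I mean (scikit-learn on synthetic data, not the paper's setup): fit on a training split, then check the confusion matrix and ROC AUC on data the model never saw.

```python
# Sketch: evaluate a classifier on a held-out test split (synthetic stand-in data)
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

print(confusion_matrix(y_test, clf.predict(X_test)))  # [[TN FP] [FN TP]] on unseen data
print(roc_auc_score(y_test, scores))                  # area under the ROC curve
```

If the split is done properly (no test images leaking into training), good numbers here are much harder to fake than a headline "accuracy".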
Thx for the comment! I edited my post accordingly.
I feel this would be a blatant lie if it included a bunch of false positives.
https://mander.xyz/comment/17810389
I'm not educated enough to know what recall means in this context, but there are tables with percentages for it on the page. (Would love an explanation; I'm not sure what to search for to get the right definition.)
This wiki page describes the terminology for binary classification. I always have to refer to that page too, as it's very confusing :)
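If it helps, here's a tiny sketch of how those terms relate, using made-up confusion-matrix counts purely for illustration:

```python
# Hypothetical confusion-matrix counts: true/false positives and negatives
tp, fp, fn, tn = 90, 10, 5, 95

recall      = tp / (tp + fn)              # of the real cancers, how many were flagged
precision   = tp / (tp + fp)              # of the flagged cases, how many were real cancers
specificity = tn / (tn + fp)              # of the healthy cases, how many were cleared
accuracy    = (tp + tn) / (tp + fp + fn + tn)

print(recall, precision, specificity, accuracy)
```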
Thx for the comment! I edited my post accordingly.