The confusion matrix is a tool commonly used by data scientists to understand the performance of classifiers. It shows at a glance how many true positives your model predicts correctly, and how many false negatives and false positives it generates in the process. It is generated by the G2M platform using the portion of the dataset that was held back for validation purposes, and uses a propensity threshold of 0.5 unless otherwise stated.
This article briefly discusses what a confusion matrix is and how to interpret it.