
What is a good accuracy score in machine learning?
What is accuracy for machine learning models?
Accuracy is a metric used to assess the performance of classification machine learning models. It is one of the simplest machine learning metrics and is widely understood by end users and data scientists alike. However, its simplicity is also its weakness, as it struggles to convey the nuance of error in machine learning models.
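In its simplest form, accuracy is the number of correct predictions divided by the total number of predictions. A minimal sketch of that calculation in plain Python, using made-up y_true and y_pred lists:
# Hypothetical true and predicted labels for five examples
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
# Accuracy = correct predictions / total predictions
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true)
print(accuracy)  # 0.8, since 4 of the 5 predictions match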
What are the positives and negatives of using accuracy as a metric?
The positives of accuracy as an error metric are:
- Easy to implement
- Easily understood by many
The negatives of accuracy as an error metric are:
- Doesn't work well on imbalanced datasets
- Not able to distinguish between precision and recall ability (false positives versus false negatives)
These negative aspects of accuracy are reasons to be cautious about using it on your project. Accuracy should only be used on balanced datasets and alongside other metrics that capture different aspects of the machine learning model's performance.
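To see why imbalance matters, consider a hypothetical dataset where 95 of 100 examples are negative. A classifier that always predicts the majority class reaches 95% accuracy while missing every positive case, which a metric such as recall exposes immediately. A short sketch of this effect using scikit-learn:
from sklearn.metrics import accuracy_score, recall_score
# Hypothetical imbalanced labels: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5
# A "model" that always predicts the majority class
y_pred = [0] * 100
print(accuracy_score(y_true, y_pred))  # 0.95 - looks impressive
print(recall_score(y_true, y_pred))    # 0.0  - every positive case is missed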
What is a good accuracy score?
If we assume that we are working with a balanced dataset, then a good accuracy score would be over 70%. There is a general rule of thumb for interpreting accuracy scores:
- Over 90% - Very good
- Between 70% and 90% - Good
- Between 60% and 70% - OK
- Below 60% - Poor
Implementing accuracy score in Python
Accuracy is easily implemented in Python using the popular scikit-learn library and its accuracy_score function.
from sklearn.metrics import accuracy_score
# Hypothetical predicted and true labels for four examples
y_pred = [1, 0, 1, 1]
y_true = [0, 1, 1, 0]
# Fraction of predictions that match the true labels
accuracy = accuracy_score(y_true, y_pred)
print(accuracy)  # 0.25
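With the toy labels above, only the third prediction matches its true label, so accuracy_score returns 0.25.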