  • Not knowing your data, it is difficult to answer. But overfitting is a strong candidate for this kind of effect. – Commented Oct 7, 2015 at 8:03
  • This is not an answer, but rather a comment. And one more thing: it's not overfitting. – Commented Oct 7, 2015 at 10:10
  • @eliasah: How do you know that it is not overfitting (of course by Random Forest, not by the other three)? – Commented Oct 7, 2015 at 12:06
  • The Random Forest algorithm (statistically) performs better than the other three. But the issue here is that he's applying a strong heuristic, "voting", across different classifiers. – Commented Oct 7, 2015 at 12:23
  • Imagine the case where RF guesses 85/100 correctly and SVM guesses 78/100 correctly. If the SVM predicts (78 + 15 =) 93 items the same as RF, but gets the other 7 wrong, it is easy to see how voting would be worse than RF alone (SVM made all the same mistakes and then some). So it's certainly possible, particularly if one classifier regularizes more strongly, for example. – Commented Oct 29, 2015 at 20:03
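The correlated-error scenario in the last comment can be reproduced in a few lines. The sketch below is hypothetical: it hard-codes error sets (RF wrong on 15 items, SVM wrong on those same 15 plus 7 more, and an assumed third voter whose errors also overlap RF's) rather than training real models, just to show that a majority vote can score below the best single classifier when mistakes overlap.

```python
import numpy as np

n = 100
y = np.zeros(n, dtype=int)  # true labels (all 0 for simplicity)

def preds_with_errors(k):
    """Predictions that are wrong on the first k items, right elsewhere."""
    p = y.copy()
    p[:k] = 1  # flipped -> incorrect
    return p

rf  = preds_with_errors(15)  # 85/100 correct, as in the comment
svm = preds_with_errors(22)  # RF's 15 mistakes plus 7 more -> 78/100
knn = preds_with_errors(20)  # hypothetical third voter, errors overlap RF's

vote = ((rf + svm + knn) >= 2).astype(int)  # simple majority vote

acc = lambda p: float((p == y).mean())
print(acc(rf), acc(svm), acc(knn), acc(vote))  # vote lands below RF alone
```

Because every voter repeats RF's 15 mistakes, the ensemble can never fix them, and the extra shared errors of the weaker voters outvote RF on five more items, so the vote scores 0.80 against RF's 0.85. With independent (uncorrelated) errors, the same vote would typically beat each individual model.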