Problem Detail: Which machine learning algorithms (besides SVM’s) use the principle of structural risk minimization?
Asked By : Classifire
Answered By : jmad
The structural risk minimization (SRM) principle is at least partly ‘used’ in virtually all machine learning methods, since overfitting almost always has to be taken into account: restricting the complexity of the model is (both in theory and in practice) a good way to limit overfitting.
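The idea can be sketched in a few lines: instead of minimizing the empirical risk alone, pick the hypothesis class that minimizes empirical risk plus a complexity penalty. The toy data, the three hypothesis classes, and the penalty weight below are all illustrative assumptions, not part of the original answer:

```python
import random

random.seed(0)

# Toy data: y = 2x + 1 plus Gaussian noise.
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.3) for x in xs]
n = len(xs)

def mse(pred):
    """Empirical risk: mean squared error on the training set."""
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / n

# Three nested-in-spirit hypothesis classes of increasing capacity:
#   constant predictor   (1 parameter)
#   least-squares line   (2 parameters)
#   memorizer            (n parameters, zero training error)
mean_y = sum(ys) / n
const_pred = [mean_y] * n

mean_x = sum(xs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
line_pred = [slope * x + intercept for x in xs]

memo_pred = ys[:]  # memorizes the training labels exactly

models = {"constant": (const_pred, 1),
          "line": (line_pred, 2),
          "memorizer": (memo_pred, n)}

lam = 0.05  # weight of the complexity penalty (an arbitrary choice here)
for name, (pred, k) in models.items():
    penalized = mse(pred) + lam * k
    print(f"{name:9s} empirical risk={mse(pred):.3f}  penalized={penalized:.3f}")
```

The memorizer has zero empirical risk but the largest penalized risk; SRM selects the line, which matches the process that actually generated the data.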
- SVMs have explicit handles on model complexity (the dimension of the feature space, or more generally the choice of kernel), and this is necessary because trading off complexity against empirical error is part of the learning algorithm itself.
- Neural networks also have a simple indicator of their complexity (the number of ‘cells’), and controlling it is part of the associated learning algorithm.
- Without this principle, grammar inference would be trivial: a ‘perfect’ grammar is just the list of all observed words, so every non-trivial inference algorithm at least implicitly acknowledges the principle.
- Decision trees have their own notion of complexity through entropy: splits are chosen by information gain, which keeps the tree small.
- Clusters can simply be counted, so clustering methods either ‘use’ the principle intrinsically or work with a fixed number of clusters, in which case you apply the principle at a higher level when choosing that number.
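The entropy mentioned for decision trees is easy to make concrete: a split is scored by how much it reduces the entropy of the labels. A minimal sketch (the example labels and splits are made up for illustration):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, left, right):
    """Entropy reduction achieved by splitting `labels` into two children."""
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

labels = ["+", "+", "+", "-", "-", "-"]
pure_split  = information_gain(labels, ["+", "+", "+"], ["-", "-", "-"])
noisy_split = information_gain(labels, ["+", "+", "-"], ["+", "-", "-"])
print(pure_split, noisy_split)  # the pure split gains a full bit
```

A tree learner greedily prefers the split with the higher gain, which biases it toward short trees, i.e. toward lower-complexity hypotheses.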
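Applying the principle "at a higher level" for clustering can be sketched as penalizing the number of clusters: score each candidate k by within-cluster squared error plus a per-cluster penalty. The 1-D k-means below, the toy data, and the penalty weight are all assumptions made for this sketch:

```python
def kmeans_sse_1d(points, k, iters=20):
    """Run a tiny 1-D k-means and return the within-cluster squared error."""
    pts = sorted(points)
    # Initialize centers at evenly spaced quantiles of the sorted data.
    centers = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sum(min((p - c) ** 2 for c in centers) for p in points)

data = [0.1, 0.2, 0.15, 0.3, 5.0, 5.2, 5.1, 4.9]  # two obvious clumps
lam = 0.5  # penalty per cluster (arbitrary for the sketch)
scores = {k: kmeans_sse_1d(data, k) + lam * k for k in (1, 2, 3)}
best = min(scores, key=scores.get)
print(best)  # the penalized score is minimized at k = 2
```

Without the penalty, more clusters always fit the data at least as well; the penalty is what makes k = 2 win over k = 3.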
To be perfectly honest, I don’t really know what happens in genetic programming, but it doesn’t have an intrinsic notion of complexity. I don’t know inductive logic programming well either, but it doesn’t seem to accommodate this principle very well.
Best Answer from Computer Science Stack Exchange
Question Source : http://cs.stackexchange.com/questions/2006