The challenge is over, but a new challenge using the same datasets is now ongoing; check it out!
This project aims to stimulate research and reveal the state of the art in model selection by organizing a competition followed by a workshop. Model selection is a core problem in statistics, machine learning, and data mining. Given training data consisting of input-output pairs, a model is built to predict the output from the input, usually by fitting adjustable parameters. Many predictive models have been proposed for such tasks, including linear models, neural networks, trees, and kernel methods. The object of this project is to find methods that optimally select the model that will perform best on new test data. The competition will help identify accurate methods of model assessment, which may include variants of the well-known cross-validation methods as well as novel techniques based on learning-theoretic performance bounds. Such methods are of great practical importance in pilot studies, for which it is essential to know precisely how well desired specifications are met.
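As a minimal illustration of cross-validation-based model selection (not the method of any particular competition entry), the sketch below compares two hypothetical candidate models on synthetic data and selects the one with the lower k-fold cross-validation error. The data, the candidate models, and all function names are assumptions made for this example.

```python
import random

random.seed(0)

# Hypothetical toy data: y is roughly linear in x, plus Gaussian noise.
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

def fit_constant(x, y):
    """Candidate model 1: always predict the training mean."""
    m = sum(y) / len(y)
    return lambda _: m

def fit_linear(x, y):
    """Candidate model 2: least-squares line through the training data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return lambda v: my + slope * (v - mx)

def cv_mse(fit, xs, ys, k=5):
    """Mean squared error estimated by k-fold cross-validation."""
    n = len(xs)
    fold = n // k
    total, count = 0.0, 0
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        # Train on everything outside the fold, test on the fold.
        model = fit(xs[:lo] + xs[hi:], ys[:lo] + ys[hi:])
        for x, y in zip(xs[lo:hi], ys[lo:hi]):
            total += (model(x) - y) ** 2
            count += 1
    return total / count

# Model selection: keep the candidate with the lowest CV error.
scores = {name: cv_mse(fit, xs, ys)
          for name, fit in [("constant", fit_constant), ("linear", fit_linear)]}
best = min(scores, key=scores.get)
print(best)
```

On this linear data the cross-validation estimate strongly favors the linear model; the same loop extends to any number of candidate models or hyperparameter settings.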
|Dataset||Winning entry|
|ADA||Marc Boulle with SNB(CMA) + 10k F(2D) tv|
|GINA||Kari Torkkola & Eugene Tuv with ACE+RLSC|
|HIVA||Gavin Cawley with Final #3 (corrected)|
|NOVA||Gavin Cawley with Final #1|
|SYLVA||Marc Boulle with SNB(CMA) + 10k F(3D) tv|