The best Championship sides, based on the Opta Analyst power ranking system.
Don't get it.
Does it mean I need to book my Wembley hotel?
No...
So this is a positive? Btw, my prediction of second in the league is being backed up by the boffins at Opta.
“MrBlueSky” saying something is done on an “ELO” system seems a bit surreal tbh.
It's done on an Elo rating system. I tried to explain this but couldn't find the right words, so ChatGPT has done a summary for me:
The Elo rating system is a method used to calculate the relative skill levels of players in games like chess, sports, or other competitive activities. It works like this:
1. Each player starts with a rating, often represented by a number.
2. When two players compete, the winner gains rating points, while the loser loses points. The amount of points exchanged depends on the difference in their ratings and the outcome of the match.
3. A higher-rated player is expected to win against a lower-rated player, so if they do, they gain fewer points. If they lose, they lose more points.
4. Over time, players' ratings adjust based on their performance in matches. This helps create a ranking that reflects their current skill levels.
It's a way to keep track of and compare the skill of players in a fair and balanced manner.
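To make the point exchange concrete, here is a minimal Python sketch of a single Elo update. The 400-point scale and K = 32 are common illustrative defaults rather than anything Opta or FIDE necessarily use, and the function names are made up for this example.

```python
# Minimal single-game Elo update. The 400 scale and K = 32 are common
# illustrative defaults, not the parameters of any particular rating system.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score (win probability plus half the draw probability) for A."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return new ratings; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

# A 1600-rated favourite beating a 1400 underdog gains only ~8 points,
# but losing the same game costs ~24 (point 3 in the summary above).
print(elo_update(1600, 1400, 1.0))  # approx (1607.7, 1392.3)
print(elo_update(1600, 1400, 0.0))  # approx (1575.7, 1424.3)
```

The K-factor is the knob that decides how quickly results move the ratings, which is exactly the quantity the Kaggle write-up further down talks about adjusting.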
Get it booked mate, no doubt in my mind.
Thanks, that was really interesting. Honestly.
Opta is a poor excuse for an 'insight', to be honest.
It's on a level with xG, or whatever you call it: information which is, well, how do I put it, slightly dodgy to say the least.
They look like they are based on science, but they are far from it. To give an analogy for Opta and xG, it is like taking a few spray cans to a car which has been totally written off and saying, that'll fix it.
I was running the latest version of Glicko-2 here.
To understand how Glicko-2 fits into the big scheme of things with Elo, the easiest way is via the two links below.
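In case the links don't load, here is a rough Python sketch of the original Glicko update (Glickman's Glicko-1), only meant to show the main idea that separates the Glicko family from plain Elo: each player also carries a rating deviation (RD) measuring how uncertain their rating is, and results against uncertain opponents move your rating less. Glicko-2 adds a volatility term on top of this, which the sketch deliberately leaves out, and the constants and function names here are illustrative.

```python
import math

# Rough sketch of the original Glicko (Glicko-1) rating-period update.
# Each player has a rating r and a rating deviation rd; a large rd means
# the system is unsure of the rating, so it moves faster.

Q = math.log(10) / 400.0

def g(rd: float) -> float:
    """Down-weights games against opponents whose ratings are uncertain."""
    return 1.0 / math.sqrt(1.0 + 3.0 * (Q ** 2) * (rd ** 2) / math.pi ** 2)

def expected(r: float, r_opp: float, rd_opp: float) -> float:
    """Expected score against one opponent, Elo-like but RD-adjusted."""
    return 1.0 / (1.0 + 10 ** (-g(rd_opp) * (r - r_opp) / 400.0))

def glicko_update(r, rd, results):
    """results: list of (opponent_rating, opponent_rd, score), score 1/0.5/0."""
    d2_inv = 0.0
    delta = 0.0
    for r_opp, rd_opp, score in results:
        e = expected(r, r_opp, rd_opp)
        d2_inv += (Q ** 2) * (g(rd_opp) ** 2) * e * (1.0 - e)
        delta += g(rd_opp) * (score - e)
    denom = 1.0 / rd ** 2 + d2_inv
    new_r = r + (Q / denom) * delta
    new_rd = math.sqrt(1.0 / denom)
    return new_r, new_rd

# One rating period: a 1500-rated player (RD 200) beats a 1400 (RD 30)
# and loses to a 1550 (RD 100). The RD shrinks as evidence accumulates.
print(glicko_update(1500, 200, [(1400, 30, 1), (1550, 100, 0)]))
```

The practical effect is that new or long-inactive players (large RD) move quickly towards their true level, while established players' ratings stay stable.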
I guess everyone missed out on the $10,000 FIDE prize for inventing a new rating system.
Here were the rules and criteria for the competition:
Rules - Deloitte/FIDE Chess Rating Challenge | Kaggle
And the winner, which is basically Glicko-2 with white advantage added.
Some good information below.
Congratulations to Alec Stephenson, who was recently announced as winner of the FIDE Prize in the Deloitte/FIDE Chess Rating Challenge! This prize was awarded to the submission which was the most promising practical chess rating system (the criteria can be found here). The World Chess Federation (FIDE) has administered the world championship for over 60 years and manages the world chess rating system.
Here at Kaggle we’re very excited about Alec’s achievement. This is a major breakthrough in an area which has been extensively studied by some of the world’s best minds. Alec wins a trip to the FIDE meeting to be held in Warsaw this April, where he will present his winning method. The next world chess rating system could be based on his model!
World chess ratings have always used the Elo system, but in the last few years there has been a movement to make the rating system more dynamic. One approach is to modify the Elo system by adjusting the so-called ‘K-factors’, which determine how quickly individual match results change the overall rankings. Professor Mark Glickman, chairman of the United States Chess Federation ranking committee, has proposed the Glicko system, which was a key inspiration behind Microsoft’s TrueSkill algorithm. Jeff Sonas, with the backing of FIDE, initiated this Kaggle contest to bring in fresh ideas. He says “of all the things learned during the contest, the one that I am most excited about is the degree to which Alec was able to improve the accuracy of the well-established Glicko model without significantly increasing its complexity.”
We interviewed Alec after his big win…
What made you decide to enter?
I make a couple of submissions in most competitions and then decide from that point whether my interest is sufficient to spend the time competing seriously. What I liked about the chess competition was that, unlike more traditional data mining competitions, the data was extremely simple, containing just player identifiers and results. This meant that the competition was more theoretical than is usually the case, which benefited me as a mathematician.
What was your background prior to entering this challenge?
My background is in mathematics and statistics. I am currently an academic, teaching courses in R, SAS and SPSS, and have worked in a number of places including The National University of Singapore and Swinburne University in Australia. I will soon be taking a position at CSIRO, Australia’s national science agency.
What preprocessing and supervised learning methods did you use?
Because of the simplicity of the data I took the view that the best approach would be to build upon methods that already exist in the literature. I took the Glicko system of Mark Glickman, added a couple of ideas from Yannis Sismanis and then used a data-driven approach to inform further modifications. The Glicko system is based on a Bayesian statistical model; I took this and then let predictive performance, rather than statistical theory, determine my final scheme. I suspect my approach is less useful for other types of two-player games as it was essentially optimized for the chess data.
What was your most important insight?
The most important and surprising thing was how competitive an iteratively updated ratings scheme could be in terms of predictive performance. It got in the top 20 overall, which was a great surprise to me, particularly given that the unrestricted schemes obtained an additional advantage from using future information that would not be applicable in practice.
Do you have any advice for other Kaggle competitors?
My three tips are (1) Have a go! Start with some random numbers and progress from there. (2) Concentrate on learning new skills rather than the leaderboard. (3) Beware of anything that takes more than 10 minutes to run.
Which tools did you use?
My usual tool set is R, C, Perl and SQL, but for this competition I just used R with compiled C code incorporated via the .C interface. I’m currently working on an R package allowing users to examine different iteratively updated rating schemes for themselves. Hopefully it will also allow me to make my method a bit simpler without losing predictive performance, which may make it more palatable to the FIDE.
What have you taken away from this competition?
An interest in methods for modelling two-player games, and a motivation to learn how to play chess! It’s my second win in public Kaggle competitions, which is a nice personal achievement.
Originally published at blog.kaggle.com on March 20, 2012.
Not with me.
The number of times the Elo rating system has needed explaining on this forum makes me think Sunak's chess-in-the-park policy has even more merit.
Don’t bother - no need to go to Wembley if you finish 2nd.
Can well believe that.
If I remember right, the data to date is not really accurate.
Can well believe that.
I'll go further: I would imagine that measurement of group dynamics is virtually non-existent.
It's a complete cottage industry.
What you're after first is the accurate digitization of games with the least amount of data used. From there you can actually get measurements which are useful, and that opens up a whole new world of options.
I did look into this. Technology has moved on so much.
There are open Elo stats on football, including the Prem, the EFL and much of the world. The methodology sucks big time though: inflation and all that malarkey. Reckon with the new breed of video annotation software you might see more open stats.
As someone who has always been in IT and knows Swinburne uni: boy, Alec is a "smart Alec".
On a level with xG or whatever you call it for information which is, well how do I put it, slightly dodgy to say the least.
They looks like they are based on science but they are far from it. To give an analogy on Opta and xG, it is like taking a few spray cans to a car which has been totally written off and saying, that'll fix it.
I was running the latest version of Glicko-2 here.
To understand how Glicko-2 fits in to the big scheme of things with Elo then the easiest way is by the two links below.
I guess everyone missed out on the $10,000 FIDE prize for inventing a new rating system
Here were the rules and criteria for the competition;
Rules - Deloitte/FIDE Chess Rating Challenge | Kaggle
Kaggle is your home for data science. Learn new skills, build your career, collaborate with other data scientists, and compete in world class machine learning challenges.web.archive.org
And the winner, which is basically Glicko-2 with white advantage added.
Some good information below.
Congratulations to Alec Stephenson, who was recently announced as winner of the FIDE Prize in the Deloitte/FIDE Chess Rating Challenge! This prize was awarded to the submission which was the most promising practical chess rating system (the criteria can be found here). The World Chess Federation (FIDE) has administered the world championship for over 60 years and manages the world chess rating system.
Here at Kaggle we’re very excited about Alec’s achievement. This is a major breakthrough in an area which has been extensively studied by some of the world’s best minds. Alec wins a trip to the FIDE meeting to be held in Warsaw this April, where he will present his winning method. The next world chess rating system could be based on his model!
World chess ratings have always used the Elo system, but in the last few years there has been a movement to make the rating system more dynamic. One approach is to modify the Elo system by adjusting the so-called ‘K-factors’, which determine how quickly individual match results change the overall rankings. Professor Mark Glickman, chairman of the United States Chess Federation ranking committee, has proposed the Glicko system, which was a key inspiration behind Microsoft’s TrueSkill algorithm. Jeff Sonas, with the backing of FIDE, initiated this Kaggle contest to bring in fresh ideas. He says “of all the things learned during the contest, the one that I am most excited about is the degree to which Alec was able to improve the accuracy of the well-established Glicko model without significantly increasing its complexity.”
We interviewed Alec after his big win…
What made you decide to enter?
I make a couple of submissions in most competitions and then decide from that point whether my interest is sufficient to spend the time competing seriously. What I liked about the chess competition was that, unlike more traditional data mining competitions, the data was extremely simple, containing just player identifiers and results. This meant that the competition was more theoretical than is usually the case, which benefited me as a mathematician.
What was your background prior to entering this challenge?
My background is in mathematics and statistics. I am currently an academic, teaching courses in R, SAS and SPSS, and have worked in a number places including The National University of Singapore and Swinburne University in Australia. I will soon be taking a position at CSIRO, Australia’s national science agency.
What preprocessing and supervised learning methods did you use?
Because of the simplicity of the data I took the view that the best approach would be to build upon methods that already exist in the literature. I took the Glicko sytem of Mark Glickman, added a couple of ideas from Yannis Sismanis and then used a data driven approach to inform further modifications. The Glicko system is based on a Bayesian statistical model; I took this and then let predictive performance, rather than statistical theory, determine my final scheme. I suspect my approach is less useful for other types of two-player games as it was essentially optimized for the chess data.
What was your most important insight?
The most important and suprising thing was how competitive an iteratively updated ratings scheme could be in terms of predictive performance. It got in the top 20 overall, which was a great surprise to me, particularly given that the unrestricted schemes obtained an additional advantage from using future information that would not be applicable in practice.
Do you have any advice for other Kaggle competitors?
My three tips are (1) Have a go! Start with some random numbers and progress from there. (2) Concentrate on learning new skills rather than the leaderboard. (3) Beware of anything that takes more than 10 minutes to run.
Which tools did you use?
My usual tool set is R, C, Perl and SQL, but for this competition I just used R with compiled C code incorporated via the .C interface. I’m currently working on an R package allowing users to examine different iteratively updated rating schemes for themselves. Hopefully it will also allow me to make my method a bit simpler without losing predictive performance, which may make it more palatable to the FIDE.
What have you taken away from this competition?
An interest in methods for modelling two-player games, and a motivation to learn how to play chess! It’s my second win in public Kaggle competitions, which is a nice personal achievement.
Originally published at blog.kaggle.com on March 20, 2012.
Yep, and the interesting thing in all this is I believe all the improvements he made to Glicko-2 are in the version which I was using here, minus the white advantage.
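For anyone wondering what a "white advantage" term looks like in practice, a common way to add one (this is only an illustrative sketch, not Alec Stephenson's actual formula) is to shift the rating difference by a fixed bonus before computing the expected score; the same trick works for home advantage in football. The 60-point figure below is invented for the example.

```python
# Illustrative only: a fixed bonus given to the side with the advantage
# (White in chess, the home team in football) before the expected score
# is computed. The 60-point figure is an arbitrary example, not the value
# from the winning Kaggle entry.

ADVANTAGE = 60.0  # hypothetical rating bonus for White / the home side

def expected_score_with_advantage(r_white: float, r_black: float) -> float:
    """Expected score for the advantaged player, Elo-style."""
    diff = (r_white + ADVANTAGE) - r_black
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

# Two equally rated players: the advantaged side is now expected to
# score a little over 0.5 rather than exactly 0.5.
print(expected_score_with_advantage(1500, 1500))  # ~0.586
```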
If I remember right, the data to date is not really accurate.
Hey @wingy, do you know if they have sorted the problem yet? Does sound like a calibration issue, but would like to know more. Purely out of curiosity.
No, it was more about what I think was your own view: that it didn't build an accurate picture until about the 10th game.
Anyway, I have missed them this year, hope all is good.