Anybody tell me what this means

shmmeee

Well-Known Member
It’s basically @Philosoraptor’s system AFAICT. So take with a pinch of salt I guess; useful for knowing how we’ve started against teams with different starts.

 

The Philosopher

Well-Known Member
It's done on an Elo rating system. I tried to explain this but couldn't find the right words so ChatGPT has done a summary for me:
The Elo rating system is a method used to calculate the relative skill levels of players in games like chess, sports, or other competitive activities. It works like this:

1. Each player starts with a rating, often represented by a number.

2. When two players compete, the winner gains rating points, while the loser loses points. The amount of points exchanged depends on the difference in their ratings and the outcome of the match.

3. A higher-rated player is expected to win against a lower-rated player, so if they do, they gain fewer points. If they lose, they lose more points.

4. Over time, players' ratings adjust based on their performance in matches. This helps create a ranking that reflects their current skill levels.

It's a way to keep track of and compare the skill of players in a fair and balanced manner.
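
If anyone wants the actual maths behind that summary, here is a minimal sketch of a single Elo update in Python; the K-factor of 32 and the 1500-style ratings are common illustrative defaults, not anything specific to the table being discussed:

```python
def expected_score(rating_a, rating_b):
    """Probability the Elo model gives player A of beating player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, score_a, k=32.0):
    """One game: score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    ea = expected_score(rating_a, rating_b)
    delta = k * (score_a - ea)          # winner's gain equals loser's loss
    return rating_a + delta, rating_b - delta

# Point 3 above in action: the favourite gains little for winning...
print(elo_update(1600, 1500, 1.0))   # ~(1611.5, 1488.5)
# ...but loses a lot for losing.
print(elo_update(1600, 1500, 0.0))   # ~(1579.5, 1520.5)
```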

Sent from my SM-G990B using Tapatalk
“MrBlueSky” saying something is done on an “ELO” system seems a bit surreal tbh.
 

Philosoraptor

Well-Known Member
Opta is a poor excuse for an 'insight' to be honest.

It's on a level with xG, or whatever you call it: information which is, well, how do I put it, slightly dodgy to say the least.

They look like they are based on science, but they are far from it. To give an analogy for Opta and xG, it is like taking a few spray cans to a car which has been totally written off and saying, that'll fix it.

I was running the latest version of Glicko-2 here.

To understand how Glicko-2 fits into the big scheme of things with Elo, the easiest way is via the two links below.

I guess everyone missed out on the $10,000 FIDE prize for inventing a new rating system.

Here were the rules and criteria for the competition:


And the winner, which is basically Glicko-2 with white advantage added.

Some good information below.

Congratulations to Alec Stephenson, who was recently announced as winner of the FIDE Prize in the Deloitte/FIDE Chess Rating Challenge! This prize was awarded to the submission which was the most promising practical chess rating system (the criteria can be found here). The World Chess Federation (FIDE) has administered the world championship for over 60 years and manages the world chess rating system.

Here at Kaggle we’re very excited about Alec’s achievement. This is a major breakthrough in an area which has been extensively studied by some of the world’s best minds. Alec wins a trip to the FIDE meeting to be held in Warsaw this April, where he will present his winning method. The next world chess rating system could be based on his model!

World chess ratings have always used the Elo system, but in the last few years there has been a movement to make the rating system more dynamic. One approach is to modify the Elo system by adjusting the so-called ‘K-factors’, which determine how quickly individual match results change the overall rankings. Professor Mark Glickman, chairman of the United States Chess Federation ranking committee, has proposed the Glicko system, which was a key inspiration behind Microsoft’s TrueSkill algorithm. Jeff Sonas, with the backing of FIDE, initiated this Kaggle contest to bring in fresh ideas. He says “of all the things learned during the contest, the one that I am most excited about is the degree to which Alec was able to improve the accuracy of the well-established Glicko model without significantly increasing its complexity.”

We interviewed Alec after his big win…

What made you decide to enter?

I make a couple of submissions in most competitions and then decide from that point whether my interest is sufficient to spend the time competing seriously. What I liked about the chess competition was that, unlike more traditional data mining competitions, the data was extremely simple, containing just player identifiers and results. This meant that the competition was more theoretical than is usually the case, which benefited me as a mathematician.

What was your background prior to entering this challenge?

My background is in mathematics and statistics. I am currently an academic, teaching courses in R, SAS and SPSS, and have worked in a number of places including The National University of Singapore and Swinburne University in Australia. I will soon be taking a position at CSIRO, Australia’s national science agency.

What preprocessing and supervised learning methods did you use?

Because of the simplicity of the data I took the view that the best approach would be to build upon methods that already exist in the literature. I took the Glicko system of Mark Glickman, added a couple of ideas from Yannis Sismanis and then used a data-driven approach to inform further modifications. The Glicko system is based on a Bayesian statistical model; I took this and then let predictive performance, rather than statistical theory, determine my final scheme. I suspect my approach is less useful for other types of two-player games as it was essentially optimized for the chess data.

What was your most important insight?

The most important and surprising thing was how competitive an iteratively updated ratings scheme could be in terms of predictive performance. It got in the top 20 overall, which was a great surprise to me, particularly given that the unrestricted schemes obtained an additional advantage from using future information that would not be applicable in practice.

Do you have any advice for other Kaggle competitors?

My three tips are (1) Have a go! Start with some random numbers and progress from there. (2) Concentrate on learning new skills rather than the leaderboard. (3) Beware of anything that takes more than 10 minutes to run.

Which tools did you use?

My usual tool set is R, C, Perl and SQL, but for this competition I just used R with compiled C code incorporated via the .C interface. I’m currently working on an R package allowing users to examine different iteratively updated rating schemes for themselves. Hopefully it will also allow me to make my method a bit simpler without losing predictive performance, which may make it more palatable to the FIDE.

What have you taken away from this competition?

An interest in methods for modelling two-player games, and a motivation to learn how to play chess! It’s my second win in public Kaggle competitions, which is a nice personal achievement.

Originally published at blog.kaggle.com on March 20, 2012.
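
The winning code itself isn't published in the article, but as a rough picture of what Glicko adds over plain Elo, here is a sketch of a one-game Glicko update in Python, following the formulas in Glickman's original Glicko paper: each player carries a rating deviation (RD) alongside the rating, so the size of an update depends on how uncertain both ratings are. Stephenson's white-advantage tweak would sit on top of something like this.

```python
import math

Q = math.log(10) / 400.0  # Glicko scaling constant, ~0.00575646

def g(rd):
    """Dampening factor: an uncertain opponent's result tells us less."""
    return 1.0 / math.sqrt(1.0 + 3.0 * (Q ** 2) * (rd ** 2) / (math.pi ** 2))

def glicko_update(r, rd, r_opp, rd_opp, score):
    """One-game Glicko update; score is 1 for a win, 0.5 for a draw, 0 for a loss."""
    g_opp = g(rd_opp)
    e = 1.0 / (1.0 + 10 ** (-g_opp * (r - r_opp) / 400.0))
    d2 = 1.0 / ((Q ** 2) * (g_opp ** 2) * e * (1.0 - e))  # variance of the new information
    denom = 1.0 / rd ** 2 + 1.0 / d2
    new_r = r + (Q / denom) * g_opp * (score - e)
    new_rd = math.sqrt(1.0 / denom)                       # uncertainty shrinks after a game
    return new_r, new_rd

# A provisional player (RD 300) beating an established 1600 (RD 50)
# jumps a long way; an established player's rating would barely move.
print(glicko_update(1500, 300, 1600, 50, 1.0))
```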
 

MalcSB

Well-Known Member
Philosoraptor said:
Opta is a poor excuse for an 'insight' to be honest. [...]

Thanks, that was really interesting. Honestly.
 

wingy

Well-Known Member
The number of times the Elo rating system has needed explaining on this forum makes me think Sunak’s chess in the park policy has even more merit.
Not with me,🤫
Philosoraptor said:
Opta is a poor excuse for an 'insight' to be honest. [...]
 

Philosoraptor

Well-Known Member
If I remember right the data to date is not really accurate.
Can well believe that.

I'll go further: I would imagine that the measurement of group dynamics is virtually non-existent.

It's a complete cottage industry.

What you're after first is the accurate digitization of games with the least amount of data used. From there you can actually get measurements which are useful, which opens up a whole new world of options.

I did look into this. Technology has moved on so much.
 

shmmeee

Well-Known Member
Philosoraptor said:
Can well believe that. [...]

Reckon with the new breed of video annotation software you might see more open stats.
 

harvey098

Well-Known Member
Philosoraptor said:
Opta is a poor excuse for an 'insight' to be honest. [...]

I’ll explain to my girlfriend that her internet is slow because I was using all the bandwidth trying to download a Philosoraptor post.
 

Fergusons_Beard

Well-Known Member
Philosoraptor said:
Can well believe that. [...]

I completely understand you dismissing xG as flawed data, but isn’t this how the owners of Brentford and Brighton made their money?

Or is that a simplistic viewpoint of a much more complex methodology of finding gaps in under developed (betting) markets?


Sent from my iPhone using Tapatalk Pro
 

Philosoraptor

Well-Known Member
Two problems I see with xG.

Firstly, they are trying to put closed-system models into an open-system environment.

Secondly, and more importantly: the whole xG house is built on sand. The foundations the system has been built on aren't a reliable source of information; it is at best secondary information.

There's a uniqueness in games in general which can't be quantified.
 

rob9872

Well-Known Member
Surely all the data is currently still a little flawed, and won't settle down until a few more games in, given players were moving about? The cross that I'm sure you've all seen, where after week one we were fast and direct, has changed since Gus left, but with such a small sample it is still skewing our results. Whichever of the above systems you believe in (and someone like me who struggles to understand some of it sees more merit in the simpler systems), we need at least another 10 games before drawing any conclusions from it imo, and even then we still need to look at current form and injuries before basing any predictions on it.
 

Philosoraptor

Well-Known Member
Reckon with the new breed of video annotation software you might see more open stats.
There are open Elo stats on football, including the Prem, the EFL and much of the world. The methodology sucks big time though; inflation and all that malarkey.

It was fun looking to see how they have tried to fix the problem of a pyramid system. I would have said, if it ain't broke, then why try to fix it.
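
For anyone wondering what the inflation complaint actually looks like in practice, here's a toy simulation in Python (my own illustration, nothing to do with that site's actual methodology): Elo exchanges are zero-sum inside a closed pool, but in a pyramid, relegated teams leave carrying sub-1500 ratings while newcomers enter at exactly 1500, so points leak into the pool and the average drifts upwards season after season.

```python
import random

K, START = 32.0, 1500.0

def play(ra, rb):
    """One game: result drawn from the Elo expectation itself (toy model)."""
    ea = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
    delta = K * ((1.0 if random.random() < ea else 0.0) - ea)
    return ra + delta, rb - delta      # zero-sum: the pool mean is unchanged

random.seed(1)
pool = [START] * 100
for season in range(30):
    for _ in range(3000):              # a season of random pairings
        i, j = random.sample(range(len(pool)), 2)
        pool[i], pool[j] = play(pool[i], pool[j])
    pool.sort()
    pool = pool[4:] + [START] * 4      # bottom 4 leave below 1500, newcomers enter at 1500
    print(season, round(sum(pool) / len(pool), 1))  # the mean creeps upward: inflation
```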

I believe they have even put an Elo stat on MR and all the other managers around the world.

May have a sneak peek!
 

Philosoraptor

Well-Known Member
Apparently not managing a side for the last 544 days.

I love that site.
 

play_in_skyblue_stripes

Well-Known Member
Philosoraptor said:
Opta is a poor excuse for an 'insight' to be honest. [...]

As someone who has always been in IT and knows Swinburne uni, boy Alec is a "smart Alec"
 

Philosoraptor

Well-Known Member
As someone who has always been in IT and knows Swinburne uni, boy Alec is a "smart Alec"
Yep, and the interesting thing in all this is that I believe all the improvements he made to Glicko-2 are in the version I was using here, minus the white advantage.

There's a good reason for this.

It is like nothing I've ever seen before and I have a tiny bit of experience in this field.
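
For the curious, the usual way a "white advantage" (or home advantage) gets added to a rating system like this is as a constant offset inside the expected-score formula, applied before the normal update runs. A hypothetical sketch, with an illustrative 30-point value rather than Stephenson's actual fitted number:

```python
WHITE_ADV = 30.0  # illustrative only; a real system would fit this from results

def expected_score_with_white_adv(r_white, r_black):
    """Expected score for white, with a flat first-move bonus added to white's rating."""
    return 1.0 / (1.0 + 10 ** ((r_black - (r_white + WHITE_ADV)) / 400.0))

# With equal ratings, white is no longer a 50/50 shot:
print(expected_score_with_white_adv(1500, 1500))  # ~0.543
```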
 

Philosoraptor

Well-Known Member
If I remember right the data to date is not really accurate.

Hey @wingy, do you know if they have sorted the problem yet?

Does sound like a calibration issue but would like to know more. Purely out of curiosity 😜
 

wingy

Well-Known Member
Hey @wingy, do you know if they have sorted the problem yet?

Does sound like a calibration issue but would like to know more. Purely out of curiosity 😜
No, it was more about, I think, your own view that it didn't build an accurate picture until about the 10th game.
Anyway, I have missed them this year, hope all is good.
 

Philosoraptor

Well-Known Member
No, it was more about, I think, your own view that it didn't build an accurate picture until about the 10th game.
Anyway, I have missed them this year, hope all is good.

Could I ask which system they are using?

Opta?

Just something to look into.

I have a passing interest.

😎
 

Philosoraptor

Well-Known Member
No, it was more about, I think, your own view that it didn't build an accurate picture until about the 10th game.
Anyway, I have missed them this year, hope all is good.

I am travelling the world at the moment, otherwise I would have given regular updates.

Just don't have the technology with me to do this.
 
