Changing the chip...
-
- Senior Member
- Posts: 375
- Joined: Fri May 23, 2008 8:06 pm
Re: Changing the chip...
It makes one wonder how many people were involved in convincing our former President that Iraq had WMD? Or in convincing the Congress and Senate that Obamacare was really in the best interest of the populace... How many people are involved in the movement of drugs from Mexico to the USA... Ever watch the show Holmes on Homes? How many are involved in shoddy building, and how many inspectors approve the same... It's indeed a sad commentary on society, but it's been that way since time began. Entire nations have fallen due to fraud, deceit, and greed. The USSR in our lifetime... If governments can be and are corrupt, so are industries, and sadly it's not confined to gambling. But all that being said, I'll still play, knowing that not all machines have been tampered with. I'm far more concerned with what I EAT, with all the nasty chemicals and hormones, than with what machine I play... I play for fun and eat to live... Love this site. Lots of good thinkers.
-
- VP Veteran
- Posts: 819
- Joined: Tue Nov 07, 2006 9:21 pm
The testing is much more serious than you make out. While I doubt they review the code with a fine-tooth comb, they run industry-standard randomness tests on the machine's generation of cards.
Right, but even those tests are only run at a 95% confidence level (you can read that right off the NGC publication). The kicker is that a 95 percent confidence interval is defined as the interval such that in approximately 95 of 100 experiments, the parameter (true randomness) being estimated is "captured" within the confidence interval, and in approximately five of the experiments it is not. Of course, they don't repeat the experiment 100 times; they just do it once. What they are counting on is that the experiment they do is not one of the approximately five for which the confidence interval does not capture the parameter.
So I wouldn't put too much confidence in the chipset testing either; it's basically one step above software run blindly by disgruntled QA testers who wish they were the programmers.
Even if their 95% were statistically 100% accurate, that still leaves 5% at a minimum, and in a world where 0.03% makes all the difference in a game we play, imagine what 5% would do to a paytable.
Bottom line, like I said, it's pretty much a rubber stamp. Don't just read the cover of the book and have faith that you know the whole story.
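To put the repeated-experiment definition above in concrete terms, here is a minimal simulation sketch (my own toy numbers, nothing taken from the NGC publication): it builds many 95% confidence intervals for a fair coin's heads probability and counts how often the true value of 0.5 is captured.

```python
import math
import random

def fair_coin_ci(n_flips: int, z: float = 1.96) -> tuple[float, float]:
    """Flip a fair coin n_flips times; return a 95% CI for P(heads)."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    p_hat = heads / n_flips
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n_flips)
    return p_hat - half_width, p_hat + half_width

random.seed(1)
experiments = 10_000
captured = sum(lo <= 0.5 <= hi
               for lo, hi in (fair_coin_ci(1_000) for _ in range(experiments)))
print(f"{captured / experiments:.1%} of intervals captured the true value")
# Prints roughly 95%: about 5 experiments in 100 "miss" even though
# the coin is genuinely fair.
```

Note that the roughly 5% of misses happen on a perfectly fair coin; whether that 5% is a flaw in the testing or just the cost of any finite test is exactly what the reply below takes up.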
-
- Video Poker Master
- Posts: 1842
- Joined: Mon Sep 11, 2006 4:02 am
spx, the following is not arguing for or against your conclusion about whether or not machines are random. It is also not an endorsement nor an indictment of the quality of regulation. But your post suggests a serious misunderstanding and misinterpretation of statistical tests.
When someone runs a test at a 95% level, 5% is the chance of the test suggesting that the chip was not random if in fact it was random. This is known as a false positive. If this were a test for, say, TB, it would mean that the test would diagnose TB in about 5% of patients who were in fact disease-free. Such medical tests also have known confidence levels, and an indication of failure generally requires further testing before treatment begins, because the idea of false positives is well known by those doing testing.
There is generally no such thing as a 100% confidence test unless you measure every single member of a population. In this case, that would mean having a test with an infinite sample size that could never be concluded. So in order to run a test in a reasonable period of time, a confidence level has to be chosen, and 95% is a common, though arbitrary, choice by statisticians. It would be extremely inefficient to run the same test 100 times; instead, the theory of sampling provides guidance for how large a particular sample has to be so that the tester can have a reasonable level of assurance that the test can detect serious departures from randomness.
I am not saying that every departure from randomness would be detected 100% of the time or even 95% of the time, but a fair number of tests in this area have been devised by mathematicians, partly to help researchers produce better RNGs. And these tests are calibrated so that even random machines will require retesting 5% of the time.
Ironically, if we used a 99% confidence level in our testing (closer to the 100% that you suggest), fewer random machines would get false positives but MORE non-random machines would slip through undetected.
A 95% test does NOT by any stretch of the imagination mean that a sample would pass the test so long as it is within 5% of the population EV. Mixing the idea that a random machine will fail the test for randomness 5% of the time with the idea that a machine could pass a test even if the EV were off by a bit less than 5% is not clear thinking and will lead you to draw incorrect conclusions.
There is nothing in what you interpreted that suggests your conclusion that this is a rubber stamp. Oversight may be lacking, but not because of these regulations.
Don't take my word for it. Find a statistics professor or instructor (or even a good student who recently got an "A" in a stat course) whom you trust, have him read your words and mine, and see which ones s/he thinks are more on point.
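The tradeoff in that "ironically" paragraph can be made concrete with a small sketch (my own toy setup, not the NGC's actual procedure): it z-tests the frequency of one specific card at the 95% and 99% levels, both on a fair machine and on a hypothetical machine that deals that card 5% too rarely.

```python
import numpy as np

rng = np.random.default_rng(7)

def reject_rate(p_card: float, z_crit: float, n_draws: int = 50_000,
                trials: int = 20_000) -> float:
    """Fraction of experiments in which a two-sided z-test rejects the
    hypothesis 'the target card appears with probability 1/52'."""
    p0 = 1 / 52
    hits = rng.binomial(n_draws, p_card, size=trials)
    z = (hits - n_draws * p0) / np.sqrt(n_draws * p0 * (1 - p0))
    return float(np.mean(np.abs(z) > z_crit))

for level, z_crit in (("95%", 1.96), ("99%", 2.576)):
    fair = reject_rate(1 / 52, z_crit)        # truly fair machine
    biased = reject_rate(0.95 / 52, z_crit)   # one card dealt 5% too rarely
    print(f"{level} test: false alarms {fair:.1%}, "
          f"biased machine caught {biased:.1%}")
# Moving from 95% to 99% cuts the false-alarm rate from ~5% to ~1%,
# but the genuinely biased machine is also flagged less often.
```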
-
- VP Veteran
- Posts: 819
- Joined: Tue Nov 07, 2006 9:21 pm
New, thanks for the input, and that was a bad analogy on my part comparing 5% to the paytable. I understand the RNG does not have to pass within the 5% threshold; that is not what I was pointing out. For the purposes of this requirement, equivalent is defined as within a 5% tolerance for expected value and no more than a 1% tolerance on return to player or payback.
If my memory serves me right, the Gaming Commission uses the chi-squared test on the RNG. This tells me that only a small sample size is used (in their case, 1), and anything below 10 makes the chi-squared test unreliable.
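As a side note on that rule of thumb: it is usually stated about the expected count in each chi-square cell, not about the number of tests run. A quick sketch with assumed numbers (not the Commission's) shows how many card draws the rule would actually require for a 52-cell test:

```python
# Rule-of-thumb check: each of the 52 rank-and-suit cells should have
# an expected count of at least ~5 (some texts say 10) for the
# chi-square approximation to be trustworthy.
MIN_EXPECTED = 10          # the stricter threshold mentioned above
CELLS = 52                 # one cell per rank-and-suit combination

min_draws = MIN_EXPECTED * CELLS
print(f"Need at least {min_draws} card draws "
      f"({MIN_EXPECTED} expected per cell x {CELLS} cells)")
# -> 520 draws; a machine can generate that many in seconds, so the
#    expected-count rule does not force a small sample.
```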
-
- Video Poker Master
- Posts: 1842
- Joined: Mon Sep 11, 2006 4:02 am
You're still not there in terms of understanding; at least I don't see it in what you've written. Again, the 5% refers to the number of false positives that one gets when using 95% confidence intervals, not a tolerance on expected value. And since expected value is pretty much correlated with return to player or payback, it really is unclear why you would use 5% as the tolerance for one of these and 1% for the other.
Using a chi-squared test does not imply a small sample size, and such a test need not be unreliable. I don't know what would cause you to presume that the regulators would purposely use an unreliable test. The methodology can be laid out fairly easily by an expert to prevent it from being unreliable, and once laid out, it shouldn't be that difficult to replicate. Sample size is the number of observations in the test, not the number of tests, so when you say "in their case, 1", you are not talking about sample size.
A chi-square test that would be particularly effective would compare the number of times cards of a particular rank and suit come up with the number of times that they would be expected to come up. Those numbers would rarely be the same, but a test statistic can be constructed to see if the machine was producing cards in the proper proportion at various times.
I see you want the conclusion to be that the machines aren't properly tested and might therefore not be as random as shadowman is suggesting, but this chain of logic does not seem to support your case.
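The rank-and-suit comparison described above is straightforward to sketch. Here is a minimal version (my own illustration, not the regulators' methodology; scipy's chisquare function is assumed to be available, and the draw count is arbitrary):

```python
import random
from collections import Counter
from scipy.stats import chisquare

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]   # 52 rank-and-suit cells

random.seed(42)
n_draws = 52_000                               # expected count of 1,000 per cell
observed = Counter(random.choice(DECK) for _ in range(n_draws))

# Observed counts in a fixed card order vs. the uniform expectation.
obs = [observed[card] for card in DECK]
exp = [n_draws / 52] * 52
stat, p_value = chisquare(f_obs=obs, f_exp=exp)

print(f"chi-square = {stat:.1f}, p-value = {p_value:.3f}")
# At a 95% level the machine is flagged when p < 0.05; a truly uniform
# card generator will still be flagged about 5% of the time.
```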
-
- Video Poker Master
- Posts: 3587
- Joined: Mon Oct 23, 2006 5:42 pm
I've seen this same lack of understanding of the 95% chi-square testing over and over again. Almost always the logic follows what spx stated above, and in almost all cases it is used to claim the machines are not really random. Oh well, it is a difficult concept, so I can see where people can be confused. However, just to add to the discussion... this is a standard test, used not just for VP machines but for checking randomness algorithms in places you might believe are more important, for example banking.