Especially beware of highlighting - I'm sure highlighting single data points has legitimate uses, but off the top of my head, I cannot think of a single one. A very good indication that Someone Is Up To No Good.
- Jake

Friends come and go. Enemies accumulate.
If that's not possible, you sit down and cry.
And when you're done crying, you stop trying to prove abuses that are impossible to prove, and concentrate on the abuses that are possible to prove - such as the excessive durations of the trials (for all the accused).
BUT if your data set is too small for statistical significance testing... and if you're not allowed to highlight single data points (if you don't want to be accused of being up to no good), then what can you do with small sets of figures?
Just a simple question: how many coin tosses do you need to reject the hypothesis that a coin is unbiased with 99% confidence? 95%? 90%? And if your coin is used fewer times than that and is then lost, how are you going to use statistics to argue it was biased?

Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith
99% confidence means you are accepting a 1% chance of calling the coin biased when it is actually fair.
95% confidence means you are accepting a 5% chance of the same error.
It is hard to reach that kind of certainty with very few tosses. With only a few tosses you may well conclude that the coin is biased even when it isn't. I suspect that within 10 or 20 tosses the observed frequencies should start to settle toward the ideal 50-50.
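A rough sketch of that point, assuming nothing beyond a fair coin and a binomial count of heads: with only a handful of tosses, even a perfectly fair coin frequently strays far from 50-50, so a small sample can look biased purely by chance. The 70% threshold is just an illustrative choice, not a figure from the comments above.

    from math import comb

    def prob_at_least(k, n, p=0.5):
        """P(X >= k) for X ~ Binomial(n, p): chance of at least k heads in n tosses."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # (n tosses, k = smallest head count that is at least 70% of n)
    for n, k in [(4, 3), (10, 7), (20, 14)]:
        print(f"n = {n:2d}: P(at least {k} heads from a fair coin) = {prob_at_least(k, n):.1%}")

That gives roughly 31% with 4 tosses, about 17% with 10, and about 6% with 20, which is in line with the "10 or 20 tosses" intuition above.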
With 4 coin tosses, HHHH has a probability of 6.25% which allows you to reject at 90% but not 95%.
With 5 coin tosses, HHHHH has a probability of 3.125% which allows you to reject at 95% but not at 99%.
The point is that with fewer than 4 coin tosses you cannot show bias, no matter what. Sometimes you simply don't have enough data to argue statistically.
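A minimal sketch of that arithmetic, for anyone who wants to check it: the only ingredient is that a run of n heads from a fair coin has probability 0.5^n, compared against the cutoff for each confidence level (a one-sided test on the all-heads outcome).

    # Smallest n at which "n heads in a row" rejects a fair coin
    # at each confidence level.
    for confidence, alpha in [(0.90, 0.10), (0.95, 0.05), (0.99, 0.01)]:
        n = 1
        while 0.5 ** n >= alpha:   # keep tossing until P(all heads) drops below alpha
            n += 1
        print(f"{confidence:.0%} confidence: {n} tosses, P(all heads) = {0.5**n:.3%}")

    # 90% confidence: 4 tosses, P(all heads) = 6.250%
    # 95% confidence: 5 tosses, P(all heads) = 3.125%
    # 99% confidence: 7 tosses, P(all heads) = 0.781%

The same arithmetic says you need 7 consecutive heads before you can reject at 99%.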
And statistics can only suggest where to look for actual evidence, it can't prove (or disprove) bias all by itself.
For instance, the contingency table analysis I did yesterday suggests looking for actual (not statistical) evidence of bias in the duration of the trials, not in the result. JakeS posted a theory that indictments were issued in the hope of gathering sufficient evidence by the time the cases came to trial, which in some cases hasn't happened, resulting in prolonged imprisonment without trial rather than dismissals for lack of evidence. But a theory consistent with statistical suggestions is not evidence.

Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith
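For anyone curious what a contingency-table check of this kind looks like in practice, here is a minimal sketch using SciPy's Fisher exact test. The counts in the 2x2 table are made-up placeholders, not the figures from yesterday's analysis, which are not reproduced in this comment.

    from scipy.stats import fisher_exact

    # Placeholder 2x2 table (made-up numbers, NOT the actual figures):
    # rows = two groups of accused, columns = (convicted, acquitted)
    table = [[12, 8],
             [15, 5]]

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3f}")

Whatever the p-value comes out to, it only points at where to dig for actual evidence; as said above, it proves nothing by itself.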
The ideal result should be heads 50% of the time and tails 50% of the time.
Because the ratio of Croats convicted to Serb civilian casualties in Croatia DOES show up as particularly lenient to Croats - the ratio is 0 convictions to some 2,300 dead.