Especially beware of highlighting - I'm sure highlighting single data points has legitimate uses, but off the top of my head, I cannot think of a single one. A very good indication that Someone Is Up To No Good.
- Jake

Friends come and go. Enemies accumulate.
If that's not possible, you sit down and cry.
And when you're done crying, you stop trying to prove abuses that are impossible to prove, and concentrate on the abuses that are possible to prove - such as the excessive durations of the trials (for all the accused).
BUT if your data set is too small for statistical significance testing... and if you're not allowed to highlight single data points (if you don't want to be accused of being up to no good), then what can you do with small sets of figures?
Just a simple question. How many coin tosses do you need to reject the hypothesis that a coin is unbiased with 99% confidence? 95%? 90%? And if your coin is used fewer times than that and then is lost, how are you going to use statistics to argue it was biased?

Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith
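Not from the thread itself, just an illustrative sketch of the question: assuming a one-sided test in which every toss comes up heads (the outcome most favourable to showing bias), a few lines of Python give the minimum run lengths; the helper name min_tosses is made up here, and the 4- and 5-toss cases are worked out downthread.

```python
# Minimal sketch, one-sided test, best case: every toss lands heads.
# For n tosses of a fair coin, P(all heads) = 0.5**n; we can reject
# "the coin is fair" at confidence level 1 - alpha once 0.5**n < alpha.

def min_tosses(alpha):
    """Smallest n such that an all-heads run is significant at level alpha."""
    n = 1
    while 0.5 ** n >= alpha:
        n += 1
    return n

for conf, alpha in [(0.90, 0.10), (0.95, 0.05), (0.99, 0.01)]:
    print(f"{conf:.0%} confidence: at least {min_tosses(alpha)} tosses, all heads")
# 90% -> 4 tosses, 95% -> 5 tosses, 99% -> 7 tosses
```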
99% confidence means you are accepting a 1% chance of calling the coin biased when it is actually fair.
95% means you are accepting a 5% chance.
It would be hard to get close to the ideal 50-50 with very few tosses. With only a few tosses you may actually conclude that the coin is biased even if it's not. I suspect that within 10 or 20 tosses you should seriously approach your ideal 50-50.
With 4 coin tosses, HHHH has a probability of 6.25%, which allows you to reject at 90% but not at 95%.
With 5 coin tosses, HHHHH has a probability of 3.125%, which allows you to reject at 95% but not at 99%.
The point is that with fewer than 4 coin tosses you cannot show bias, no matter what. Sometimes you simply don't have enough data to argue statistically.
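A quick check of the numbers above with an exact binomial test, assuming scipy (>= 1.7, for stats.binomtest) is available:

```python
# Minimal sketch: one-sided p-values for "n heads in n tosses of a fair coin".
from scipy.stats import binomtest

for n in (3, 4, 5):
    # alternative='greater': is the coin biased towards heads?
    p = binomtest(k=n, n=n, p=0.5, alternative='greater').pvalue
    print(f"{n} tosses, all heads: p-value = {p:.4%}")
# 3 tosses: 12.5000% -> cannot reject fairness even at 90%
# 4 tosses:  6.2500% -> reject at 90%, not at 95%
# 5 tosses:  3.1250% -> reject at 95%, not at 99%
```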
And statistics can only suggest where to look for actual evidence, it can't prove (or disprove) bias all by itself.
For instance, the contingency table analysis I did yesterday suggests looking for actual (not statistical) evidence of bias in the duration of the trials, not in the result. JakeS posted a theory that indictments were issued in the hope of gathering sufficient evidence by the time the cases came to trial, which in some cases hasn't happened, resulting in prolonged imprisonments without trial rather than dismissals for lack of evidence. But a theory consistent with statistical suggestions is not evidence.

Most economists teach a theoretical framework that has been shown to be fundamentally useless. -- James K. Galbraith
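The contingency table from that earlier comment isn't reproduced in this thread, and it isn't stated which test was used, so the sketch below uses invented counts and hypothetical labels purely to show the kind of analysis being referred to; with counts this small, Fisher's exact test is one common choice.

```python
# Illustration only: the counts below are made up, NOT the real ICTY data,
# and the row/column labels are hypothetical. Shown just to indicate what a
# 2x2 contingency-table test of "group vs. trial outcome" looks like.
from scipy.stats import fisher_exact

table = [[10, 5],   # hypothetical group A: convicted, acquitted
         [12, 4]]   # hypothetical group B: convicted, acquitted
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3f}")
# Whatever p-value comes out, it can only suggest where to look for actual
# evidence of bias; it cannot prove or disprove bias by itself.
```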
The ideal result should be heads 50% of the time and tails 50% of the time.
Because the ratio of Croats convicted to Serb civilian casualties in Croatia DOES show up as being particularly lenient to the Croats - the ratio is 0 convictions to some 2 300 dead.