Final Scores 1990 to 2012 – Women

A new year with new competition formats is upon us. The first competitions to use the new format have already been wrapped up, but many more are to come. We have both national and international competitions, continental championships (like the upcoming European Championship in February) and World Cups. Since the system of firing ten single shots for everyone is now in the past, I thought this might be a good time to look into how final scores have developed over the past two decades. I have written two posts on final scores (and qualification) in the past: Olympics and WC Finals; check those out for more information on the topic.

Since this is a big topic, more posts are to come. Men’s score development is next, and then some comparisons between genders and competitions. Finally, there will be at least one post regarding statistical changes to ranking lists after switching to starting from zero. Can we expect different shooters to win now than when qualification was added to the total score? Those posts might not show up all at once, but expect them within the next month or two.

Method & Material

If you’ve read any posts I’ve written on scores before, you’ll recognize that the method is the same as in those previous posts. The ISSF’s database was used to collect competition results for every championship sanctioned by the ISSF since 1990. That is: the Olympics, WCs, WC Finals and World Championships. I have only included competitions with participants from across the world and left out continental championships and international competitions (that aren’t directly sanctioned by the ISSF). The reason is to only capture the highest scores produced each year and thus mirror the current “performance ceiling” (what can be achieved at that point in time).

Data was collected in a spreadsheet and analyzed by means of descriptive statistics. If anyone is interested in the raw data, leave a comment or e-mail me and I’ll be happy to send it over to spare you the work of importing it yourself.
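For anyone who would rather work in code than a spreadsheet, here is a minimal sketch of the kind of descriptive statistics involved, assuming the raw data has been exported to a CSV with year, competition, rank and score columns (the file name and column names are placeholders, not my actual spreadsheet layout):

```python
import pandas as pd

# Load the collected final scores; "finals_women.csv" and its column
# names are placeholders for however the spreadsheet was exported.
scores = pd.read_csv("finals_women.csv")  # columns: year, competition, rank, score

# Basic descriptive statistics per year: mean, standard deviation,
# minimum and maximum of the finalists' scores.
summary = scores.groupby("year")["score"].agg(["mean", "std", "min", "max"])
print(summary.round(2))
```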

The material consists of 4 WCs and 1 WCF (WC Final) every year. Every second year, the addition of either a WCH (World Championship) or the Olympics brings the average number of competitions up to 5.5 per year. Add to this that the number of WCs in the early 1990s differed from today, making the total number of competitions 134. With 8 participants in each final, the total number of data points is 973. This should make the data set large enough to show trends; in the past, there has been a problem with not having enough ranking results or scores to back up the trends (see for example any of the Olympic posts).

Results & Discussion

This is only the presentation of the data; analysis will come in a later post where I’ll include both men and women as well as other subgroups. The reason is that posts tend to become quite long when everything goes into one, hence the division into several.

We’ll start by looking at how scores fit in with a bell curve:

Plotted score distribution. The bars are the actual scores (right scale) and the curve shows what a large data set would look like if normally distributed (left scale). The height of each bar indicates the number of scores within each standard deviation.

From left to right: -3 standard deviations (SD), -2 SD, -1 SD, mean, +1 SD, +2 SD and +3 SD. The mean is 101.84 ± 1.77. The height of each bar indicates how many scores fit into the range stipulated by that standard deviation. It is directly visible that the actual distribution and the normal curve differ in two ways. First, the obvious sideways shift, with the average accounting for the middle bar (the second highest). Second, the faster drop at +2 SD compared with the idealized normal curve. The reason, which can also be seen in the trend graph below, is a shift to higher scores over time. The thing about scores is that there is a cap on how high they can go (so far we haven’t seen anything higher than the mid-105 p range; the highest {unofficial} final score I’ve ever seen is 107.1, but that is a once-in-a-lifetime achievement) but no real bottom end. With the score increase over time, the distribution would have to look like this to make sense. That is, top scores have long been up in the 102 – 103 p range, but now we also find the average there, which adds to the number of scores in this range. Together with the lower number of high scores, this accounts for the fast drop on the right side of the curve.
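For the curious, here is a rough sketch of how such a comparison can be made in code, binning the scores by standard deviation and comparing against what a normal distribution would predict. The file name and the exact binning are assumptions, not the exact method behind the plot:

```python
import numpy as np
from scipy.stats import norm

# All collected final scores in one column; the file name is a placeholder.
scores = np.loadtxt("all_final_scores.txt")

mean, sd = scores.mean(), scores.std(ddof=1)

# Seven bins centred on -3 SD ... +3 SD, mirroring the bars in the plot
# (an assumption about how the bars were binned).
centers = np.arange(-3, 4)
edges = mean + sd * (np.arange(-4, 4) + 0.5)   # bin edges halfway between centres
observed, _ = np.histogram(scores, bins=edges)

# Expected counts if the scores followed a perfect normal distribution.
expected = len(scores) * np.diff(norm.cdf(edges, loc=mean, scale=sd))

for c, o, e in zip(centers, observed, expected):
    print(f"{c:+d} SD: observed {o}, expected {e:.1f}")
```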

Average with standard deviation and a trend line for scores over the last 20 years.

A few interesting things to point out about the increase in averages over the years. The average has roughly increased from 100.2 to 102.6, which is significant, not just a trend. The trend line fits very well, with an R² of 0.86 and a slope of 0.12, which means final scores have increased by an average of 0.12 points every year. There is one problem though: the increase hasn’t continued across all the years. If you look closely and disregard the trend line, you’ll see that the bars have remained relatively steady at mid-102 p since 2002. Scores increased at a steady pace from 1990 to 2001, but have since remained at the same level. The same can be seen when only looking at the Olympics and World Cup Finals, but only for women, not men. Something that has changed is the SD: the range is smaller now. Scores are closer together, with tighter competitions as a result. It was easier to win 20 years ago; there was a larger score margin at the top and a shooter didn’t have to be at his or her best all the time. Today (especially now when starting from zero) you are not as safe in your ranking position, considering many shooters can score, and do score, high on a regular basis.
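For reference, the trend line here is an ordinary least-squares fit. A minimal sketch of how to reproduce that kind of fit, assuming the yearly averages sit in a simple two-column file (a placeholder, not my actual data file):

```python
import numpy as np
from scipy.stats import linregress

# Yearly average final scores; loaded here from a two-column text file
# (year, mean score) -- both the file name and layout are placeholders.
years, yearly_means = np.loadtxt("yearly_means.txt", unpack=True)

# Ordinary least-squares trend line, the same kind of fit that gives the
# R^2 of 0.86 and the slope of 0.12 points per year quoted above.
fit = linregress(years, yearly_means)
print(f"slope = {fit.slope:.2f} points/year, R^2 = {fit.rvalue ** 2:.2f}")
```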

Average scores separated into the different ranking places. Note the flattened-out golden line (representing what the winner shot).

When the yearly average is separated into the individual ranking places (equal to an average of 5.5 data points going into each point on a line), we can see the same thing noted in the previous graph. Scores flattened out after increasing up to 2002 and are now steady, with just a few bumps here and there. In 2007, for example, the larger SD is explained by the drop in scores for the last ranking spot (red line).
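A short sketch of how the per-rank averages behind this graph can be computed, assuming the same placeholder CSV layout as in the earlier sketch:

```python
import pandas as pd

# Same placeholder CSV layout as before: year, competition, rank, score.
scores = pd.read_csv("finals_women.csv")

# Average final score per year for each ranking place (1 = winner ... 8 = last),
# i.e. one line per rank in the graph above.
per_rank = scores.pivot_table(index="year", columns="rank",
                              values="score", aggfunc="mean")
print(per_rank.round(2))
```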

An interesting aspect (which I will get into in more detail later on) is what will happen now that finals start from zero. This is shown to some extent in the above graph. The winner before will remain the winner after the switch, as evidenced by the golden line (first place). The winner has also been the best final shooter pretty much since 1996, except for two years (02 and 03), suggesting that the best shooter of the day will also score well in a final. Or it could mean that final scores are more important to winning than qualification is. We can’t know which is closer to reality, considering this data set excludes qualification scores. After first place this link goes away, as seen in how much the lines cross over each other. The last two ranking places are also well separated from the rest, but everything between top and bottom is all mixed up. More on this later though.

What we can take away from the graph, however, is that the increase in scores seems to continue a bit further than 2002. If we take out the 99 at 2007 (an outlier), there is a small increase across that whole second part. It’s not enough to be significant or anywhere close to the first half of the graph, but still, an increase.
