The TS figures shown on the racecard are adjusted for the weight to be carried. Therein lies a problem.
Say a horse wins carrying 9-8 and records an SF of, say, 80. That is the SF adjusted to 10-0, so his actual "weight carrying" SF, at the 9-8 (134 lbs) he carried, was 86. Next, the horse is raised in class and is now due to carry, say, 8-8. His SF on the racecard would be 100 which, we will say, is the top rated. That figure implies the horse WILL run faster - 20 lb faster. Of course, the horse would most likely have to run faster to win this higher-class race. But is it really capable of doing so?
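As a sanity check on the arithmetic, here is a minimal sketch. The function names are hypothetical, and it assumes the usual conventions: 1 stone = 14 lbs, figures standardised to 10-0 (140 lbs), and 1 rating point per 1 lb of weight.

```python
def to_lbs(stones: int, lbs: int) -> int:
    """Convert a stones-pounds weight (e.g. 9-8) to pounds: 1 stone = 14 lbs."""
    return stones * 14 + lbs

STANDARD = to_lbs(10, 0)  # figures are standardised to 10-0 = 140 lbs

def sf_at_weight(sf_at_standard: int, carried_lbs: int) -> int:
    """Assumed convention: figures move 1 point per 1 lb of weight, so a
    figure at the standard weight converts to the carried weight by adding
    the difference (standard - carried)."""
    return sf_at_standard + (STANDARD - carried_lbs)

print(sf_at_weight(80, to_lbs(9, 8)))  # 86  - the "weight carrying" SF at 9-8
print(sf_at_weight(80, to_lbs(8, 8)))  # 100 - the racecard SF when set to carry 8-8
```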
The only way to find out, or to challenge that premise, is for Garry to create alternative algorithms within the realm of "Weight Carried" & "Running Handicap Base Weight" and compare them to the original method.
Example
Form Rating to Base Weight (140 lbs) = 92
Time Figure to Base Weight (140 lbs) = 88
Weight Carried in race = 130 lbs
then sub-rating / algo 1 could be (adjusting 1 point per 1 lb of weight, so 140 - 130 lbs = +10 points):
Form Rating to Carried Weight = 102 @ 130 lbs
Time Figure to Carried Weight = 98 @ 130 lbs
and
1) if Next Race Weight To Carry is higher - adjust ratings to new weight from 130 lbs
2) if Next Race Weight To Carry is lower - leave on ratings @ 130 lbs
example 1) next race - weight to be carried = 136 lbs - then new ratings are Form 96, Time 92 (adjusted down the 6 lbs from 130)
example 2) next race - weight to be carried = 124 lbs - then no change, as the horse is proven at that physical weight
premise being that an increase in weight has a higher probability of slowing a horse down than a decrease in weight has of speeding a horse up. Rinse and repeat 100 times - STOP and compare results to the "Base Weight" method
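Algo 1 can be sketched as follows - a hypothetical helper, not Garry's actual code, assuming the same 1-point-per-lb adjustment used in the example above:

```python
def algo1_next_ratings(form_at_carried: int, time_at_carried: int,
                       carried_lbs: int, next_lbs: int) -> tuple[int, int]:
    """Algo 1: adjust down for a weight rise, hold for a weight drop."""
    if next_lbs > carried_lbs:
        # higher weight: adjust ratings to the new weight, 1 point per lb
        diff = next_lbs - carried_lbs
        return form_at_carried - diff, time_at_carried - diff
    # lower (or equal) weight: leave ratings alone - proven at carried weight
    return form_at_carried, time_at_carried

print(algo1_next_ratings(102, 98, 130, 136))  # (96, 92)  - example 1
print(algo1_next_ratings(102, 98, 130, 124))  # (102, 98) - example 2
```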
sub-rating / algo 2 could be
Adjust exactly the same as above to carried weight
1) if Next Race Weight To Carry is higher - adjust the same as above
2) if Next Race Weight To Carry is lower - take the difference in weights, 130 - 124 = 6, divide by a constant (eg 2), hence add 3 lbs
example 1) is same as above so new ratings are still Form 96 , Time 92
example 2) next race - weight to be carried = 124 lbs - then new ratings are Form 105 , Time 101
premise still being that an increase in weight has a higher probability of slowing a horse down than a decrease in weight has of speeding a horse up, but both might have an effect. You account for the higher probability by using a 2:1 ratio in favour of the weight increase, while still proportionally accounting for the lesser probability of a weight decrease speeding a horse up.
Rinse and repeat 100 times - STOP and compare results to the "Base Weight" method & also Sub-rating / Algo 1
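Algo 2 differs from Algo 1 only on a weight drop. A sketch under the same assumptions (hypothetical names, 1 point per lb, the divisor defaulting to the constant of 2 mentioned above):

```python
def algo2_next_ratings(form_at_carried: int, time_at_carried: int,
                       carried_lbs: int, next_lbs: int,
                       constant: int = 2) -> tuple[int, int]:
    """Algo 2: full adjustment for a rise, constant-damped credit for a drop."""
    if next_lbs > carried_lbs:
        # same as Algo 1: full 1-point-per-lb adjustment for a rise
        diff = next_lbs - carried_lbs
        return form_at_carried - diff, time_at_carried - diff
    # drop: take the difference and divide by the constant, e.g. (130-124)/2 = 3
    credit = (carried_lbs - next_lbs) // constant
    return form_at_carried + credit, time_at_carried + credit

print(algo2_next_ratings(102, 98, 130, 136))  # (96, 92)   - example 1
print(algo2_next_ratings(102, 98, 130, 124))  # (105, 101) - example 2
```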
There are plenty more algos, including using a Base RANGE of Max-Min weights which LIMITS ALL future race-to-race weight movements
Base Weight Range 10 lbs
Max Base Weight = 140 lbs
Base Weight = 135 lbs
Min Base Weight = 130 lbs
all ratings are adjusted to a base weight of 135 lbs
so if next race weight is >= 130 lbs AND <= 140 lbs, adjust to the next race weight
any other deviation, ie > 140 lbs OR < 130 lbs, adjust to the Max or Min Base Weight respectively
essentially ignoring rises above 140 lbs
and limiting decreases to 130 lbs
premise being that a decrease or increase in weight does not have an exponential effect - ie it is not the case that the greater the decrease/increase, the greater the effect without limit; whilst weight changes do have a slowing / speeding effect, that effect is limited.
Rinse and Repeat - etc etc
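The Base Range idea is essentially a clamp on the next race weight. A minimal sketch (hypothetical names, assuming the 135 lbs base and the same 1-point-per-lb adjustment):

```python
MIN_BASE, BASE, MAX_BASE = 130, 135, 140  # Base Weight Range of 10 lbs

def algo3_effective_weight(next_lbs: int) -> int:
    """Clamp the next race weight into the Min-Max Base Weight range."""
    return max(MIN_BASE, min(MAX_BASE, next_lbs))

def algo3_next_rating(rating_at_base: int, next_lbs: int) -> int:
    # ratings are held at the 135 lbs base; adjust only within the range
    return rating_at_base + (BASE - algo3_effective_weight(next_lbs))

print(algo3_effective_weight(144))  # 140 - a rise above Max is ignored
print(algo3_effective_weight(126))  # 130 - a decrease is limited to Min
print(algo3_effective_weight(133))  # 133 - inside the range, used as-is
```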
Btw, this is how you can come up with new variables. For example, most of the betting public use Days Since Last Run to ascertain "Race Fitness", only because it is the only information given on a racecard, in a newspaper, or even in databases. But using the same parameters, ie Runs and Days, and thinking cumulatively, there is a datapoint for cumDS2LR & cumDS3LR & even cumDS4LR. By recording them and then looking at the distribution of each entry in a race, median datapoints can be found within the race distribution. DSLR, cumDS2LR & cumDS3LR (and, to separate further, cumDS4LR) become new datapoints, and you find yourself working not only in Days or Cumulative Days Since Last Run but also in Runs within Cumulative Days. That opens up new concepts such as "over-racing" or "peaking", and patterns can be found.
For example, a horse can have a "surging" pattern over the cumulative last 3 runs, where it is returned to the track quicker each time than at the previous cumulative Days datapoint. It may have run 40 days ago on cumDS3LR, then returned 18 days later (so ran 22 days ago on cumDS2LR), then returned 12 days later on DSLR, and so is back at the track from 10 days ago. The constants are the "Days in-between", which from LR are 10, 12, 18 - decreasing (read right to left) - while the whole is the 40 of cumDS3LR. This can be compared to a horse with a cumDS3LR of 78 whose "Days in-between" from LR are 35, 28, 15 - a completely opposite pattern. The ranks at each datapoint, and the differences from the race median distribution at each datapoint, are what matter most, but a significant pattern can add to that.
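The cumulative datapoints and the "surging" check can be sketched like this (hypothetical names; the input is the "Days in-between" runs, ordered from the last run backwards):

```python
from itertools import accumulate

def cum_dslr(gaps_from_lr: list[int]) -> list[int]:
    """[DSLR, cumDS2LR, cumDS3LR, ...] as running totals of the days in-between."""
    return list(accumulate(gaps_from_lr))

def is_surging(gaps_from_lr: list[int]) -> bool:
    """Surging: each return to the track is quicker than the one before,
    ie the gaps strictly shrink towards the most recent run."""
    return all(a < b for a, b in zip(gaps_from_lr, gaps_from_lr[1:]))

surger = [10, 12, 18]  # DSLR 10, cumDS2LR 22, cumDS3LR 40
slower = [35, 28, 15]  # DSLR 35, cumDS2LR 63, cumDS3LR 78 - the opposite pattern
print(cum_dslr(surger), is_surging(surger))  # [10, 22, 40] True
print(cum_dslr(slower), is_surging(slower))  # [35, 63, 78] False
```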
To these new datapoints, Ratings can be added as well as Expected Ratings (on the day and adjusted), so "output" can be measured over the last 3 or 4 runs, singularly and as a cumulative whole. That surging pattern can turn into a negative if the horse has exceeded "output": the chances of regression could increase as over-racing becomes a factor, especially if it is facing "new" challenges - a class rise, going change or distance rise - or a combination.