Would it be correct to say that a 3.5/5 movie rating on a 5-point scale with half-star steps isn't exactly the same as a 7/10 rating on a 10-point scale with half-point steps, even though they seem mathematically equivalent? The reason is that half a star on the 5-point scale visually represents less than a full point on the 10-point scale. So while a great scene might earn a half-star bump, it wouldn't necessarily add a full point on the 10-point scale. Rated on a 10-point scale, it would probably land closer to a 6.5, which converts to 3.25/5, or simply 3/5 once rounded down to the nearest half star. This suggests that converting ratings between scales doesn't always align perfectly with the rating you intended.
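To make the mismatch concrete, here's a rough Python sketch of the round-trip I'm describing (the tie-breaking rule of rounding halves down is just my assumption, chosen to match the 3/5 outcome above):

```python
import math

def snap_to_step(value: float, step: float) -> float:
    """Round a rating to the nearest multiple of `step`; ties round down."""
    return math.ceil(value / step - 0.5) * step

rating_10 = 6.5            # a 6.5 on the half-step 10-point scale
exact_5 = rating_10 / 2    # 3.25, which is not a valid half-star value
snapped_5 = snap_to_step(exact_5, 0.5)

print(exact_5, snapped_5)  # 3.25 3.0
```

The 6.5/10 has nowhere exact to land on the half-star scale, so the conversion has to snap it to 3/5 or 3.5/5.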
Would I be right in claiming this?
If there are the same number of values, then the scales are identical, as you mentioned. There might be some differences in how they're perceived at the individual level, but that doesn't mean they aren't the same scale.
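To illustrate: assuming the 10-point scale uses whole-point steps, both scales have exactly ten values and map one-to-one, so they are the same scale with different labels. A quick sketch:

```python
# Half-star values on a 5-point scale and whole-point values on a
# 10-point scale: ten values each, with an exact one-to-one mapping.
five_scale = [0.5 * i for i in range(1, 11)]  # 0.5, 1.0, ..., 5.0
ten_scale = [int(2 * x) for x in five_scale]  # 1, 2, ..., 10

for stars, points in zip(five_scale, ten_scale):
    print(f"{stars}/5  <->  {points}/10")
```

Your 6.5 example only arises because you gave the 10-point scale half-point steps, which means twenty values instead of ten; at that point the scales genuinely differ in resolution.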
For example, a numbered scale invites more comparisons to other values for me, precisely because the ratings are numbers. That makes them feel less accurate when I disagree about whether one similarly scored movie is better than another, or whether it deserves its average score. Stars don't create that same urge for me, though they might for someone else.
I kind of like the Rotten Tomatoes idea of just recording whether something is mostly liked or not, because it doesn't encourage those comparisons.