Would it be correct to say that a 3.5/5 movie rating on a half-star (0.5-increment) 5-point scale isn’t exactly the same as a 7/10 rating on a 0.5-increment 10-point scale, even though they look mathematically equivalent? The reason is that half a star on the 5-point scale visually feels like less than a full point on the 10-point scale. So while a great scene might earn a half-star bump, it wouldn’t necessarily add a full point on the 10-point scale. If rated on a 10-point scale, the same film would probably land closer to a 6.5, which converts to 3.25/5, or simply 3/5 once rounded to half stars. This suggests that converting ratings between scales doesn’t always align perfectly with the rating you intended.
Would I be right in claiming this?
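To make the arithmetic concrete, here’s a rough sketch of the conversion I have in mind (Python just for illustration; the function name and the choice of rounding rule are my own assumptions, not anything standard):

```python
def to_five_star(rating_out_of_10: float) -> float:
    """Linearly convert a /10 rating to /5, then snap to the nearest half star."""
    raw = rating_out_of_10 / 2       # e.g. 6.5/10 -> 3.25/5
    return round(raw * 2) / 2        # snap to 0.5 increments

for r in (7.0, 6.5, 6.0):
    print(f"{r}/10 -> {to_five_star(r)}/5")
# 7.0/10 -> 3.5/5
# 6.5/10 -> 3.0/5   (3.25 sits exactly between half stars; Python's round() halves-to-even picks 3.0)
# 6.0/10 -> 3.0/5
```

The 6.5 case is exactly the ambiguity I mean: it lands halfway between two half stars, so whether it becomes 3/5 or 3.5/5 depends entirely on the rounding rule you pick.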
Well, apparently for some reason you’re starting both scales at 0.5, which means they’re not “the same”. If you start the 5-point scale at 0.5, you have to start the 10-point scale at 1 for them to be equivalent. So it’s not weird that they feel different.
If there are the same number of values, then the scales are identical, as you mentioned. There might be some differences in how they’re perceived at the individual level, but that doesn’t mean they aren’t the same scale.
For example, a numbered scale invites more comparisons to other values for me, precisely because they’re numbers. That makes the ratings feel less accurate when I disagree about whether one similarly scored movie is really better than another with the same average score. Stars don’t trigger the same urge for me, but they might for someone else.
I kind of like the Rotten Tomatoes approach of simply asking whether something is mostly liked or not, because it doesn’t encourage comparisons.
I’m not sure about your visual interpretation, but I completely agree that the two scales don’t translate directly, and that if something is rated 7/10 I’d assume it’s better than something rated 3.5 stars / 5.
As to the reason? I wonder if the scales give different senses of the middle value. In a five-star system, a 3/5 film is the middle value, not especially good nor bad, but I’d probably give the same “totally average, not good not bad” film 5/10. Similarly, it seems weird to translate “Awful, 1/5” into “Awful, 2/10”. So maybe the difference comes from a lack of clarity about half stars: it’s okay to give 0.5/5, but not 0? Or 5.5?
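If you take the earlier reply’s assumption that the two ten-value scales line up position by position (0.5–5 in half-star steps against 1–10 in whole steps), the mismatch with intuition shows up immediately; a quick sketch, purely for illustration:

```python
# Position-by-position pairing of the two ten-value scales (an assumption, not a standard).
half_stars = [i / 2 for i in range(1, 11)]   # 0.5, 1.0, ..., 5.0
out_of_ten = list(range(1, 11))              # 1, 2, ..., 10

for star, ten in zip(half_stars, out_of_ten):
    print(f"{star}/5  <->  {ten}/10")
# 1.0/5 pairs with 2/10, and 3.0/5 pairs with 6/10, not 5/10,
# which is exactly where the "middle value" intuition breaks down.
```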
And that doesn’t even start to address the modern “if it’s rated less than 4.6* it’s probably awful” issue…
if something is rated 7/10 I’d assume it’s better than something rated 3.5 stars / 5.
I think I probably would too. And yet, I would tend to instinctively think of 70% as worse than 7/10, even though that makes no sense.
I think part of the problem is that when we use a 10-point scale for rating things, the low end tends to get ignored and go unused, whereas on a 5-point scale there isn’t as much room to throw out the low end, so ratings tend to use more of the available spectrum. So where a 2/5 might read as “below average”, for example, a 4/10 would tend to be treated more harshly, more as a clear failure (though perhaps with one or two redeeming qualities).