vgCreatePath scale and bias

Regarding scale and bias, the OVG 1.1 spec says:

The scale and bias parameters are used to interpret each coordinate of the
path data; an incoming coordinate value v will be interpreted as the value
(scale*v + bias).

I interpret that to mean that appending coordinate values -5.0, 0.0, and 5.0 to a path with a scale of 1.0 and bias of 0.0 should produce exactly the same results as appending 0.0, 0.5, and 1.0 to a path with a scale of 10.0 and bias of -5.0. However, normalizing my input coordinates to 0-1 with the corresponding scale (max - min) and bias (min) significantly changes what is drawn.

Can someone explain how these are ACTUALLY used? I was just tweaking some things to see if anything improved performance, and now I’m confused by this. Did I misunderstand the spec?

OK, setting aside any questions about how scale and bias are used by implementations… is my understanding of the INTENT of scale and bias, at least, one that is shared by others?

My understanding is the same. The only point of confusion I can see is that transforms don’t affect points being appended to a path; they only affect a path when it is rendered. So if you create both paths and then apply the associated transformations before drawing each path, I would expect the results you described. But if by some chance you are applying transforms, building a path, applying the second set of transforms, building the second path, and then rendering both paths, I would expect both paths to have the last set of transformations applied.

If you’re doing things properly, perhaps there’s a bug in the VG implementation. ;\