DaveS

08-11-2010, 03:48 PM

Regarding scale and bias, the OVG 1.1 spec says:

"The scale and bias parameters are used to interpret each coordinate of the path data; an incoming coordinate value v will be interpreted as the value (scale*v + bias)."

I interpret that to mean that appending coordinate values -5.0, 0.0, and 5.0 to a path with a scale of 1.0 and a bias of 0.0 should produce exactly the same results as appending 0.0, 0.5, and 1.0 to a path with a scale of 10.0 and a bias of -5.0. However, normalizing my input coordinates to the 0-1 range with the corresponding scale (max - min) and bias (min) significantly changes what is drawn.

Can someone explain how these are ACTUALLY used? I was just tweaking some things to see if anything improved performance, and now I'm confused by this. Did I misunderstand the spec?