glColor4f works in debugger but not in exe

Hi, I am using Windows 8.1 and Visual Studio 2013, developing a 64-bit MFC app.
My app works fine in the debugger but seems to ignore the colours when I run the exe directly - everything is red.

Anyone any clues as to where to start looking?

Regards

MacSam

Look for uninitialized local variables - this is the most common cause of this kind of behavioural difference between debug and release builds.
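For example (a made-up sketch, not your code): the debug runtime fills uninitialised stack memory with a recognisable fill pattern, while a release build just reuses whatever bytes were already there, so something like this can behave consistently under the debugger but quite differently in the standalone exe:

GLfloat gfRed;                        // declared but not initialised on every path
if (someCondition)                    // hypothetical flag
    gfRed = 1.0f;
glColor4f(gfRed, 0.0f, 0.0f, 1.0f);   // reads garbage whenever someCondition is false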

Thanks, all these variables have been initialised. I appreciate we need to be precise. This works in a 32-bit MFC exe (maybe that was just luck).

if (cRed != 0)
    gfRed = (GLfloat)(cRed / 255.0f);
else
    gfRed = 0.0f;

if (cGreen != 0)
    gfGreen = (GLfloat)(cGreen / 255.0f);
else
    gfGreen = 0.0f;

if (cBlue != 0)
    gfBlue = (GLfloat)(cBlue / 255.0f);
else
    gfBlue = 0.0f;

// gfRed, gfGreen and gfBlue are already GLfloat, so no further casts are needed here
glColor4f(gfRed, gfGreen, gfBlue, 1.0f);
glVertex3f((GLfloat)dE1, (GLfloat)dN1, (GLfloat)dZ1);
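Incidentally, I think the zero tests are belt-and-braces: dividing the unsigned char by 255.0f promotes it to float first, so a zero input already gives 0.0f, and the whole thing should reduce to:

gfRed   = cRed   / 255.0f;   // unsigned char is promoted to float before the divide
gfGreen = cGreen / 255.0f;
gfBlue  = cBlue  / 255.0f;
glColor4f(gfRed, gfGreen, gfBlue, 1.0f);   // alpha fixed at opaque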

What about the values of cRed, cGreen and cBlue? Are they initialized?

Also worth trying - since cRed, cGreen and cBlue seem to be of unsigned char type:

glColor4ub (cRed, cGreen, cBlue, 255);

Does this do anything different?

Thanks for your help, mhagain.

I seem to have fixed it. I think it was defaulting to the integrated Intel graphics card; I forced it to use the NVIDIA card, which has more memory. Thank you for showing me glColor4ub - that is another tool for me to use.

glColor4ub (cRed, cGreen, cBlue, 255); will give me an opportunity to use a bit less memory - every byte counts.

BTW how do you get the code to display in the box?
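On the Intel/NVIDIA switching: if the machine is an Optimus laptop you can, if I remember right, request the discrete GPU from the exe itself rather than relying on a control-panel setting. NVIDIA documents an exported global for this (AMD has an equivalent); a minimal sketch:

#include <windows.h>  // for DWORD

extern "C" {
    // The driver looks these variables up by name when the process starts.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;        // NVIDIA: prefer the high-performance GPU
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;  // AMD equivalent hint
}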

In this case memory usage is quite unimportant, but as a general rule beware of “every byte counts” thinking when it comes to programming with a hardware-accelerated 3D API. There are several cases where memory alignment and packing rules are far more important for performance, and they can mean burning a little extra memory in exchange for orders of magnitude more performance. Because it’s not the 1970s any more you can treat memory as a cheap and plentiful resource that is there to be used (provided you don’t do anything silly). After all, if you have a GPU with 2 GB of RAM but you only ever use 128 MB of that - you’re wasting the other 1920 MB.
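A made-up example of what I mean (the struct is mine, not from any API): an interleaved vertex that deliberately burns four bytes of padding can still be the faster layout, because some hardware fetches vertices more efficiently when the stride is a nice round size such as 32 bytes:

struct Vertex {
    GLfloat x, y, z;     // position - 12 bytes
    GLfloat nx, ny, nz;  // normal   - 12 bytes
    GLubyte r, g, b, a;  // colour   -  4 bytes (28 so far)
    GLubyte pad[4];      // "wasted" on purpose to round the stride up to 32
};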

For the code I used code tags. Try doing “reply with quote” to my post to see.


Thanks. Is there any way I can trap that there is a problem/issue with the graphics card?
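At the moment the only thing I can think of is to log what the driver reports once the rendering context is current, something like:

// requires a current OpenGL context
const GLubyte *vendor   = glGetString(GL_VENDOR);    // e.g. "Intel" vs "NVIDIA Corporation"
const GLubyte *renderer = glGetString(GL_RENDERER);  // the actual device name
const GLubyte *version  = glGetString(GL_VERSION);
TRACE("GL: %s | %s | %s\n", vendor, renderer, version);  // MFC trace output

and then checking glGetError() after the drawing calls - but is there anything better?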