Talk about glTexImage*D... internal format and format

https://www.opengl.org/wiki/GLAPI/glTexImage2D
To define a texture image, you call glTexImage2D. The third and seventh arguments are internalformat and format respectively.
Do they have to be the same? What happens if they aren't?

2] Suppose the image to be loaded is 300x300 in size; can we load it as 200w x 200h?

Do they have to be the same?

No. And generally speaking, they shouldn’t be.

What happens if they aren't?

They do two different things. The Internal Format is, well, that. The “format” is the pixel transfer format.

The internal format describes the way that OpenGL will store the data. Hence the phrase “internal format”. The pixel transfer format describes the way that you store the data in the array you’re passing to glTexImage. Also, unlike internal formats, the pixel transfer format alone isn’t enough information; you must also supply the pixel transfer type.
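
To make that concrete, here is a minimal sketch of how the two roles show up in an actual glTexImage2D call; the function name, image size and pixel buffer are just placeholders:

[CODE]
#include <GL/gl.h>

/* Hypothetical example: 'pixels' stands in for a 256x256 RGBA image loaded
 * by your own code; a GL context is assumed to be current. */
static void upload_rgba_image(const unsigned char *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    glTexImage2D(GL_TEXTURE_2D,
                 0,                /* mipmap level                                */
                 GL_RGBA8,         /* internal format: how OpenGL stores the data */
                 256, 256,         /* width, height                               */
                 0,                /* border (must be 0)                          */
                 GL_RGBA,          /* pixel transfer format: layout of 'pixels'   */
                 GL_UNSIGNED_BYTE, /* pixel transfer type: one byte per component */
                 pixels);
}
[/CODE]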

Suppose the image to be loaded is 300x300 in size; can we load it as 200w x 200h?

If you’re wondering if OpenGL will automatically rescale the pixels for you, no. glTexImage allocates the storage for a texture mipmap. But it also allows you to upload data to it. And while OpenGL pixel transfers are pretty powerful, rescaling is not one of the things they allow.
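
So if you need the 300x300 image to end up in a 200x200 texture, you have to rescale the pixels yourself before the upload. A rough sketch using the old GLU helper (any image library's resize routine would do just as well):

[CODE]
#include <GL/gl.h>
#include <GL/glu.h>

/* Hypothetical example: rescale a 300x300 RGBA image to 200x200 on the CPU
 * with gluScaleImage, then upload the already-resized data. */
static void upload_resized(const unsigned char *src300)
{
    unsigned char dst[200 * 200 * 4];

    gluScaleImage(GL_RGBA,
                  300, 300, GL_UNSIGNED_BYTE, src300,  /* input image  */
                  200, 200, GL_UNSIGNED_BYTE, dst);    /* output image */

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 200, 200, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, dst);
}
[/CODE]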

Thanks, so that means “format” is an external format, i.e. the format of the image you are going to load?
But if they are not consistent, what will happen?

[QUOTE=reader1;1266495]Thanks, so that means “format” is an external format, i.e. the format of the image you are going to load?
But if they are not consistent, what will happen?[/QUOTE]

The driver will convert the format during the transfer operation, which could be slow.

[QUOTE=reader1;1266495]Thanks, so that means “format” is an external format, i.e. the format of the image you are going to load?
But if they are not consistent, what will happen?[/QUOTE]

Values are converted between representations, missing components are defaulted (1 for alpha, 0 for R/G/B), excess components are discarded, out-of-range values are clamped to the representable range.

All of this is covered in sections 8.4 and 8.5 of the current specification (it’s covered in all versions of the specification, but section numbers will be different in older versions).
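
As a concrete illustration of those rules, here is a sketch of uploading single-channel data into a four-component texture; the names and sizes are hypothetical:

[CODE]
#include <GL/gl.h>

/* The client data has one component per pixel, the texture is stored with
 * four. The driver fills in the rest during the transfer: G and B default
 * to 0, alpha defaults to 1. */
static void upload_gray_as_rgba(const unsigned char *gray64x64)
{
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA8,         /* storage: four components   */
                 64, 64, 0,
                 GL_RED,           /* client data: one component */
                 GL_UNSIGNED_BYTE,
                 gray64x64);
}
[/CODE]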

That means we’d better keep them the same, to avoid a conversion by the driver.

[QUOTE=GClements;1266499]Values are converted between representations, missing components are defaulted (1 for alpha, 0 for R/G/B), excess components are discarded, out-of-range values are clamped to the representable range.

All of this is covered in sections 8.4 and 8.5 of the current specification (it’s covered in all versions of the specification, but section numbers will be different in older versions).[/QUOTE]
Those sections contain the tables of the various format and type parameter values.

Then, one of the important features of the command is loading the texture from memory into GPU storage. Does it have to follow glBind*Buffer calls? If not, how does OpenGL allocate the memory that the texture occupies?

Unfortunately OpenGL complicates things a little because the internalFormat you request is not necessarily what the driver itself will use.

The classic example of this is that a GL_RGB internalFormat is actually more likely to be a 4-component format with the fourth (alpha) set to 1 (assuming a 0…1 range). So if you set GL_RGB for both internalFormat and format, you’re still probably going to get a conversion.

You should note that on many platforms the fast path is actually to use GL_BGRA for format, and that GL_BGRA does not exist as a valid value for internalFormat, so even the component ordering of internalFormat shouldn’t be seen as significant.

You should also note that on older OpenGL when you request an internalFormat of GL_RGB or GL_RGBA, OpenGL is not actually obliged to give you 8-bits per channel (or whatever else you may assume it is); so if it gives you something else and if you supply data that assumes otherwise, you’re definitely going to get a conversion.

In more recent versions of OpenGL things are a little bit better because there is a set list of internalFormats that drivers must support, and this list is of explicitly sized formats, but there is still confusion over the whole GL_RGB thing. This is completely in accord with the OpenGL philosophy, where drivers or hardware that may actually have native support for 3-component internalFormats are also supported, so it’s helpful for the developer who wants something that will work and support what they asked for, but not too helpful for the developer who wants to explicitly specify a fast mode without the driver getting in the way.
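
Putting the BGRA and sized-format points above together, a sketch of what an upload along that fast path might look like; the function and its parameters are just placeholders for whatever your image loader produces:

[CODE]
#include <GL/gl.h>

/* Hypothetical example: an explicitly sized internal format plus
 * BGRA-ordered client data. */
static void upload_bgra(GLsizei width, GLsizei height, const void *bgra_pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA8,                    /* sized internal format                */
                 width, height, 0,
                 GL_BGRA,                     /* pixel transfer format                */
                 GL_UNSIGNED_INT_8_8_8_8_REV, /* packed type that often pairs with it */
                 bgra_pixels);
}
[/CODE]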

Things are also a little clearer with GL_ARB_texture_storage where the format and type parameters don’t exist when initially specifying a texture. This can aid understanding, because it helps to demonstrate that format and type don’t actually relate to how OpenGL itself stores the texture.
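
A minimal sketch of that separation, assuming GL 4.2+ (or ARB_texture_storage) and a function loader such as GLEW:

[CODE]
#include <GL/glew.h>   /* any loader exposing glTexStorage2D will do */

/* Hypothetical example: 'pixels300x300' stands in for a client-side RGBA image. */
static void create_and_fill(const unsigned char *pixels300x300)
{
    /* the storage is specified with only a sized internal format... */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 300, 300);

    /* ...format and type only appear when data is actually transferred */
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0, 300, 300,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels300x300);
}
[/CODE]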

Note that you can also [query which pixel transfer formats are preferred for a particular internal format](https://www.opengl.org/wiki/Query_Image_Format), for a specific implementation.
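
For example (assuming GL 4.3+ or ARB_internalformat_query2), you could ask the implementation what it prefers when uploading to GL_RGBA8; this is just a sketch:

[CODE]
#include <GL/glew.h>   /* any loader exposing glGetInternalformativ will do */
#include <stdio.h>

/* Query the preferred pixel transfer format and type for a GL_RGBA8 texture. */
static void print_preferred_upload_format(void)
{
    GLint fmt = GL_NONE, type = GL_NONE;

    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                          GL_TEXTURE_IMAGE_FORMAT, 1, &fmt);
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                          GL_TEXTURE_IMAGE_TYPE, 1, &type);

    printf("preferred format 0x%x, type 0x%x\n", fmt, type);
}
[/CODE]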

[QUOTE=mhagain;1266508]Unfortunately OpenGL complicates things a little because the internalFormat you request is not necessarily what the driver itself will use.

The classic example of this is that a GL_RGB internalFormat is actually more likely to be a 4-component format with the fourth (alpha) set to 1 (assuming a 0…1 range). So if you set GL_RGB for both internalFormat and format, you’re still probably going to get a conversion.

You should note that on many platforms the fast path is actually to use GL_BGRA for format, and that GL_BGRA does not exist as a valid value for internalFormat, so even the component ordering of internalFormat shouldn’t be seen as significant.

You should also note that on older OpenGL when you request an internalFormat of GL_RGB or GL_RGBA, OpenGL is not actually obliged to give you 8-bits per channel (or whatever else you may assume it is); so if it gives you something else and if you supply data that assumes otherwise, you’re definitely going to get a conversion.

In more recent versions of OpenGL things are a little bit better because there is a set list of internalFormats that drivers must support, and this list is of explicitly sized formats, but there is still confusion over the whole GL_RGB thing. This is completely in accord with the OpenGL philosophy, where drivers or hardware that may actually have native support for 3-component internalFormats are also supported, so it’s helpful for the developer who wants something that will work and support what they asked for, but not too helpful for the developer who wants to explicitly specify a fast mode without the driver getting in the way.

Things are also a little clearer with GL_ARB_texture_storage where the format and type parameters don’t exist when initially specifying a texture. This can aid understanding, because it helps to demonstrate that format and type don’t actually relate to how OpenGL itself stores the texture.[/QUOTE]
Thank you for your detailed explanation, though I’m afraid it’s a little beyond me. I will make sense of it slowly.
Oddly enough, given that

that format and type don’t actually relate to how OpenGL itself stores the texture.
why does it add so much confusion for users? Instead, there is glPixelStorei, which relates to how the data is stored.

GClements listed them at #5, for the newest version, 4.5. I wish to find a doc with illustrations, math formulas and examples.