Question

As the title suggests, I've been having problems loading .tga files. I've been using this for reference and, as far as I can tell (apart from using C++ functions instead of C ones), I'm doing the same thing. I do get a texture, but it's a garbled version of what it should be and I'm not really sure why. Please excuse all the mistakes; I'm just trying to get it working.

Header:

struct TGA
{
    GLuint Load(const char* filename);
    unsigned char header_[12];
    unsigned char bpp_;
    unsigned char id_;
    unsigned short width_;
    unsigned short height_;
    unsigned char* data_;
};

Cpp:

GLuint TGA::Load(const char* filename)
{
width_ = 0; height_ = 0;
bpp_ = 32; id_ = 8;
data_ = 0;

std::ifstream file;

file.open(filename, std::ios::binary | std::ios::ate);

if(file.fail())
{
    std::cout<<filename<<" could not be opened."<<std::endl;
    return 0;
}

file.seekg(0, std::ios::beg);
file.read((char*)header_, sizeof(unsigned char)*12);
file.read((char*)&width_, sizeof(unsigned short));
file.read((char*)&height_, sizeof(unsigned short));
file.read((char*)&bpp_, sizeof(unsigned char));
file.read((char*)&id_, sizeof(unsigned char));

int imageSize = width_ * height_;
std::cout<<imageSize<<std::endl;

data_ = new unsigned char[imageSize * 3];  //3 means RGB 
file.read((char*)data_, imageSize * 3);

file.close();

GLuint texture;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width_, height_, 0, GL_BGRA, GL_UNSIGNED_BYTE, data_);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

gluBuild2DMipmaps(GL_TEXTURE_2D, 3, width_, height_, GL_RGBA, GL_UNSIGNED_BYTE, data_);


delete [] data_;

return texture;
}

Solution

When you read in the data, you assume the targa file has 24 bits per pixel (file.read((char*)data_, imageSize * 3)) without checking the value of bpp_. You then pass the image data to OpenGL, and say your data is in 32-bpp BGRA format.

You should check what the value of bpp_ is, allocate and read the data based on that value, and pass the correct data format to OpenGL (BGR or BGRA, depending on whether the image includes an alpha channel).
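
For illustration, here's a minimal sketch of what that could look like, keeping the question's variable names (it assumes an uncompressed, non-paletted TGA and skips error handling on the read):

if (bpp_ != 24 && bpp_ != 32)
{
    std::cout << "unsupported bit depth: " << (int)bpp_ << std::endl;
    return 0;
}

const int bytesPerPixel = bpp_ / 8;                 // 3 for BGR, 4 for BGRA
const int imageSize     = width_ * height_ * bytesPerPixel;

data_ = new unsigned char[imageSize];
file.read((char*)data_, imageSize);

// TGA stores pixels in BGR(A) order, so tell OpenGL exactly that.
GLenum format = (bpp_ == 32) ? GL_BGRA : GL_BGR;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width_, height_, 0, format, GL_UNSIGNED_BYTE, data_);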

Other tips

Yes, you're doing the same thing :)

If you look at the comments below the code you used for reference, that code reads data assuming that the computer running it is big-endian (like the PS3). I'm guessing here, but are you running on a PC (which uses little-endian)?

Getting endianness wrong makes every datatype larger than a byte read back with the wrong value (height_/width_), so you'll need to check whether the computer running the program is big- or little-endian and correct the values accordingly.

To convert height_ and width_ to the correct endianness you should simply be able to use ntohs and ntohl (network byte order being big-endian):

height_ = ntohs(height_);
width_  = ntohs(width_);
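
(ntohs and ntohl are declared in platform-specific headers; if you go this route, a small guard like the following covers the common targets. This is an assumption about your platform, so adjust as needed:)

#if defined(_WIN32)
#include <winsock2.h>   // ntohs / ntohl on Windows
#else
#include <arpa/inet.h>  // ntohs / ntohl on POSIX systems
#endif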

First, let me note that you're already doing better than the worst method out there (reading into a packed struct, that is). For a robust file parser, always read the file piece by piece; that's the only way to avoid bugs and exploits.

Unfortunately there are still a few problems, but those are easy to fix:

struct TGA
{
    GLuint Load(const char* filename);
    unsigned char header_[12];
    unsigned char bpp_;
    unsigned char id_;

    unsigned short width_;
    unsigned short height_;

width and height being short may become a problem. The devil is in the details, but I'll explain in time. For now, keep in mind that on some architectures short may in fact be too short for us. To be on the safe side, use uint16_t from stdint.h (part of the C99 standard), or better yet uint32_t; we'll see why. A sketch of such a struct follows right after this one.

    unsigned char* data_;
};
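
As a concrete sketch of the fixed-width suggestion above (field names kept from the question; the exact widths are my suggestion, not part of the original code):

#include <stdint.h>  // uint8_t / uint32_t

struct TGA
{
    GLuint Load(const char* filename);
    unsigned char header_[12];
    uint8_t bpp_;
    uint8_t id_;
    uint32_t width_;   // wide enough that width_ * height_ won't be squeezed into a short
    uint32_t height_;
    unsigned char* data_;
};

Back to the original Load():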

GLuint TGA::Load(const char* filename)
{
width_ = 0; height_ = 0;
bpp_ = 32; id_ = 8;
data_ = 0;

We're going to overwrite those anyway, so no need to clear them, but no harm either.

std::ifstream file;

file.open(filename, std::ios::binary | std::ios::ate);

if(file.fail())
{
    std::cout<<filename<<" could not be opened."<<std::endl;
    return 0;
}

file.seekg(0, std::ios::beg);
file.read((char*)header_, sizeof(unsigned char)*12);

file.read((char*)&width_, sizeof(unsigned short));
file.read((char*)&height_, sizeof(unsigned short));

Okay, those are a bit problematic, because they assume the program runs on a little-endian machine. Say you were to develop for a PowerPC (think PlayStation 3, Xbox, or an ARM running in big-endian mode): this code would break. Solution:

unsigned char buf[2]; file.read((char*)buf, 2); width_ = buf[0] | buf[1]<<8; and the same for height_. This code still has issues if running on a machine whose char is not 8 bits wide, but those are few and far between these days. If you want to be on the safe side, add a #if CHAR_BIT != 8 #error "char bits not 8" #endif somewhere (CHAR_BIT lives in limits.h).
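
Wrapped into a little helper, that read could look like this (just a sketch under the same assumptions; read_le16 is a name I made up, not part of the original code):

#include <fstream>   // std::ifstream
#include <stdint.h>  // uint16_t

// Reads a 16-bit little-endian value regardless of the host's byte order.
// Short-read handling is left to the caller.
static uint16_t read_le16(std::ifstream& file)
{
    unsigned char buf[2] = { 0, 0 };
    file.read(reinterpret_cast<char*>(buf), 2);
    return static_cast<uint16_t>(buf[0] | (buf[1] << 8));
}

// Usage inside Load():
// width_  = read_le16(file);
// height_ = read_le16(file);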

file.read((char*)&bpp_, sizeof(unsigned char));
file.read((char*)&id_, sizeof(unsigned char));

Now the following line is wrong:

int imageSize = width_ * height_; // <<<<<<<<<

Since width_ and height_ are both of type short, the multiplication will be coerced to short width. This particular behaviour of C and C++ is one of the biggest sources of nasty, exploitable bugs. Say I gave you a picture whose header declared width = height = 2^15-1, a number that fits well into the expected 16 bits. Multiply those two and coerce the result into a short and you get… 1. And then there's another little problem as well.

std::cout<<imageSize<<std::endl;

data_ = new unsigned char[imageSize * 3];  //3 means RGB 
file.read((char*)data_, imageSize * 3);

The good thing is that you only read the number of bytes you allocated via a proxy variable, so I can't inject code into your program. However, I can crash it, because later on your glTexImage2D call will try to read (2^15-1)^2 bytes from unallocated storage and thus cause a segfault. The trick to avoid this is to cast individual variables in the computation to a type that is large enough. Or do as I suggested and use uint32_t for width and height as well. I code by the rule that every variable should hold at least twice the bits required for the largest value allowed in it. Unfortunately, the C standard cannot be "fixed" to coerce to the size of the left-hand-side type of an assignment, because that would break a lot of existing code.

The other problem is that you hardcoded 3 channels. Why? You've got the bpp header field, which tells you the format. So let's change this: uint32_t imageSize = (uint32_t)width_ * height_ * bpp_/8;. Note that it's necessary to cast only one variable of the expression to the larger type. However, don't make this mistake: (uint32_t)(width_ * height_ * bpp_/8), which would cast the short-typed result to uint32_t - mind the parentheses.
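
Put together, the allocation and read might then look roughly like this (a sketch that assumes bpp_ has already been validated, e.g. by the switch further down):

uint32_t bytesPerPixel = bpp_ / 8;
uint32_t imageSize = (uint32_t)width_ * height_ * bytesPerPixel;

data_ = new unsigned char[imageSize];
file.read((char*)data_, imageSize);

if (!file)  // short read: the header promised more data than the file contains
{
    delete [] data_;
    data_ = 0;
    return 0;
}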

Last but not least, we can pass this to glTexImage2D or to gluBuild2DMipmaps. I strongly suggest not using gluBuild2DMipmaps, because it will convert your texture to power-of-2 dimensions, which is no longer needed since OpenGL 2; call glGenerateMipmap instead. The format parameter should not be the number of channels but one of the tokens GL_RED, GL_RG, GL_RGB or GL_RGBA. So use a small switch statement for this:

GLenum internal_format;
GLenum format;
GLenum buffer_format;
switch(bpp_) {
case 8:
    internal_format = GL_R8;
    format = GL_RED;
    buffer_format = GL_UNSIGNED_BYTE;
    break;
case 16:
    internal_format = GL_RGB5;
    format = GL_RGB;
    buffer_format = GL_UNSIGNED_SHORT_5_6_5;
    break;
case 24:
    internal_format = GL_RGB8;
    format = GL_BGR;     // TGA stores 24-bit pixels in BGR order
    buffer_format = GL_UNSIGNED_BYTE;
    break;
case 32:
    internal_format = GL_RGBA8;
    format = GL_BGRA;    // 32-bit TGA includes alpha, stored as BGRA
    buffer_format = GL_UNSIGNED_BYTE;
    break;
default:
    format_unsupported_error(); return 0;
}

glTexImage2D(..., internal_format, ..., format, buffer_format, ...);
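
For completeness, a sketch of the full upload using those tokens (glGenerateMipmap requires OpenGL 3.0 or the framebuffer-object extension; everything else is plain OpenGL):

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // TGA rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width_, height_, 0, format, buffer_format, data_);
glGenerateMipmap(GL_TEXTURE_2D);        // replaces gluBuild2DMipmaps

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);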