Question

I use the Nvidia nv_dds utility to load DDS image files for use in an OpenGL program. It works on Windows but fails on Linux (Ubuntu 12.10). Initially I thought the problem was with nv_dds, but then I found that fread() reads the header bytes at the wrong offset on Linux (GCC 4.7).

This is the block that reads the DDS file marker and then the DDS header:

// open file
FILE *fp = fopen(filename.c_str(),"rb");
if (fp == NULL) {
    return false;
}
// read in file marker, make sure it's a DDS file
char filecode[4];
fread(filecode, 1, 4, fp);
if (strncmp(filecode, "DDS ", 4) != 0) {
    fclose(fp);
    return false;
}

// read in DDS header
DDS_HEADER ddsh;
fread(&ddsh, 1, sizeof(DDS_HEADER), fp);

When I look through the contents of the DDS_HEADER instance, I can see a couple of real values assigned to the wrong members, and the rest are junk.

Then, if I comment out the fread() for the "DDS " marker check:

// open file
FILE *fp = fopen(filename.c_str(), "rb");
if (fp == NULL) {
    return false;
}
// read in file marker, make sure it's a DDS file
/* comment out for test
char filecode[4];
fread(filecode, 1, 4, fp);
if (strncmp(filecode, "DDS ", 4) != 0) {
    fclose(fp);
    return false;
}
*/
// read in DDS header
DDS_HEADER ddsh;
fread(&ddsh, sizeof(DDS_HEADER), 1, fp);

Then I get the image width value in the imageHeight property of DDS_HEADER; the rest of the properties are still junk.

None of this happens when I test it on a Windows machine. Is it possible that fread() works differently with GCC on Linux than with the MSVC compiler on Windows?


Solution

I solved this, and since no useful input was proposed, I will answer the question myself.

I started suspecting differences in data type sizes between compilers. Then I found this post. After that I found that the size of the DDS header (when compiled with GCC) is 248 bytes, twice as large as it should be (the Microsoft spec says it must be exactly 124 bytes). The nv_dds DDS header uses unsigned long for its members:

typedef struct
{
    unsigned long dwSize;
    unsigned long dwFlags;
    unsigned long dwHeight;
    unsigned long dwWidth;
    unsigned long dwPitchOrLinearSize;
    unsigned long dwDepth;
    unsigned long dwMipMapCount;
    unsigned long dwReserved1[11];
    DDS_PIXELFORMAT ddspf;
    unsigned long dwCaps1;
    unsigned long dwCaps2;
    unsigned long dwReserved2[3];
} DDS_HEADER;
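A quick sanity check makes the mismatch obvious. This is a minimal standalone sketch, not the actual nv_dds code; the DDS_PIXELFORMAT here is my own mirror of the 8-member layout from the Microsoft DDS documentation, still using unsigned long like the original:

#include <cstdio>

// Mirror of the nv_dds-style structs, using unsigned long like the original code.
typedef struct
{
    unsigned long dwSize;
    unsigned long dwFlags;
    unsigned long dwFourCC;
    unsigned long dwRGBBitCount;
    unsigned long dwRBitMask;
    unsigned long dwGBitMask;
    unsigned long dwBBitMask;
    unsigned long dwABitMask;
} DDS_PIXELFORMAT;

typedef struct
{
    unsigned long dwSize;
    unsigned long dwFlags;
    unsigned long dwHeight;
    unsigned long dwWidth;
    unsigned long dwPitchOrLinearSize;
    unsigned long dwDepth;
    unsigned long dwMipMapCount;
    unsigned long dwReserved1[11];
    DDS_PIXELFORMAT ddspf;
    unsigned long dwCaps1;
    unsigned long dwCaps2;
    unsigned long dwReserved2[3];
} DDS_HEADER;

int main() {
    // 64-bit Linux (LP64): unsigned long is 8 bytes, so the header comes out as 248.
    // MSVC (LLP64): unsigned long is 4 bytes, so the header is the expected 124.
    std::printf("sizeof(unsigned long) = %zu\n", sizeof(unsigned long));
    std::printf("sizeof(DDS_HEADER)    = %zu (DDS spec: 124)\n", sizeof(DDS_HEADER));
    return 0;
}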

So it appears that the MSVC compiler treats unsigned long as 4 bytes, while GCC on 64-bit Linux makes it 8 bytes; that is where the doubled header size comes from. I changed everything to unsigned int (also in the DDS_PIXELFORMAT struct):

typedef struct
{
    unsigned int dwSize;
    unsigned int dwFlags;
    unsigned int dwHeight;
    unsigned int dwWidth;
    unsigned int dwPitchOrLinearSize;
    unsigned int dwDepth;
    unsigned int dwMipMapCount;
    unsigned int dwReserved1[11];
    DDS_PIXELFORMAT ddspf;
    unsigned int dwCaps1;
    unsigned int dwCaps2;
    unsigned int dwReserved2[3];
} DDS_HEADER;

And now it all works! So it seems that, contrary to what is said in some places, NVidia nv_dds is not cross-platform (or cross-compiler) ready as shipped, and this hack is needed to get it working with GCC on Linux.
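This is not part of nv_dds, but a compile-time guard placed right after the struct definition would have caught the mismatch immediately (requires C++11 static_assert):

// Fails the build if DDS_HEADER ever stops matching the 124-byte on-disk layout
// (for example when a member silently becomes 8 bytes wide).
static_assert(sizeof(DDS_HEADER) == 124, "DDS_HEADER must be exactly 124 bytes");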

Other tips

With GCC on Linux, long is 4 bytes when compiling for a 32-bit target and 8 bytes for a 64-bit target. You can use the -m32 or -m64 option to explicitly target 32 or 64 bits.
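For example, a tiny test program (the file name is made up) reports different sizes depending on the target; on Ubuntu the 32-bit build typically needs the multilib packages (gcc-multilib / g++-multilib):

// longsize.cpp -- hypothetical test file
#include <cstdio>

int main() {
    std::printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}

// Typical results on x86-64 Linux:
//   g++ -m64 longsize.cpp -o longsize && ./longsize   ->  sizeof(long) = 8
//   g++ -m32 longsize.cpp -o longsize && ./longsize   ->  sizeof(long) = 4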

For ABI-related work, you should always use types whose size is part of the type name, as defined in stdint.h, such as int32_t or uint64_t; this can save a lot of problems when compiling on different platforms (endianness issues aside).
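For instance, the pixel-format block declared with fixed-width types (a sketch following the Microsoft DDS_PIXELFORMAT description, not the actual nv_dds source) has the same size with every compiler:

#include <stdint.h>

// uint32_t is 4 bytes on every conforming compiler, so the struct layout no
// longer depends on the compiler's data model (ILP32, LP64, LLP64, ...).
typedef struct
{
    uint32_t dwSize;
    uint32_t dwFlags;
    uint32_t dwFourCC;
    uint32_t dwRGBBitCount;
    uint32_t dwRBitMask;
    uint32_t dwGBitMask;
    uint32_t dwBBitMask;
    uint32_t dwABitMask;
} DDS_PIXELFORMAT;

static_assert(sizeof(DDS_PIXELFORMAT) == 32, "on-disk pixel format block is 32 bytes");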

Licensed under: CC-BY-SA with attribution