Question

I'm having problems writing an array of doubles to a binary file and reading it back. In some cases the size of the file is greater than expected. The following code:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int i, size = 13;
    FILE *fid = fopen("C:\\Group0\\Night0\\Imanti\\test.dat", "w");
    double *arr = (double *)malloc(sizeof(double) * size);

    for (i = 0; i < size; i++) {
        arr[i] = size / (i + 1.0);
        printf("%f\n", arr[i]);
    }

    fwrite(arr, sizeof(double), size, fid);
    free(arr);
    fclose(fid);
    printf("\n\n");

    fid = fopen("C:\\Group0\\Night0\\Imanti\\test.dat", "r");
    arr = (double *)malloc(sizeof(double) * size);
    fread(arr, sizeof(double), size, fid);

    for (i = 0; i < size; i++) {
        printf("%f\n", arr[i]);
    }

    free(arr);
    fclose(fid);

    return 0;
}

shows a simple example of my problem. If I run it with, for example, size = 10, the file is 80 bytes and the numbers read back match the ones written. If I run it with size = 13, the file is 105 bytes (when it should be 104) and the numbers read back are completely different. The output for size = 13 is:

13.000000
6.500000
4.333333
3.250000
2.600000
2.166667
1.857143
1.625000
1.444444
1.300000
1.181818
1.083333
1.000000


13.000000
-6108112916776316800000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000
-6277438562204192500000000000000000000000000000000000000000000000000.000000

Either the first number (13) or the second one (6.5) seems to be written with an extra byte, which makes the file larger and the read fail. I know double representation can lead to precision errors, but as far as I can tell this is not about precision, since what changes is the size a double occupies in the file.
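
To rule out the in-memory representation, a quick sketch along these lines dumps the raw bytes of each element; every value is exactly sizeof(double) == 8 bytes in memory, so the extra byte has to appear during writing:

#include <stdio.h>

int main()
{
    int i, j, size = 13;

    for (i = 0; i < size; i++) {
        double val = size / (i + 1.0);
        unsigned char *bytes = (unsigned char *)&val;

        printf("%f:", val);
        /* dump the sizeof(double) == 8 bytes of this value */
        for (j = 0; j < (int)sizeof(double); j++)
            printf(" %02X", bytes[j]);
        printf("\n");
    }
    return 0;
}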

I'm not sure if I'm missing something really obvious here, but it has already driven me crazy. I'm using Visual Studio 2013 on an i7 machine.


Solution

You are writing binary data, but the file is opened in text mode. On Windows, text mode translates every '\n' (0x0A) byte written to the file into the two-byte sequence 0x0D 0x0A. With size = 13, the fourth value, 3.25, has the bit pattern 0x400A000000000000, which contains an 0x0A byte, so one extra 0x0D is inserted and the file grows from 104 to 105 bytes. Reading fails even earlier: in text mode the Microsoft CRT treats 0x1A (Ctrl-Z) as an end-of-file marker, and the second value, 6.5, has the bit pattern 0x401A000000000000, which contains exactly that byte. fread therefore stops after the first element, the rest of the buffer is left as uninitialized heap memory, and only 13.000000 reads back correctly.
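
To see the write-side translation in isolation, here is a minimal sketch (the file name newline_test.dat is just a placeholder) that writes a single 0x0A byte in text mode and then dumps what actually landed on disk by reopening the file in binary mode:

#include <stdio.h>

int main()
{
    int c, n = 0;

    /* Write a single newline byte (0x0A) in text mode. */
    FILE *f = fopen("newline_test.dat", "w");
    if (f == NULL)
        return 1;
    fputc('\n', f);
    fclose(f);

    /* Reopen in binary mode and dump the bytes on disk. */
    f = fopen("newline_test.dat", "rb");
    if (f == NULL)
        return 1;
    while ((c = fgetc(f)) != EOF)
        printf("byte %d: 0x%02X\n", n++, c);
    fclose(f);
    return 0;
}

On Windows the file comes out as two bytes, 0x0D then 0x0A; on POSIX systems text and binary mode are identical and it stays one byte. The fix for the code in the question is to open the file in binary mode for both the write and the read: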

FILE *fid = fopen("C:\\Group0\\Night0\\Imanti\\test.dat", "wb");

fid = fopen("C:\\Group0\\Night0\\Imanti\\test.dat", "rb");
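
For completeness, a minimal corrected version of the whole program could look like this (same logic and path as in the question, with binary-mode flags and basic error checks added):

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int i, size = 13;
    const char *path = "C:\\Group0\\Night0\\Imanti\\test.dat";
    double *arr = (double *)malloc(sizeof(double) * size);
    FILE *fid;

    if (arr == NULL)
        return 1;

    for (i = 0; i < size; i++) {
        arr[i] = size / (i + 1.0);
        printf("%f\n", arr[i]);
    }

    /* "wb": no 0x0A -> 0x0D 0x0A translation on write */
    fid = fopen(path, "wb");
    if (fid == NULL) {
        free(arr);
        return 1;
    }
    fwrite(arr, sizeof(double), size, fid);
    fclose(fid);
    free(arr);
    printf("\n\n");

    /* "rb": 0x1A is an ordinary byte, not end-of-file */
    fid = fopen(path, "rb");
    arr = (double *)malloc(sizeof(double) * size);
    if (fid == NULL || arr == NULL)
        return 1;
    if (fread(arr, sizeof(double), size, fid) != (size_t)size)
        fprintf(stderr, "short read\n");
    fclose(fid);

    for (i = 0; i < size; i++)
        printf("%f\n", arr[i]);

    free(arr);
    return 0;
}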
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow