Question

When a function takes a struct parameter, clang changes the function signature: instead of the struct type, it uses a coerced integer of equal size. In my compiler project I use the LLVM struct type in the function signature, which seems more logical.

This wouldn't be a problem, except that the assembly LLVM produces for the struct and coerced signatures differs and is not call-compatible. As a result, my compiler is not ABI-compatible with C functions that take structs.

Why does clang do this? Is this behavior specified by the C ABI?

Here's a simple example C source file:

struct TwoInt { int a, b; };

struct EightChar { char a, b, c, d, e, f, g, h; };

void doTwoInt(struct TwoInt a) {}

void doEightChar(struct EightChar a) {}

int main()
{
        struct TwoInt ti;
        struct EightChar fc;

        doTwoInt(ti);
        doEightChar(fc);

        return 0;
}

Resulting LLVM IR from clang:

%struct.TwoInt = type { i32, i32 }
%struct.EightChar = type { i8, i8, i8, i8, i8, i8, i8, i8 }

define void @doTwoInt(i64 %a.coerce) nounwind uwtable {
  %a = alloca %struct.TwoInt, align 8
  %1 = bitcast %struct.TwoInt* %a to i64*
  store i64 %a.coerce, i64* %1, align 1
  ret void
}

define void @doEightChar(i64 %a.coerce) nounwind uwtable {
  %a = alloca %struct.EightChar, align 8
  %1 = bitcast %struct.EightChar* %a to i64*
  store i64 %a.coerce, i64* %1, align 1
  ret void
}

define i32 @main() nounwind uwtable {
  %1 = alloca i32, align 4
  %ti = alloca %struct.TwoInt, align 4
  %fc = alloca %struct.EightChar, align 1
  store i32 0, i32* %1
  %2 = bitcast %struct.TwoInt* %ti to i64*
  %3 = load i64* %2, align 1
  call void @doTwoInt(i64 %3)
  %4 = bitcast %struct.EightChar* %fc to i64*
  %5 = load i64* %4, align 1
  call void @doEightChar(i64 %5)
  ret i32 0
}

What I would've expected (and what my compiler outputs):

%TwoInt = type { i32, i32 }
%EightChar = type { i8, i8, i8, i8, i8, i8, i8, i8 }

define void @doTwoInt(%TwoInt %a) {
  %1 = alloca i32
  %2 = alloca %TwoInt
  store %TwoInt %a, %TwoInt* %2
  ret void
}

define void @doEightChar(%EightChar %a) {
  %1 = alloca i32
  %2 = alloca %EightChar
  store %EightChar %a, %EightChar* %2
  ret void
}

define i32 @main() {
  %1 = alloca i32
  %ti = alloca %TwoInt
  %fc = alloca %EightChar
  %2 = load %TwoInt* %ti
  call void @doTwoInt(%TwoInt %2)
  %3 = load %EightChar* %fc
  call void @doEightChar(%EightChar %3)
  ret i32 0
}

Solution

Two months ago there was a thread on llvmdev: [LLVMdev] "Struct parameters being converted to other types" by Jaymie Strecker, Jan 14 19:50:04 CST 2013. She ran into a similar problem: "When a function with a struct parameter or return type is compiled with clang -O0 -emit-llvm, the resulting bitcode varies greatly depending on the type of struct." Depending on the struct, clang turned it into a pointer or a vector, passed it as several doubles, or merged it into a single i64. Anton Korobeynikov replied on Jan 15 00:41:43 CST 2013:

The struct is lowered to something which corresponds to C/C++ ABI on your platform for passing the struct in proper way.

So clang passes structs the way your OS, libraries, and native compiler do. This is done so that the modules you build will work with local libraries. Your compiler project is using the wrong ABI.

You can either fix your compiler project to follow the platform ABI (coercing structs the way clang does), or define your own ABI and tune clang to use it.
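Taking the first option, the frontend fix is to emit the coerced signature whenever the platform ABI calls for it. A hypothetical sketch of the caller-side IR for doTwoInt on x86-64, in the same pre-opaque-pointer textual syntax used above (the declare and load mirror what clang emits; this is an illustration, not the full ABI classification logic):

```llvm
%struct.TwoInt = type { i32, i32 }

declare void @doTwoInt(i64)        ; coerced signature, matching clang

define void @caller() {
  %ti = alloca %struct.TwoInt, align 8
  %p  = bitcast %struct.TwoInt* %ti to i64*
  %v  = load i64* %p               ; reinterpret the struct's 8 bytes as i64
  call void @doTwoInt(i64 %v)
  ret void
}
```

The hard part in general is deciding, per platform and per struct, which coercion applies; that is exactly the classification the platform psABI document specifies and clang's target-specific ABI code implements.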

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow