Question

I understand that writing anything in assembly, or adding assembly to any program harms its portability. But, how bad? I mean, basically all PC's are x86 or x64 these days, right? So, if I embed assembly into a C program, why wouldn't it still compile no matter where it went?

Does this notion of un-portability just refer to when you really dig in to the specific quirks of a specific processor, to squeeze out every drop of performance from a piece of code?

The PC game "RollerCoaster Tycoon" was written almost entirely in assembly language, if I remember correctly. So... how un-portable could it really be?

Solution

Besides the processor itself, there are, of course, always other considerations: what are the calling conventions on your target platform? How are struct values passed to other (say, API) functions? Which registers may be clobbered by the callee, and which are guaranteed to be preserved for the caller? How do you make a system call? What memory layout does the OS prepare for you at process start?
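
To make that concrete, here is a minimal sketch assuming GCC or Clang on x86-64 Linux; the syscall number, the argument registers, and the clobbered registers are all specific to that one platform, which is exactly the kind of ABI knowledge the questions above are about:

    /* Raw write(2) on Linux/x86-64 only: the syscall number goes in
     * rax, arguments in rdi, rsi, rdx, and the kernel clobbers rcx
     * and r11. None of this holds on other OSes or CPUs. */
    static long raw_write(int fd, const void *buf, unsigned long len)
    {
        long ret;
        __asm__ volatile (
            "syscall"
            : "=a" (ret)              /* return value comes back in rax */
            : "a" (1),                /* __NR_write == 1 on x86-64 Linux */
              "D" ((long) fd),        /* 1st argument: rdi */
              "S" (buf),              /* 2nd argument: rsi */
              "d" (len)               /* 3rd argument: rdx */
            : "rcx", "r11", "memory"  /* clobbered by the syscall */
        );
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hello\n", 6);
        return 0;
    }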

OTHER TIPS

When porting assembly, there is also the problem of the ABI, which varies from OS to OS. Porting a C program from Unix to Windows (or even from Linux to OpenBSD) may be a straightforward recompilation, but for an assembly program you may find that some callee-saved registers become caller-saved, or that floating-point parameters are passed differently.
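
As a sketch of how subtle this gets (assuming GCC-style inline assembly on x86-64): rsi and rdi are caller-saved scratch registers under the System V AMD64 ABI used by Linux and the BSDs, but callee-saved under the Microsoft x64 ABI, so hand-written code that trashes them without saving them works on one platform and silently corrupts the caller's state on the other:

    /* rsi and rdi: scratch under System V AMD64 (Linux, BSD, macOS),
     * callee-saved under Microsoft x64. Listing them as clobbers lets
     * the compiler save and restore them where the ABI requires it. */
    static void scramble(void)
    {
        __asm__ volatile (
            "xor %%rsi, %%rsi\n\t"  /* legal scratch use on System V... */
            "xor %%rdi, %%rdi"      /* ...but must be preserved on Win64 */
            : /* no outputs */
            : /* no inputs */
            : "rsi", "rdi"          /* declare the damage either way */
        );
    }

    int main(void)
    {
        scramble();
        return 0;
    }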

And this is not only theoretical; see, for instance, the different uses of register r2 in the PowerPC versions of Linux and Mac OS X. In practice the problem may not be too bad: AMD, for instance, published a "recommended" ABI at the same time as its 64-bit instruction set.

If you think "PC == Windows", then adding assembler to a C program doesn't hurt much. If you step into the Unix world, you'll encounter lots of different CPUs: PowerPC in the PS3 and Xbox 360, in older Macs, and in many powerful servers. Many small devices run ARM. And embedded devices (which account for the vast majority of installed CPUs today) usually use custom CPUs with their own instruction sets.

So while most PCs today can run Intel code, PCs account for only a small fraction of all the CPUs out there.

That said, x86 code is not always the same, either. There are two main reasons to write assembly: you need to access special features (like interrupt registers), or you want to optimize the code. In the first case, the code is fairly portable. In the latter case, each CPU is a little bit different. Some of them have SSE, which was soon extended by SSE2, then SSE3 and SSE4; AMD has extensions of its own (3DNow!, for example), and soon there will be AVX. And even at the opcode level, the same instruction can have quite different timing on different versions of CPUs.
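
One common way out, sketched here assuming GCC or Clang on x86 (whose __builtin_cpu_supports builtin does a CPUID-backed runtime check), is to probe for features at run time and dispatch, instead of assuming every "x86" supports the same extensions:

    #include <stdio.h>

    /* Runtime dispatch sketch (GCC/Clang on x86): probe the actual
     * CPU before committing to hand-tuned SIMD code paths. */
    int main(void)
    {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx"))
            puts("take the AVX-optimized path");
        else if (__builtin_cpu_supports("sse2"))
            puts("take the SSE2 path");
        else
            puts("fall back to plain C");
        return 0;
    }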

To make things worse, some opcodes have bugs that are only fixed in specific steppings of a CPU. On top of that, some opcodes are much faster on certain versions of CPUs than on others.

Next, you'll need to interface this assembly code with the C part, which usually means dealing with ABI issues all over again: how arguments are passed, which registers must be preserved, and how assembly symbol names map to C identifiers.
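
For a flavor of what that interfacing looks like, here is a minimal sketch assuming GCC on x86-64 Linux; asm_sum is a made-up name, and both the argument registers and the unprefixed symbol name are System V specifics (macOS, for instance, would want the symbol spelled _asm_sum):

    #include <stdio.h>

    /* A tiny routine written in assembly via GCC's file-scope asm,
     * following the System V AMD64 convention: arguments in rdi and
     * rsi, result in rax. Windows x64 would use rcx and rdx instead. */
    __asm__(
        ".globl asm_sum\n"
        "asm_sum:\n"
        "    lea (%rdi, %rsi), %rax\n"  /* rax = a + b */
        "    ret\n"
    );

    extern long asm_sum(long a, long b);

    int main(void)
    {
        printf("%ld\n", asm_sum(40, 2));  /* prints 42 */
        return 0;
    }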

So you can see that this can become arbitrarily complex.

Assembly is writing instructions directly for a specific processor, which means, yeah, if x86 lives forever, your code is somewhat portable.

But even now, ARM processors are making a comeback (e.g. in next-generation netbooks), and I am not sure processors won't change in the next few years.

I would say assembly language is by design not portable.

Licensed under: CC-BY-SA with attribution