Question

This might look similar to ARM and NEON can work in parallel?, but it isn't; I have a different issue (it may be a problem with my understanding):

In the protocol stack, the checksum is computed on the GPP; I am now handing that task over to NEON inside a function.

Here is the checksum function that I have written for NEON, posted on Stack Overflow: Checksum code implementation for Neon in Intrinsics

Now, suppose this function is called from Linux:

ip_csum() {
  …
  …
  csum = do_csum(); // function call from ARM
  …
  …
}


do_csum() {
  …
  …
  // NEON-optimised code
  …
  …
  // returns the final checksum to ip_csum/Linux/ARM
}
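For reference, here is a portable scalar sketch of what do_csum has to produce: the standard Internet ones'-complement sum in the style of RFC 1071. The function name is hypothetical and this is not the NEON code from the link; any vectorised version must return the same value.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical scalar reference for do_csum: sum the buffer as 16-bit
 * big-endian words into a 32-bit accumulator, fold the carries back into
 * the low 16 bits, and return the ones' complement of the result. */
static uint16_t do_csum_scalar(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {                       /* sum 16-bit words */
        sum += (uint32_t)buf[0] << 8 | buf[1];
        buf += 2;
        len -= 2;
    }
    if (len)                                /* odd trailing byte */
        sum += (uint32_t)buf[0] << 8;
    while (sum >> 16)                       /* fold carries into low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;                  /* ones' complement */
}
```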

In this case, what happens to the ARM core while NEON is doing the calculations? Does the ARM sit idle, or does it move on to other operations?

As you can see, do_csum is called and we are waiting on its result (or that is what it looks like).

NOTE:

  1. Speaking in terms of the Cortex-A8.
  2. do_csum, as you can see from the link, is coded with intrinsics.
  3. Compilation uses the GNU toolchain.
  4. It would be good if you also covered multi-threading or any other concept that comes into the picture when these interactions happen.

Questions:

  1. Does the ARM sit idle while NEON is doing its operations? (in this particular case)
  2. Or does it shelve the current ip_csum-related code and take up another process/thread until NEON is done? (I'm almost clueless as to what happens here)
  3. If it sits idle, how can we make the ARM work on something else until NEON is done?

Solution

(Image from TI Wiki Cortex A8)

The ARM (or rather the integer pipeline) does not sit idle while NEON instructions are processing. In the Cortex-A8, the NEON unit sits at the "end" of the processor pipeline: instructions flow through the pipeline, ARM instructions are executed at the "beginning", and NEON instructions are executed at the end. Every clock pushes the instructions down the pipeline.

Here are some hints on how to read the diagram above:

  • Every cycle, if possible, the processor fetches an instruction pair (two instructions).
  • Fetching is pipelined so it takes 3 cycles for the instructions to propagate into the decode unit.
  • It takes 5 cycles (D0-D4) for the instruction to be decoded. Again, this is all pipelined, so it affects the latency but not the throughput; more instructions keep flowing through the pipeline where possible.
  • Now we reach the execute/load store portion. NEON instructions flow through this stage (but they do that while other instructions are possibly executing).
  • We get to the NEON portion, if the instruction fetched 13 cycles ago was a NEON instruction it is now decoded and executed in the NEON pipeline.
  • While this is happening, integer instructions that followed that instruction can execute at the same time in the integer pipeline.
  • The pipeline is a fairly complex beast, some instructions are multi-cycle, some have dependencies and will stall if those dependencies are not met. Other events such as branches will flush the pipeline.

If you are executing a sequence that is 100% NEON instructions (which is pretty rare, since there are usually some ARM registers, control flow etc. involved), then there is some period where the integer pipeline isn't doing anything useful. Most code will have the two executing concurrently for at least some of the time, while cleverly engineered code can maximize performance with the right instruction mix.
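As a sketch of that instruction mix in C intrinsics (hypothetical names; this is not the asker's actual do_csum, and the NEON path is guarded so the code also builds as plain scalar C on other targets):

```c
#include <stddef.h>
#include <stdint.h>

#if defined(__ARM_NEON) || defined(__ARM_NEON__)
#include <arm_neon.h>
#endif

/* Hypothetical sketch: sum a buffer of 16-bit words into a 32-bit
 * accumulator. On a NEON-capable build, the wide adds occupy the NEON
 * pipeline while the ARM integer pipeline handles the pointer and counter
 * updates of the loop; elsewhere the scalar loop does all the work. */
static uint32_t sum16(const uint16_t *p, size_t n)
{
    uint32_t sum = 0;
#if defined(__ARM_NEON) || defined(__ARM_NEON__)
    uint32x4_t acc = vdupq_n_u32(0);
    while (n >= 4) {                        /* NEON: 4 widening adds per step */
        acc = vaddw_u16(acc, vld1_u16(p));  /* ARM: p/n updates issue alongside */
        p += 4;
        n -= 4;
    }
    /* reduce the 4 lanes back to a scalar */
    sum += vgetq_lane_u32(acc, 0) + vgetq_lane_u32(acc, 1)
         + vgetq_lane_u32(acc, 2) + vgetq_lane_u32(acc, 3);
#endif
    while (n--)                             /* scalar tail (or full fallback) */
        sum += *p++;
    return sum;
}
```

On a Cortex-A8 build, the loop body keeps both parts of the pipeline busy: the widening add flows to the NEON unit while the pointer/counter bookkeeping and the branch execute in the integer pipeline, which is the concurrent execution described above.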

The online tool Cycle Counter for Cortex A8 is great for analyzing the performance of your assembly code; it reports what is executing in which units and what is stalling.

OTHER TIPS

In the Application Level Programmers' Model, you can't really distinguish between the ARM and NEON units.

While NEON is a separate hardware unit (available as an option on Cortex-A series processors), it is the ARM core that drives it in a tight fashion. It is not a separate DSP with which you can communicate asynchronously.

You can write better code by fully utilizing the pipelines of both units, but this is not the same as having a separate core.

The NEON unit is there because it can do some operations (SIMD) much faster than the ARM unit at a low frequency.

This is like having a friend who is good at math: whenever you have a hard question, you can ask him. While waiting for the answer you can do some small things, like deciding "if the answer is this, I'll do this; if not, I'll do that instead", but if you depend on that answer to go on, you need to wait for it before going further. You could calculate the answer yourself, but asking will be much faster even including the communication time between the two of you, compared to doing all the math yourself. I think you can even extend this analogy: you also need to buy lunch for that friend (energy consumption), but in many cases it is worth it.

Anyone who says the ARM core can do other things while the NEON core is working on its stuff is talking about instruction-level parallelism, not anything like task-level parallelism.

The ARM is not "idle" while NEON operations are executed; rather, it controls them.
To fully use the power of both units, one can carefully plan an interleaved sequence of operations:

loop:
    SUBS     r0, r0, #1            @ ARM operation: decrement loop counter, set flags
    VADD.I16 q0, q0, q1            @ NEON operation
    LDR      r3, [r1, r2, LSL #2]  @ ARM operation
    VLD1.32  {d0}, [r1]!           @ NEON operation using an ARM register
    BNE      loop                  @ ARM operation controlling the flow of both units

The ARM Cortex-A8 can execute up to 2 instructions per clock cycle. If both of them are independent NEON operations, there is no use putting an ARM instruction in between. On the other hand, if one knows that the latency of a VLD (load) is large, one can place many ARM instructions between the load and the first use of the loaded value. In each case the combined usage must be planned in advance and interleaved.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow