Question

  • How can I estimate the irq latency on ARM processor?
  • What is the definition for irq latency?

Solution 3

Mats and Nemanja give some good information on interrupt latency. There are two more issues I would add to the three given by Mats.

  1. Other simultaneous/near simultaneous interrupts.
  2. OS latency added due to masking interrupts. Edit: This is in Mats's answer, just not explained as much.

If a single core is processing interrupts, then when multiple interrupts occur at the same time, usually there is some resolution priority. However, interrupts are often disabled in the interrupt handler unless priority interrupt handling is enabled. So, for example, if a slow NAND flash IRQ is signaled and its handler is running and an Ethernet interrupt then occurs, the Ethernet interrupt may be delayed until the NAND flash IRQ handler finishes. Of course, if you have priority interrupts and you are concerned about the NAND flash interrupt, then things can actually be worse if the Ethernet is given priority.

The second issue is when mainline code clears/sets the interrupt flag. Typically this is done with something like,

mrs   r9, cpsr             @ read the current program status register
biceq r9, r9, #PSR_I_BIT   @ clear the IRQ mask bit (condition comes from a prior test)
msreq cpsr_c, r9           @ write back; without this the mask does not actually change

Check arch/arm/include/asm/irqflags.h in the Linux source for many macros used by mainline code. A typical sequence looks like this,

lock interrupts;
manipulate some flag in struct;
unlock interrupts;

A very large interrupt latency can be introduced if that struct results in a page fault. The interrupts will be masked for the duration of the page fault handler.
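
As a rough C sketch of that sequence (local_irq_save/local_irq_restore are the real Linux macros built on the irqflags.h assembly above; the struct and flag names here are made up for illustration),

#include <linux/irqflags.h>

struct my_dev {
    unsigned int status;        /* hypothetical driver state */
};

static void set_status_bit(struct my_dev *dev)
{
    unsigned long flags;

    local_irq_save(flags);      /* mask IRQs on this CPU and save the old state */
    dev->status |= 0x1;         /* touching the struct here can fault, extending the masked window */
    local_irq_restore(flags);   /* restore the mask; any pending IRQ is taken now */
}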

The Cortex-A9 can avoid this by never masking interrupts at all: it supports lock-free updates via the ldrex/strex exclusive-access instructions, which are better than the older swp/swpb. This second issue is much like the IRQ latency due to ldm/stm type instructions (these are just the longest instructions to run).
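
For instance, a lock-free version of the flag update can be sketched with the GCC __atomic builtins, which on ARMv7 cores like the Cortex-A9 compile down to an ldrex/strex retry loop (the flag and variable names are again made up),

#include <stdint.h>

#define MY_FLAG (1u << 3)       /* hypothetical flag bit */

static uint32_t device_flags;   /* shared with the IRQ handler */

static void set_flag_lockfree(void)
{
    /* atomic OR with no cpsr masking: IRQ latency is bounded by the
       exclusive-access retry loop, not by a critical section */
    __atomic_fetch_or(&device_flags, MY_FLAG, __ATOMIC_RELAXED);
}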

Finally, a lot of the technical discussions assume zero-wait-state RAM. In practice the cache will likely need to be filled, and if you know your memory data rate (maybe 2-4 machine cycles per access), then the worst-case code path gets multiplied by that factor.
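
A back-of-the-envelope sketch of that multiplication in C (all numbers are assumptions; the 24-cycle figure is the ARM9E-S worst case quoted further down),

#include <stdio.h>

/* core worst-case entry cycles scaled by a per-access memory stall factor */
static unsigned worst_case_irq_cycles(unsigned core_cycles, unsigned stall_factor)
{
    return core_cycles * stall_factor;
}

int main(void)
{
    /* 24 core cycles, assuming 3 machine cycles per memory access */
    printf("rough worst case: %u cycles\n", worst_case_irq_cycles(24, 3));
    return 0;
}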

Whether you have SMP interrupt handling, priority interrupts, and lock-free mainline code depends on your kernel configuration and version; these are issues for the OS. Other issues are intrinsic to the CPU/SoC interrupt controller, and to the interrupt code itself.

Other tips

  1. Interrupt request (IRQ) latency is the time it takes for an interrupt request to travel from the source of the interrupt to the point at which it is serviced.

  2. Because different interrupts come from different sources via different paths, their latency obviously depends on the type of the interrupt. You can find a table with very good explanations of latency (both values and causes) for particular interrupts on the ARM site.

You can find more information about it in ARM9E-S Core Technical Reference Manual:

4.3 Maximum interrupt latency

If the sampled signal is asserted at the same time as a multicycle instruction has started its second or later cycle of execution, the interrupt exception entry does not start until the instruction has completed.

The longest LDM instruction is one that loads all of the registers, including the PC.

Counting the first Execute cycle as 1, the LDM takes 16 cycles.

• The last word to be transferred by the LDM is transferred in cycle 17, and the abort status for the transfer is returned in this cycle.

• If a Data Abort happens, the processor detects this in cycle 18 and prepares for the Data Abort exception entry in cycle 19.

• Cycles 20 and 21 are the Fetch and Decode stages of the Data Abort entry respectively.

• During cycle 22, the processor prepares for FIQ entry, issuing Fetch and Decode cycles in cycles 23 and 24.

• Therefore, the first instruction in the FIQ routine enters the Execute stage of the pipeline in stage 25, giving a worst-case latency of 24 cycles.

and

Minimum interrupt latency

The minimum latency for FIQ or IRQ is the shortest time the request can be sampled by the input register (one cycle), plus the exception entry time (three cycles). The first interrupt instruction enters the Execute pipeline stage four cycles after the interrupt is asserted.

There are three parts to interrupt latency:

  1. The interrupt controller picking up the interrupt itself. Modern processors tend to do this quite quickly, but there is still some time between the device signalling its pin [or whatever the method of signalling interrupts is] and the interrupt controller picking it up - even if it's only 1 ns, it's time.
  2. The time until the processor starts executing the interrupt code itself.
  3. The time until the actual code supposed to deal with the interrupt is running - that is, after the processor has figured out which interrupt, and what portion of driver-code or similar should deal with the interrupt.

Normally, the operating system won't have any influence over 1. The operating system certainly influences 2. For example, an operating system will sometimes disable interrupts [to avoid an interrupt interfering with some critical operation, such as modifying something to do with interrupt handling, scheduling a new task, or even executing in an interrupt handler]. Some operating systems may disable interrupts for several milliseconds, whereas a good realtime OS will not have interrupts disabled for more than microseconds at the most.

And of course, the time from when the first instruction in the interrupt handler runs until the actual driver code or similar is running can be quite a few instructions, and the operating system is responsible for all of them.
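
As a very rough sketch of that dispatch step in C (the controller register address and NR_IRQS value are made up; a real kernel's generic IRQ layer does considerably more work here, which is exactly the latency being described),

#include <stdint.h>

#define NR_IRQS 32

typedef void (*irq_handler_t)(int irq);

static irq_handler_t handlers[NR_IRQS];     /* registered by drivers */
static volatile uint32_t *const INTC_PENDING =
        (volatile uint32_t *)0x48200000;    /* hypothetical controller register */

void top_level_irq(void)
{
    uint32_t pending = *INTC_PENDING;       /* which interrupt lines fired? */

    for (int irq = 0; irq < NR_IRQS; irq++) {
        if ((pending & (1u << irq)) && handlers[irq])
            handlers[irq](irq);             /* the driver code finally runs */
    }
}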

For real-time behaviour, it's often the "worst case" that matters, whereas in non-real-time OSes the overall execution time is much more important; so if it's quicker not to re-enable interrupts for a few hundred instructions, because it saves several instructions of "enable interrupts, then disable interrupts", a Linux or Windows type OS may well choose to do so.

Licensed under: CC-BY-SA with attribution