
[SOLVED] AXI burst writes - how is it done by embedded sw developers or what does it mean from the sw point of view?

I am a logic design engineer and this question is bugging me a bit.

Consider the following architecture:
ARM Core with the ability to run some embedded C code (M) <--> (64bits wide data bus, S) AXI4 Interconnect (32 bits wide data bus, M) <--> (S)AXI2AHB-L bridge(M) <--> (S)AHB-L Slave with a FIFO

In the design, the synthesizable part is everything from the AXI4 Interconnect down to the AHB-L Slave. In the testbench, the prime component is then an AXI4 full Master that can read/write data from/to the FIFO either in bursts or through single accesses. Note that a bus-width conversion is taking place at the AXI4 Interconnect. Everything is working in simulation and I fully understand how the design is working.

Now this design must also work on the FPGA for the target design. So some embedded software engineer develops the firmware to read/write data from/to the FIFO.

Here comes my confusion: in the testbench, the AXI4 Master fills the FIFO with a burst write, because that is the most efficient way to do it. But I do not understand how such a burst write or read is implemented in firmware.

I mean, do I just tell the firmware guy to fill the FIFO locations from say 0x____0000 to 0x____0100? I understand how single-word accesses are implemented using C on a processor system. But I do not understand how to make sure the embedded engineer implements the FIFO write mechanism such that the data is written in bursts. Then again, the AXI4 Master data bus is 64 bits wide and the Slave at the last mile has a 32-bit data bus.

Somebody care to explain?

Thanks n regards.

The embedded processor systems I've interfaced to have all had either DMA controllers built into the processor/microcontroller or have some kind of high speed peripheral bus that supports burst transfers. Otherwise a DMA controller was implemented in the FPGA to perform burst transfers from a shared memory resource with the processor/uC.

I haven't worked with an ARM core embedded system (besides there are a lot of ARM cores with different feature sets), so don't know if there is some special mechanism (outside of DMA) to perform bursts over AXI.
@ads-ee ,
I have taken the ARM core just as an example; it can be any uC that has a high-speed interface with the FPGA, e.g. PCIe.
So you mean to say, generally speaking, a DMA mechanism is involved in such cases?

Yes - DMA Memory-to-Memory, Peripheral-to-Memory or Memory-to-Peripheral.

For example, on the Zynq PS side (firmware), the address map is known from Vivado and DMA is used to perform AXI burst transfers under the hood (transfer N bytes from address X to address Y with a specified width, and with addresses incremented or not). The DMA controller is indeed an AXI4 Master connected to the AXI4 Interconnect or directly to one slave.

When I was working with STM32 MCUs, I discovered from the datasheet that DMA transfers use the AHB bus, and the DMA controllers are implemented as AHB masters that use single or burst transfers (depending on the DMA registers' configuration). The AHB masters are connected to the AHB bus arbiter there, which performs a similar task to the AXI4 Interconnect.