
[SOLVED] RGMII connections cross over to achieve a 1:1 Ethernet hub

Status
Not open for further replies.

dpaul

Hi,

I have a situation in which I simply need to route traffic from one Ethernet port to another. I want to achieve this by connecting the RGMII tx-rx of one port to the RGMII rx-tx of the other port inside the FPGA. In other words, a functioning 1:1 hub (if I am not mistaken) is desired.

Environment: Vivado 2015.4 + AC701 board + extension board = Ethernet FMC (from Opsero.com), which has 4 Marvell 88E1510 RGMII-capable PHYs with 4 RJ45 sockets and of course an FMC connector.

[Attachment: bd.jpg]

It is very simple Verilog code. If SW2 DIP sw0 on the AC701 board is 1, the MUX allows the crossover connection as shown; otherwise there is no crossover connection. I have also generated RESETn from the FPGA logic with the specific timings required to reset the Marvell PHYs. In principle, it looks to me like data transmission should be possible from one port to the other.

When I implement the above concept and program the bit-stream, I see the PHYs are active (the LEDs at the RJ45 connectors blink). But my PC's Ethernet card is not able to connect to the outside world / the default gateway. It shows a "Limited Connectivity" message (a few minutes after showing "not connected").

Is this a valid RGMII connection, or am I making a mistake? If it is valid, why can't my PC's Ethernet card connect to my default gateway?

More info can be provided on request.
 

I believe both rx and tx are differential pairs and there should be 4 wires. It may not work for gigabit cards that use all eight wires.
 

I presume you also did a crossover connection of RGMII RX and TX clocks? Is the default PHY configuration suitable or would it need to be changed through the management interface?
 

Sure I did!
Code:
    assign mux0_td_o     = (phy_loopback_en_i == 1'b1) ? phy1_rgmii_rd_i     : temac0_rgmii_td_i;
    assign mux0_tx_ctl_o = (phy_loopback_en_i == 1'b1) ? phy1_rgmii_rx_ctl_i : temac0_rgmii_tx_ctl_i;
    assign mux0_txc_o    = (phy_loopback_en_i == 1'b1) ? phy1_rgmii_rxc_i    : temac0_rgmii_txc_i;

    assign mux1_td_o     = (phy_loopback_en_i == 1'b1) ? phy0_rgmii_rd_i     : temac1_rgmii_td_i;
    assign mux1_tx_ctl_o = (phy_loopback_en_i == 1'b1) ? phy0_rgmii_rx_ctl_i : temac1_rgmii_tx_ctl_i;
    assign mux1_txc_o    = (phy_loopback_en_i == 1'b1) ? phy0_rgmii_rxc_i    : temac1_rgmii_txc_i;

I have read the Marvell 88E1510 PHY's datasheet, and by default (no MDIO config) it should run in Gigabit mode.

First of all, I would like to know if the concept of an RGMII crossover is correct or not.
I will also recheck the timing of the RESETn signal feeding the PHYs (I am generating this from the FPGA).
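
For reference, this is roughly how I generate the RESETn pulse. This is a hedged sketch only: the module and signal names are illustrative, the clock is assumed to be a free-running 125 MHz, and the 10 ms assertion time is an assumption that should be checked against the 88E1510 datasheet's actual minimum:

```verilog
// Hedged sketch: hold the PHY's active-low reset asserted for ~10 ms after
// configuration, assuming a free-running 125 MHz clock. The 10 ms figure is
// an assumption -- verify against the 88E1510 datasheet.
module phy_reset_gen #(
    parameter integer RESET_CYCLES = 1_250_000  // 10 ms @ 125 MHz
) (
    input  wire clk_i,         // free-running clock
    output reg  phy_resetn_o   // active-low reset to the PHY
);
    reg [$clog2(RESET_CYCLES):0] cnt = 0;

    always @(posedge clk_i) begin
        if (cnt < RESET_CYCLES) begin
            cnt          <= cnt + 1;
            phy_resetn_o <= 1'b0;  // keep reset asserted
        end else begin
            phy_resetn_o <= 1'b1;  // release reset; PHY latches config pins here
        end
    end
endmodule
```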
 

@dpaul, perhaps.

RGMII is a basic, but interesting standard. The clock and data are edge-aligned. In order to correctly clock the data, the clock edge needs to be delayed by an appropriate amount. This was originally intended to be done by adding a buffer or a long trace on the PCB. Because this is annoying, PHYs eventually added the ability to delay the RX clock as well as the TX clock. This also leads to problems, because two quarter-cycle delays will place the clock edges at the data transitions once again!

For FPGA applications, where data is consumed by the FPGA, there are additional concerns. The BUFG buffers actually have a very significant delay, which for some devices exceeds 1/2 cycle. (BUFIO/BUFR don't have this issue, nor does a BUFG+PLL/MMCM/DCM set up as a zero-delay buffer.) As a result, some FPGA applications will want to delay the _data_ to generate a correctly delayed _clock_.
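
As one rough illustration of doing the delay inside the FPGA, a 90-degree-shifted capture clock can be derived from the incoming RGMII clock with an MMCM. This is a hedged sketch for a 7-series part; the port/signal names and the 125 MHz input frequency are assumptions, and in a real design the outputs would typically go through BUFGs:

```verilog
// Hedged sketch: generate 0-degree and 90-degree copies of the RGMII RX
// clock with a 7-series MMCM, so DDR data can be sampled away from its
// transitions. Assumes a 125 MHz input; signal names are illustrative.
wire clkfb, rxc_0, rxc_90;

MMCME2_BASE #(
    .CLKIN1_PERIOD    (8.000),  // 125 MHz input clock
    .CLKFBOUT_MULT_F  (8.000),  // VCO = 1000 MHz
    .CLKOUT0_DIVIDE_F (8.000),  // 125 MHz, 0 degrees
    .CLKOUT1_DIVIDE   (8),
    .CLKOUT1_PHASE    (90.000)  // 125 MHz, shifted 90 degrees
) mmcm_rx (
    .CLKIN1   (rgmii_rxc),
    .CLKFBIN  (clkfb),
    .CLKFBOUT (clkfb),
    .CLKOUT0  (rxc_0),
    .CLKOUT1  (rxc_90),
    .RST      (1'b0),
    .PWRDWN   (1'b0),
    .LOCKED   ()
);
```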

Finally, Ethernet does not assume exact clock frequencies. This means the data rates at the FPGA and at each port will be slightly different (unless both PHYs have the same clock source).

If you want to go this route, you should try to think about the constraints required.

The reliable solution is to use an RGMII-GMII core (code is in some of the coregens) and correctly constrain the inputs/outputs in the UCF/XDC file. I don't think the output constraints made sense, in that I don't think I could get automatic checking; I recall just setting them up and using them to set the output delays. This code also has an elastic buffer to allow operation with different clock rates.
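
For a sense of what the input constraints look like in Vivado, here is a hedged XDC sketch. The port names and delay values are assumptions for illustration; real numbers must come from the PHY datasheet, the chosen delay mode, and the board trace lengths:

```tcl
# Hedged sketch of RGMII RX constraints (Vivado XDC), assuming a 125 MHz
# RX clock and the PHY's internal clock delay enabled. Port names and
# delay values are illustrative only.
create_clock -period 8.000 -name rgmii_rxc [get_ports phy0_rgmii_rxc]

# DDR data: constrain against both clock edges
set_input_delay -clock rgmii_rxc -max 2.8 [get_ports {phy0_rgmii_rd[*] phy0_rgmii_rx_ctl}]
set_input_delay -clock rgmii_rxc -min 1.2 [get_ports {phy0_rgmii_rd[*] phy0_rgmii_rx_ctl}]
set_input_delay -clock rgmii_rxc -max 2.8 -clock_fall -add_delay [get_ports {phy0_rgmii_rd[*] phy0_rgmii_rx_ctl}]
set_input_delay -clock rgmii_rxc -min 1.2 -clock_fall -add_delay [get_ports {phy0_rgmii_rd[*] phy0_rgmii_rx_ctl}]
```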

These cores may use excessive clocking resources, but it is easy to correct that as long as the constraints are appropriately modified.

edit -- Also, there may be two RGMII options for your PHY: one with TX delay and one without. However, the internals of the FPGA play a role in all of this. As a result, I prefer to set the PHY into basic RGMII mode and do all of the data-clock manipulation in the FPGA.
 

Thanks vGoodtimes, I will try out the RGMII<-->GMII<-->RGMII design. It might be a problem of precise timing.

I want to put together a simple FPGA-based design that has two Gigabit Ethernet ports. These ports should behave such that data received on one port is transmitted on the other, and vice versa (the PHYs support RGMII v2.0). Hence I came up with the simple RGMII crossover design described in #1.
 
