I2C not working properly


rmachado

Hi everyone,

I have a very basic I2C slave ASIC with a bank of registers that I am currently testing. The ASIC was produced just to test the I2C IP (to make sure the design flow was correct and that the IP was ready to be used in more complex projects). According to the simulations and tests performed before tape-out, the ASIC should always work, but in reality it only works in one specific scenario.
In I2C, when you want to write a zero on the SCL and SDA lines you have to force them low, but when you want to write a one, you just release the line and let the pull-up do the job.
The problem is that my I2C ASIC only works properly if both transitions on the SCL and SDA lines are forced. If I force both rising and falling transitions on the SDA and SCL lines (in bit-bang mode), the ASIC works as it should. However, if I only force the falling transitions and let the pull-up handle the rising transitions, the ASIC does not work properly.
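(For context, open-drain bit-banging on an Arduino usually looks something like the sketch below; the pin numbers and helper names are hypothetical, not the actual test code:)

Code:
    // Open-drain bit-bang helpers (illustrative sketch; pin numbers hypothetical).
    const uint8_t SDA_PIN = 2;
    const uint8_t SCL_PIN = 3;

    void busLow(uint8_t pin) {        // write a zero: actively sink the line
      digitalWrite(pin, LOW);
      pinMode(pin, OUTPUT);
    }

    void busRelease(uint8_t pin) {    // write a one: go high-impedance and
      pinMode(pin, INPUT);            // let the external pull-up raise the line
    }

    void busForceHigh(uint8_t pin) {  // actively driving high: not I2C-compliant,
      digitalWrite(pin, HIGH);        // but this is the mode in which the ASIC
      pinMode(pin, OUTPUT);           // happens to work
    }

    void setup() {
      busRelease(SDA_PIN);            // bus idles high via the pull-ups
      busRelease(SCL_PIN);
    }

    void loop() {}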
My guess is that this is probably some issue with parasitics. I started the tests with 10k pull-up resistors and the ASIC was not working properly, so I repeated the tests with a lower resistance (4.7k) to increase the pull-up strength, but the ASIC still does not work as it should.

I identified some strange scenarios during my tests. Below are some images of the error cases and an explanation of what I think is the problem (the SCL signal is shown in yellow, SDA in blue).

[Scope capture: IMG_20190809_171833.jpg]

The problem can be seen on the 8th falling edge of the SCL signal. Both SCL and SDA go low at the same time, which can lead to a start condition, restarting the system. The problem is, the only way for SDA to go low in that scenario is if the slave is pulling the line low, since the master is still sending the slave address packet. Immediately after, a spike can be seen; this corresponds to the time when the master forces the line low to send the read/write bit. Also in this example, the SDA line is held low at the end of the communication. This is definitely being done by the slave ASIC, and I really don't know why, because the state machine of the I2C slave only forces the line low when it needs to send an ACK (and that only occurs for one I2C clock cycle) or when it is in write-to-master mode (which is impossible, because for that a one would have to be sent on the last bit of the slave address packet).

I also have a scenario where the slave address (0x27) is sent correctly but I still receive a NACK from the slave.

[Scope capture: IMG_20190809_172459.jpg]

Any idea on the possible root of the problem, or any tests that I could run to learn more about it?

Thanks in advance
 

Hi,

Never force HIGH.

I wouldn't even try it, because short-circuit currents may destroy a device.

Klaus
 

I know that forcing high is not a good idea, but the circuit has only one master and one slave, and I am only testing write-to-slave operations.
It was just to try to understand what was happening.
Also, I reduced the pull-up resistors to 1.8k and, in bit-bang mode, it works fine with this configuration. I then tested two more pull-up values, 2.2k and 2.7k. With 2.2k it still works, but with 2.7k the errors appear. Nevertheless, even with 1.8k, when I use the Arduino I2C master, only some transactions complete without errors, because the value changes on the SDA line are much closer to the SCL falling edge than in the bit-bang scenario.
So it must be something related to the input capacitance of the I2C pads, right?
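(A rough sanity check, assuming a hypothetical 100 pF of bus-plus-pad capacitance: the 10-90% rise time of an RC-limited line is about 2.2·R·C, so 10k gives ≈ 2.2 µs and 4.7k ≈ 1 µs against the 1000 ns standard-mode limit, while 1.8k ≈ 0.4 µs sits comfortably inside it. The actual capacitance here is unknown, but the trend matches the observation that only the stiffer pull-ups work.)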
 

Hi,

Look at the upper scope picture of post#1.
The voltage of the blue channel sometimes sits somewhere in between HIGH and LOW;
this is when two drivers are active at the same time, causing a short circuit.

Klaus
 

Ok, I understand what you are saying. I did not know that a short circuit would look like that on the scope. That means the slave interface is trying to pull the line down at the wrong time. The slave should only pull the line low on the 9th clock edge, never before. I did the same test in bit-bang mode, but without forcing the lines high, using the pull-up. The scenario is the same, except this time there are no spikes, but sometimes the SDA line still goes low sooner than it should (at the same point as picture 1 in the first post). (I attach a picture of this test at the end of the post.)

Could this mean that my slave is getting more SCL cycles than the ones I am sending? (Maybe due to a long rise time of the SCL signal?)

[Scope capture: IMG_20190809_180912.jpg]
 

Hi,


Some questions come through my mind.
* I expect your ASIC tool to include a simulator. Did you use it?
* And I expect there are ready-to-use I2C slave IPs. Are you trying to re-invent the wheel?
* And why did you do an ASIC ... and no tests with an FPGA (or CPLD), which can be reprogrammed more easily and with some debugging features?

Could this mean that my slave is getting more SCL cycles than the ones I am sending? (Maybe due to a long rise time of the SCL signal?)
Maybe, maybe not, how can we know?
How did you implement the SCK input? Did you add some oversampling, noise filtering....?

Klaus
 

I understand your questions. I did not tell the whole story before I came to the forum asking for help, so you don't know whether I am just another guy searching for an easy answer...


1. I used a simulator to test the design that was sent for fabrication. The ASIC passed all the simulation tests (with SDF timings included, of course).
2. I did not re-invent the wheel; in fact, I was not the one who designed the I2C. I am a PhD student, doing my PhD in a company. They had an I2C IP that was working (at least they told me it was; I never saw it). Recently, they changed the technology they were using and developed a chip with the I2C peripheral in it. Surprise surprise, it was not working. So, because they had some extra space to spare on the last run, they decided to add a standalone module with the I2C and some dummy registers for read/write, just to test the interface alone. Then they asked me if I could look deeper into the problem. So here I am trying to find out what's wrong.
3. The first thing I did when they told me the I2C was not working properly was to take the source code and run it on an FPGA (I used my ZYBO board for that). Everything works fine in the FPGA, which makes it all even stranger.

I was not responsible for the implementation, but as far as I know, the SCK input is just a digital input pad (they did not implement the slave with stall/clock-stretching capabilities), and from the pad the signal goes directly to a buffer, where it is distributed through a clock tree. What do you mean by oversampling?

Thanks for trying to help.
 

How is the reset to the IP core handled? One big difference between ASIC and FPGA is that both Altera and Xilinx have flip-flops that power up in a known state. ASIC flip-flops won't necessarily power up in a known state. Are you sure all registers that have anything to do with controlling the state of the I2C transfer are being reset, or can be forced into a reset?

Was a specific reset test done with the ASIC netlist to verify the design's behaviour coming out of reset? This could explain why the design seems to be driving the SDA line at the wrong time, as it might be in the wrong state or a counter might hold the wrong value.
 

Yes, all the flip-flops are being reset. The system has a global reset pin that forces everything to a known state.
The first thing I tested in the simulations was the reset condition and the system's behavior coming out of reset.
 

Hi,

I understand your questions. I did not tell the whole story before I came to the forum asking for help, so you don't know whether I am just another guy searching for an easy answer...
Thanks for the additional information about your project and how it developed ... it helps to understand you and your situation.
Although it is mostly non-technical information, it motivates people to help ...
It also gives an idea whether you want to rectify the cause of the errors ... or just want to cure the symptom.

******
I was not responsible for the implementation, but as far as I know, the SCK input is just a digital input pad (they did not implement the slave with stall/clock-stretching capabilities), and from the pad the signal goes directly to a buffer, where it is distributed through a clock tree. What do you mean by oversampling?
Although possible, I don't think it's a good idea to treat the I2C SCK as a true clock signal that is distributed via a clock tree.
I'd use a higher-frequency system clock and treat SCK, like SDA, as a usual logic input (output) signal,
with all the usual treatment, like double buffering to avoid metastability.
Often one uses an 8x (or other) oversampling technique (the same applies to U(S)ARTs and other serial interfaces) .. a digital filter "validates" the input signal and suppresses erroneous spikes and ringing effects.

Imagine:
When SCK is treated as a true ASIC clock, all the I2C logic operates at ASIC speed. A glitch (or ringing) in the range of some nanoseconds will be treated as an additional clock edge. This is not what you want, because your I2C becomes unreliable. It will be sensitive to EMC, to wiring, and to all impedance jumps in the traces.
Remember: I2C is a relatively slow interface. No impedance-controlled wiring, no differential signalling, no termination resistors at the ends of the bus; even star wiring is allowed...
Your ASIC, or better said your HDL interface design, needs to handle this situation.
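(To make this concrete, here is a minimal software model of such an input conditioner; an illustrative sketch, not the actual IP: sample the pad with the fast system clock and accept a new level only after three consecutive identical samples.)

Code:
    #include <cstdint>

    // Persistence filter: the output level changes only after 3 identical
    // consecutive samples, so spikes shorter than 3 system-clock ticks can
    // never be seen as an extra SCK edge downstream.
    struct GlitchFilter {
        uint8_t history = 0;      // last 3 raw samples, LSB = newest
        bool    level   = false;  // filtered output

        bool sample(bool raw) {   // call once per system-clock tick
            history = static_cast<uint8_t>(((history << 1) | (raw ? 1u : 0u)) & 0x7u);
            if (history == 0x7u)      level = true;
            else if (history == 0x0u) level = false;
            return level;
        }
    };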

I'd say a glitch on SCK or SDA of 100 ns should not cause an error (depends on the expected I2C speed).
And sadly, most simulations do not test for those "hardware related" situations.

But still it's not clear whether this is the problem at all.

An idea: use a 100R / 100pF low-pass filter at SCK and SDA, very close to the ASIC IO.
This should suppress HF.
Let's see what happens.
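(For reference: 100 Ω and 100 pF give a corner frequency of 1/(2πRC) ≈ 15.9 MHz, far above any legal I2C edge rate but low enough to damp nanosecond-scale ringing.)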

Klaus

Added: about RESET.
IMHO the whole I2C interface should/could be reset on I2C_STOP or bus_idle condition.
 

Although possible, I don't think it's a good idea to treat the I2C SCK as a true clock signal that is distributed via a clock tree.
I'd use a higher-frequency system clock and treat SCK, like SDA, as a usual logic input (output) signal,
with all the usual treatment, like double buffering to avoid metastability.
Often one uses an 8x (or other) oversampling technique (the same applies to U(S)ARTs and other serial interfaces) .. a digital filter "validates" the input signal and suppresses erroneous spikes and ringing effects.

Imagine:
When SCK is treated as a true ASIC clock, all the I2C logic operates at ASIC speed. A glitch (or ringing) in the range of some nanoseconds will be treated as an additional clock edge. This is not what you want, because your I2C becomes unreliable. It will be sensitive to EMC, to wiring, and to all impedance jumps in the traces.
Remember: I2C is a relatively slow interface. No impedance-controlled wiring, no differential signalling, no termination resistors at the ends of the bus; even star wiring is allowed...
Your ASIC, or better said your HDL interface design, needs to handle this situation.

Ok, so you were speaking of oversampling because you are suggesting that the ASIC should have a system clock. The company has a different position on this: they say the I2C has to be able to work as a standalone IP, with no extra clock. But I completely understand what you are saying and the motivation for having a higher-frequency system clock and treating SCK as a normal input.

I'd say a glitch on SCK or SDA of 100 ns should not cause an error (depends on the expected I2C speed).
And sadly, most simulations do not test for those "hardware related" situations.

But still it's not clear whether this is the problem at all.

I am convinced that the problem is probably a glitch on the SCK line.
I was reviewing the images I posted and your first answer regarding the short-circuit scenario. It is impossible for that to be a short circuit, because that image was taken from a test where the master was in bit-bang mode using the pull-up resistors (I was not forcing the lines high; when I force the lines high, the system always works, no errors, no glitches).
I also did another test. Since the slave cannot control the SCK line, I had the master drive SCK both low and high (no pull-up on SCK), while SDA was forced low and the pull-up was responsible for raising the line when needed. In this test scenario, the I2C works with no errors. So I am guessing this really is a problem on the SCK line, perhaps multiple false clock edges on a single transition, due to glitches and noise.

An idea: use a 100R / 100pF low-pass filter at SCK and SDA, very close to the ASIC IO.
This should suppress HF.

Today I am at the university, so I will only be able to test your suggestion tomorrow when I am at the company. I will try it out and post the results.
I already did something similar and it helped reduce the number of errors.

Added: about RESET.
IMHO the whole I2C interface should/could be reset on I2C_STOP or bus_idle condition.

Both the start condition and the stop condition act as a reset for the I2C, leaving the I2C slave in a known state (in this case, waiting for the MSB of the slave_addr).
 

I also did another test. Since the slave cannot control the SCK line, I had the master drive SCK both low and high (no pull-up on SCK), while SDA was forced low and the pull-up was responsible for raising the line when needed. In this test scenario, the I2C works with no errors. So I am guessing this really is a problem on the SCK line, perhaps multiple false clock edges on a single transition, due to glitches and noise.

This, along with your earlier observation that smaller pull-up resistors improve things, seems to indicate that the rise time of the signal, plus added noise (I don't know your setup driving SCL from an Arduino), is likely the issue.

Try the termination filter suggested by KlausST on the SCL, but I think ultimately the change should be made in the ASIC. I would say the I/O needs to be changed to an I/O cell that has hysteresis (a Schmitt-trigger input). The new tech node may be smaller/faster than the one the I2C was originally designed for, and much more susceptible to glitches.
 

Hi,

I did as you suggested and added the 100R/100pF low-pass filter, but it had no effect. I still have the same type of errors, like the one shown above, where the SDA line is clearly being pulled low by the slave while the master is still transmitting (simultaneous high-to-low transition on SCL and SDA).

[Scope capture: IMG_20190911_150517.jpg]

I am saying that the slave is the one forcing the line low because I checked the typical delay between the falling edge of SCL and the data change on SDA. When the master is driving, the times are the ones shown in the pictures below.

[Scope capture: IMG_20190911_151243.jpg]

[Scope capture: IMG_20190911_151310.jpg]

Basically, 1 µs for a rising transition on SDA after the falling edge of SCL, and 400 ns for a falling transition on SDA after the falling edge of SCL.

When the error occurs, I see the following case (see image below), so it must be the slave pulling the line low ahead of time.

[Scope capture: IMG_20190911_151403.jpg]

I agree with trying to solve the problem on the slave (ASIC) side, and I am confident the problem is related to the I2C clock signal as received by the ASIC logic. I just want to figure out a way to make sure I don't hit the same problem again. Is there any way to simulate this type of thing at the layout stage?
 

Is there any way to simulate this type of thing at the layout stage?

Yes. The absolute best accuracy you can get is from SPICE. You could try to extract the whole digital block and the connected IO. Depending on how big the block is, SPICE will choke. You might have to trim it down even further, to just the IO and the first layer of flops.

A more digital-friendly simulation is probably better: gate-level plus SDF.

In either case, the problem with simulation is that it is too easy to fail to mimic real-life timing. You have to make sure all signals arrive at the chip exactly the way your simulation environment models them. This gets tricky and can hide the true behavior of the chip.
 

Yes. The absolute best accuracy you can get is from SPICE. You could try to extract the whole digital block and the connected IO. Depending on how big the block is, SPICE will choke. You might have to trim it down even further, to just the IO and the first layer of flops.

Unfortunately, I cannot do that. I am using the TSMC 180 nm library, and they don't provide the layout of the digital cells, just blockages and overall cell sizes.

A more digital-friendly simulation is probably better: gate-level plus SDF.

I already did this; a gate-level timing simulation with the SDF info was the first thing I tried. Everything works fine in the simulation.
The only things not included in the simulation are the pads
 

Hi,

When I look at your diagrams, one thing worries me: the edges of the data are sometimes very close to the edges of the clock.
They should not be that close.
Are you sure they comply with the I2C specifications?

Klaus
 

The images relate to the scenario in which I use the Arduino I2C peripheral. According to the I2C specification, the minimum time between the falling edge of SCL and the data change on SDA is 300 ns (at least, that was my understanding when I read the standard). The only scenario in which these timings are not respected is when the slave pulls the SDA line low while the master is sending data.

Even in the bit-bang case (the images in my first post) I was having these errors, and there I was waiting 5 µs after the falling edge of SCL before changing SDA. So I don't think the problem is related to the signals being updated close to each other; I think multiple SCL transitions inside the ASIC, due to the slow rise time and noise, is the more likely cause.
 

The second picture in post #13 isn't how you measure the delay between the falling edge of SCL and SDA.

My bet is that the signal you think meets the 300 ns (internally required hold time) is not actually meeting it in the ASIC. Do you have the actual characterized timing of the SCL/SDA delays, or at least the values for the part you are testing? I suspect you are violating the timing by having the SDA transitions almost on top of the SCL falling edge. In standard mode you can get away with SDA transitions up to 3.45 µs after falling SCL, and 0.9 µs for fast mode.

- - - Updated - - -

Forgot to mention: you should at least set the cursors at the point where the logic would interpret the signal as a logic 1 or 0, not at the peak of the waveform. This will be whatever the ASIC input detects as a low-going signal for SCL, and a high- or low-going signal for SDA.

Some of the waveforms (e.g., the first falling edges in both the first and second pictures in post #13) have SCL and SDA transitions overlaid, which would require the ASIC to delay SDA internally to meet the 300 ns hold-time requirement. And that is ignoring any hold requirement of the actual flip-flops in the ASIC.
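(For reference, the I2C spec puts the input thresholds at 0.3·VDD for a valid low and 0.7·VDD for a valid high, so those are reasonable cursor positions if the pad's actual switching thresholds are not characterized.)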
 

Hi,

Maybe a dumb question:
When your I2C slave has no internal clock, how can it generate 300 ns (or longer) delays?

I2C_START condition: SDA falling while SCL is HIGH
I2C_STOP condition: SDA rising while SCL is HIGH
Thus, during data transfer: NO SDA change while SCL is HIGH ... or close to the SCL edges, to avoid erroneous START or STOP detection.
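(A minimal software model of this detection, assuming sampled inputs like the filter sketch earlier in the thread; the names are illustrative:)

Code:
    // START/STOP detector working on filtered SDA/SCL samples (sketch only).
    struct BusMonitor {
        bool prevSda = true, prevScl = true;   // bus idles high

        // Returns 'S' for START, 'P' for STOP, 0 otherwise.
        char tick(bool sda, bool scl) {
            char event = 0;
            if (scl && prevScl) {                  // SCL stable HIGH
                if (prevSda && !sda) event = 'S';  // SDA falling -> START
                if (!prevSda && sda) event = 'P';  // SDA rising  -> STOP
            }
            prevSda = sda;
            prevScl = scl;
            return event;
        }
    };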

Another thing worries me: in some pictures, the LOW level of SCK at the beginning of the transfer (near START) is lower than at the end of the transmission (near STOP).

Show a photo of the wiring between master and slave, including the wire length; show us the power supply wiring (and how you avoided GND loops); and tell us about the power consumption of master and slave.

Klaus
 

Unfortunately, I cannot do that. I am using the TSMC 180 nm library, and they don't provide the layout of the digital cells, just blockages and overall cell sizes.

I already did this; a gate-level timing simulation with the SDF info was the first thing I tried. Everything works fine in the simulation.
The only things not included in the simulation are the pads

From my experience, I can tell you that TSMC can give you all the files you could ever need if you show you really need them. It boils down to how good your local contact person is at solving these issues, and whether they go through a broker (MOSIS, Europractice, etc.) or not. All it takes is one lazy guy in the chain who doesn't want to figure out where the files are, and you end up not getting them. So... ask around, talk to the right people, explain your issue, and pray to the Taiwanese gods.

At a bare minimum, they should give you an annotated Verilog model for the pads. This might not be enough to make your simulation accurate, but it is a move in the right direction.
 
