Hi,
Never force HIGH.
I would not even try it, because short-circuit currents may destroy a device.
Klaus
Hi,
Look at the upper scope picture of post#1.
The voltage on the blue channel is sometimes somewhere in between HIGH and LOW;
this happens when two drivers are active at the same time, causing a short circuit.
Klaus
Maybe, maybe not, how can we know?

Could this mean that my slave is getting more SCL cycles than the ones I am sending? (Due, maybe, to a long rise time of the SCL signal?)
Hi,
Some questions come through my mind.
* I expect your ASIC tool to include a simulator. Did you use it?
* And I expect there are ready-to-use I2C slave IPs. Are you trying to re-invent the wheel?
* And why did you do an ASIC ... without first testing on an FPGA (or CPLD), which can be reprogrammed much more easily and offers debugging features?
Maybe, maybe not, how can we know?
How did you implement the SCK input? Did you add some oversampling, noise filtering....?
Klaus
How is the reset to the IP core handled? One big difference between ASIC and FPGA is that both Altera and Xilinx have flip-flops that power up in a known state. ASIC flip-flops won't necessarily power up in a known state. Are you sure all registers that have anything to do with controlling the state of the I2C transfer are being reset, or can be forced into a reset?
Was a specific reset test done with the ASIC netlist to verify the design's behaviour coming out of reset? This could explain why the design seems to be driving the SDA line at the wrong time as it might be in the wrong state or a counter might be at the wrong value.
Thanks for the additional information about your project and how it developed ... it helps to understand you and your situation.

I understand your questions. I did not tell the whole story before I came to the forum asking for help, and therefore you don't know whether I am just another guy searching for an easy answer ...
Although possible, I don't think it's a good idea to treat an I2C SCK as a true clock signal that is distributed as a clock tree.

I was not responsible for the implementation, but as far as I know the SCK input has just a digital input pad (they did not implement the slave with stall capabilities), and from the pad the signal goes directly to a buffer, where it is distributed via a clock tree. What do you mean by oversampling?
Although possible, I don't think it's a good idea to treat an I2C SCK as a true clock signal that is distributed as a clock tree.
I'd use a higher-frequency system clock and treat the SCK, like the SDA, as a usual logic input (output) signal.
With all the usual treatment, like double buffering to avoid metastability.
Often one uses an 8x (or other) oversampling technique (the same applies to U(S)ARTs and other serial interfaces): a digital filter "validates" the input signal and suppresses erroneous spikes and ringing effects.
Imagine:
When SCK is treated as a true ASIC clock, all the I2C logic operates at ASIC speed. A glitch (or ringing) in the range of a few nanoseconds will be treated as an additional clock edge. This is not what you want, because your I2C becomes unreliable. It will be sensitive to EMI, to wiring, and to every impedance discontinuity in the traces.
Remember: I2C is a relatively slow interface. No impedance-controlled wiring, no differential signalling, no termination resistors at the ends of the bus; even star wiring is allowed ...
Your ASIC, or better said your HDL interface design, needs to handle this situation.
I'd say a glitch of 100 ns on SCK or SDA should not cause an error. (Depends on the expected I2C speed.)
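A minimal software sketch of the scheme described above (a hypothetical model, not the poster's actual RTL): SCK is sampled by a fast system clock, passed through a two-flop synchronizer against metastability, and then through a majority-of-3 vote so that a one-sample spike cannot produce a false clock edge.

```python
def filter_sck(samples):
    """Model: 2-FF synchronizer + majority-of-3 filter + edge detection.

    `samples` is the raw SCK pin sampled once per system-clock cycle.
    Returns the cycle indices at which a *filtered* rising edge occurs.
    """
    sync = [0, 0]          # two-flop synchronizer (metastability guard)
    history = [0, 0, 0]    # last three synchronized samples
    filtered_prev = 0
    rising_edges = []
    for cycle, raw in enumerate(samples):
        sync = [raw, sync[0]]              # shift through the synchronizer
        history = [sync[1]] + history[:2]  # shift into the vote window
        filtered = 1 if sum(history) >= 2 else 0  # majority vote
        if filtered and not filtered_prev:
            rising_edges.append(cycle)
        filtered_prev = filtered
    return rising_edges

# A clean rising edge produces exactly one event ...
clean = [0] * 8 + [1] * 8
# ... and so does the same edge with a one-sample spike before it:
# the filter swallows the spike instead of counting a second edge.
glitchy = [0, 0, 0, 1, 0, 0, 0, 0] + [1] * 8

print(len(filter_sck(clean)), len(filter_sck(glitchy)))  # 1 1
```

Without the filter, the glitchy trace would show two raw rising edges (at samples 3 and 8), which is exactly the "additional clock edge" failure mode described above.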
And sadly most simulation does not test for those "hardware related" situations.
But still it's not clear whether this is the problem at all.
An idea: use a 100R / 100pF low-pass filter at SCK and SDA, very close to the ASIC I/O.
This should suppress HF noise.
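As a quick sanity check on the suggested values (a back-of-the-envelope sketch, not a verified design), the first-order RC time constant and -3 dB cutoff work out to:

```python
from math import pi

R = 100        # ohms
C = 100e-12    # farads

tau = R * C                   # RC time constant: 10 ns
f_c = 1 / (2 * pi * R * C)    # -3 dB cutoff: ~15.9 MHz

print(f"tau = {tau * 1e9:.0f} ns, f_c = {f_c / 1e6:.1f} MHz")
```

That cutoff sits far above the 100/400 kHz I2C signal rates, so the bus edges pass essentially untouched, while ringing in the tens-of-MHz range is attenuated.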
Added: about RESET.
IMHO the whole I2C interface should/could be reset on I2C_STOP or bus_idle condition.
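A sketch of that reset idea (hypothetical, built on the already-synchronized samples): per the I2C specification, a STOP condition is a rising edge on SDA while SCL is high (a START would be a falling edge on SDA while SCL is high), so it can be detected with simple edge logic and used to force the state machine back to idle.

```python
def stop_events(scl, sda):
    """Return the sample indices at which an I2C STOP condition occurs.

    `scl` / `sda` are lists of already-synchronized, already-filtered
    bus samples, one per system-clock cycle.
    """
    stops = []
    for i in range(1, len(scl)):
        sda_rising = sda[i] == 1 and sda[i - 1] == 0
        if sda_rising and scl[i] == 1 and scl[i - 1] == 1:
            stops.append(i)   # reset the I2C state machine here
    return stops

#      idle |last bit | STOP at index 6 (SDA rises under high SCL)
scl = [1, 1, 0, 0, 1, 1, 1, 1, 1]
sda = [1, 1, 0, 0, 0, 0, 1, 1, 1]
print(stop_events(scl, sda))  # [6]
```

An SDA edge while SCL is low is ordinary data and correctly produces no event, so this reset cannot fire mid-byte.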
I also did another test. Since the slave has no way to control the SCK line, I ran a test where the master drives SCK both low and high (push-pull, no pull-up on SCK), while SDA is driven low as usual and the pull-up is responsible for pulling the line high when needed. In this scenario, the I2C works with no errors. So I am guessing this really is a problem on the SCK line, perhaps related to multiple false clock edges on a single real edge, caused by glitches and noise.
Is there any way to simulate this kind of thing at the layout stage?
Yes. The absolute best accuracy you can get is from SPICE. You could try to extract the whole digital block and the connected I/O. Depending on how big the block is, SPICE will choke; you might have to slice it down even further, to just the I/O and the first layer of flops.
A more digital-friendly simulation is probably better: gate-level plus SDF.
Hi,
When I see your diagrams ... one thing is worrying me: the edges of the data are sometimes very close to the edges of the clock.
They should not be that close.
Are you sure they comply with the I2C specification?
Klaus
Unfortunately I cannot do that. I am using the 180 nm TSMC library, and they don't provide the layout of the digital cells, just blockages and overall cell sizes.
I already did this; the first thing I tried was a gate-level timing simulation with the SDF info. Everything works fine in the simulation.
The only thing not included in the simulation is the pads.