quantized
I have a set of signal wires that, due to layout constraints, will probably need to be driven with unidirectional current. Specifically, the NFET pulldown and PFET pullup are about 30um apart on a wire that is 400um long and 0.25um wide (min-pitch). The wires are Al (pre-Cu process).
I'm aware that driving a signal wire unidirectionally is bad news from an electromigration standpoint. Does anybody have pointers to practical information on how this affects reliability, or on whether this wire is within the Blech length? All the foundry provides is maximum current levels, which I'm well within.
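To make the question concrete, here's the rough Blech-product check I've been sketching. The metal thickness, the time-averaged current, and the critical J*L value are all assumptions on my part (the real numbers would have to come from the PDK and the foundry's reliability documentation), so treat this as a sanity check rather than a sign-off calculation:

```python
# Rough Blech-product check for a unidirectionally driven Al signal wire.
# All numbers below are placeholders/assumptions -- substitute values from
# the PDK / foundry reliability documentation.

WIDTH_CM         = 0.25e-4   # 0.25 um wire width
THICK_CM         = 0.5e-4    # ASSUMED Al thickness; take this from the PDK
LENGTH_CM        = 400e-4    # 400 um run between flux-divergence points
I_AVG_A          = 50e-6     # ASSUMED time-averaged (DC-equivalent) current
JL_CRIT_A_PER_CM = 2000.0    # ASSUMED critical Blech product for Al;
                             # values quoted are typically ~1000-3000 A/cm

area = WIDTH_CM * THICK_CM   # cross-sectional area, cm^2
j    = I_AVG_A / area        # average current density, A/cm^2
jl   = j * LENGTH_CM         # Blech product, A/cm

print(f"J   = {j:.3g} A/cm^2")
print(f"J*L = {jl:.3g} A/cm")
print("Below assumed Blech threshold" if jl < JL_CRIT_A_PER_CM
      else "Above assumed Blech threshold -- EM drift not self-limited")
```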
The only reason I'm considering doing this is that the application is a bit unusual -- it can tolerate quite a lot of failure. If this design choice means a 5% failure rate over the first year, that's actually okay. If one of these wires fails, it only kills about 1% of the chip, and we already have to be able to route around stuff like this for other reasons. Think of the device-reliability tradeoff in an SRAM with ECC -- that would be a reasonably good approximation of the reliability requirements. Most foundries give you relaxed design rules for SRAM as long as you're prepared to tolerate a small number of failures. This isn't an SRAM, but the device-level reliability requirements are very similar in that respect.
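Along the same lines, here's how I've been trying to translate an MTTF number into a "fraction failed in year one" figure, using Black's equation and a lognormal failure distribution. The reference t50, current-density exponent, activation energy, temperatures, and sigma are all placeholder assumptions -- the real values would come from the foundry's EM model -- so again this is only a sketch:

```python
# Back-of-envelope translation from an EM MTTF spec to a failure fraction,
# using Black's equation and a lognormal time-to-failure distribution.
# Every constant here is an assumption; real values come from the foundry.

import math

def black_t50(t50_ref, j_ref, j, n=2.0, ea_ev=0.7, t_ref_k=398.0, t_k=358.0):
    """Scale a reference median time-to-failure (Black's equation) from the
    reference current density / temperature to the use conditions."""
    k_ev = 8.617e-5  # Boltzmann constant, eV/K
    return (t50_ref * (j_ref / j) ** n
            * math.exp((ea_ev / k_ev) * (1.0 / t_k - 1.0 / t_ref_k)))

def fraction_failed(t, t50, sigma=0.3):
    """Lognormal cumulative failure fraction at time t, median t50, shape sigma."""
    z = math.log(t / t50) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# ASSUMED: the foundry limit corresponds to t50 = 10 years at its max rated
# current density (1e5 A/cm^2) and 125 C; our wire runs at 4e4 A/cm^2 and 85 C.
t50_use = black_t50(t50_ref=10.0, j_ref=1.0e5, j=4.0e4)   # years
print(f"Scaled t50 at use conditions:  {t50_use:.1f} years")
print(f"Fraction failed in first year: {fraction_failed(1.0, t50_use):.2%}")
```

The catch, of course, is that the foundry's published current limits presumably already assume bidirectional (AC) stress with some recovery factor, which is exactly what I lose by driving the wire unidirectionally -- hence the question.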
Unfortunately I'm a bit spooked by the stories about Western Digital's electromigration problem that caused 90% of a particular model of hard drive to fail all within a 6-month window.