The diodes represent the clamping diodes placed inside the microcontroller GPIO pins.
Case 1 - 12V applied directly from the voltage source.
Case 2 - The same voltage applied through a resistor divider.
In the image above, both circuits apply 12V at the GPIO pin. The first applies it directly from the voltage source; the second applies it through a resistor divider network. Effectively, the same voltage is presented at the GPIO pin, yet we get different results.
I understand that in the first case the pin shows 12V because the voltage source output has very low impedance. But I am confused by this behaviour and cannot work out how the two cases actually differ. Please help by explaining in simple terms.
No. Only the first schematic provides 12V to the pin (but this situation is not realistic).
The second schematic provides 12V to the resistor ... and if you'd run your simulation you'd see that there is not 12V at the pin.
Semiconductor diodes in MOS processes are crappy diodes with high intrinsic resistance, so they look like resistors once they turn on. In the second case, effectively all you have done is add more resistance in series with the diode's intrinsic resistance, which of course also limits the current for a given V1 voltage.
As you can see, the diode turns on at about 0.7V, and the current then takes off but is limited by D1's intrinsic dynamic resistance.
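A rough way to see this numerically (not the original simulation): model the clamp as an ideal 0.7 V drop in series with a small intrinsic resistance, and the divider as an effective source resistance. All component values below are assumptions, since the original schematic values aren't given in the question.

```python
# Hypothetical clamp model; all values are assumptions, not from the schematic.
VDD = 3.3        # assumed supply rail (V)
VF = 0.7         # clamp-diode forward drop (V)
R_DIODE = 50.0   # assumed intrinsic (dynamic) resistance of the clamp diode (ohms)

def pin_voltage(v_in, r_series):
    """Pin voltage once the upper clamp diode conducts (simple linear model)."""
    v_clamp = VDD + VF
    if v_in <= v_clamp:
        return v_in  # diode off: the pin just follows the source
    # Diode on: the overdrive divides between r_series and the diode's R
    i = (v_in - v_clamp) / (r_series + R_DIODE)
    return v_clamp + i * R_DIODE

# Case 1: a stiff 12 V source (r_series ~ 0) forces the full 12 V onto the pin,
# pushing a large, destructive current through the clamp diode.
print(pin_voltage(12.0, 0.0))

# Case 2: series resistance lets the clamp hold the pin near VDD + VF.
print(pin_voltage(12.0, 620.0))
```

With zero source resistance the clamp cannot win and the pin sits at 12 V; with a few hundred ohms in series it is pulled back to roughly a diode drop above the rail.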
Both circuits are exceeding the processor's maximum ratings.
In the first case, an unlimited voltage source would destroy the processor's input cell. In the second case, the input current of about 12 mA most likely exceeds the continuous current rating and may damage the input.
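The ~12 mA figure can be estimated by treating the clamped pin as sitting one diode drop above the rail; the rail voltage and series resistance below are assumptions, since the schematic values aren't stated in the thread.

```python
# Back-of-envelope injection-current estimate; values are assumed, not from the post.
VDD, VF = 3.3, 0.7   # assumed rail voltage and clamp-diode forward drop (V)
V_IN = 12.0          # source voltage (V)
R_SERIES = 620.0     # assumed effective source resistance of the divider (ohms)

# Current forced into the clamp diode once the pin is held at VDD + VF
i_inject = (V_IN - (VDD + VF)) / R_SERIES
print(f"{i_inject * 1e3:.1f} mA")  # on the order of 12-13 mA
```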
A number of processors these days limit "injection" current to small values to prevent triggering the parasitic SCR inherent in today's CMOS processes, which, if triggered, internally shorts the supply rails, in many instances blowing out the bond wires connecting the die to the external pin. Some parts I know of limit it to 100 uA, so you gotta read the datasheets.
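Given such a datasheet limit, you can work backwards to the minimum series resistance needed. The 100 uA limit is from the answer above; the rail and diode-drop values are assumptions.

```python
# Minimum series resistance to keep injection current within a datasheet limit.
VDD, VF = 3.3, 0.7   # assumed rail voltage and clamp-diode forward drop (V)
V_IN = 12.0          # worst-case applied voltage (V)
I_MAX = 100e-6       # example injection-current limit from a datasheet (A)

# All the overdrive above the clamp voltage must drop across the resistor
r_min = (V_IN - (VDD + VF)) / I_MAX
print(f"R >= {r_min / 1e3:.0f} kOhm")  # 80 kOhm for these assumed values
```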