For example, to get around the 'which starts first' issue, you could try having each device send a known character or sequence to the other, with the expectation that something will be received back. This is the equivalent of a 'ping' over a network. If nothing comes back, send the 'ping' again. If the 'other' device starts up halfway through a 'ping' then it will receive rubbish (which it can ignore) and possibly framing errors (for UARTs) - all of which can be handled, but the overall approach is to ignore what is received. Doing so will cause the first-started device to send the 'ping' again which, this time, you should receive properly and so can respond.
When this initial 'handshake' is complete, both devices know that the other is there and can start the 'real' communication.
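The ping/answer logic above can be sketched as a tiny state function. This is just a host-side sketch: the PING and ACK byte values are made up (any agreed-upon pair works), and feeding it -1 stands in for a timeout or garbage reception.

```c
#include <assert.h>

#define PING 0x55  /* arbitrary, agreed-upon ping byte */
#define ACK  0xAA  /* arbitrary acknowledge byte */

/* Feed one received byte (or -1 for a timeout / framing error / rubbish)
 * into the handshake logic.  Returns the byte to transmit next (or -1 for
 * nothing), and sets *linked once the handshake is complete. */
int handshake_step(int rx_byte, int *linked)
{
    if (rx_byte == PING) {   /* peer pinged us: answer it */
        *linked = 1;
        return ACK;
    }
    if (rx_byte == ACK) {    /* peer answered our earlier ping */
        *linked = 1;
        return -1;
    }
    /* timeout or rubbish: ignore what was received, ping again */
    return PING;
}
```

Each side just calls this in a loop; whichever device started first keeps re-pinging until the other side is up.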
You also seem to be asking about the transfer rates between devices. There are several factors that come into play here.
Firstly, as you have mentioned, there is the time each bit takes to send. For UARTs this is commonly referred to as the baud rate, and you have already mentioned two of the common I2C rates. Then there is the number of bits needed to send a value. For UARTs this is typically 8 bits for the character (but from 5 to 9 is possible), plus whether you have a parity bit, and then the start and stop bits. All of these need to be agreed between the sending and receiving device as part of your communication design.
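The per-character arithmetic above is easy to put in one line of code. A sketch, assuming the common 8N1 format (1 start + 8 data + 1 stop = 10 bits per character, no parity):

```c
#include <assert.h>

/* Time for one UART character frame, in microseconds.
 * bits_per_frame = start + data + (optional parity) + stop bits,
 * e.g. 10 for the common 8N1 format. */
unsigned frame_time_us(unsigned baud, unsigned bits_per_frame)
{
    return (bits_per_frame * 1000000u) / baud;
}
```

So at 9600 baud an 8N1 character takes about 1041 µs, and at 115200 baud about 86 µs - this is the number everything else in this thread (reaction time, buffer budgets) hangs off.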
There may also be limits on the bandwidth of the line between the two devices. If they are on the same PCB (for example) then the rate can be very high. If they are many metres apart and connected by a wire or RF link etc., then you may need to slow things down to be reliable.
Next there is whether you send single characters or streams. If you want to send a line of text (say) then you need something to tell the receiver when the line completes. You *could* always send a fixed number of characters, but normally text comes in varying line lengths, which is where the '\r' and '\n' characters come in to mark the end of the line. (Again, which of these, or a combination, is used is something that you need to determine as part of your design.) There are also other techniques you can use, such as known 'start of text' and 'end of text' characters. These also let you re-synchronise if a message is corrupted part way through, so you can discard it but still know when the next message starts.
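A minimal receiver for the 'start of text' / 'end of text' idea might look like this. The STX/ETX values are the actual ASCII control characters (0x02/0x03), but any agreed-upon pair works; the key property is that anything outside a frame is ignored, and a fresh STX restarts the frame, which is what gives you the re-synchronisation:

```c
#include <assert.h>
#include <string.h>

#define STX 0x02
#define ETX 0x03
#define MAX_MSG 64

struct framer {
    char buf[MAX_MSG];
    int  len;
    int  in_frame;
};

/* Feed one received byte.  Returns the message length when a complete
 * STX..ETX frame has been seen, otherwise -1.  Bytes outside a frame
 * are ignored; a new STX mid-message discards the corrupted one. */
int framer_feed(struct framer *f, unsigned char c)
{
    if (c == STX) { f->in_frame = 1; f->len = 0; return -1; }
    if (!f->in_frame) return -1;            /* rubbish between frames */
    if (c == ETX) { f->in_frame = 0; return f->len; }
    if (f->len < MAX_MSG) f->buf[f->len++] = (char)c;
    return -1;
}
```

Noise before the first STX is simply dropped, so a receiver that powers up mid-message will lock on at the next frame start.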
You can also get more fancy and start putting the data into a packet, which is what the various network protocols (e.g. the OSI layers) do. This makes the message longer (and therefore it will take longer to send) but can increase the reliability.
Once you have that sorted out, you will know how long each packet takes to send. That brings up the next point - how fast does this need to be? Certainly you must have the first packet sent (and acknowledged if required) before the next packet needs to be sent. For time-critical situations (e.g. monitoring a real-time sensor where you cannot miss a sample) this is important. If you can't complete one message before the next needs to be sent, then you need to revisit the design decisions mentioned before, or make the message smaller (compression?), etc.
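The 'does it fit before the next one is due' check is simple arithmetic you can do at design time. A sketch, assuming a hypothetical half-duplex exchange (message out, short ACK back) over an 8N1 UART - the byte counts and the 10 ms sample period below are made-up example numbers:

```c
#include <assert.h>

/* Worst-case wire time for one message-plus-ACK exchange, in µs,
 * assuming 8N1 framing (10 bits on the wire per byte). */
unsigned exchange_time_us(unsigned baud, unsigned msg_bytes, unsigned ack_bytes)
{
    unsigned bits = (msg_bytes + ack_bytes) * 10u;
    return (bits * 1000000u) / baud;
}
```

For example, a 32-byte message with a 1-byte ACK at 115200 baud occupies the wire for under 3 ms, so it would comfortably fit inside a 10 ms sample period; at 9600 baud the same exchange takes over 34 ms and would not.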
Every device comes with datasheets. And in the datasheets you find the timing requirements and specifications.
And they do this using timing diagrams and charts, for a good reason. And this is the way to go.
Your lengthy textual descriptions are hard to read, hard to understand and prone to misunderstanding.
Timing diagrams use lines, arrows, starting points, end points ... to visualize the problem.
Just imagine a map of a town described as text instead of pictures. The same applies here.
Additionally, pictures are quite international ... no language barrier.
Or another example which still occupies my mind. Let's say we have an STM32 which first transmits data (HAL_UART_Transmit), and after that it calls HAL_UART_Receive. The device on the other end is already sending data right after the HAL_UART_Transmit call, while HAL_UART_Receive is still being processed, so data arrives while the function is still executing. The data comes at 9600 baud, so will the function finish its work before the full 8 bits arrive? How fast does the function finish? And there are other, faster speeds/bigger baud rates, or faster communication lines like I2C or SPI or CAN (I haven't learned how CAN works yet), USB, PCI-Express.
Basically there is not necessarily a relationship. I've seen some charts/diagrams but couldn't understand why some clock of a processor or something wasn't matching the speed of something else. Okay, maybe it was something else, because that one might sound stupid. Like calculating the window - how much time the processor/device/process has before it must do something - from just knowing the chip clock and connection speed. I didn't understand it at that moment, and that moment gave me the thought of whether I should be thinking about timing all the time, and if so, how.
But still, a microcontroller does not necessarily need to react within those 50 µs (although it should be no problem when interrupt driven).
How fast it needs to react depends on a lot of factors:
* protocol (independent of microcontroller at all)
* microcontroller internal buffer size / or DMA
So if a microcontroller has a 16 byte buffer ... while the maximum message size is less than 16 bytes ... then indeed the microcontroller has practically unlimited time to react.
It then solely is defined by protocols.
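That 'how long can I ignore the UART' budget is easy to compute. A sketch, again assuming 8N1 framing (10 bits per byte) and back-to-back bytes (the worst case - real traffic often has gaps, which only makes the budget bigger):

```c
#include <assert.h>

/* With an N-byte hardware FIFO/buffer and 10-bit frames arriving
 * back-to-back, the CPU can ignore the UART for roughly this long
 * (in µs) before the buffer overruns. */
unsigned fifo_budget_us(unsigned baud, unsigned fifo_bytes)
{
    return (fifo_bytes * 10u * 1000000u) / baud;
}
```

A 16-byte buffer at 9600 baud gives you about 16 ms to react; even at 115200 baud it is still well over a millisecond, which is an eternity for a microcontroller.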
I was then thinking: Hmmm, I've sent the whole command to the device and it knows to send me back some data, so first it has to prepare the data and send it back. On the STM side (the other side, which receives the data back), it has to call HAL_UART_Receive and execute every line of code of that function, and that takes time. What if the data is already prepared and already being sent? HAL_UART_Receive might still be working through its lines of code, and only at the end does it get to reading the RX flag that says the data is ready.
Well, the first problem is that the data is already being transmitted to the STM, but the STM might still be inside HAL_UART_Receive and might not be ready before the whole data has been sent to it. The second problem is that the data is being sent while HAL_UART_Receive is being executed/processed - I don't know how to say it. This might not be a problem at 9600 baud, where it can still get through the code of the HAL_UART_Receive function, but for a faster UART? Or for faster communication like I2C or SPI? They are much faster at sending data???
Yes and no. This is where you need to look at your MCU's data sheet and see how the UART is configured. Often they have a FIFO (or at least are double-buffered) so that you can read the characters that have been fully received while the hardware is reading in the next one.

Isn't the 1 byte stored physically somewhere, and must it be taken/received before the next data comes? So there must be some time to receive it, as you've mentioned, 50 µs.
This is where you need to design your information transfers correctly. You probably should not both be 'talking' at the same time. The device that sends the command should transmit while the device that processes that command listens. Then they swap around, so that the device that sent the command starts listening for the response, and the other device then sends that response.
I get the feeling that you are trying to use what are known as blocking functions - those are functions that you call and that only return when the task is complete. For example, the HAL_UART_Transmit() function will only return when the number of characters you have specified have been sent. Nothing else in your main code will execute while that happens.
1) Yes, it is stored physically. But if you have a 16 byte buffer, there is no need to fetch before the next byte arrives. You need to fetch before the buffer overruns.
2) And it depends on the sending behaviour and the protocol. If the data are transmitted byte by byte, then it's not unusual that the next byte is sent (received) 1 ms later, 10 ms later, or only after a response by the receiver.
It is simply not always one byte immediately after the other without a gap.
And again: fetching 1 byte every 50 µs - interrupt controlled - isn't a problem for any microcontroller. Even for a slow one.
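The usual way the '1 byte every 50 µs' interrupt plays with the main loop is a small ring buffer: the RX ISR drops each byte in, the main loop picks them up whenever it gets around to it. A minimal host-testable sketch (single producer / single consumer, power-of-two size so the index mask works):

```c
#include <assert.h>

#define RB_SIZE 16u   /* must be a power of two for the mask trick */

/* Single-producer / single-consumer ring buffer: the RX ISR only ever
 * writes `head`, the main loop only ever writes `tail`, so no locking
 * is needed between the two on a single core. */
struct ring {
    unsigned char buf[RB_SIZE];
    unsigned head, tail;      /* free-running indices */
};

/* Called from the RX ISR: returns 0 (byte dropped) on overrun. */
int rb_put(struct ring *r, unsigned char c)
{
    if (r->head - r->tail == RB_SIZE) return 0;
    r->buf[r->head++ & (RB_SIZE - 1u)] = c;
    return 1;
}

/* Called from the main loop: returns 0 when the buffer is empty. */
int rb_get(struct ring *r, unsigned char *c)
{
    if (r->head == r->tail) return 0;
    *c = r->buf[r->tail++ & (RB_SIZE - 1u)];
    return 1;
}
```

This is exactly the '16 byte buffer' situation from above: the main loop only has to drain it before 16 bytes pile up, not before each individual byte.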
Also, you are talking about a processor that runs at 80 MHz. That means it will execute each instruction in 12.5 ns, or 80 instructions in a µs. At 9600 baud each bit takes about 104 µs, so a full 10-bit character only arrives roughly every millisecond - you actually have heaps of time; more so if you only need to look at the complete received buffer and not at individual characters.
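You can turn that 'heaps of time' claim into a number. A sketch, assuming the simplification of one instruction per clock cycle (real pipelines, flash wait states etc. change the exact figure, but not the order of magnitude):

```c
#include <assert.h>

/* Rough number of instructions the CPU can execute between received
 * characters, assuming one instruction per clock cycle. */
unsigned long instr_budget(unsigned long cpu_hz, unsigned baud,
                           unsigned bits_per_frame)
{
    unsigned long frame_us = (bits_per_frame * 1000000ul) / baud;
    return (cpu_hz / 1000000ul) * frame_us;   /* instructions per µs * µs */
}
```

At 80 MHz and 9600 baud 8N1 that is on the order of 80,000 instructions per received character - which is why polling speed is almost never the real problem here.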
So the physical buffer is 16 bytes? What if I only get 8 bytes? It usually signals that data is ready to be taken by the fact that the register is full, but in this case it is not.
Good to also know that the data may come with gaps, and not necessarily immediately one byte after another.
Why is it not a problem? What if it interrupts a different communication - like if it interrupted in the middle of a transmission of another communication? Of course DMA or interrupts are a good idea, or interrupt priorities.
So many things to be aware of ... Pretty overwhelming.
You got me wrong:
I clearly wrote "IF" .. I did not write: it is always the case!
--> "IF" you have a 16 byte buffer
--> not unusual there is a gap
***
But what if you have a 16 byte buffer and you receive only 8 bytes?
This is essentially the same problem: you have a box where 16 eggs can fit in, but you only have 8 eggs.
If you think you are not able to transport a half-full box ... then life becomes really difficult. ;-)
****
Did you read my list of what all is processed in parallel? It's not a problem.
Waiting is a problem. That means blocking functions. Even polling functions use a lot of processing power.
But receiving a byte via UART .. where in an ISR you read the byte from the UART buffer and store it into an SRAM buffer .. maybe takes 1 µs on an AVR. Let's say 2 µs including ISR overhead. So you have 48 µs, or 96% of processing power, left to really do meaningful jobs.
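That 96% figure is just a ratio you can check for any combination of ISR cost and byte period - the 2 µs and 50 µs here are the example numbers from above, not measured values:

```c
#include <assert.h>

/* Percent of CPU time left for the main loop when each received byte
 * costs `isr_us` of interrupt time and bytes arrive every `period_us`. */
unsigned cpu_free_percent(unsigned isr_us, unsigned period_us)
{
    return 100u - (100u * isr_us) / period_us;
}
```

It also shows where the limit is: the same 2 µs ISR with bytes arriving every 5 µs would already eat 40% of the CPU, which is when you start reaching for DMA.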
Mind: ISRs need to be short. If an ISR takes more than 10 µs .. I would be concerned.
I often test this just by setting an I/O pin high on entering the ISR and low again on leaving the ISR.
And yes, what should happen if the ISR fires during the transmission of another communication? Nothing. Let it happen!
No. Don't be scared.
Just do your job step by step.
First decide the application requirements properly, then draw your flow charts and timing diagrams, then .. then start coding.
It's almost the other way round.

But coming back ... hmmm, when is waiting a problem, because it wastes time? So, like, the first byte is received and another comes in 50 µs; so yeah, if it takes 2 µs to handle it, then there are 48 µs of free time, and it waits.
(takes maybe less than 1 microsecond)
and after that it immediately returns to the main loop.
And the main loop does not get notified; it will never know that there was an interrupt. It will never know that there was a short delay in main loop processing.
* So it samples at 10 kSmpl/s.
* But when the UART receives a command ... it still samples without a gap ..