This chapter defines and explains some of the terms used in the real time pages. Some terms are used in slightly different ways by different people, and discussions about them can often be found in discussion groups on the internet.
Thread | A thread is the primitive that executes code. It contains an instruction pointer (= program counter) and sometimes has its own stack. |
Process | An environment in which one or several threads run. An example of this context is the global memory address space, which is common for the threads within a process. Note that a process in itself does not execute code, but one or more threads within the process can.
In some systems that do not have memory protection, threads are sometimes called processes. This should be avoided, since it only leads to confusion. |
Task | In this text, a task is something that needs to be done. It is often implemented in a separate thread, but does not have to be. |
Kernel | The kernel of an RTOS is the part that schedules which thread gets to execute at a given point in time. |
Preemption | When a thread is interrupted, and the execution is handed over to another thread using a context switch. |
Context switch | The execution is changed from one context (thread) to another. |
There are two distinct types of threads: single shot threads and conventional threads.
A single shot thread can be in three different states - ready, running and terminated. When it becomes ready to run, it enters the ready state. Once it gets CPU time, it enters the running state. If it is preempted by a higher priority thread, it goes back to the ready state, and when it is finished, it enters the terminated state.
What should be noticed here is that a single shot thread has no waiting state. The thread cannot yield the processor, wait for something (like a timeout or an event) and then continue where it was. The closest it can come is to make sure, before it terminates, that it is restarted when the timeout expires or the event occurs.
Single shot threads are well suited for time driven systems where one wants to create a schedule offline. Single shot threads can be implemented using very little RAM, and are therefore often used in small systems.
To summarize, a single shot thread behaves much like an interrupt service
routine - something starts it, it is preempted by higher priority interrupts and
when it is finished it terminates.
In comparison with single shot threads, conventional threads have an extra state, the waiting state. This means that a conventional thread can yield the processor and wait for a time or an event to occur.
In systems using conventional threads, threads are often implemented using
while loops:
void blocking_task(void)
{
    while (1)
    {
        /* task code */
        delay(500);
        /* do stuff */
        ...
    }
}
Unlike single shot threads, many conventional threads will never enter the terminated state. This is not necessary, since lower priority threads can run while the thread is in its waiting state.
Conventional threads give the programmer a higher degree of freedom compared to single shot threads, but as always that comes at a price. Usually the price is RAM and ROM memory, and sometimes also more CPU overhead.
In operating systems that support preemption, threads normally have priorities. A thread that becomes ready to run can preempt an executing lower priority thread. Threads which perform critical tasks or have short deadlines often have higher priority.
A thread may have two different priorities - the base priority and the current priority.
Normally the current priority is the same as the base priority, but in some cases it can be raised to a higher level. That will be described later in the chapter on Synchronization and communication primitives.
Note that in most RTOSes a higher priority means a lower priority number.
The kernel is the part of the operating system which controls which thread is run at each point in time. It is the kernel on which the rest of the OS is built.
In small systems, often based on 8 or 16 bit microcontrollers, RAM and CPU time are limited resources. In such systems a true RTOS may use too much of these resources, so a simple scheduler is often used instead to control how the different tasks are run.
When a simple scheduler of this kind is used, threads are single shot and preemption is not used.
The simplest of schedulers, where each thread is run once every tick, can be implemented as in the example below:
volatile int newtick = 0;     /* increased by a timer interrupt, so it must
                                 be global and volatile */

void Scheduler(void)
{
    int stop = 0;
    while (!stop)
    {
        while (!newtick)
            ;                 /* wait for timer tick */
        newtick = 0;
        thread1();
        thread2();
        ...
        threadn();
        if (newtick)          /* overrun */
            OverrunHandler(); /* could reset, depending on application */
    }
}
The scheduler sits in a while loop, waiting for the newtick variable to be increased by a timer interrupt. Once a tick has been received, the threads are run, and finally the scheduler checks whether there has been an overrun - that is, whether the threads took so much time that the next timer tick has already occurred. In that case an error handling routine is called. Note that for the busy-wait to work, newtick must be shared with the interrupt routine and declared volatile.
The simple scheduler above can only run threads that have a common cycle time. The extension below is a bit more useful, and is used with some variation in many microcontroller systems.
volatile int newtick = 0;     /* increased by a timer interrupt, so it must
                                 be global and volatile */

void Scheduler(void)
{
    int stop = 0;
    int oldtick = 0;
    while (!stop)
    {
        while (newtick == oldtick)
            ;                 /* wait for timer tick */
        oldtick = newtick;
        /* threads running with minimum cycle time */
        thread1();
        thread2();
        /* threads with longer cycle times */
        switch (newtick % MAX_CYCLE_TIME)
        {
        case 0:
            thread3();
            thread4();
            ...
            break;
        case 1:
            ...
        case MAX_CYCLE_TIME-1:
            ...
            break;
        }
        /* more threads with minimum cycle time */
        ...
        threadn();
        if (newtick != oldtick)   /* overrun */
            OverrunHandler();     /* could reset, depending on application */
    }
}
Threads with longer cycle times are placed inside the switch statement. One can vary the cycle times by running the same thread in more than one case.
A simple scheduler as described above does not really qualify as a real time kernel. Kernels are usually divided into two groups - preemptive and non-preemptive kernels.
In non-preemptive systems, a thread which has started to execute is always allowed to execute until one of two things happens:
· the thread terminates, or
· the thread voluntarily yields the processor, for instance to wait for a time or an event.
When one of these things has happened, the kernel performs a context switch. A context switch means that the kernel hands over the execution from one thread to another. The thread which is ready to run and has the highest priority gets to run.
Notice that once a thread has started to execute, it is always allowed to continue until it itself chooses to yield the execution to some other thread. This also means that if a low priority thread has started to execute, a high priority thread that becomes ready to run may have to wait a considerable amount of time before it gets to execute.
In preemptive systems, the kernel's scheduler is called with a defined period, each tick. Each time it is called, it checks whether there is a ready-to-run thread with a higher priority than the executing thread. If so, the scheduler performs a context switch. This means that a thread can be preempted - forced to go from the executing to the ready state - at any point in the code, something that puts special demands on communication between threads and on the handling of common resources.
Using a preemptive kernel solves the problem where a high priority thread has to wait for a lower priority thread to yield the processor. Instead, when the high priority thread becomes ready to run, the lower priority thread will be preempted, and the high priority thread can start to execute.
Most commercial RTOSes support preemption.
It is essential to understand the scheduling policy of the kernel that is used, since different policies allow for different programming styles and place different requirements on the program. An identical program can behave quite differently when different scheduling policies are used!
The most common fixed-priority scheduling policies are:
A number of policies exist where the priority of a thread is changed depending on how much execution time it has used lately. Most Unix systems use such policies. They are, however, uncommon in real time systems, since it is difficult to build predictable systems with them.
In a complete system, threads almost always have to cooperate in order to make the system work. Two kinds of cooperation can be identified:
· sharing of common resources, and
· communication and synchronization between threads.
Resources that are common to several threads are called common resources. Common resources can include shared memory (like shared variables) and I/O units like a keyboard, a UART or a CAN controller.
Problems can occur if more than one thread tries to access a common resource at the same time.
An example:
Thread A: N=N+1
Thread B: N=N-1
Assume that N has the initial value 5. If thread A runs first and then thread B, the result will be 5. The same is true if thread B runs first and then thread A. However, it is also possible that the result will be 4 or 6!
Consider the following case:
Thread A:
LOAD R1, N
ADD R1, 1
STORE R1, N

Thread B:
LOAD R2, N
SUB R2, 1
STORE R2, N

If thread A is preempted after its LOAD but before its STORE, thread B can run to completion and store 4, after which thread A stores 6 - thread B's update is lost. The opposite interleaving gives the result 4.
Nothing here yet...
Nothing here either...
OSEK/VDX started as an effort by the German and French automotive industry to reduce the cost of software development in distributed real-time control by standardising non-application-dependent software parts. If a common real-time OS was used, it would be easier to integrate software from different manufacturers into the same control unit. Such an OS could also be the base for other software packages, handling the communication and network management between Electronic Control Units (ECUs).
OSEK is an abbreviation for the German term „Offene Systeme und deren Schnittstellen für die Elektronik im Kraftfahrzeug" (English "Open Systems and the Corresponding Interfaces for Automotive Electronics") while VDX stands for "Vehicle Distributed eXecutive".
Currently OSEK covers three areas:
Only the OSEK RTOS is covered here. For further information on the other parts see the OSEK/VDX home page.
Warning: This section is not finished, and is written as a reminder to the author himself only!
Note: OSEK threads are called tasks.
The following conformance classes are defined:
· BCC1 (only basic tasks, limited to one request per task and one task per priority, while all tasks have different priorities)
· BCC2 (like BCC1, plus more than one task per priority possible and multiple requesting of task activation allowed)
· ECC1 (like BCC1, plus extended tasks)
· ECC2 (like BCC2, plus extended tasks without multiple requesting admissible)
Northern Real-time Applications home page.
Integrated Systems homepage and their pOSEK home page.