TNKernel is a compact and very fast real-time kernel for embedded 32/16/8-bit microprocessors.
TNKernel was inspired by the ITRON specification and follows the μITRON 4.0 requirements (the spirit, not the letter).
The μITRON 4.0 Specification is an open real-time kernel specification developed by the ITRON Committee
of the TRON Association.
The μITRON 4.0 Specification document can be obtained from the ITRON Project web site.
TNKernel is distributed in source code form, free of charge, under a FreeBSD-like license.
This is a brief description of TNKernel version 2.xx. The complete manual (including descriptions of all TNKernel API functions) is available in the Downloads tab.
1. Tasks
In TNKernel, a task is a branch of code that runs concurrently with other tasks from the programmer's point of view. At the physical level, tasks are actually executed using processor time sharing. Each task can be considered an independent program that executes in its own context (processor registers, stack pointer, etc.).
When the currently running task loses its claim to execution (due to a system call or an interrupt), a context switch is performed: the current context (processor registers, stack pointer, etc.) is saved and the context of another task is restored. In TNKernel, this mechanism is called the "dispatcher".
Generally, more than one task is ready to execute, and it is necessary to determine the order of task switching (execution) using some rules. The "scheduler" is the mechanism that controls the order of task execution. TNKernel uses priority-based scheduling, with a priority level assigned to each task.
The smaller the priority value, the higher the priority level. TNKernel uses number_of_bits_in_integer priority levels (32 for 32-bit CPUs, 16 for 16-bit CPUs, 8 for 8-bit CPUs). Priority 0 (highest) and priority number_of_bits_in_integer - 1 (lowest) are reserved by the system for internal usage. The user may create tasks with priorities 1…number_of_bits_in_integer - 2 (1..30 for 32-bit CPUs).
In TNKernel, more than one task can have the same priority.
There are four task states in TNKernel:
1. RUNNING state
The task is currently executing.
2. READY state
The task is ready to execute but cannot, because a task with higher (or sometimes the same) priority is already executing. The task will execute as soon as the processor becomes available.
In TNKernel, both the RUNNING and the READY states are referred to as RUNNABLE.
3. WAIT/SUSPEND state
When a task is in the WAIT/SUSPEND state, it cannot execute because the conditions necessary for its execution have not yet been met and the task is waiting for them. When a task enters the WAIT/SUSPEND state, its context is saved; when the task resumes execution, its context is restored.
The WAIT/SUSPEND state actually has one of three sub-types:
- WAITING state
The task execution is blocked until some synchronization event occurs, such as a timeout expiration, a semaphore becoming available, an event occurring, etc.
- SUSPENDED state
The task is forcibly blocked (switched to a non-executing state) by another task or by itself.
- WAITING_SUSPENDED state
Both WAITING and SUSPENDED states co-exist.
In TNKernel, if a task leaves the WAITING state while the SUSPENDED condition still exists, the task is not switched to the READY/RUNNING state. Similarly, if a task leaves the SUSPENDED state while the WAITING condition still exists, it is not switched to the READY/RUNNING state. A task is switched to the READY/RUNNING state only when neither the WAITING nor the SUSPENDED condition is flagged on it.
4. DORMANT state
The task has been initialized but is not yet executing, or it has already exited. Newly created tasks always begin in this state.
Among tasks with different priorities, the task with the highest priority is the highest-privilege task and will execute.
Among tasks of the same priority, the task that entered the runnable (RUNNING or READY) state first is the highest-privilege task and will execute.
As long as the highest-privilege task is running, no other task will execute unless the highest-privilege task can no longer execute (for instance, because it has been placed in the WAITING state).
Example: Task A has priority 1; tasks B, C, D, E have priority 3; tasks F, G have priority 4; task I has priority 5.
If all tasks are in the READY state, the tasks execute in the following order:
- Task A - highest priority (priority 1)
- Tasks B, C, D, E - in order of entering into runnable state for this priority (priority 3)
- Tasks F, G - in order of entering into runnable state for this priority (priority 4)
- Task I - lowest priority (priority 5)
In TNKernel, tasks with the same priority may be scheduled in a round-robin fashion, each task of that priority getting a predetermined time slice.
In TNKernel, there are special functions for issuing system calls from inside interrupts. Generally, if a condition checked inside an interrupt requires a context switch, the system performs it according to the processor architecture (some processors use a separate stack to service interrupts).
In TNKernel, the task with the highest priority (0) supports the system tick timer functionality, and the task with the lowest priority (31 for 32-bit CPUs) is used for gathering statistics.
TNKernel automatically creates these tasks at the system start.
The user may create tasks with priorities 1…30 (for 32-bit CPUs). User tasks should never interact with the tasks of priorities 0 and 31 (for instance, by attempting to switch these tasks to the SUSPENDED state, etc.). The system rejects any attempt to create a task with priority 0 or 31.
More than one user task can have the same priority. Tasks with identical priorities can be scheduled round-robin.
Task functions (TNKernel version 2.x)
|tn_task_terminate||Move task to DORMANT state|
|tn_task_exit||Terminate currently running task|
|tn_task_delete||Delete already terminated task|
|tn_task_activate||Activate task. Task is switched from a DORMANT state to the runnable state|
|tn_task_iactivate||The same as above, but in interrupts|
|tn_task_change_priority||Change current task priority|
|tn_task_suspend||Suspend task. If the task is runnable, it is switched to the SUSPENDED state; if the task is in the WAITING state, it is moved to the WAITING_SUSPENDED state|
|tn_task_resume||Resume suspended task - allows the task to continue its normal processing.|
|tn_task_sleep||Move the currently running task to sleep|
|tn_task_wakeup||Wake up the task from sleep.|
|tn_task_iwakeup||The same as above, but in interrupts.|
|tn_task_release_wait||Forcibly release task from waiting (including sleep), but not from the SUSPENDED state|
|tn_task_irelease_wait||The same as above, but in interrupts|
2. Semaphores
A semaphore has a resource counter and a wait queue. The resource counter shows the number of unused resources. The wait queue manages the tasks waiting for resources from this semaphore. The resource counter is incremented by 1 when a task releases a semaphore resource and decremented by 1 when a task acquires one.
If a semaphore has no resources available (the resource counter is 0), a task that requests a resource will wait in the semaphore's wait queue until a resource becomes available (another task releases it to the semaphore).
Semaphore functions (TNKernel version 2.x)
|tn_sem_signal||Release semaphore resource|
|tn_sem_isignal||The same as above, but in interrupts|
|tn_sem_acquire||Acquire one resource from semaphore|
|tn_sem_polling||Acquire one resource from semaphore with polling|
|tn_sem_ipolling||The same as above, but in interrupts|
3. Mutexes
A mutex is an object used for mutual exclusion on a shared resource.
A mutex supports two approaches for avoiding the unbounded priority inversion problem: the priority inheritance protocol and the priority ceiling protocol. A discussion of the strengths and weaknesses of each protocol, as well as of the priority inversion problem itself, is beyond the scope of this document.
A mutex has functionality similar to a semaphore with a maximum count of 1 (a binary semaphore). The differences are that a mutex can only be unlocked by the task that locked it, and that a mutex is unlocked by TNKernel when the locking task terminates.
A mutex uses the priority inheritance protocol when it has been created with the TN_MUTEX_ATTR_INHERIT attribute, and the priority ceiling protocol when its attribute value is TN_MUTEX_ATTR_CEILING.
The mutexes in TNKernel (ver. 2.0 - 2.5.x) support full-featured priority inversion avoidance protocols according to the document by Sha, Rajkumar and Lehoczky (see the reference below). There is a difference from the µITRON 4.0 Specification: µITRON 4.0 proposes a subset of the priority ceiling protocol (the highest locker protocol), whereas TNKernel uses the full version of the priority ceiling protocol.
The priority inheritance protocol solves the priority inversion problem but does not prevent deadlocks.
The priority ceiling protocol prevents deadlocks and chained blocking, but it is slower than the priority inheritance protocol.
Starting from ver. 2.6, TNKernel uses a "lite" variant of mutexes. The new mutex design does not support all the features of the previous versions, but works significantly faster.
Mutex functions (TNKernel version 2.x)
|tn_mutex_create||Create a mutex|
|tn_mutex_delete||Delete a mutex|
|tn_mutex_lock||Lock a mutex|
|tn_mutex_lock_polling||Try to lock a mutex (with polling)|
|tn_mutex_unlock||Unlock a mutex|
4. Data Queues
A data queue is a FIFO that stores a pointer (of type void*) in each cell; a cell is called (in µITRON style) a data element.
A data queue also has two associated wait queues: one for sending (wait_send queue) and one for receiving (wait_receive queue).
A task that sends a data element tries to put the data element into the FIFO. If there is no space left in the FIFO, the task is switched to the WAITING state and placed in the data queue's wait_send queue until space appears (another task gets a data element from the data queue).
A task that receives a data element tries to get a data element from the FIFO. If the FIFO is empty (there is no data in the data queue), the task is switched to the WAITING state and placed in the data queue's wait_receive queue until a data element arrives (another task puts a data element into the data queue).
To use a data queue purely for synchronous message passing, set the size of the FIFO to 0.
The data element to be sent and received can be interpreted as a pointer or an integer and may have value 0 (NULL).
Data Queue functions (TNKernel version 2.x)
|tn_queue_create||Create data queue|
|tn_queue_delete||Delete data queue|
|tn_queue_send||Send (put) a data element into the data queue|
|tn_queue_send_polling||Try to send (put) a data element into the data queue (with polling)|
|tn_queue_isend_polling||The same as above, but in interrupts|
|tn_queue_receive||Receive (get) a data element from the data queue|
|tn_queue_receive_polling||Try to receive(get) a data element from the data queue (with polling)|
|tn_queue_ireceive||The same as above, but inside interrupts|
5. Eventflags
An eventflag has an internal variable (of integer size), which is interpreted as a bit pattern where each bit represents an event. An eventflag also has a wait queue for the tasks waiting on these events.
A task may set specified bits when an event occurs and may clear specified bits when necessary. A task waiting for events to occur will wait until every specified bit in the eventflag bit pattern is set. The tasks waiting for an eventflag are placed in the eventflag's wait queue.
An eventflag is a very suitable synchronization object for cases where (for some reason) one task has to wait for many tasks or, vice versa, many tasks have to wait for one task.
Eventflag functions (TNKernel version 2.x)
|tn_event_wait||Wait until eventflag satisfies the release condition|
|tn_event_wait_polling||Wait until eventflag satisfies the release condition, with polling|
|tn_event_iwait||The same as above, but inside interrupts|
|tn_event_set||Set the bits in the eventflag|
|tn_event_iset||The same as above, but inside interrupts|
|tn_event_clear||Clear the bits in the eventflag|
|tn_event_iclear||The same as above, but inside interrupts|
6. Fixed-Sized Memory Pools
A fixed-sized memory pool is used for managing fixed-sized memory blocks dynamically.
A fixed-sized memory pool has a memory area where the fixed-sized memory blocks are allocated and a wait queue for acquiring a memory block.
If there are no free memory blocks, a task trying to acquire a memory block will be placed into the wait queue until a free memory block becomes available (another task returns one to the memory pool).
Fixed-sized memory pool functions (TNKernel version 2.x)
|tn_fmem_create||Create Fixed-Sized Memory Pool|
|tn_fmem_delete||Delete Fixed-Sized Memory Pool|
|tn_fmem_get||Acquire (get) a memory block from pool|
|tn_fmem_get_polling||Acquire (get) a memory block from pool, with polling|
|tn_fmem_get_ipolling||The same as above, but inside interrupts|
|tn_fmem_release||Release (put back to pool) a memory block|
|tn_fmem_irelease||The same as above, but inside interrupts|
Reference: L. Sha, R. Rajkumar, J. P. Lehoczky, "Priority Inheritance Protocols: An Approach to Real-Time Synchronization," IEEE Transactions on Computers, Vol. 39, No. 9, 1990.
TNKernel real-time kernel
Copyright ©2004, 2011 Yuri Tiomkin
All rights reserved.
Permission to use, copy, modify, and distribute this software in source and binary
forms and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that
copyright notice and this permission notice appear in supporting documentation.
THIS SOFTWARE IS PROVIDED BY THE YURI TIOMKIN AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL YURI TIOMKIN OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.