Sunday 22 May 2011

RTOS Basics

Reference:

http://www.slideshare.net/sundaresan/rtos-concepts



RTOS Concepts
What Does Real-Time Mean?
The main difference from other kinds of computation: time.
Time means that the correctness of the system depends
- not only on the logical results
- but also on the time at which the results are produced
"Real" means the reaction to external events must occur during their evolution.
The system time (internal time) has to be measured with the same time scale as the controlled environment (external time).
Foreground/Background.
Systems which do not use an RTOS
An application consists of an infinite loop which calls application modules to perform the desired operations.
The modules are executed sequentially (background) with interrupt service routines (ISRs) handling asynchronous events (foreground).
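A minimal C sketch of this super-loop structure (the UART device and flag names are illustrative assumptions, not part of the slides):

#include <stdint.h>

/* Flag set by the ISR (foreground) and consumed by the loop (background). */
static volatile uint8_t rx_data_ready = 0;
static volatile uint8_t rx_byte;

/* Hypothetical UART receive interrupt service routine. */
void uart_rx_isr(void)
{
    rx_byte = 0 /* read the UART data register here */;
    rx_data_ready = 1;
}

int main(void)
{
    for (;;) {                      /* infinite background loop */
        if (rx_data_ready) {        /* work posted by the foreground ISR */
            rx_data_ready = 0;
            /* process rx_byte ... */
        }
        /* call the other application modules sequentially ... */
    }
}
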
Batch process
A process which executes without user interaction.
Interactive process
A process which requires user interaction while executing
Kernel
Kernel : the smallest portion of the operating system that provides
task scheduling, dispatching, and intertask communication.
Kernel types
Nanokernel - the dispatcher
Microkernel - a nanokernel with task scheduling
Kernel - a microkernel with intertask synchronization
Executive - a kernel that includes privatized memory blocks, I/O services, and other complex features. Most commercial real-time kernels are in this category.
Operating system - an executive that also provides generalized user interface, security, file management system, etc
What is RTOS?
A real-time operating system (RTOS) is an operating system that supports real-time applications and embedded systems.
Real-time applications have the requirement to meet task deadlines in addition to the logical correctness of the results.
– Multiple events handled by a single processor
– Events may occur simultaneously
– Processor must handle multiple, often competing events
Desirable Features of Real-Time Systems
Timeliness
- OS has to provide kernel mechanisms for
- time management
- handling tasks with explicit time constraints
Deterministic
Design for peak load
Predictability
Fault tolerance
Maintainability

Multitasking
Must provide mechanisms for scheduling and switching for several user and kernel tasks
Maximize CPU utilization
Allow for managing of complex and real-time applications
Categories
Hard Real Time System:
failure to meet time constraints leads to system failure
Firm Real Time System:
an occasional missed deadline can be tolerated
Soft Real Time System:
performance is degraded by failure to meet time constraints
An RTOS differs from a general-purpose OS in that it gives the application direct access to the microprocessor and peripherals.
This ability helps the application meet its deadlines.
Real-Time Systems
RTOS is a multitasking system where multiple tasks run concurrently
– system shifts from task to task
– must remember key registers of each task
• called its context
RTOS responsible for all activities related to a task:
– scheduling and dispatching
– intertask communication
– memory system management
– input/output system management
– timing
– error management
– message management
Basic requirements of an RTOS.
(i) Multi-threading and preemptibility
(ii) Thread priority
(iii) Thread synchronization mechanisms
(iv) Priority inheritance
(v) Predefined latencies
Task switching latency: the time to save the context of the currently executing task and switch to another task.
Interrupt latency: the time elapsed between the execution of the last instruction of the interrupted task and the first instruction of the interrupt handler.
Interrupt dispatch latency: the time to go from the last instruction of the interrupt handler to the next task scheduled to run.
Basic requirements of an RTOS.
Priority inversion
occurs when a higher-priority task must wait for a lower-priority task to release a resource
Priority Ceiling
Each resource has an assigned priority
Priority of thread is the highest of all priorities of the resources it’s holding
Priority Inheritance
The thread holding a resource inherits the priority of the thread blocked on that resource
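Many kernels expose both policies through mutex attributes. A sketch using the POSIX real-time extensions (support is platform-dependent, and the ceiling value 50 is an arbitrary example):

#include <pthread.h>

static pthread_mutex_t pi_lock;       /* priority-inheritance mutex */
static pthread_mutex_t ceiling_lock;  /* priority-ceiling mutex     */

/* The holder of pi_lock inherits the priority of any higher-priority
   thread that blocks on it. */
int init_priority_inheritance(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(&pi_lock, &attr);
}

/* The holder of ceiling_lock runs at the ceiling priority assigned to
   the resource for as long as it holds the lock. */
int init_priority_ceiling(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 50);
    return pthread_mutex_init(&ceiling_lock, &attr);
}
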
Preemptive scheduling
In a preemptive kernel, when an event makes a higher priority task ready to run, the current task is immediately suspended and the higher priority task is given control of the CPU.
Reentrancy
reentrant function : can be used by more than one task without fear of data corruption.
non-reentrant function : cannot be shared by more than one task unless mutual exclusion to the function is ensured, either by using a semaphore or by disabling interrupts during critical sections of code.
A reentrant function can be interrupted at any time and resumed at a later time without loss of data.
Reentrant functions either use local variables (CPU registers or variables on the stack) or protect their data when global variables are used.
Compilers specifically designed for embedded software will generally provide reentrant run-time libraries.
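A simple illustration in C (the function names are made up for this example):

#include <stdio.h>

/* Non-reentrant: uses a shared static buffer, so two tasks calling it
   "simultaneously" can corrupt each other's result. */
static char shared_buf[16];
char *int_to_string_nonreentrant(int value)
{
    sprintf(shared_buf, "%d", value);
    return shared_buf;
}

/* Reentrant: all state lives in CPU registers and in the caller-supplied
   buffer, so the function can safely be shared by many tasks. */
char *int_to_string_reentrant(int value, char *buf, size_t len)
{
    snprintf(buf, len, "%d", value);
    return buf;
}
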
Dynamic Memory Allocation
An RTOS uses abstract data types such as records, linked lists, and queues
These data types normally rely on dynamic memory allocation from RAM
Data structures are created (allocated) on the fly during program execution and destroyed when no longer needed
– Requires a relatively large amount of RAM
Heap is portion of memory used for dynamic memory allocation
Must allocate separate RAM spaces for the Heap as well as the Stack
Stack : Last-in-first-out (LIFO) data structure
RTOS requires multiple stacks - one for each task
Memory Management
Two issues
Heap management
Stack management
Heap management
Classic heap
Priority heap
Fixed block heap
Memory Management
Classic heap
The memory is collected into one giant heap and partitioned according to the demand from tasks.
There are several “fit” memory allocation algorithms, e.g., best-fit and first-fit, that also attempt to minimize memory fragmentation.
The classic heap has a large management overhead, so it is not used in real-time systems.
Priority heap
partitions the memory along priority boundaries, e.g., a high-priority and a low-priority partition are created
Fixed block heap
partitions the memory into several pools of fixed block length and, upon a request, allocates a single block from the pool whose block size is equal to or larger than the requested amount
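A minimal fixed-block pool in C might look like this (block size, count, and names are assumptions; a real kernel would also protect the free list against concurrent access, e.g. by disabling interrupts):

#include <stddef.h>

#define BLOCK_SIZE   64   /* illustrative block length */
#define BLOCK_COUNT  16   /* illustrative pool size    */

typedef union block {
    union block  *next;                /* link field while the block is free */
    unsigned char data[BLOCK_SIZE];    /* payload while the block is in use  */
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list = NULL;

void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; i++) {   /* link every block into the free list */
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_alloc(size_t size)
{
    if (size > BLOCK_SIZE || free_list == NULL)
        return NULL;                           /* request too large or pool exhausted */
    block_t *b = free_list;
    free_list = b->next;                       /* O(1), so the timing is deterministic */
    return b->data;
}

void pool_free(void *p)
{
    block_t *b = (block_t *)p;                 /* data is the first member, so this is safe */
    b->next = free_list;
    free_list = b;
}
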
Stack management:
When multiple tasks share a single processor, their contexts
(volatile information such as the contents of hardware registers, memory-management registers, and the program counter)
need to be saved and restored in order to switch among them.
This can be done using
task-control block model OR one or more run-time stacks
Run-time stacks - used to keep context
may use only one run-time stack for all the tasks or one run-time stack in conjunction with several application stacks (or private stacks), one for each task in memory
The multiple-stack case allows tasks to interrupt themselves.
Stack size must be known a priori.
Operating system manages the stacks
Task and Task Control Blocks
In an RTOS, the program consists of independent, asynchronous, and interacting tasks
– Must have capability to store task context
Context is kept in the control block of the task.
Having multiple tasks means multiple control blocks, which are maintained in a list
• The RTOS updates the TCB when a task is switched
The task-control-block model is best for full-featured real-time operating systems
Device Control Block (DCB)
– tracks status of system associated devices
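An illustrative TCB in C (field names and sizes are assumptions, not any particular kernel's layout):

#include <stdint.h>

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_SUSPENDED } task_state_t;

typedef struct tcb {
    uint32_t     *stack_ptr;    /* saved stack pointer; the rest of the context
                                   (registers, program counter) sits on the task's stack */
    task_state_t  state;        /* current scheduling state        */
    uint8_t       priority;     /* static or dynamic task priority */
    uint32_t      delay_ticks;  /* remaining time delay, if any    */
    struct tcb   *next;         /* link in the kernel's TCB list   */
} tcb_t;
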
Priorities
Priority
An ordinal number which represents the relative importance of a task.
Static priority
A priority which is not automatically adjusted by the system.
A static priority can typically be changed by the user.
Dynamic priority
A priority which is adjusted automatically by the system according to task behavior and system loading.
Dynamic priority imposes an overhead on the system.
Dynamic priority can improve response times and eliminate indefinite postponement
Scheduling algorithms of RTOS
The most commonly used static scheduling algorithm is the Rate Monotonic (RM) scheduling algorithm
The RM algorithm assigns each task a priority proportional to its frequency (rate).
The task with the shortest period gets the highest priority, and the task with the longest period gets the lowest static priority.
Rate monotonic algorithm is a dynamic preemptive algorithm based on static priorities
RM algorithm provides no support for dynamically changing task periods and/or priorities and tasks that may experience priority inversion.
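A common sufficient (but not necessary) check for RM schedulability is the Liu and Layland utilization bound, sketched here in C (the example task set is made up):

#include <math.h>
#include <stdio.h>

/* Schedulable under RM if U = sum(Ci/Ti) <= n * (2^(1/n) - 1). */
int rm_schedulable(const double c[], const double t[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += c[i] / t[i];                       /* total CPU utilization */
    return u <= n * (pow(2.0, 1.0 / n) - 1.0);  /* bound is ~0.78 for n = 3 */
}

int main(void)
{
    double c[] = { 1.0, 2.0, 3.0 };   /* execution times */
    double t[] = { 5.0, 10.0, 20.0 }; /* periods (same time unit) */
    printf("RM schedulable: %s\n", rm_schedulable(c, t, 3) ? "yes" : "no");
    return 0;
}
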
Rate Monotonic
Priority inversion can occur in an RM system because, in order to enforce rate monotonicity, a non-critical task with a high frequency of execution is assigned a higher priority than a critical task with a lower frequency of execution
A priority ceiling protocol (PCP) can be used to counter priority inversion, wherein a task blocking a higher priority task inherits the higher priority for the duration of the blocked task.
The priority ceiling protocol is used to schedule a set of dependent periodic tasks that share resources protected by semaphores
Earliest deadline first
Earliest deadline first (EDF) scheduling can be used for both static and dynamic real-time scheduling.
EDF is a dynamic priority algorithm which uses the deadline of a task as its priority.
The task with the earliest deadline has the highest priority
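A sketch of an EDF dispatcher in C (the task representation is an assumption used only for illustration):

#include <stddef.h>

typedef struct {
    const char   *name;
    unsigned long abs_deadline;   /* absolute deadline in ticks */
    int           ready;
} edf_task_t;

/* Among the ready tasks, pick the one whose deadline is nearest; the
   priority is effectively recomputed at every scheduling point. */
edf_task_t *edf_pick(edf_task_t tasks[], size_t n)
{
    edf_task_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best == NULL || tasks[i].abs_deadline < best->abs_deadline)
            best = &tasks[i];
    }
    return best;
}
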
Minimum Laxity First
A variant of EDF is Minimum Laxity First (MLF) scheduling where a laxity is assigned to each task in the system and minimum laxity tasks are executed first.
Laxity : the difference between the time until a task's completion deadline and its remaining processing time requirement.
Laxity = (the deadline by which the task must be completed) - (the amount of computation remaining to be performed)
MLF considers the execution time of a task, which EDF does not
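In code form (a sketch; the tick-based representation is an assumption):

/* Laxity = (time until the deadline) - (remaining execution time).
   Zero laxity: the task must run now to meet its deadline.
   Negative laxity: the deadline will be missed regardless of scheduling. */
long laxity(unsigned long now, unsigned long abs_deadline,
            unsigned long remaining_exec_time)
{
    return (long)(abs_deadline - now) - (long)remaining_exec_time;
}
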
Minimum Laxity First
MLF assigns higher priority to a task with the least laxity
A task with zero laxity must be scheduled right away and executed without preemption or it will fail to meet its deadline.
A negative laxity indicates that the task will miss its deadline, no matter when it is picked up for execution.
A major problem with the LLF algorithm is that it is impractical to implement, because laxity ties (two or more tasks having the same laxity) result in frequent context switches among the corresponding tasks.
This causes system performance to degrade markedly.
Modified Least Laxity First
MLLF schedules task sets the same way as the LLF algorithm.
If a laxity tie occurs, the running task continues to run without preemption as long as the deadlines of the other tasks are not missed.
The MLLF algorithm defers context switching until necessary, and it is safe even when laxity ties occur.
That is, it allows laxity inversion, where a task with the least laxity may not be scheduled immediately.
Laxity inversion applies to the duration for which the currently running task can continue running with no loss in schedulability.
Maximum Urgency First Algorithm
MUF solves the problem of unpredictability during a transient overload that exists for the EDF, LLF, and MLLF algorithms.
This algorithm is a combination of fixed and dynamic priority scheduling, also called mixed priority scheduling.
With this algorithm, each task is given an urgency which is defined as a combination of two fixed priorities ( criticality and user priority ) and a dynamic priority that is inversely proportional to the laxity.
The MUF algorithm assigns priorities in two phases
Phase One concerns the assignment of static priorities to tasks
Phase Two deals with the run-time behavior of the MUF scheduler
Maximum Urgency First Algorithm
The first phase consists of these steps :
1) It sorts the tasks from the shortest period to the longest period. Then it defines the critical set as the first N tasks such that the total CPU load factor does not exceed 100%. These tasks are guaranteed not to fail even during a transient overload.
2) All tasks in the critical set are assigned high criticality. The remaining tasks are considered to have low criticality.
3) Every task in the system is assigned an optional unique user priority
(MUF is used in CHIMERA II, a real-time operating system used to control sensor-based control systems.)
Maximum Urgency First Algorithm
In the second phase, the MUF scheduler follows an algorithm to select a task for execution.
This algorithm is executed whenever a new task arrives at the ready queue.
The algorithm is as follows:
1) If there is only one highly critical task, pick it up and execute it.
2) If there is more than one highly critical task, select the one with the highest dynamic priority. Here, the task with the least laxity is considered to be the one with the highest priority.
3) If there is more than one task with the same laxity, select the one with the highest user priority.
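A sketch of this selection step in C (the task fields are assumptions used only to illustrate the three tie-breaking rules):

#include <stddef.h>

typedef struct {
    int  ready;
    int  critical;        /* 1 = high criticality (critical set), 0 = low   */
    long laxity;          /* dynamic priority: smaller laxity = more urgent */
    int  user_priority;   /* optional fixed user priority                   */
} muf_task_t;

/* Returns nonzero if a is more urgent than b: criticality first,
   then least laxity, then highest user priority. */
static int more_urgent(const muf_task_t *a, const muf_task_t *b)
{
    if (a->critical != b->critical)
        return a->critical > b->critical;
    if (a->laxity != b->laxity)
        return a->laxity < b->laxity;
    return a->user_priority > b->user_priority;
}

/* MUF dispatcher, run whenever a new task arrives at the ready queue. */
muf_task_t *muf_pick(muf_task_t tasks[], size_t n)
{
    muf_task_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best == NULL || more_urgent(&tasks[i], best))
            best = &tasks[i];
    }
    return best;
}
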
In addition to basic scheduling and context switching, a real-time kernel typically provides other valuable services to applications, such as:
Time Delay
System Time
Inter-Process Communication (IPC)
Synchronization - semaphores or flags
Resource Protection - mutex
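For example, using POSIX primitives (the task and variable names are illustrative; a specific RTOS supplies its own equivalents):

#include <pthread.h>
#include <semaphore.h>

static sem_t           data_ready;  /* synchronization: signals that data is available      */
static pthread_mutex_t dev_lock;    /* resource protection: one task in the driver at a time */

void ipc_init(void)
{
    sem_init(&data_ready, 0, 0);        /* semaphore starts empty */
    pthread_mutex_init(&dev_lock, NULL);
}

void producer_task(void)
{
    /* ... produce data ... */
    sem_post(&data_ready);              /* wake the waiting consumer */
}

void consumer_task(void)
{
    sem_wait(&data_ready);              /* block until signaled */
    pthread_mutex_lock(&dev_lock);      /* exclusive access to the shared device */
    /* ... use the shared device ... */
    pthread_mutex_unlock(&dev_lock);
}
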
RTOS for small footprint, mobile and connected devices
Windows CE
(32-bit devices, minimum footprint of 400 KB, 256 priority levels)
RTOS for complex, hard real-time applications
LynxOS
(microkernel is 28 KB, 512 thread priority levels, supports memory protection)
General purpose RTOS in the embedded industry
VxWorks
(256 priority levels, multitasking, deterministic context switching,
preemptive and round-robin scheduling, binary and counting semaphores, mutual exclusion with inheritance, supports virtual memory configuration)
RTOS for the Java Platform
Jbed RTOS package
(runs on 32-bit microprocessors and controllers; current versions support ARM7, 68k, and PowerPC architectures; supports up to 10 thread priority levels and EDF scheduling)
Object-oriented RTOS
pSOSystem
Why Should I Use an RTOS?
It is true that many, or even most, applications can be written without the support of an RTOS.
A few reasons to consider using an RTOS:
The job of writing application software is generally easier using an RTOS, because the use of a kernel enforces certain disciplines in how your code is structured.
While the illusion of concurrency can often be created without an RTOS, doing so almost always results in a much more complex piece of software.
Disadvantages of Real-Time Kernels
Extra software cost of the kernel
More ROM/RAM
