Embedded Systems.. EASY questions!

I have these two questions, attached in “Assignment3”. Each answer should be about half a page long. All documents and references are included.



Question 1: Real-time issues in embedded systems

Write a discussion of the issues involved in designing real-time systems. Cover the following approaches: bare-metal programming, Linux or Unix operating systems, and microkernel designs.

Describe the advantages and potential problems of each of these types of system.

Some information and references can be found in the folder



Question 2: JTAG description and its use for testing and flash programming

To answer this question, you may cut and paste any diagrams and paraphrase, but preferably condense, any text you wish. Please reference all sources you use. Format your answer as you wish, but try to cover all the following points.

Using the references below and/or any others you can find, answer the following questions.

Briefly describe the principles and methods of boundary-scan testing through the JTAG interface. Refer to:

http://en.wikipedia.org/wiki/Joint_Test_Action_Group

And

http://boundaryscan.blogspot.ca/2010/04/tutorial-role-of-jtag-in-system-debug.html

and

http://www.corelis.com/education/JTAG_Tutorial.htm


Embedded Linux system development

Real-time in embedded Linux systems

Michael Opdenacker
Thomas Petazzoni
Gilles Chanteperdrix
Free Electrons

© Copyright 2004-2011, Free Electrons.
Creative Commons BY-SA 3.0 license
Latest update: Feb 21, 2011
Document sources, updates and translations:
http://free-electrons.com/docs/realtime
Corrections, suggestions, contributions and translations are welcome!


Real-Time in Embedded Linux Systems

Introduction


Embedded Linux and real time

Due to its advantages, Linux and open-source software are more and more commonly used in embedded applications.

However, some applications also have real-time constraints.

They want, at the same time, to:

Get all the nice advantages of Linux: hardware support, component re-use, low cost, etc.

Get their real-time constraints met


Embedded Linux and real time

Linux is an operating system belonging to the large Unix family.

It was originally designed as a time-sharing system.

The main goal is to get the best throughput from the available hardware, by making the best possible use of resources (CPU, memory, I/O).

Time determinism is not taken into account.

On the contrary, real-time constraints imply time determinism, even at the expense of lower global throughput.

Best throughput and time determinism are contradictory requirements.


Linux and real-time approaches

Over time, two major approaches have been taken to bring real-time requirements into Linux.

Approach 1

Improve the Linux kernel itself so that it matches real-time requirements, by providing bounded latencies, real-time APIs, etc.

Approach taken by the mainline Linux kernel and the PREEMPT_RT project.

Approach 2

Add a layer below the Linux kernel that will handle all the real-time requirements, so that the behaviour of Linux doesn't affect real-time tasks.

Approach taken by RTLinux, RTAI and Xenomai.


Approach 1
Improving the main Linux kernel with PREEMPT_RT


Understanding latency

When developing real-time applications with a system such as Linux, the typical scenario is the following:

An event in the physical world happens and is notified to the CPU by means of an interrupt.

The interrupt handler recognizes and handles the event, and then wakes up the user-space task that will react to it.

Some time later, the user-space task runs and is able to react to the physical-world event.

Real-time is about providing a guaranteed worst-case bound on this reaction time, called the latency.

[Diagram: while something unimportant is running, an interrupt arrives; how long until your important real-time task runs?]


Linux kernel latency components

[Timeline diagram: an interrupt arrives while a task is running; after the interrupt latency, the interrupt handler runs (handler duration) and makes the waiting task runnable; after the scheduler latency, the scheduler runs (scheduler duration) and switches to the waiting task. The interrupt handler runs in interrupt context, the rest in process context. The whole span is the scheduling latency.]

kernel latency = interrupt latency + handler duration + scheduler latency + scheduler duration


Interrupt latency

Interrupt latency is the time elapsed before the interrupt handler starts executing.

[Same timeline diagram as before, with the interrupt latency segment highlighted.]


Sources of interrupt latency

One of the concurrency prevention mechanisms used in the kernel is the spinlock.

It has several variants, but one variant commonly used to prevent concurrent accesses between a process context and an interrupt context works by disabling interrupts.

Critical sections protected by spinlocks, or other sections in which interrupts are explicitly disabled, delay the beginning of the execution of the interrupt handler.

The duration of these critical sections is unbounded.

Another possible source: shared interrupts.

[Diagram: an interrupt arriving while kernel code is inside a spinlock-protected critical section must wait until the critical section ends before its handler can run.]


Interrupt handler duration

The interrupt handler duration is the time taken to execute the interrupt handler.

[Same timeline diagram as before, with the handler duration segment highlighted.]


Interrupt handler implementation

In Linux, many interrupt handlers are split in two parts:

A top half, started by the CPU as soon as interrupts are enabled. It runs with the interrupt line disabled and is supposed to complete as quickly as possible.

A bottom half, scheduled by the top half, which starts after all pending top halves have completed their execution.

Therefore, for real-time critical interrupts, bottom halves shouldn't be used: their execution is delayed by all the other interrupts in the system.

[Diagram: the top half acknowledges the interrupt, handles the device data, schedules the bottom half and exits; after the other interrupt handlers, the bottom half runs and wakes up the waiting tasks, which then run in user space.]


Scheduler latency

The scheduler latency is the time elapsed before the scheduler starts executing.

[Same timeline diagram as before, with the scheduler latency segment highlighted.]


Understanding preemption (1)

The Linux kernel is a preemptive operating system.

When a task runs in user-space mode and gets interrupted by an interrupt, if the interrupt handler wakes up another task, that task can be scheduled as soon as we return from the interrupt handler.

[Diagram: Task A runs in user mode; an interrupt handler wakes up Task B; Task B runs in user mode as soon as the handler returns.]


Understanding preemption (2)

However, when the interrupt arrives while the task is executing a system call, this system call has to finish before another task can be scheduled.

By default, the Linux kernel does not do kernel preemption.

This means that the time before the scheduler is called to schedule another task is unbounded.

[Diagram: Task A enters a system call (kernel mode); an interrupt handler wakes up Task B, but Task A continues in kernel mode until the system call returns; only then can Task B run in user mode.]


Scheduler duration

The scheduler duration is the time taken to execute the scheduler and switch to the new task.

[Same timeline diagram as before, with the scheduler duration segment highlighted.]


Other non-deterministic mechanisms

Outside of the critical path detailed previously, other non-deterministic mechanisms of Linux can affect the execution time of real-time tasks.

Linux relies heavily on virtual memory, as provided by an MMU, so that memory is allocated on demand. Whenever an application accesses code or data for the first time, it is loaded on demand, which can create huge delays.

Many C library services and kernel services are not designed with real-time constraints in mind.


Priority inversion

[Diagram: a low-priority process acquires a lock, then is preempted; a high-priority process tries to take the same lock and has to wait.]

A process with a low priority might hold a lock needed by a higher-priority process, effectively reducing the priority of the higher-priority process. Things can be even worse if a middle-priority process uses the CPU in the meantime.


Interrupt handler priority

[Diagram: any interrupt preempts even the top-priority task.]

In Linux, interrupt handlers are executed directly by the CPU interrupt mechanisms, not under the control of the Linux scheduler. Therefore, all interrupt handlers have a higher priority than all tasks running on the system.


The PREEMPT_RT project

Long-term project led by Linux kernel developers Ingo Molnar, Thomas Gleixner and Steven Rostedt.

https://rt.wiki.kernel.org

The goal is to gradually improve the Linux kernel regarding real-time requirements and to get these improvements merged into the mainline kernel.

PREEMPT_RT development works very closely with mainline development.

Many of the improvements designed, developed and debugged inside PREEMPT_RT over the years are now part of the mainline Linux kernel.

The project is a long-term branch of the Linux kernel that should ultimately disappear, once everything has been merged.


Improvements in the mainline kernel coming from the PREEMPT_RT project

Since the beginning of 2.6:
O(1) scheduler
Kernel preemption
Better POSIX real-time API support

Since 2.6.18:
Priority inheritance support for mutexes

Since 2.6.21:
High-resolution timers

Since 2.6.30:
Threaded interrupts

Since 2.6.33:
Spinlock annotations


New preemption options in Linux 2.6

Two new preemption models are offered by standard Linux 2.6:


1st option: no forced preemption

CONFIG_PREEMPT_NONE
Kernel code (interrupts, exceptions, system calls) never preempted.
Default behavior in standard kernels.

Best for systems making intense computations,
on which overall throughput is key.

Best to reduce task switching to maximize CPU and cache usage
(by reducing context switching).

Still benefits from some Linux 2.6 improvements:
O(1) scheduler, increased multiprocessor safety (work on RT 
preemption was useful to identify hard to find SMP bugs).

Can also benefit from a lower timer frequency
(100 Hz instead of 250 or 1000).


2nd option: voluntary kernel preemption

CONFIG_PREEMPT_VOLUNTARY
Kernel code can preempt itself

Typically for desktop systems, for quicker application reaction to 
user input.

Adds explicit rescheduling points throughout kernel code.

Minor impact on throughput.


3rd option: preemptible kernel

CONFIG_PREEMPT
Most kernel code can be involuntarily preempted at any time.
When a process becomes runnable, there is no longer any need to wait for kernel code (typically a system call) to return before running the scheduler.

Exception: kernel critical sections (holding spinlocks), but a 
rescheduling point occurs when exiting the outer critical section,
in case a preemption opportunity would have been signaled while
in the critical section.

Typically for desktop or embedded systems with latency 
requirements in the milliseconds range.

Still a relatively minor impact on throughput.


Priority inheritance

One classical solution to the priority inversion problem is called priority inheritance.

The idea is that when a low-priority task holds a lock requested by a higher-priority task, the priority of the first task is temporarily raised to the priority of the second task: it has inherited its priority.

In Linux, since 2.6.18, mutexes support priority inheritance.

In user space, priority inheritance must be explicitly enabled on a per-mutex basis.


High resolution timers

The resolution of the timers used to be bound to the resolution of the regular system tick.

Usually 100 Hz or 250 Hz, depending on the architecture and the configuration.

A resolution of only 10 ms or 4 ms.

Increasing the regular system tick frequency is not an option, as it would consume too many resources.

The high-resolution timers infrastructure, merged in 2.6.21, makes it possible to use the available hardware timers to program interrupts at the right moment.

Hardware timers are multiplexed, so that a single hardware timer is sufficient to handle a large number of software-programmed timers.

Usable directly from user space using the usual timer APIs.


Threaded interrupts

To solve the interrupt priority problem described above, PREEMPT_RT has introduced the concept of threaded interrupts.

The interrupt handlers run in normal kernel threads, so that the priorities of the different interrupt handlers can be configured.

The real interrupt handler, as executed by the CPU, is only in charge of masking the interrupt and waking up the corresponding thread.

The idea of threaded interrupts also makes it possible to use sleeping spinlocks (see later).

Merged since 2.6.30; the conversion of interrupt handlers to threaded interrupts is not automatic: drivers must be modified.

In PREEMPT_RT, all interrupt handlers are switched to threaded interrupts.


PREEMPT_RT specifics


CONFIG_PREEMPT_RT (1)

The PREEMPT_RT patch adds a new « level » of preemption, called CONFIG_PREEMPT_RT.

This level of preemption replaces kernel spinlocks by mutexes (so-called sleeping spinlocks).

Instead of providing mutual exclusion by disabling interrupts and preemption, they are just normal locks: when contention happens, the process is blocked and another one is selected by the scheduler.

This works well with threaded interrupts, since threads can block, while ordinary interrupt handlers could not.

Some core, carefully controlled kernel spinlocks remain normal spinlocks.


CONFIG_PREEMPT_RT (2)

With CONFIG_PREEMPT_RT, virtually all kernel code becomes preemptible.

An interrupt can occur at any time; when returning from the interrupt handler, the woken-up process can start immediately.

This is the last big part of PREEMPT_RT that isn't fully in the mainline kernel yet.

Part of it has been merged in 2.6.33: the spinlock annotations. The spinlocks that must remain spinning spinlocks are now differentiated from the spinlocks that can be converted to sleeping spinlocks. This has greatly reduced the size of the PREEMPT_RT patch!


Threaded interrupts

The mechanism of threaded interrupts in PREEMPT_RT is still 
different from the one merged in mainline

In PREEMPT_RT, all interrupt handlers are unconditionally 
converted to threaded interrupts.

This is a temporary solution, until interesting drivers in mainline 
get gradually converted to the new threaded interrupt API that 
has been merged in 2.6.30.


Setting up PREEMPT_RT


PREEMPT_RT setup (1)

PREEMPT_RT is delivered as a patch against the mainline 
kernel

Best to have a board supported by the mainline kernel, otherwise 
the PREEMPT_RT patch may not apply and may require some 
adaptations

Many official kernel releases are supported, but not all. For 
example, 2.6.31 and 2.6.33 are supported, but not 2.6.32.

Quick set up

Download and extract mainline kernel

Download the corresponding PREEMPT_RT patch

Apply it to the mainline kernel tree


PREEMPT_RT setup (2)

In the kernel configuration, be sure to enable:
CONFIG_PREEMPT_RT
High-resolution timers

Compile your kernel, and boot.

You are now running the real-time Linux kernel.

Of course, some system configuration remains to be done, in particular setting appropriate priorities for the interrupt threads, which depend on your application.


Real-time application development


Development and compilation

No special library is needed: the POSIX real-time API is part of the standard C library.

The glibc or eglibc C libraries are recommended, as support for some real-time features is not yet available in uClibc (priority inheritance mutexes or NPTL on some architectures, for example).

Compiling a program:

ARCH-linux-gcc -o myprog myprog.c -lrt

To get the documentation of the POSIX API:

Install the manpages-posix-dev package

Run man <function_name>


Process, thread ?

Confusion about the terms «process», «thread» and «task»

In Unix, a process is created using fork() and is composed of

An address space, which contains the program code, data, stack, 
shared libraries, etc.

One thread, that starts executing the main() function.

Upon creation, a process contains one thread

Additional threads can be created inside an existing process, 
using pthread_create()

They run in the same address space as the initial thread of the 
process

They start executing a function passed as argument to 
pthread_create()


Process, thread: kernel point of view

The kernel represents each thread running in the system by a 
structure of type task_struct

From a scheduling point of view, the kernel makes no distinction between the initial thread of a process and the additional threads created dynamically using pthread_create().

[Diagram: after fork(), the process has one address space containing thread A; after pthread_create(), the same process has the same address space containing threads A and B.]


Creating threads

Linux supports the POSIX thread API.

To create a new thread:

pthread_create(pthread_t *thread,
               pthread_attr_t *attr,
               void *(*routine)(void *),
               void *arg);

The new thread will run in the same address space, but will be scheduled independently.

Exiting from a thread:

pthread_exit(void *value_ptr);

Waiting for a thread's termination:

pthread_join(pthread_t thread, void **value_ptr);
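
As a minimal, hedged sketch of these calls in use (the thread function worker and its argument are illustrative names, not from the original; compile with ARCH-linux-gcc -o test test.c -lpthread):

#include <pthread.h>
#include <stdio.h>

/* Function executed by the new thread */
static void *worker(void *arg)
{
    printf("Hello from the new thread: %s\n", (const char *)arg);
    return NULL;                 /* equivalent to pthread_exit(NULL) */
}

int main(void)
{
    pthread_t tid;
    void *result;

    /* Create the thread with default attributes (NULL) */
    if (pthread_create(&tid, NULL, worker, "some argument") != 0)
        return 1;

    /* Wait for the thread to terminate and collect its return value */
    pthread_join(tid, &result);
    return 0;
}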


Scheduling classes (1)

The Linux kernel scheduler supports different scheduling classes.

The default class, in which processes are started by default, is a time-sharing class:

All processes, regardless of their priority, get some CPU time.

The proportion of CPU time they get is dynamic and affected by the nice value, which ranges from -20 (highest) to 19 (lowest). It can be set using the nice or renice commands.

The real-time classes SCHED_FIFO and SCHED_RR:

The highest-priority process gets all the CPU time, until it blocks.

In SCHED_RR, round-robin scheduling is used between the processes of the same priority. All must block before lower-priority processes get CPU time.

Priorities range from 0 (lowest) to 99 (highest).


Scheduling classes (2)

An existing program can be started in a specific scheduling class, with a specific priority, using the chrt command line tool.

Example: chrt -f 99 ./myprog

The sched_setscheduler() API can be used to change the scheduling class and priority of a process:

int sched_setscheduler(pid_t pid, int policy,
                       const struct sched_param *param);

policy can be SCHED_OTHER, SCHED_FIFO, SCHED_RR, etc.

param is a structure containing the priority.
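
A minimal sketch of a process switching itself to SCHED_FIFO (the priority value 80 is an arbitrary example; this typically requires root privileges or CAP_SYS_NICE):

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param;

    param.sched_priority = 80;   /* example priority, valid range 1..99 for SCHED_FIFO */

    /* pid 0 means "the calling process" */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    /* From here on, the process runs under the SCHED_FIFO policy */
    return 0;
}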


Scheduling classes (3)

The priority can be set on a per-thread basis when a thread is created:

struct sched_param parm;
pthread_attr_t attr;

pthread_attr_init(&attr);
pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
parm.sched_priority = 42;
pthread_attr_setschedparam(&attr, &parm);

Then the thread can be created using pthread_create(), passing the attr structure, as in the sketch below.

Several other attributes can be defined this way: stack size, etc.
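
A self-contained version of this sequence (the thread function rt_thread_func and its contents are assumed placeholders, not part of the original slide; run with sufficient privileges so SCHED_FIFO creation succeeds):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* rt_thread_func stands in for the real-time thread body */
static void *rt_thread_func(void *arg)
{
    /* ... periodic real-time work ... */
    return NULL;
}

int main(void)
{
    struct sched_param parm;
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    parm.sched_priority = 42;
    pthread_attr_setschedparam(&attr, &parm);

    /* Create the thread with the SCHED_FIFO attributes prepared above */
    if (pthread_create(&tid, &attr, rt_thread_func, NULL) != 0)
        perror("pthread_create");

    pthread_attr_destroy(&attr);   /* no longer needed once the thread exists */
    pthread_join(tid, NULL);
    return 0;
}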


Memory locking

In order to solve the non-determinism introduced by virtual memory, memory can be locked:

Guarantee that the system will keep it allocated

Guarantee that the system has pre-loaded everything into memory

mlockall(MCL_CURRENT | MCL_FUTURE);

Locks all the memory of the current address space, for currently mapped pages and pages mapped in the future.

Other, less useful parts of the API: munlockall(), mlock(), munlock().

Watch out for pages that are not currently mapped:

Stack pages

Dynamically-allocated memory
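
A hedged sketch of locking memory and pre-faulting the stack and heap at program start (the stack and buffer sizes are arbitrary example values):

#include <sys/mman.h>
#include <string.h>
#include <stdlib.h>

#define PREFAULT_STACK_SIZE (64 * 1024)   /* arbitrary example value */

static void prefault_stack(void)
{
    unsigned char dummy[PREFAULT_STACK_SIZE];
    /* Touch every page so the stack is already mapped before real-time execution */
    memset(dummy, 0, sizeof(dummy));
}

int main(void)
{
    /* Lock current and future mappings into RAM */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
        exit(1);

    prefault_stack();

    /* Dynamically allocated memory should also be touched once before real-time use */
    char *buf = malloc(4096);
    memset(buf, 0, 4096);

    /* ... real-time work ... */
    return 0;
}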


Mutexes

Allow mutual exclusion between two threads in the same address space.

Initialization/destruction:
pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr);
pthread_mutex_destroy(pthread_mutex_t *mutex);

Lock/unlock:
pthread_mutex_lock(pthread_mutex_t *mutex);
pthread_mutex_unlock(pthread_mutex_t *mutex);

Priority inheritance must be explicitly activated:
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
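
Putting these calls together, a minimal sketch of creating and then using a priority-inheritance mutex (error checking omitted for brevity):

#include <pthread.h>

static pthread_mutex_t lock;

static void init_pi_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* Enable priority inheritance for this mutex */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}

static void critical_section(void)
{
    pthread_mutex_lock(&lock);
    /* ... access the shared data ... */
    pthread_mutex_unlock(&lock);
}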


Timers

timer_create(clockid_t clockid,
             struct sigevent *evp,
             timer_t *timerid)

Creates a timer. clockid is usually CLOCK_MONOTONIC. sigevent defines what happens upon timer expiration: send a signal or start a function in a new thread. timerid is the returned timer identifier.

timer_settime(timer_t timerid, int flags,
              struct itimerspec *newvalue,
              struct itimerspec *oldvalue)

Configures the timer for expiration at a given time.

timer_delete(timer_t timerid): deletes a timer

clock_getres(): gets the resolution of a clock

Other functions: timer_getoverrun(), timer_gettime()
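
A minimal sketch of a periodic timer that delivers SIGRTMIN every millisecond (the period and signal choice are arbitrary examples; link with -lrt):

#include <signal.h>
#include <string.h>
#include <time.h>

int main(void)
{
    timer_t timerid;
    struct sigevent sev;
    struct itimerspec its;

    /* Ask for SIGRTMIN to be sent on each expiration */
    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;

    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    /* First expiration after 1 ms, then every 1 ms */
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 1000000;
    its.it_interval = its.it_value;
    timer_settime(timerid, 0, &its, NULL);

    /* ... wait for the signal and do the periodic work ... */

    timer_delete(timerid);
    return 0;
}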


Signals

Signals are an asynchronous notification mechanism.

Notification occurs either:

By the call of a signal handler. Be careful with the limitations of signal handlers!

By being unblocked from the sigwait(), sigtimedwait() or sigwaitinfo() functions. Usually better.

Signal behaviour can be configured using sigaction().

The mask of blocked signals can be changed with pthread_sigmask().

Delivery of a signal uses pthread_kill() or tgkill().

Real-time signals: all signals between SIGRTMIN and SIGRTMAX, 32 signals under Linux.
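
A sketch of the synchronous pattern recommended above: block SIGRTMIN, then wait for it with sigwaitinfo() (single-threaded for simplicity; a threaded program would use pthread_sigmask() instead of sigprocmask()):

#include <signal.h>
#include <stdio.h>

int main(void)
{
    sigset_t set;
    siginfo_t info;

    /* Block SIGRTMIN so it is not delivered to a handler... */
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);

    for (;;) {
        /* ...and receive it synchronously here instead */
        if (sigwaitinfo(&set, &info) == SIGRTMIN)
            printf("got SIGRTMIN, value = %d\n", info.si_value.sival_int);
    }
}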


Inter-process communication

Semaphores

Usable between different processes using named semaphores

sem_open(), sem_close(), sem_unlink(), sem_init(), 
sem_destroy(), sem_wait(), sem_post(), etc.

Message queues

Allows processes to exchange data in the form of messages. 

mq_open(), mq_close(), mq_unlink(), mq_send(), 
mq_receive(), etc.

Shared memory

Allows processes to communicate by sharing a segment of memory

shm_open(), ftruncate(), mmap(), munmap(), 
close(), shm_unlink()
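
As an example of one of these mechanisms, a hedged sketch of a POSIX message queue sender (the queue name, sizes and payload are arbitrary; a reader would call mq_receive() with a buffer of at least mq_msgsize bytes; link with -lrt):

#include <mqueue.h>
#include <string.h>
#include <fcntl.h>

int main(void)
{
    struct mq_attr attr;
    mqd_t q;
    const char *msg = "sensor sample";   /* example payload */

    attr.mq_flags = 0;
    attr.mq_maxmsg = 10;     /* example queue depth */
    attr.mq_msgsize = 64;    /* example maximum message size */
    attr.mq_curmsgs = 0;

    /* Create (or open) a named queue visible to other processes */
    q = mq_open("/myqueue", O_CREAT | O_WRONLY, 0600, &attr);

    /* Send the message with priority 0 */
    mq_send(q, msg, strlen(msg) + 1, 0);

    mq_close(q);
    return 0;
}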


Debugging real-time latencies


ftrace - Kernel function tracer

New infrastructure that can be used for debugging or analyzing latencies and performance issues in the kernel.

Developed by Steven Rostedt. Merged in 2.6.27.
For earlier kernels, it can be found in the rt-preempt patches.

Very well documented in Documentation/ftrace.txt

Negligible overhead when tracing is not enabled at run time.

Can be used to trace any kernel function!

See our video of Steven's tutorial at OLS 2008:
http://free-electrons.com/community/videos/conferences/


Using ftrace

Tracing information is available through the debugfs virtual filesystem (CONFIG_DEBUG_FS in the Kernel Hacking section).

Mount this filesystem as follows:
mount -t debugfs nodev /debug

When tracing is enabled (see the next slides), tracing information is available in /debug/tracing.

Check the available tracers in /debug/tracing/available_tracers


Scheduling latency tracer

CONFIG_SCHED_TRACER (Kernel Hacking section)

Maximum recorded time between waking up a top-priority task and its scheduling on a CPU, expressed in µs.

Check that wakeup is listed in /debug/tracing/available_tracers

To select, reset and enable this tracer:
echo wakeup > /debug/tracing/current_tracer
echo 0 > /debug/tracing/tracing_max_latency
echo 1 > /debug/tracing/tracing_enabled

Let your system run, in particular real-time tasks.
Example: chrt -f 5 sleep 1

Disable tracing:
echo 0 > /debug/tracing/tracing_enabled

Read the maximum recorded latency and the corresponding trace:
cat /debug/tracing/tracing_max_latency


Useful reading

About real-time support in the standard Linux kernel:

Internals of the RT Patch, Steven Rostedt, Red Hat, June 2007
http://www.kernel.org/doc/ols/2007/ols2007v2-pages-161-172
Definitely worth reading.

The Real-Time Linux Wiki: http://rt.wiki.kernel.org
“The Wiki Web for the CONFIG_PREEMPT_RT community, and real-time Linux in general.”
Contains nice and useful documents!

See also our books page.


Approach 2
Real-time extensions to the Linux kernel


Linux real-time extensions

Three generations:

RTLinux

RTAI

Xenomai

A common principle: add an extra layer between the hardware and the Linux kernel to manage real-time tasks separately.

[Diagram: a micro-kernel sits between the hardware and the Linux kernel; real-time tasks run directly on the micro-kernel, alongside the Linux kernel and its own tasks.]


RTLinux

First real-time extension for Linux, created by Victor Yodaiken.

Nice, but the author filed a software patent covering the addition of real-time support to general operating systems as implemented in RTLinux!

Its Open Patent License drew many developers away and frightened users. Community projects like RTAI and Xenomai now attract most developers and users.

February 2007: RTLinux rights sold to Wind River.
Now supported by Wind River as “Real-Time Core for Wind River Linux.”

A free version is still advertised by Wind River on http://www.rtlinuxfree.com, but it is no longer a community project.


RTAI

http://www.rtai.org/ - Real-Time Application Interface for Linux

Created in 1999 by Prof. Paolo Mantegazza (long-time contributor to RTLinux), Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano (DIAPM).

Community project. Significant user base.
Attracted contributors frustrated by the RTLinux legal issues.

Only really actively maintained on x86.

May offer slightly better latencies than Xenomai, at the expense of a less maintainable and less portable code base.

Since RTAI is not really maintained on ARM and other embedded architectures, our presentation focuses on Xenomai.


Xenomai project

http://www.xenomai.org/

Started in 2001 as a project aiming at emulating traditional RTOSes.

Initial goal: facilitate the porting of programs to GNU/Linux.

Initially related to the RTAI project (as the RTAI/fusion branch), now independent.

Skins mimicking the APIs of traditional RTOSes such as VxWorks, pSOS+ and VRTXsa, as well as the POSIX API, and a “native” API.

Aims at working both as a co-kernel and on top of PREEMPT_RT in the upcoming 3.0 branch.

Will never be merged into the mainline kernel.


Xenomai architecture

[Diagram: the Adeos I-Pipe sits above the hardware. On top of it run the Xenomai RTOS (nucleus) and the Linux kernel (system calls, VFS, network, memory management, ...). In user space, a VxWorks application uses glibc plus the Xenomai libvxworks skin, a POSIX application uses glibc plus the Xenomai libpthread_rt skin, and a plain Linux application uses glibc only. The Xenomai skins and nucleus are the pieces added by Xenomai.]


The Adeos interrupt pipeline abstraction

From the Adeos point of view, guest OSes are prioritized domains.

For each event (interrupts, exceptions, syscalls, etc.), the various domains may handle the event or pass it down the pipeline.


Adeos virtualized interrupt disabling

Each domain may be “stalled”, meaning that it does not accept interrupts.

Hardware interrupts are not disabled, however (except for the domain leading the pipeline); instead, the interrupts received during that time are logged and replayed when the domain is unstalled.


Adeos additional features

The Adeos I-pipe patch implements additional features, essential for the implementation of the Xenomai real-time extension:

Disables on-demand mapping of kernel-space vmalloc/ioremap areas.

Disables copy-on-write when real-time processes are forking.

Allows subscribing to events in order to follow the progress of the Linux kernel, such as Linux system calls, context switches, process destructions, POSIX signals and FPU faults.

On the ARM architecture, integrates the FCSE patch, which reduces the latency induced by cache flushes during context switches.


Xenomai features

Factored real-time core with skins implementing various real-time APIs.

Seamless support for hard real-time in user space.

No second-class citizens: all ports are equivalent feature-wise.

Xenomai support is as independent as possible from the Linux kernel version (backward and forward compatible when reasonable).

Each Xenomai branch has a stable user/kernel ABI.

Timer system based on hardware high-resolution timers.

Per-skin time base, which may be periodic.

RTDM skin for writing real-time drivers.


Xenomai user-space real-time support

Xenomai supports real-time in user space on 5 architectures, including 32- and 64-bit variants.

Two modes are defined for a thread:

the primary mode, where the thread is handled by the Xenomai scheduler;

the secondary mode, where it is handled by the Linux scheduler.

Thanks to the Adeos I-pipe, Xenomai system calls are defined.

A thread migrates from secondary mode to primary mode when such a system call is issued.

It migrates from primary mode to secondary mode when a Linux system call is issued, or to gracefully handle exceptional events such as exceptions or Linux signals.


Life of a Xenomai application

Xenomai applications are started like normal Linux processes; they are initially handled by the Linux scheduler and have access to all Linux services.

After their initialization, they declare themselves as real-time applications, which migrates them to primary mode. In this mode:

They are scheduled directly by the Xenomai scheduler, so they have the real-time properties offered by Xenomai.

They must not use any Linux service, otherwise they get migrated back to secondary mode and lose all real-time properties.

They can only use device drivers that are implemented in Xenomai, not the ones of the Linux kernel.

You therefore need to implement device drivers in Xenomai, and to split the real-time and non-real-time parts of your applications.


Real-Time Driver Model (RTDM)

An approach to unify the interfaces for developing device drivers and associated applications under real-time Linux.

An API very similar to the native Linux kernel driver API.

Allows the development, in kernel space, of:

Character-style device drivers

Network-style device drivers

See the whitepaper at
http://www.xenomai.org/documentation/xenomai-2.4/pdf/RTDM-and-Applications

Current notable RTDM-based drivers:

Serial port controllers

RTnet UDP/IP stack

RT Socket CAN, drivers for CAN controllers

Analogy, a fork of the Comedi project, drivers for acquisition cards


Setting up Xenomai


How to build Xenomai

Download the Xenomai sources at http://download.gna.org/xenomai/stable/

Download one of the Linux versions supported by this release (see ksrc/arch//patches/).

Since version 2.0, a split kernel/user building model is used.

The kernel side uses a script called script/prepare-kernel.sh, which integrates the Xenomai kernel-space support into the Linux sources.

Then run the kernel configuration menu.


Linux options for Xenomai configuration


Xenomai user-space support

User-space libraries are compiled using the traditional autotools:

./configure --target=arm-linux && make &&
make DESTDIR=/your/rootfs/ install

The xeno-config script, installed when installing the Xenomai user-space support, helps you compile your own programs.

See Xenomai's examples directory.

Installation details may be found in the README.INSTALL guide.

For an introduction to programming with the native API, see:
http://www.xenomai.org/documentation/branches/v2.3.x/pdf/Native-API-Tour-rev-C

For an introduction to programming with the POSIX API, see:
http://www.xenomai.org/index.php/Porting_POSIX_applications_to_Xenomai


Developing applications on Xenomai


The POSIX skin

The POSIX skin allows recompiling a traditional POSIX application without changes, so that instead of using Linux real-time services it uses Xenomai services:

Clocks and timers, condition variables, message queues, mutexes, semaphores, shared memory, signals, thread management.

Good for existing code or programmers familiar with the POSIX API.

Of course, if the application uses any Linux service that isn't available in Xenomai, it will switch back to secondary mode.

To link an application against the POSIX skin:
DESTDIR=/path/to/xenomai/
export DESTDIR
CFL=`$DESTDIR/bin/xeno-config --posix-cflags`
LDF=`$DESTDIR/bin/xeno-config --posix-ldflags`
ARCH-gcc $CFL -o rttest rttest.c $LDF


Communication with a normal task

If a Xenomai real-time application using the POSIX skin wishes to communicate with a separate non-real-time application, it must use the rtipc mechanism.

In the Xenomai application, create an IPCPROTO_XDDP socket:
s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP);
setsockopt(s, SOL_RTIPC, XDDP_SETLOCALPOOL, &poolsz, sizeof(poolsz));
memset(&saddr, 0, sizeof(saddr));
saddr.sipc_family = AF_RTIPC;
saddr.sipc_port = MYAPPIDENTIFIER;
ret = bind(s, (struct sockaddr *)&saddr, sizeof(saddr));

Then use the normal socket API: sendto() / recvfrom().

In the Linux application:

Open /dev/rtpX, where X is the XDDP port.

Use read() and write().


The native API (1)

A Xenomai-specific API for developing real-time tasks.

Usable both in user space and kernel space. Development of tasks in user space is the preferred way.

A more coherent and more flexible API than the POSIX API. Easier to learn and understand. Certainly the way to go for new applications.

Applications should include <native/service.h>, where service can be alarm, buffer, cond, event, heap, intr, misc, mutex, pipe, queue, sem, task, timer.

To compile applications:
DESTDIR=/path/to/xenomai/
export DESTDIR
CFL=`$DESTDIR/bin/xeno-config --xeno-cflags`
LDF=`$DESTDIR/bin/xeno-config --xeno-ldflags`
ARCH-gcc $CFL -o rttest rttest.c $LDF -lnative


The native API (2)

Task management services

rt_task_create(), rt_task_start(), 
rt_task_suspend(), rt_task_resume(), 
rt_task_delete(), rt_task_join(), etc.

Counting semaphore services

rt_sem_create(), rt_sem_delete(), rt_sem_p(), 
rt_sem_v(), etc.

Message queue services

rt_queue_create(), rt_queue_delete(), 
rt_queue_alloc(), rt_queue_free(), 
rt_queue_send(), rt_queue_receive(), etc.

Mutex services

rt_mutex_create(), rt_mutex_delete(), 
rt_mutex_acquire(), rt_mutex_release(), etc.


The native API (3)

Alarm services

rt_alarm_create(), rt_alarm_delete(), 
rt_alarm_start(), rt_alarm_stop(), 
rt_alarm_wait(), etc.

Memory heap services

Allow sharing memory between processes and/or pre-allocating a pool of memory

rt_heap_create(), rt_heap_delete(), 
rt_heap_alloc(), rt_heap_bind()

Condition variable services

rt_cond_create(), rt_cond_delete(), 
rt_cond_signal(), rt_cond_broadcast(), 
rt_cond_wait(), etc.


Xenomai and normal task communication

Using rt_pipes:

In the native Xenomai application, use the Pipe API:

rt_pipe_create(), rt_pipe_delete(),
rt_pipe_receive(), rt_pipe_send(),
rt_pipe_alloc(), rt_pipe_free()

In the normal Linux application:

Open the corresponding /dev/rtpX file; the minor number is specified at rt_pipe_create() time.

Then just read() and write() to the opened file.

[Diagram: the Xenomai application uses the rt_pipe_*() API, while the Linux application uses open("/dev/rtpX") and the regular read()/write() calls.]


Real-time approaches

The following is Paul McKenney's summary of his own article describing the various approaches for real-time on Linux, comparing each approach's latency quality, complexity, fault isolation, supported hardware/software configurations, inspection facilities and API. The recoverable per-approach highlights are:

Vanilla Linux: latencies of 10s of ms for all services; POSIX + RT API; all hardware/software configurations.

PREEMPT: latencies of 100s of µs for scheduling and interrupts; relies on preemption or irq disabling; POSIX + RT API; all configurations.

PREEMPT_RT: latencies of 10s of µs for scheduling and interrupts; preemption and irq disabling (most interrupts in process context, mostly drivers); a “modest” patch requiring careful tuning; POSIX + RT API; all configurations except some drivers.

Nested OS (co-kernel): ~10 µs latencies for RTOS services; RTOS plus hardware irq disabling; dual environment; RTOS API (can be POSIX RT); rated Good; all configurations.

Dual-OS/Dual-Core (ASMP): <1 µs latencies for RTOS services; dual environment; RTOS API (can be POSIX RT); rated Excellent; specialized configurations.

Migration between OSes: ? µs latencies for RTOS services; RTOS plus hardware irq disabling; RTOS API (can be POSIX RT); dual environment (easy mix); all configurations.

Migration within OS: ? µs latencies for RTOS services; scheduler and RTOS services; small patch; all configurations(?).

Full story at http://lwn.net/Articles/143323


Books

Building Embedded Linux Systems, O'Reilly
By Karim Yaghmour, Jon Masters, Gilad Ben-Yossef, Philippe Gerum and others (including Michael Opdenacker), August 2008

A nice coverage of Xenomai (Philippe Gerum) and the RT patch (Steven Rostedt)
http://oreilly.com/catalog/9780596529680/


Organizations

http://www.realtimelinuxfoundation.org/
Community portal for real-time Linux.
Organizes a yearly workshop.

http://www.osadl.org
Open Source Automation Development Lab (OSADL)
Created as an equivalent of OSDL for machine and plant control systems. Member companies are German so far (Thomas Gleixner is on board). One of their goals is to support the development of the RT preempt patches in the mainline Linux kernel (HOWTOs, live CD, patches).


Related documents

All our technical presentations are available at http://free-electrons.com/docs

Linux kernel
Device drivers
Architecture specifics
Embedded Linux system development


How to help

You can help us to improve and maintain this document…

By sending corrections, suggestions, contributions and translations.

By asking your organization to order development, consulting and training services performed by the authors of these documents (see http://free-electrons.com/).

By sharing this document with your friends, colleagues and with the local Free Software community.

By adding links on your website to our on-line materials, to increase their visibility in search engine results.

Free Electrons - Our services

Custom Development
System integration
Embedded Linux demos and prototypes
System optimization
Application and interface development

Embedded Linux Training
All materials released with a free license!
Unix and GNU/Linux basics
Linux kernel and drivers development
Real-time Linux, uClinux
Development and profiling tools
Lightweight tools for embedded systems
Root filesystem creation
Audio and multimedia
System optimization

Consulting and technical support
Help in decision making
System architecture
System design and performance review
Development tool and application support
Investigating issues and fixing tool bugs

Linux kernel
Linux device drivers
Board support code
Mainstreaming kernel code
Kernel debugging

Week 12

Real-time kernel

Ref: Extracts from: http://www.swd.de/documents/manuals/sysarch/intro_en.html#id1

The QNX Operating System
QNX provides multitasking, priority-driven preemptive scheduling, and fast context switching
QNX is flexible. It can be customized from a “bare-bones” kernel with a few small modules to a network-wide system equipped to serve hundreds of users.
QNX achieves this through two fundamental principles:
microkernel architecture
message-based interprocess communication

QNX’s microkernel architecture

Architecture
QNX Microkernel is dedicated to only two essential functions:
message passing – the Microkernel handles the routing of all messages among all processes throughout the entire system
scheduling – the scheduler is a part of the Microkernel and is invoked whenever a process changes state as the result of a message or interrupt
Unlike processes, the Microkernel itself is never scheduled for execution. It is entered only as the direct result of kernel calls, either from a process or from a hardware interrupt

QNX configuration
A typical QNX configuration has the following system processes:
Process Manager (Proc)
Filesystem Manager (Fsys)
Device Manager (Dev)
Network Manager (Net)
System processes vs. user-written processes
System processes are like user processes: they have no private or hidden interfaces that are unavailable to user processes.
This gives QNX unparalleled extensibility: the OS itself can be augmented by user written programs.
The only real difference between system services and applications is that OS services manage resources for clients.

Drivers
Since drivers start up as standard processes, adding a new driver to QNX doesn’t affect any other part of the operating system.
The only change you need to make to your QNX environment is to actually start the new driver.
Once they’ve completed their initialization, drivers can do either of the following:
choose to disappear as standard processes, simply becoming extensions to the system process they’re associated with
retain their individual identity as standard processes

QNX as a message-passing operating system
In QNX, a message is a packet of bytes passed from one process to another.
the data in a message has meaning for the sender of the message and for its receiver, but for no one else.
Message passing not only allows processes to pass data to each other, but also provides a means of synchronizing the execution of several processes.
As they send, receive, and reply to messages, processes undergo various “changes of state” that affect when, and for how long, they may run.
Knowing their states and priorities, the Microkernel can schedule all processes as efficiently as possible to make the most of available CPU resources. This single, consistent method – message-passing – is thus constantly operative throughout the entire system.

The Microkernel
The QNX Microkernel is responsible for :
IPC
the Microkernel supervises the routing of messages; it also manages two other forms of IPC: proxies and signals
low-level network communication
The Microkernel delivers all messages destined for processes on other nodes
process scheduling
the Microkernel’s scheduler decides which process will execute next
first-level interrupt handling
all hardware interrupts and faults are first routed through the Microkernel, then passed on to the appropriate driver or system manager

Microkernel

Process synchronization
Message passing not only allows processes to pass data to each other, but also provides a means of synchronizing the execution of several cooperating processes.

A process that has issued a Send() request, whose message hasn't yet been received by the recipient process, is SEND-blocked.
A process that has issued a Send() request, whose message has been received but not yet replied to by the recipient process, is REPLY-blocked.
A process that has issued a Receive() request but hasn't yet received a message is RECEIVE-blocked (see the sketch below).

Send and Reply Blocking
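
A hedged sketch of this Send/Receive/Reply exchange in QNX 4 style C. The primitives are those described in the referenced System Architecture manual, but the exact prototypes (assumed here to come from <sys/kernel.h>) should be checked against the QNX documentation:

#include <sys/types.h>
#include <sys/kernel.h>   /* assumed location of Send(), Receive(), Reply() in QNX 4 */

#define MSG_SIZE 32

/* Client: SEND-blocked until the server receives the message,
   then REPLY-blocked until the server replies. */
void client(pid_t server_pid)
{
    char smsg[MSG_SIZE] = "request";
    char rmsg[MSG_SIZE];

    Send(server_pid, smsg, rmsg, sizeof(smsg), sizeof(rmsg));
    /* rmsg now holds the server's reply */
}

/* Server: RECEIVE-blocked until a message arrives. */
void server(void)
{
    char msg[MSG_SIZE];
    pid_t sender;

    sender = Receive(0, msg, sizeof(msg));  /* 0 = receive from any process */
    /* ... handle the request in msg ... */
    Reply(sender, "done", 5);               /* unblocks the client */
}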

IPC via proxies
A proxy is a form of non-blocking message especially suited for event notification where the sending process doesn’t need to interact with the recipient

By using a proxy, a process or an interrupt handler can send a message to another process without blocking or having to wait for a reply.
Here are some examples of when proxies are used:
A process wants to notify another process that an event has occurred,
A process wants to send data to another process, but needs neither a reply nor any other acknowledgment that the recipient has received the message.
An interrupt handler wants to tell a process that some data is available for processing

IPC via signals
Signals are a traditional method of asynchronous communication that have been available for many years in a variety of operating systems.
QNX supports a rich set of POSIX-compliant signals, some historical UNIX signals, as well as some QNX-specific signals.

Receiving signals
A process can receive a signal in one of three ways:
The default action for the signal is taken – usually, this default action is to terminate the process.
The process can ignore the signal.
The process can provide a signal handler for the signal
Signals are delivered to a process when the process is made ready to run by the Microkernel’s scheduler.

Scheduling methods
QNX provides three scheduling methods:
FIFO scheduling
round-robin scheduling
adaptive scheduling
They are effective on a per-process basis, not on a global basis for all processes on a node.
These methods apply only when two or more processes that share the same priority are READY. If a higher-priority process becomes READY, it immediately preempts all lower-priority processes.

Realtime performance
Interrupt latency is the time from the reception of a hardware interrupt until the first instruction of a software interrupt handler is executed. QNX leaves interrupts fully enabled almost all the time, so that interrupt latency is typically insignificant.
But certain critical sections of code do require that interrupts be temporarily disabled. The maximum such disable time usually defines the worst-case interrupt latency – in QNX this is very small.

A hardware interrupt is processed by an established interrupt handler.
The interrupt handler either simply returns, or it returns and causes a proxy to be triggered.

Scheduling latency
In some cases, the low-level hardware interrupt handler must schedule a higher-level process to run. In this scenario, the interrupt handler will return and indicate that a proxy is to be triggered. This introduces a second form of latency – scheduling latency – which must be accounted for.
Scheduling latency is the time between the termination of an interrupt handler and the execution of the first instruction of a driver process.
This usually means the time it takes to save the context of the currently executing process and restore the context of the required driver process. Although larger than interrupt latency, this time is also kept small in a QNX system.

Process Manager responsibilities
The Process Manager works with the Microkernel to provide essential operating system services. Although it shares the same address space as the Microkernel, it runs as a true process.
It is scheduled to run by the Microkernel like other processes and it uses the Microkernel’s message-passing primitives to communicate with other processes in the system.
It is responsible for creating new processes in the system and managing the resources associated with a process. These services are all provided via messages

Process creation primitives
QNX supports three process-creation primitives:
fork()
exec()
spawn()
Both fork() and exec() are defined by POSIX, while the implementation of spawn() is unique to QNX.
The spawn() primitive creates a new process as a child of the calling process.
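
A minimal POSIX sketch of the fork()/exec() pattern (the program run by the child, /bin/ls, is just an example; spawn() combines process creation and program loading in a single QNX call):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create a child that is a copy of the caller */

    if (pid == 0) {
        /* Child: replace its address space with a new program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        _exit(1);                /* only reached if execl() fails */
    } else if (pid > 0) {
        /* Parent: wait for the child to finish */
        waitpid(pid, NULL, 0);
    }
    return 0;
}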

Process states

Interrupt handlers
Interrupt handlers react to hardware interrupts and manage the low-level transfer of data between the computer and external devices.
Interrupt handlers are physically packaged as part of a standard QNX process (e.g. a driver), but they always run asynchronously to the process they’re associated with.
An interrupt handler:
is entered with a far call, not directly from the interrupt itself (so it can be written in C rather than in assembler)
runs in the context of the process it is embedded in, so it has access to all the global variables of the process
runs with interrupts enabled, but is preempted only if a higher-priority interrupt occurs
shouldn’t talk directly to the 8259 interrupt hardware (the operating system takes care of this)
should be as short as possible.

Week 11

Linux Internals

Processes, scheduling

Lecture organization
Kernel Structure
Process structure
Process creation
Signals
Process management, scheduling
Linux for embedded systems

References
Linux Internals (to the power of -1), Simone Demblon, Sebastian Spitzner
http://www.tutorialized.com/view/tutorial/Linux-kernel-internals-from-Process-Birth-to-Death/40955
Linux Knowledge Base and Tutorial
http://linux-tutorial.info/modules.php?name=MContent&pageid=224
Inside the Linux scheduler, IBM developerWorks
http://www.ibm.com/developerworks/linux/library/l-scheduler/

The Linux Kernel
A Unix kernel fulfills 4 main management tasks:
• Memory management
• Process management
• File system management
• IO management
For our study of real time implications, we will examine Process management

A rather brief look at the kernel

Part 1
Process structure

Process data structure
A process is represented by a rather large structure called task_struct.
It contains all of the necessary data to represent the process, along with data for accounting and to maintain relationships with other processes (parents and children).

The actual task_struct structure is many pages long. A short sample of task_struct is shown in the next slide.

Task_struct
Pointers to open files
Memory map
Signals: received, masked
Register contents
Everything defining the state of the computation

Task_struct detail
The sample contains the state of execution, a stack, a set of flags, the parent process, the thread of execution (of which there can be many), and open files.

The state variable: the state of the task.
Typical states:
running, or in a run queue about to run (TASK_RUNNING)
sleeping but able to be woken up (TASK_INTERRUPTIBLE)
sleeping and unable to be woken up (TASK_UNINTERRUPTIBLE)
stopped (TASK_STOPPED), or a few others.

Flags: the process is being created (PF_STARTING) or exiting (PF_EXITING), or currently allocating memory (PF_MEMALLOC).
The comm (command) field: the name of the executable
Priority: (called static_prio). The actual priority is determined dynamically based on loading and other factors.

More Task_struct
The tasks field is a linked-list node that links the task into the global list of processes.
mm and active_mm fields: the process's address space. mm points to the process's memory descriptors, while active_mm refers to the previous process's memory descriptors (used when the task, such as a kernel thread, has no address space of its own).

The thread_struct identifies the stored state of the process: The CPU state (hardware registers, program counter, etc.).

Part 2
Process creation

System call functions
Both user-space tasks and kernel tasks rely on a function called do_fork to create a new process.
In the case of creating a kernel thread, the kernel calls a function called kernel_thread
In user-space, a program calls fork, which results in a system call to the kernel function called sys_fork
The function relationships are shown graphically in the next slide.

Function hierarchy for process creation

do_fork

The do_fork function begins with a call to alloc_pidmap, which allocates a new PID.
The do_fork function then calls copy_process, passing the flags, stack, registers, parent process, and newly allocated PID.
The copy_process function is where the new process is created as a copy of the parent. This function performs all actions except for starting the process

Copying the process
Next, dup_task_struct allocates a new task_struct and copies the current process’s descriptors into it.
After a new thread stack is set up, control returns to copy_process. A sequence of copy functions is then invoked to copy open file descriptors, signal information, process memory, and finally the thread.
The new task is then assigned to a processor. The new process inherits the priority of its parent, and control returns to do_fork. At this point, the new process exists but is not yet running.
The do_fork function fixes this with a call to wake_up_new_task. This function places the new process in a run queue, then wakes it up for execution.
Finally, upon returning to do_fork, the PID value is returned to the caller and the process is complete.

Part 3
Process management

Process Management
A program is an executable file, whereas a process is a running program.
A process is an instance of a program in memory, executing instructions on the processor.
The only way to run a program on a Unix/Linux system is to request the kernel to execute it via an exec() system call.
Remember that the only things that can make system calls are processes (binary programs that are executing.)

Scheduler
The scheduler manages processes and the fair distribution of CPU time between them.
The kernel classifies processes as being in one of two possible queues at any given time: the sleep queue and the run queue.

Run Queue
Processes in the run queue compete for access to the processor(s).
It is the scheduler's job to allocate a time slice to each process and to let each process run on the processor for a certain amount of time in turn.
Each time slice is so short (fractions of a second) that each process in the run queue gets to run many times every second. It appears as though all of these processes are "running at the same time"; this is called round-robin scheduling.
On a uniprocessor system, only one process can execute instructions at any one time.
Only on a multiprocessor system can true multiprocessing occur, with more than one process (as many as there are CPUs) executing instructions simultaneously.
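On a POSIX system a process can ask how long its round-robin time slice is; a minimal sketch using sched_rr_get_interval() follows (a PID of 0 means the calling process).

/* Minimal sketch: querying the round-robin time slice for this process. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    if (sched_rr_get_interval(0, &ts) != 0) {   /* 0 = calling process */
        perror("sched_rr_get_interval");
        return 1;
    }
    printf("round-robin time slice: %ld.%09ld s\n",
           (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}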

Classes of scheduling
There are different classes of scheduling besides round-robin.
An example would be real-time scheduling.

Different Unix systems have different scheduling classes and features, and Linux is no exception.

Sleep queue
Processes waiting for a resource wait on the sleep queue. These processes do not take up a slot on the run queue
Once the resource becomes available, it is reserved by the process, which is then moved back onto the run queue to wait for a turn on the processor.
Every process waiting for a resource is placed on the sleep queue, including new processes, whose resources still have to be allocated, even if they are readily available.

Queue management

Time-slicing
Every 10 milliseconds (the exact interval depends on the HZ value), a timer interrupt arrives on IRQ0; this periodic tick is what drives multitasking.
The interrupt signal to the CPU comes from the PIC (Programmable Interrupt Controller).
On each tick, execution of the current process is interrupted (without a task switch), the processor does its housekeeping, and then the current process is resumed, unless a higher-priority process is waiting.
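A user-space process can query the user-visible clock tick rate with sysconf(); a minimal sketch follows. Note that this reports the traditional 100 ticks per second (one every 10 ms) and may differ from the kernel's internal HZ on newer kernels.

/* Minimal sketch: reading the user-visible clock tick rate. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ticks = sysconf(_SC_CLK_TCK);   /* traditionally 100 ticks/second */
    printf("%ld clock ticks per second (one every %.1f ms)\n",
           ticks, 1000.0 / ticks);
    return 0;
}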

When does switching occur?
Some examples are:
• When a time slice ends, the scheduler gives the processor to another process.
• If a process needs a resource, it goes back onto the sleep queue to wait for that resource; only once the resource has been granted is it ready to be scheduled onto the processor again.
• If a process is waiting for piped data from another process, the other process must run before this one can continue, so the other process is given the processor.

Process destruction
Process destruction can be driven by several events:
normal process termination,
through a signal
through a call to the exit function.
However the exit is driven, the process ends through a call to the kernel function do_exit.
This process is shown graphically in the next slide

Function hierarchy for process destruction

do_exit
do_exit removes all references to the current process from the operating system.
The cycle of detaching the process from the resources it acquired during its life is performed through a series of calls, from exit_mm (which releases the process's memory pages) to exit_keys (which disposes of per-thread session and process security keys).
The do_exit function then performs various accounting for the disposal of the process, followed by a series of notifications.

Finally, the process state is changed to PF_DEAD, and the schedule function is called to select a new process to execute.

Time slice
When a process’s time slice has run out or for some other reason another process gets to run, it is suspended (placed on the sleep or run queue).
It’s state is stored in task_struct, so that when it gets a turn again. This enables the process to return to the exact place where it left off.

System Processes
In addition to user processes, there are system processes running.
Some deal with managing memory and scheduling turns on the CPU. Others deal with tasks such as delivering mail or printing.
In principle, both kinds of processes are identical. However, system processes can run at much higher priorities and therefore run more often than user processes.
A system process of this kind is typically referred to as a daemon process because it runs without user intervention.

Filesystems
Under Linux, files and directories are grouped into units called filesystems. Filesystems exist within a section of the hard disk called a partition.
Each hard disk can be broken down into multiple partitions and a filesystem is created within the partition. (Some dialects of UNIX allow multiple filesystems within a partition.)

The Life Cycle of Processes
http://linux-tutorial.info/modules.php?name=MContent&pageid=84
From the time a process is created with a fork() until it has completed its job and disappears from the process table, it goes through many different states. The state a process is in changes many times during its “life.” These changes can occur, for example, when the process makes a system call, it is someone else’s turn to run, an interrupt occurs, or the process asks for a resource that is currently not available.
A commonly used model shows processes operating in one of six separate states:
executing in user mode
executing in kernel mode
ready to run
sleeping
newly created, not ready to run, and not sleeping
issued exit system call (zombie)

Device Files
In Linux, almost everything is either treated as a file or is only accessed through files.
For example, to read the contents of a data file, the operating system must access the hard disk. Linux treats the hard disk as if it were a file. It opens it like a file, reads it like a file, and closes it like a file. The same applies to other hardware such as tape drives and printers. Even memory is treated as a file. The files used to access the physical hardware are device files.
When Linux wants to access any hardware device, it first opens a file that "points" toward that device (the device node). Based on information it finds in the inode, the operating system determines what kind of device it is and can therefore access it in the proper manner. This includes opening, reading, and closing, just like any other file.
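A minimal sketch of this "device as a file" idea, using /dev/urandom purely as an example device node:

/* Minimal sketch: a device is accessed like an ordinary file. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[8];
    int fd = open("/dev/urandom", O_RDONLY);          /* open the device node */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf)   /* read like a file */
        printf("read %zu random bytes from the device\n", sizeof buf);
    close(fd);                                        /* close like a file */
    return 0;
}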

Process states

State Diagram

State transitions
A newly created process enters the system in state 5.
If an exec() is made, then this process will end up in kernel mode (2).
When a process is running, an interrupt may be generated (more often than not, this is the system clock) and the currently running process is pre-empted.
The process returns to state 3: it is still ready to run and still in main memory.

More Transitions
When the process makes a system call while in user mode (1), it moves into state 2 where it begins to run in kernel mode.
Assume at this point that the system call made was to read a file on the hard disk. Because the read is not carried out immediately, the process goes to sleep, waiting on the event that the system has read the disk and the data is ready. It is now in state 4.
When the data is ready, the process is awakened. This does not mean it runs immediately, but rather it is once again ready to run in main memory (3).
If a process that was asleep is awakened (perhaps when the data is ready), it moves from state 4 (sleeping) to state 3 (ready to run). This can be in either user mode (1) or kernel mode (2).

End
A process can end its life by either explicitly calling the exit() system call or having it called for them.
The exit() system call releases all the data structures that the process was using.
One exception is the slot in the process table, which is the responsibility of the init process. The slot in the process table is kept to hold the exit code of the exiting process. This can be used by the parent process to determine whether the process did what it was supposed to do or whether it ran into problems.
The process shows that it has terminated by moving into the zombie state. Once here, it can never run again, because nothing exists other than the entry in the process table.
This is why you cannot “kill” a zombie process. There is nothing there to kill. To kill a process, you need to send it a signal (more on signals later). Because there is nothing there to receive or process that signal, trying to kill it makes little sense. The only thing to do is to let the system clean it up.
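A minimal sketch of how a parent collects the exit code with the standard waitpid() call, which removes the zombie entry from the process table (the exit code 42 is an arbitrary example):

/* Minimal sketch: reaping a child and collecting its exit code. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                      /* child terminates with an exit code */
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);      /* reap the zombie, collect the status */
        if (WIFEXITED(status))
            printf("child exited with code %d\n", WEXITSTATUS(status));
    }
    return 0;
}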

Access to a resource
Processes waiting for a common resource all wait on the same wait channel. All are woken up when the resource is ready. Only one process gets the resource. The others put themselves back to sleep on the same wait channel.
When a process puts itself to sleep, it voluntarily gives up the CPU. It may be that this process had just started its turn when it couldn’t access the resource it needed.

Signals
Signals are a way of sending simple messages to processes.
Most of these signals are already defined in the system signal headers, and most are generated in the kernel.
Signals can only be processed when the process is in user mode. If a signal has been sent to a process that is in kernel mode, it is dealt with immediately on returning to user mode.

Signals -2
Signals are used to signal asynchronous events to one or more processes. A signal could be generated by a keyboard interrupt or an error condition such as the process attempting to access a non-existent location in its virtual memory.
Signals are also used by the shells to signal job control commands to their child processes.
There are a set of defined signals that the kernel can generate or that can be generated by other processes in the system, provided that they have the correct privileges.

Signals – 3
Linux implements signals using information stored in the task_struct for the process. The number of supported signals is limited to the word size of the processor.
The currently pending signals are kept in the signal field with a mask of blocked signals held in blocked. With the exception of SIGSTOP and SIGKILL, all signals can be blocked.
If a blocked signal is generated, it remains pending until it is unblocked.
Linux also holds information about how each process handles every possible signal and this is held in an array of sigaction data structures pointed at by the task_struct for each process. Amongst other things it contains either the address of a routine that will handle the signal or a flag which tells Linux that the process either wishes to ignore this signal or let the kernel handle the signal for it.
The process modifies the default signal handling by making system calls and these calls alter the sigaction for the appropriate signal as well as the blocked mask.
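A minimal sketch of manipulating the blocked mask from user space with sigprocmask(); SIGINT is used only as an example of a blockable signal (SIGKILL and SIGSTOP cannot be blocked).

/* Minimal sketch: blocking a signal so it stays pending until unblocked. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);              /* choose a blockable signal */
    sigprocmask(SIG_BLOCK, &set, NULL);   /* add it to the blocked mask */

    printf("SIGINT blocked for 5 seconds; Ctrl-C stays pending\n");
    sleep(5);

    /* Any pending SIGINT is delivered here; with the default action it
     * terminates the process, so the final message only appears if no
     * SIGINT arrived while blocked. */
    sigprocmask(SIG_UNBLOCK, &set, NULL);
    printf("no SIGINT was pending\n");
    return 0;
}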

Problems with earlier Linux schedulers
Before the 2.6 kernel, the scheduler had a significant limitation when many tasks were active. This was due to the scheduler being implemented using an algorithm with O(n) complexity. In other words, the more tasks (n) are active, the longer it takes to schedule a task. At very high loads, the processor can be consumed with scheduling and devote little time to the tasks themselves.
The pre-2.6 scheduler also used a single runqueue for all processors in a symmetric multiprocessing (SMP) system. This meant a task could be scheduled on any processor, which can be good for load balancing but bad for memory caches.
Finally, preemption wasn’t possible in the earlier scheduler; this meant that a lower priority task could execute while a higher priority task waited for it to complete.

Introducing the Linux 2.6 scheduler
The 2.6 scheduler resolves the three primary issues found in the earlier scheduler (the O(n) algorithm, the single SMP runqueue, and the lack of preemption), as well as other problems. Now we'll explore the basic design of the 2.6 scheduler.
Major scheduling structures
Each CPU has a runqueue made up of 140 priority lists that are serviced in FIFO order.
Tasks that are scheduled to execute are added to the end of their respective runqueue’s priority list.
Each task has a time slice that determines how much time it’s permitted to execute.
The first 100 priority lists of the runqueue are reserved for real-time tasks, and the last 40 are used for user tasks (see next slide).

Priority lists
[Diagram: each per-CPU runqueue holds an active priority array and an expired priority array]

Runqueues
In addition to the CPU’s runqueue, which is called the active runqueue, there’s also an expired runqueue.
When a task on the active runqueue uses all of its time slice, it’s moved to the expired runqueue. During the move, its time slice is recalculated.
If no tasks exist on the active runqueue for a given priority, the pointers for the active and expired runqueues are swapped, thus making the expired priority list the active one.
The job of the scheduler is simple: choose the task on the highest priority list to execute.
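The following is a simplified, illustrative C sketch of the active/expired idea only; the names and data layout are invented for illustration and this is not the actual kernel code (the real scheduler also keeps a priority bitmap so the best-priority search is constant time).

/* Simplified sketch of the O(1) active/expired runqueue idea
 * (invented names, not the real kernel code). */
#define NUM_PRIO 140                        /* 100 real-time + 40 user priorities */

struct task;                                /* opaque task type for this sketch */

struct prio_array {
    int          nr_active;                 /* tasks queued in this array */
    struct task *queue[NUM_PRIO];           /* one FIFO list head per priority */
};

struct runqueue {
    struct prio_array *active;              /* tasks that still have time slice */
    struct prio_array *expired;             /* tasks whose slice ran out */
    struct prio_array  arrays[2];
};

static struct task *pick_next(struct runqueue *rq)
{
    if (rq->active->nr_active == 0) {       /* active array empty: O(1) swap */
        struct prio_array *tmp = rq->active;
        rq->active  = rq->expired;
        rq->expired = tmp;
    }
    for (int prio = 0; prio < NUM_PRIO; prio++)
        if (rq->active->queue[prio])
            return rq->active->queue[prio]; /* first task on the best priority list */
    return 0;                               /* nothing runnable */
}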

http://www.linuxjournal.com/article/7477
By Aseem R. Deshpande
With the release of kernel 2.6, Linux now poses serious competition to major RTOS vendors in the embedded market space.
Linux 2.6 introduces many new features that make it an excellent operating system for embedded computing.
Among these new features are enhanced real-time performance, easier porting to new computers, support for large memory models, support for microcontrollers and an improved I/O system.

Characteristics of Embedded Systems
Embedded systems often need to meet timing constraints reliably.
Embedded systems have access to far fewer resources than a normal PC does; they have to squeeze maximum value out of whatever is available.
The OS should perform reliably and efficiently, even under extreme load.
If a crash occurs in one module, it should not affect other parts of the system. Furthermore, recovery from crashes should be graceful.

How Linux 2.6 Satisfies the Requirements
Although Linux 2.6 is not yet a true real-time operating system, it does contain improvements that make it a worthier platform than previous kernels when responsiveness is desirable.
Three significant improvements are preemption points in the kernel, an efficient scheduler and improved synchronization.

Kernel Preemption
As with most general-purpose operating systems, Linux has always forbidden the process scheduler from running while a process is executing in a system call. Therefore, once a task is in a system call, that task controls the processor until the system call returns, no matter how long that might take.
As of kernel 2.6, the kernel is preemptible. A kernel task now can be preempted, so that some important user application can continue to run.

In Linux 2.6, kernel code has been laced with preemption points, instructions that allow the scheduler to run and possibly block the current process so as to schedule a higher priority process.

Kernel Preemption
Linux 2.6 avoids unreasonable delays in system calls by periodically testing a preemption point. During these tests, the calling process may block and let another process run.
Thus, under Linux 2.6, the kernel now can be interrupted mid-task, so other applications can continue to run even when something low-level and complicated is going on in the background.
Embedded software often has to meet deadlines, which renders it incompatible with virtual memory demand paging, in which slow handling of page faults would ruin responsiveness.
To eliminate this problem, the 2.6 kernel can be built with no virtual memory system. Of course, it then becomes the software designer’s responsibility to ensure enough real memory always is available to get the job done.
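Short of building the kernel without virtual memory, a deadline-sensitive process can also pin its pages so that demand paging never stalls it; a minimal sketch with the POSIX mlockall() call follows (this typically needs elevated privileges).

/* Minimal sketch: locking all process memory to avoid page-fault delays. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock everything mapped now and everything mapped in the future. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");     /* typically requires CAP_IPC_LOCK / root */
        return 1;
    }
    /* ... time-critical work runs here without demand-paging delays ... */
    munlockall();
    return 0;
}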

An Efficient Scheduler
The process scheduler has been rewritten in the 2.6 kernel to eliminate the slow algorithms of previous versions.
Formerly, in order to decide which task should run next, the scheduler had to look at each ready task and make a computation to determine that task’s relative importance.
After all computations were made, the task with the highest score would be chosen.
Because the time required for this algorithm varied with the number of tasks, complex multitasking applications suffered from slow scheduling.

2.6 improvements
The scheduler in Linux 2.6 no longer scans all tasks every time. Instead, when a task becomes ready to run, it is sorted into position on a queue, called the current queue.
Then, when the scheduler runs, it chooses the task at the most favorable position in the queue. As a result, scheduling is done in a constant amount of time.
When the task is running, it is given a time slice, or a period of time in which it may use the processor, before it has to give way to another thread.
When its time slice has expired, the task is moved to another queue, called the expired queue.
The task is sorted into this expired queue according to its priority. This new procedure is substantially faster than the old one, and it works equally well whether there are many tasks or only a few in the queue. This new scheduler is called the O(1) scheduler.

New Synchronization Primitives
Applications involving the use of shared resources, such as shared memory or shared devices, have to be developed carefully to avoid race conditions.
The traditional solution in Linux, the mutex, ensures that only one task uses the resource at a time. The old mutex implementation involved a system call to the kernel to decide whether to block the thread or allow it to continue executing.
But when the decision is to continue, the time-consuming system call was unnecessary. The new implementation in Linux 2.6 supports Fast User-Space Mutexes (Futex). These functions can check from user space whether blocking is necessary and perform the system call to block the thread only when it is required.
When blocking is not required, avoiding the unneeded system call saves time. It also supports setting priorities to allow applications or threads of higher priority to have first access to the contested resource.
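A minimal sketch: an ordinary pthread mutex, which NPTL on Linux 2.6 implements on top of futexes, so the uncontended lock/unlock below stays entirely in user space and the kernel is entered only when a thread actually has to block (build with -pthread).

/* Minimal sketch: pthread mutex (futex-backed on Linux 2.6 with NPTL). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* fast path: atomic op in user space */
        counter++;
        pthread_mutex_unlock(&lock);   /* enters the kernel only if a thread waits */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with correct locking */
    return 0;
}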

Improved Threading Model and Support for NPTL
Threading in 2.6 is based on a 1:1 threading model: one kernel thread for each user thread. It also includes in-kernel support for the new Native POSIX Thread Library (NPTL).
NPTL brings an eight-fold improvement over its predecessor. Tests conducted by its authors have shown that Linux, with this new threading, can start and stop 100,000 threads simultaneously in about two seconds. This task took 15 minutes on the old threading model.
Along with POSIX threads, 2.6 provides POSIX signals and POSIX high-resolution timers as part of the mainstream kernel. POSIX signals are an improvement over UNIX-style signals, which were the default in previous Linux releases. Unlike UNIX signals, POSIX signals cannot be lost and can carry information as an argument. Also, POSIX signals can be sent from one POSIX thread to another, rather than only from process to process, like UNIX signals.
Embedded systems often need to poll hardware or do other tasks on a fixed schedule. POSIX timers make it easy to arrange any task to be scheduled periodically. The clock that the timer uses can be set to tick at a rate as fine as one kilohertz, so software engineers can control the scheduling of tasks with precision.
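A minimal sketch of such periodic scheduling, using a POSIX timer that delivers a real-time signal every 10 ms; the period, clock, and signal number are example choices (link with -lrt on older C libraries).

/* Minimal sketch: a POSIX interval timer firing every 10 ms. */
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_tick(int signo) { (void)signo; ticks++; }

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, NULL);         /* real-time signal as the tick */

    struct sigevent sev = { 0 };
    sev.sigev_notify = SIGEV_SIGNAL;        /* deliver a signal on expiry */
    sev.sigev_signo  = SIGRTMIN;

    timer_t timer;
    timer_create(CLOCK_MONOTONIC, &sev, &timer);

    struct itimerspec its = { 0 };
    its.it_value.tv_nsec    = 10 * 1000 * 1000;   /* first expiry after 10 ms */
    its.it_interval.tv_nsec = 10 * 1000 * 1000;   /* then every 10 ms */
    timer_settime(timer, 0, &its, NULL);

    while (ticks < 100)
        pause();                            /* wake on each tick */
    printf("received %d ticks (about one second)\n", (int)ticks);

    timer_delete(timer);
    return 0;
}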

Subarchitecture Support
Hardware designs in the embedded world often are customized for special applications
For example, a purpose-built board may use different IRQ management than what a similar reference design uses.
In order to run on the new board, Linux has to be ported or altered to support the new hardware. This porting is easier if the operating system is made of components that are well separated, making it necessary to change only the code that has to change.
The components of Linux 2.6 that are likely to be altered for a custom design have been refactored with a concept called Subarchitecture. Components are separated clearly and can be modified or replaced individually with minimal impact on other components of the board support package.
By formalizing Linux’s support for the slightly different hardware types, the kernel can be ported more easily to other systems, such as dedicated storage hardware and other components that use industry-dominant processor types.

A small portion of task_struct

struct task_struct {

    volatile long state;              /* TASK_RUNNING, TASK_INTERRUPTIBLE, ... */
    void *stack;                      /* kernel stack for this task */
    unsigned int flags;               /* PF_STARTING, PF_EXITING, PF_MEMALLOC, ... */

    int prio, static_prio;            /* dynamic and static priorities */

    struct list_head tasks;           /* node in the global task list */

    struct mm_struct *mm, *active_mm; /* address-space descriptors */

    pid_t pid;                        /* process ID */
    pid_t tgid;                       /* thread group ID */

    struct task_struct *real_parent;  /* parent process */

    char comm[TASK_COMM_LEN];         /* executable name */

    struct thread_struct thread;      /* saved CPU state (registers, PC, ...) */

    struct files_struct *files;       /* open file descriptors */

};

The states listed here describe what is happening conceptually and do not indicate what "official" state a process is in. The official states are listed below:

TASK_RUNNING          task (process) currently running
TASK_INTERRUPTIBLE    process is sleeping but can be woken up (interrupted)
TASK_UNINTERRUPTIBLE  process is sleeping but cannot be woken up (interrupted)
TASK_ZOMBIE           process terminated but its status was not collected (it was not waited for)
TASK_STOPPED          process stopped by a debugger or job control
TASK_SWAPPING         (removed in the 2.3.x kernel)

Table – Process states in sched.h
