
Cuvillier Verlag

30 years of expertise in academic publishing
International academic publisher for science and business


Instruction and Data Cache Timing Analysis in Fixed-Priority Preemptive Real-Time Systems

Print edition
EUR 28.00 EUR 26.60

E-book
EUR 0.00


Jan Staschulat (Author)

Preview

Table of contents, file (37 KB)
Reading sample, file (78 KB)

ISBN-10 (print edition) 386727195X
ISBN-13 (print edition) 9783867271950
ISBN-13 (e-book) 9783736921955
Language English
Number of pages 208
Edition 1
Volume 0
Place of publication Göttingen
Place of doctorate Braunschweig
Publication date 20.03.2007
General classification Dissertation
Subject areas Computer science
Electrical engineering
Description

Embedded systems are prevalent in today’s society and promise to be even more pervasive in the future. Applications range from jet engine and car controllers, through communication devices such as cellular phones, to consumer electronics such as set-top boxes. The steadily increasing number of functional requirements leads to complex embedded hardware and software architectures. Often, applications not only have to compute correct results but have to do so within a given time period. Timing behavior is an important requirement whenever an application has to react to signals from its environment. Verifying timing behavior both safely and tightly is very challenging for today’s complex embedded designs.

Caches are small, fast memories close to the processor that are needed to increase processor performance, but their influence on execution time is difficult to predict because of their complex behavior. Preemptive scheduling is popular in real-time systems because it guarantees short response times and high processor utilization. When several tasks share the same cache and preemptive task scheduling is used, an additional cache-related preemption delay has to be considered: the performance gains of a cache can be strongly degraded by frequent replacements of cache blocks.
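The interplay described above — preemptions adding a cache reload penalty on top of the classical interference term — can be sketched with the standard response-time recurrence for fixed-priority preemptive scheduling. This is a generic illustration, not the thesis’s exact analysis; the task parameters and the per-preemption penalty gamma are hypothetical.

```python
from math import ceil

def response_time(tasks, i):
    """Classic fixed-point response-time iteration for fixed-priority
    preemptive scheduling, extended with a cache-related preemption
    delay.  tasks[j] = (C, T, gamma): worst-case execution time, period,
    and an assumed per-preemption cache reload penalty charged whenever
    task j preempts a lower-priority task.  Index 0 = highest priority.
    The iteration is assumed to converge (schedulable task set)."""
    C = tasks[i][0]
    R = C
    while True:
        R_next = C + sum(ceil(R / T_j) * (C_j + gamma_j)
                         for C_j, T_j, gamma_j in tasks[:i])
        if R_next == R:          # fixed point reached
            return R
        R = R_next

# illustrative task set (all numbers made up): the high-priority task
# costs 1 extra time unit of cache reloads each time it preempts
tasks = [(1, 4, 1), (3, 12, 0)]
```

Without the gamma term the low-priority task’s response time would be underestimated, which is exactly why the cache-related preemption delay has to enter the schedulability analysis.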

There are several approaches to make caches more predictable and efficient. Cache partitioning and cache locking strategies are used to make the cache behavior of tasks partly independent of one another. These approaches require larger caches and main memories to become effective; however, caches in embedded systems are usually small because of their high cost. While these approaches are certainly a useful add-on to improve cache predictability and efficiency, they do not solve the problem of cache behavior prediction when all tasks share the cache.

This thesis makes several contributions to instruction and data cache timing analysis. First, we propose a novel schedulability analysis for fixed-priority preemptive scheduling that considers the timing effects of associative instruction caches at a context switch. The preemption delays are calculated by considering the preempted as well as the preempting task. The proposed schedulability analysis bounds the number of preemptions more tightly by excluding infeasible cache interferences. The analysis is conservative, i.e., it determines a safe upper bound of the preemption delay, and has a low time complexity.

As a refinement, the cache interference caused by multiple task preemptions is analyzed. While previous approaches calculate the worst-case preemption point and assume that every preemption takes place at this point, we take the preemption history into account when calculating the total cost of multiple task preemptions, which yields a tighter bound on the total preemption delay.

Execution time verification is used at different levels of the system design: less precise estimates are acceptable in early design stages, while highly accurate ones are necessary to verify hard real-time constraints. Two approaches to bound the preemption delay have been proposed which both use data-flow techniques but differ significantly with respect to time complexity and analysis precision.
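Considering both the preempted and the preempting task typically means intersecting the preempted task’s useful cache blocks (UCBs, blocks it will reuse after resuming) with the preempting task’s evicted cache blocks (ECBs). A minimal sketch under that assumption, with cache sets modeled as plain integers; the set contents below are hypothetical:

```python
def preemption_delay(ucb_preempted, ecb_preempting, block_reload_time):
    """Bound the cache-related preemption delay as the cost of reloading
    every useful cache block of the preempted task that the preempting
    task may evict (intersection of the two sets of cache set indices)."""
    return len(ucb_preempted & ecb_preempting) * block_reload_time

ucb = {0, 1, 2, 5, 7}   # blocks the preempted task will reuse
ecb = {1, 2, 3, 4}      # blocks the preempting task may evict
delay = preemption_delay(ucb, ecb, block_reload_time=10)  # 2 blocks -> 20
```

Considering only one side — charging all UCBs or all ECBs — would also be safe but clearly looser, which is the point of analyzing both tasks together.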

In this thesis we combine these two approaches into a single cache analysis with scalable precision, allowing analysis precision to be traded against time complexity. In an automotive case study we found that control-intensive applications designed with ASCET-SD and Matlab/Simulink models contain only sequential code without loops. Caches cannot increase the performance of such applications because linear code severely limits the spatial and temporal locality of memory accesses for which a cache is optimized.

Existing timing analyses focus on a single task execution. However, embedded applications are activated frequently, often periodically. Cache lines from a previous task activation might still be available in the cache and need not be loaded again during a subsequent execution. This effect of multiple task executions can result in a significantly reduced number of cache misses. In this thesis we compute a conservative bound on the cache contents at the beginning of a task activation and account for this effect in the instruction cache timing analysis.
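The effect can be illustrated with a tiny direct-mapped cache simulation: starting an activation with a conservatively bounded resident set removes the cold misses that a single-execution analysis would charge. The trace and cache geometry below are made up for illustration.

```python
def count_misses(trace, num_sets, resident=frozenset()):
    """Count misses of a direct-mapped cache over a block-address trace,
    optionally starting with blocks already resident (the conservative
    bound on cache contents left over from a previous activation)."""
    cache = {b % num_sets: b for b in resident}  # set index -> block
    misses = 0
    for block in trace:
        idx = block % num_sets
        if cache.get(idx) != block:
            misses += 1
            cache[idx] = block
    return misses

trace = [0, 1, 2, 0, 1, 2]                       # two passes over 3 blocks
cold = count_misses(trace, num_sets=4)           # cold start: 3 misses
warm = count_misses(trace, num_sets=4, resident={0, 1, 2})  # warm: 0
```

A per-activation analysis must assume the `cold` case; bounding the resident set safely allows the `warm` case to be claimed for subsequent activations.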

While previous analysis techniques focus on instruction caches, we also provide a novel timing analysis for data caches. Data cache behavior is more difficult to predict because it depends not only on the control flow of the application but also on the input data: while instruction addresses are fixed, a single instruction can access different data memory addresses, for example in operations on an array. In this thesis we propose a static timing analysis for data caches that takes the input-data dependency of memory accesses into account.
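One common way to handle input-dependent accesses — used here purely as an illustration, not necessarily the thesis’s exact method — is to keep an address interval per access and treat every cache set the interval can map to as potentially affected:

```python
def possible_sets(addr_lo, addr_hi, line_size, num_sets):
    """Cache sets an access may touch when its address is only known to
    lie in [addr_lo, addr_hi] (e.g. an array access a[i] with an
    input-dependent index i).  A safe data-cache analysis must treat
    all of these sets as affected."""
    first_line = addr_lo // line_size
    lines = addr_hi // line_size - first_line + 1
    if lines >= num_sets:                 # interval wraps the whole cache
        return set(range(num_sets))
    return {(first_line + k) % num_sets for k in range(lines)}

# hypothetical access a[i], unknown i in [0, 15], 4-byte elements,
# 16-byte lines, 8 cache sets: four consecutive sets may be touched
sets = possible_sets(0x1000, 0x103C, line_size=16, num_sets=8)
```

The wider the interval, the more sets must be assumed touched, which is precisely why tighter value analysis of array indices pays off in data cache timing analysis.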

Finally, we integrate the instruction and data cache timing analyses into a measurement-based WCET-analysis tool developed in previous work. Measuring execution times requires the insertion of instrumentation points, which disturbs the temporal behavior of the application. In this thesis we present a novel instrumentation methodology that reduces the number of instrumentation points. In summary, this thesis provides a comprehensive framework for analyzing instruction and data cache effects in fixed-priority preemptive real-time systems.
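One classic way to reduce instrumentation points — shown here only as a generic illustration, not as the methodology of the thesis — is the spanning-tree argument: counts on spanning-tree edges of the control-flow graph follow from flow conservation, so only the remaining chord edges need probes.

```python
def edges_needing_counters(nodes, edges):
    """Return the control-flow edges that need an instrumentation probe:
    edges that would close a cycle in an incrementally built spanning
    tree (chords).  Counts on tree edges can be reconstructed from the
    chord counts by flow conservation at each node."""
    parent = {n: n for n in nodes}        # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    chords = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            chords.append((u, v))          # closes a cycle -> needs probe
        else:
            parent[ru] = rv                # tree edge -> count derivable
    return chords

# diamond CFG: entry -> {a, b} -> exit; 4 edges, only 1 needs a probe
nodes = ["entry", "a", "b", "exit"]
edges = [("entry", "a"), ("entry", "b"), ("a", "exit"), ("b", "exit")]
```

Fewer probes mean fewer measurement-induced disturbances of the temporal behavior, which is the motivation stated above.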