In multicore processor systems, lower-level processor caches are shared among multiple threads that execute in parallel. Depending on the memory access patterns of these threads, a higher or lower degree of so-called cache contention occurs, degrading overall performance. To maximize overall performance, future operating systems will have to predict cache contention and co-schedule threads accordingly. This thesis recasts several state-of-the-art cache contention prediction techniques into a unified notation, introduces new methods, and evaluates all of them. The thesis shows that cache misses observed when applications run stand-alone are well suited to predicting performance degradation for co-scheduled applications as well.