- Principled, retargetable, ultra high-performance automated ISA testing
We propose to combine recent progress in software testing (e.g. coverage-guided greybox fuzzers, the explosion in SMT-solver performance, and self-composition for hyperproperty testing), ISA specification (e.g. the rise of rigorous ISA specification DSLs such as Sail and ASL), and high-performance ISA emulation (e.g. GenSim) into a unified whole for automated ISA testing. We propose the testing of information-flow properties as an especially demanding use case.
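To make the self-composition idea concrete, the following is a minimal sketch (not from the proposal; the function `execute` is a hypothetical stand-in for one instruction's semantics): a noninterference-style hyperproperty is tested by running the same operation on two states that agree on public (low) inputs but differ on secret (high) inputs, and checking that the observable outputs agree.

```python
# Hedged sketch of hyperproperty testing by self-composition.
# `execute` is an illustrative stand-in for an ISA operation's semantics;
# here the observable result depends only on the public operand, so the
# noninterference property holds by construction.
import random

def execute(public, secret):
    return (public * 3) & 0xFF  # secret does not flow into the result

def check_noninterference(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        public = rng.randrange(256)
        s1, s2 = rng.randrange(256), rng.randrange(256)
        # Self-composition: two runs, identical low inputs, varied high inputs.
        if execute(public, s1) != execute(public, s2):
            return False  # counterexample: the secret leaks into the output
    return True
```

A coverage-guided fuzzer would replace the random input generation above with mutation of inputs that reach new emulator coverage, but the pairwise-run structure is the same.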
The time is ripe for this research: each of the above has shown considerable promise on its own, and each awaits integration. Our team has a deep background in most of these areas and, unlike most testing researchers, is deeply familiar with the hardware/software boundary, making it the world's leading team to carry out this research.
- JIT-as-a-Service
We envisage JIT-as-a-Service as a universal JIT compiler supporting a wide range of guest languages and target platforms. The common entry point will be a compact and efficient-to-generate intermediate representation (IR), which each guest VM can use to interact with the JIT-as-a-Service compiler.
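As an illustration only (the instruction set and encoding below are invented, not the proposal's actual IR), a compact, efficient-to-generate IR can be as simple as a flat list of stack-machine instructions that any guest VM can emit:

```python
# Hedged sketch: a toy stack-based IR of the kind a guest VM might emit
# for a JIT service. Opcode names and encoding are illustrative.
def interpret(code, args):
    """Execute a list of (opcode, operand) pairs on a small operand stack."""
    stack = []
    for op, arg in code:
        if op == "push":                      # push an immediate constant
            stack.append(arg)
        elif op == "load":                    # load a function argument by index
            stack.append(args[arg])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# f(x, y) = (x + y) * 2, expressed in the toy IR:
prog = [("load", 0), ("load", 1), ("add", None), ("push", 2), ("mul", None)]
```

A real service would compile such IR to native code rather than interpret it; the point is that a flat, serialisable instruction list is cheap for guests to generate and for the service to consume.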
Mobile/IoT guest devices employ a flexible JIT compilation framework, which may rely either on local hardware JIT accelerators or on a remote JIT compiler deployed, for example, in the radio access network as a "JIT in the Edge" node. Within the data centres backing the radio access network, further opportunities for JIT-as-a-Service arise. Data centre applications, as well as deferred end-user code, may be JIT-compiled by dedicated JIT compilation servers. These may target code quality for particularly hot code, which warrants extensive optimisation, but may also rely on hardware JIT acceleration themselves for greater JIT compilation throughput. Finally, data centre JIT servers may also be used for JIT compilation for accelerators, for example, to transparently map compute-intensive deep learning applications to dedicated DL accelerators in end-user devices (e.g. Kirin 970's NPU) or to high-performance data centre ML accelerators (e.g. Da Vinci). This enables workloads to benefit from dedicated AI accelerators without users having to rewrite their applications to support a plethora of proprietary accelerators and frameworks.
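The tiered, hotness-driven dispatch described above can be sketched as follows (a simulation under assumed names, not the proposal's design: `remote_compile` stands in for shipping IR to a JIT-as-a-Service node and receiving optimised code back, and `HOT_THRESHOLD` is an illustrative tuning knob):

```python
# Hedged sketch of tiered execution: a function runs on the slow path
# ("interpreter") until it becomes hot, after which a compiled version
# (here just a cached callable, standing in for code returned by a
# remote JIT server) takes over.
HOT_THRESHOLD = 3  # illustrative: compile after this many invocations

class TieredFunction:
    def __init__(self, source_fn):
        self.source_fn = source_fn  # slow path
        self.compiled = None        # fast path, filled in once hot
        self.calls = 0

    def remote_compile(self):
        # Stand-in for a round trip to a JIT-as-a-Service node.
        return self.source_fn

    def __call__(self, *args):
        if self.compiled is not None:
            return self.compiled(*args)   # fast path
        self.calls += 1
        if self.calls >= HOT_THRESHOLD:
            self.compiled = self.remote_compile()
        return self.source_fn(*args)      # slow path
```

The same dispatch structure covers the local-accelerator case: `remote_compile` would instead hand the IR to on-device JIT hardware.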
- Data-Centric Parallelisation
Automatic parallelisation, as practised today, is loop parallelisation. Traditional parallelising compilers attempt, for each loop individually, to identify a suitable parallel execution schedule that honours the dependences dictated by the underlying sequential program semantics. In contrast, manual parallelisation is a holistic process: human programmers consider the entire program and the relationships between its individual components. It is standard practice for a human expert to rewrite and restructure a program before attempting parallelisation. We propose a fundamental paradigm shift that mimics what human experts do: we aim to enable automatic parallelisation to incorporate whole-program context and knowledge of the most widely used abstract data types in order to overcome the limitations of today's compilers.
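A minimal illustration of this "restructure first, then parallelise" idea (my example, not taken from the proposal): a pointer-chasing traversal carries a loop dependence through each `next` pointer, so no loop-level schedule can parallelise it directly; but once the structure is recognised as a sequence abstract data type, it can be flattened into a contiguous array whose reduction splits into independent chunks.

```python
# Hedged sketch: pointer-based code a traditional loop paralleliser
# cannot handle, versus the restructured, parallel-friendly form.
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def sum_list(head):
    total, node = 0, head
    while node is not None:   # serial: each iteration needs node.next
        total += node.value
        node = node.next
    return total

def flatten(head):
    out, node = [], head
    while node is not None:   # one-off restructuring pass
        out.append(node.value)
        node = node.next
    return out                # contiguous sequence

def parallel_sum(values, chunks=4):
    n = max(1, len(values) // chunks)
    # Each partial sum is independent and could run on its own core.
    partials = [sum(values[i:i + n]) for i in range(0, len(values), n)]
    return sum(partials)
```

The chunked partial sums here run sequentially for clarity, but each is independent, which is exactly the property the pointer-chasing loop lacked.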
Our main aim is to develop a novel "data first" paradigm for the automatic parallelisation of sequential legacy code, outperforming every existing parallelising compiler on irregular, pointer-based, or control-flow-dominated applications. We aim to make automatic parallelisation a viable alternative to manual parallelisation, i.e. to deliver competitive parallel performance while reducing manual human intervention to a minimum.