In application domains like data analysis or image processing, ever-increasing performance demands push the capabilities of computational systems to their limits. With technology scaling plateauing, engineers are forced to rethink their approach to system design. The research field of approximate computing provides a new design paradigm that trades off accuracy against computational resources. In a complex system, multiple approximation methods can be combined to maximize the resulting benefits, but because of error propagation in the system, doing so in a controlled manner is challenging. To address this problem, we propose to use concepts developed in the field of evolutionary machine learning to optimize approximation parameters, focusing on systems implemented on FPGA hardware. Our approach uses the rules of a learning classifier system to adjust approximation parameters. The resulting effects on both the application quality and resource usage are estimated on the fly and fed back to the rules with every fitness update, allowing the system to be carefully tuned to specific design goals. We illustrate the application of the proposed system to a real-world image processing problem and highlight some practical implications. As this is work in progress, we outline remaining open questions and future directions for our research.
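To make the fitness-feedback loop described above concrete, the following Python sketch shows one way such an update could look: each rule proposes an approximation parameter, and the estimated quality and resource effects are blended into a reward that drives a simplified XCS-style fitness update. All names and constants (Rule, QUALITY_WEIGHT, update_rule, ...) are illustrative assumptions, not the implementation described in the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): a simplified
# XCS-style fitness update in which the reward blends an on-the-fly estimate
# of application quality with an estimate of FPGA resource usage.
from dataclasses import dataclass

LEARNING_RATE = 0.2      # beta in XCS-style Widrow-Hoff updates
QUALITY_WEIGHT = 0.7     # design-goal weighting between quality and resources
RESOURCE_WEIGHT = 0.3

@dataclass
class Rule:
    condition: tuple        # region of the input/context space the rule matches
    approx_level: int       # approximation parameter proposed by the rule
    prediction: float = 0.0
    error: float = 0.0
    fitness: float = 0.1

def reward(quality_estimate: float, resource_estimate: float) -> float:
    """Blend quality (higher is better) and resource usage (lower is better)
    into one scalar reward; both estimates are assumed normalised to [0, 1]."""
    return QUALITY_WEIGHT * quality_estimate + RESOURCE_WEIGHT * (1.0 - resource_estimate)

def update_rule(rule: Rule, quality_estimate: float, resource_estimate: float) -> None:
    """Update prediction, error and fitness of a rule, loosely following the
    accuracy-based scheme used in learning classifier systems such as XCS."""
    r = reward(quality_estimate, resource_estimate)
    rule.prediction += LEARNING_RATE * (r - rule.prediction)
    rule.error += LEARNING_RATE * (abs(r - rule.prediction) - rule.error)
    # Accuracy-based fitness: low prediction error -> high fitness.
    accuracy = 1.0 / (1.0 + rule.error)
    rule.fitness += LEARNING_RATE * (accuracy - rule.fitness)

# Example: a rule proposing a coarse approximation receives quality/resource
# estimates after one evaluation of the (hypothetical) image-processing pipeline.
rule = Rule(condition=("edge_density", "high"), approx_level=3)
update_rule(rule, quality_estimate=0.85, resource_estimate=0.40)
print(rule.prediction, rule.error, rule.fitness)
```

In this sketch, the relative weighting of quality and resource usage in the reward is where the design goals mentioned above would enter; a full classifier system would additionally share fitness across the matching rule set and apply the usual discovery mechanisms.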