There’s a group of embedded software developers who are half programmer and half mathematician. You know who you are. You’re using fast Fourier transforms (FFT), coding IIR filters and operating on large matrices. When working with large amounts of data, you’ve found that SIMD (single instruction, multiple data) instructions can really speed things up.
If you work with large amounts of floating point data, I have some good news. The latest Intel® architecture processors double the width of the SIMD registers, going from 128 bits to 256 bits, so you can get twice the throughput of floating point operations. These additions to the instruction set are called Intel® Advanced Vector Extensions, or Intel® AVX. Even if you never call SSE or AVX instructions explicitly, you may still see a performance increase, because more than a hundred functions in Intel® IPP (Intel® Integrated Performance Primitives) already take advantage of the extensions.
Floating point instructions are used all over embedded software: FFTs, IIR filters, matrix operations, and more.
Learn more at http://software.intel.com/en-us/avx.