There’s a group of embedded software developers who are half programmer and half mathematician. You know who you are. You’re using fast Fourier transforms (FFTs), coding IIR (infinite impulse response) filters, and operating on large matrices. When working with large amounts of data, you’ve found that SIMD (single instruction, multiple data) instructions can really speed things up.


If you work with large amounts of floating point data, I have some good news. You can get twice the throughput of floating point operations with the latest Intel® architecture processors, which double the size of the SIMD registers from 128 bits to 256 bits. These additions to the instruction set are called Intel® Advanced Vector Extensions, or Intel® AVX. Even if you don’t call SSE or AVX instructions explicitly, you may see a performance increase when calling one of the more than one hundred functions in Intel® IPP (Intel® Integrated Performance Primitives) that already take advantage of the new extensions.
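To make the wider-register claim concrete, here is a minimal sketch of what an explicit 256-bit operation looks like at the intrinsics level. It is not taken from Intel’s documentation; the helper name add_arrays_avx is hypothetical, and the code assumes AVX-capable hardware, a compiler flag such as -mavx, 32-byte-aligned buffers, and an array length that is a multiple of eight.

  #include <immintrin.h>   /* AVX intrinsics */
  #include <stddef.h>

  /* Hypothetical helper: sum two float arrays eight elements at a time.
   * Assumes n is a multiple of 8 and a, b, and dst are 32-byte aligned. */
  static void add_arrays_avx(const float *a, const float *b, float *dst, size_t n)
  {
      for (size_t i = 0; i < n; i += 8) {
          __m256 va   = _mm256_load_ps(&a[i]);   /* load 8 floats (256 bits) */
          __m256 vb   = _mm256_load_ps(&b[i]);
          __m256 vsum = _mm256_add_ps(va, vb);   /* 8 additions in one instruction */
          _mm256_store_ps(&dst[i], vsum);        /* store 8 results at once */
      }
  }

With 128-bit SSE registers the same loop could process only four single-precision floats per instruction, which is where the doubled floating point throughput comes from.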


Floating point instructions are used throughout embedded applications:


  • Medical – noise removal in medical imaging
  • Industrial – adaptive PID (proportional–integral–derivative) algorithms in control applications
  • Military – compensation for non-straight-line motion when processing SAR (synthetic aperture radar) images
  • Communications – echo cancellation and precise pulse code modulation (PCM); a minimal IPP-based sketch follows this list
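As a sketch of the drop-in IPP approach mentioned above, the fragment below computes the inner product at the heart of many echo-cancellation and correlation routines with a single IPP call. The program and data are invented for illustration; the point is that ippsDotProd_32f dispatches internally to the best code path for the running CPU, so the same source can benefit from AVX without any explicit SIMD code.

  #include <stdio.h>
  #include <ipps.h>   /* Intel IPP signal processing domain */

  int main(void)
  {
      /* Toy reference and captured signals, invented for illustration. */
      Ipp32f ref[8]  = {1, 2, 3, 4, 5, 6, 7, 8};
      Ipp32f echo[8] = {8, 7, 6, 5, 4, 3, 2, 1};
      Ipp32f dot     = 0.0f;

      /* One library call replaces a hand-written multiply-accumulate loop. */
      if (ippsDotProd_32f(ref, echo, 8, &dot) != ippStsNoErr)
          return 1;

      printf("dot product = %f\n", dot);
      return 0;
  }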


Learn more at http://software.intel.com/en-us/avx.