
Back to the Future with Embedded Software (and Predictable Timing)

Henk Muller - XMOS - Duration: 20:23

In this session, we will explore how we can still learn a thing or two from the past. The very first microprocessors employed a simple execution model that enabled programmers to reason about the speed of their programs down to clock-cycle precision. Taking advantage of this, some early computers generated video and audio streams directly from software. Over time, microprocessors and microcontrollers became faster and more complex, and they lost this property, making it harder for embedded programs to respond in a precise way. At the same time, embedded programs became more complex, and the focus shifted to executing multiple real-time tasks simultaneously.

Discover how we, at XMOS, have gone back to a computational model that enables the programmer to reason about time, and to write software that is proven correct once and can be re-used in different contexts. We offer a bag of 16 hardware threads, each with predictable timing. Each thread can be programmed individually as hard real-time code and composed with other threads without affecting their timing. Offering both vector compute and IO compute, xcores can handle 10 ns-accurate IO timing from software whilst delivering up to a million 256-point FFTs per second for DSP, all in a 7x7 mm QFN.
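As a concrete illustration of per-thread deterministic IO, here is a minimal sketch of a single thread generating a precisely timed square wave on a 1-bit port, assuming the lib_xcore C API that ships with the XMOS XTC tools (the names port_enable, port_out, hwtimer_alloc, hwtimer_get_time and hwtimer_wait_until are from that library as I recall it; check the current documentation for the exact headers and signatures).

```c
/*
 * Minimal sketch, not production code: drive a 1-bit output port with a
 * square wave whose edges land on exact reference-clock ticks.
 * API names assumed from lib_xcore (XMOS XTC tools); verify before use.
 */
#include <stdint.h>
#include <xcore/port.h>
#include <xcore/hwtimer.h>

void square_wave(port_t p)
{
    const uint32_t half_period = 500;   /* 500 ticks @ 100 MHz = 5 us */

    port_enable(p);
    hwtimer_t tmr = hwtimer_alloc();
    uint32_t t = hwtimer_get_time(tmr);

    for (;;) {
        port_out(p, 1);
        t += half_period;
        hwtimer_wait_until(tmr, t);     /* pause until exactly time t */
        port_out(p, 0);
        t += half_period;
        hwtimer_wait_until(tmr, t);
    }
}
```

A caller would pass a 1-bit output port such as XS1_PORT_1A, and because each thread's instruction timing is deterministic, every edge lands exactly where the arithmetic puts it: one reference-clock tick is 10 ns at the default 100 MHz.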

We will explain how the hardware enables event-driven programming, where the compiler and programmer can reason about response times and throughput, which means that interrupts are mostly avoided. For computational code the compiler can verify the timing, so the programmer can guarantee that their DSP is fast enough to keep up with the IO.
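To make the event-driven model concrete, here is a hedged sketch of two threads, an IO task feeding a DSP task over a channel, with both sides blocking on channel events rather than taking interrupts. The macros and calls (DECLARE_JOB, PAR_JOBS, PJOB, chan_alloc, chan_out_word, chan_in_word) follow the lib_xcore C library as best I recall it, and the task bodies are placeholders; treat the exact names as assumptions to verify against the XMOS documentation.

```c
/*
 * Hedged sketch: an IO thread hands samples to a DSP thread over a channel.
 * Each thread blocks on a channel event (no interrupt handlers); the xcore
 * scheduler wakes it when data is ready. lib_xcore names are assumptions.
 */
#include <stdint.h>
#include <xcore/channel.h>
#include <xcore/parallel.h>

DECLARE_JOB(io_task,  (chanend_t));
DECLARE_JOB(dsp_task, (chanend_t));

void io_task(chanend_t c)
{
    for (uint32_t sample = 0; ; sample++) {
        /* ...sample an input port at the required rate... */
        chan_out_word(c, sample);          /* hand the sample to the DSP thread */
    }
}

void dsp_task(chanend_t c)
{
    for (;;) {
        uint32_t sample = chan_in_word(c); /* pauses until data arrives: an event, not an interrupt */
        (void)sample;
        /* ...process the sample; static timing analysis can check that this
           loop body always completes within one sample period... */
    }
}

int main(void)
{
    channel_t c = chan_alloc();            /* a channel has two ends, one per thread */
    PAR_JOBS(
        PJOB(io_task,  (c.end_a)),
        PJOB(dsp_task, (c.end_b)));
    return 0;
}
```

The point of the structure is that chan_in_word pauses the consuming thread until data arrives and the hardware wakes it, so the response path is deterministic and the two threads compose without disturbing each other's timing; no interrupt handler ever preempts either of them.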


tommypro
Score: 0 | 3 years ago | no reply

Are there trace techniques for bare metal functions/tasks?
How is interrupt tracing handled, and what is the overhead for this?

DanGreen
Score: -1 | 3 years ago | no reply

Hi, a big bottleneck is memory access time, which is why caches are traditionally used. How does XMOS deal with memory access time? Is the internal 512 kB RAM available to be read/written in one clock cycle?

Naveen_Shankar
Score: 0 | 3 years ago | 1 reply

Thank you for such an informative talk. When you say multiple (16 in this case) micros are used in a single chip, what about the power consumption and price increase for such an architecture? Also, as you mentioned caches are not used, I was a little confused about how memory synchronisation is taken care of.

HenkSpeaker
Score: 0 | 3 years ago | no reply

Hello Naveen, there are 16 logical cores, but these are implemented with just two physical cores, so pricing at volume is highly competitive. Each of the two physical cores has its own memory, and all eight logical cores in that physical core have single-cycle access, so memory is effectively synchronised on a per-clock-cycle basis.

KushP
Score: 1 | 3 years ago | no reply

Are interrupts still allowed with XCORE? It seems like there is no way around using interrupts to respond to external events. Thank you.

Joffrey
Score: 0 | 3 years ago | no reply

Thanks Henk for this intro. I've heard some good things about the "Transputer" from my "older" colleagues. I believe XMOS is a continuation of the Transputer? I am unfamiliar with the subject, but it is fascinating to learn that there is an alternative architecture to the traditional microprocessor.
