The next wave of pervasive AI pushes machine learning (ML) acceleration toward the extreme edge, with mW power budgets, while at the same time raising the bar in terms of accuracy and capabilities, with new ML models being proposed on a daily basis.

To succeed in this balancing act, we need principled ways to walk the line between flexible and highly specialized ML acceleration architectures.

In this talk I will detail how to walk that line, drawing from the experience of the open PULP (Parallel Ultra-Low Power) platform, based on ML-enhanced RISC-V processors coupled with domain-specific acceleration engines.

June 29 @ 09:20
09:20 — 10:00 (40′)

Luca Benini