(English) |
Rapidly spreading AI applications are highly dependent on cloud computing. In particular, AI applications based on neural networks are inherently data-hungry: although their computational structure is relatively simple, they require large amounts of input and output data. Edge AI, which performs AI processing solely on user (edge) devices without relying on cloud servers, is being actively researched. To realize AI applications within limited power budgets and computation time, edge AI processor technology that minimizes data movement between memory and processors, known as near-data processing, is attracting attention. To achieve both time- and power-efficient processing and accurate inference as an AI system, it is essential that algorithms be designed with the hardware structure in mind. This talk presents an overview of the theoretical and practical aspects of the technical requirements and realization of edge AI processors, together with an introduction to the author's ongoing edge AI processor projects toward next-generation information technology in which devices function collaboratively. |