Embedded General-Purpose Cognitive Computing for Sensory Processing
Project Abstract/Statement of Work:
With the proliferation of deployed sensors collecting data from many sources and modalities, from cameras and acoustic sensors to MEMS devices and GPS receivers, there is an increasing demand to extract meaningful information for intelligence and decision making. Extracting that information from massive volumes of noise-like data across multiple sensory inputs is a major computational challenge. The common approach of building a dedicated accelerator for each type of sensory input does not scale, owing to rising development and integration costs. What is needed is a general-purpose computing platform that efficiently encodes, classifies, and fuses data from multiple sensory inputs.
Neuro-inspired computing has emerged as a strong contender for sensory data processing. Popular algorithms rely on deep (multilayer) feedforward convolutional neural networks (ConvNets) that project a sensory input onto a set of specialized kernels (features) for detection and classification tasks. These powerful deep ConvNet algorithms demand intense computation in practical applications, and training a deep ConvNet is especially painstaking, requiring very large labeled datasets. Even then, deep ConvNets offer no assurance when confronted with unexpected environments and events in real-world driving scenarios. Designing versatile neuromorphic computing that can be efficiently trained, scaled up, and adapted to multiple sensory modalities in practice remains an open challenge.
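To make the projection step above concrete, the following is a minimal sketch (not the proposal's implementation) of a single ConvNet layer: the input is convolved with a bank of kernels and passed through a ReLU nonlinearity. The image size, kernel bank, and random initialization are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one kernel over an image ('valid' mode) to produce one feature map."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One layer: project the input onto a bank of kernels (the learned
# "features"), then apply a ReLU nonlinearity. Values are random here
# purely for illustration.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3))  # 4 hypothetical 3x3 kernels
feature_maps = np.maximum(
    0.0, np.stack([conv2d_valid(image, k) for k in kernels])
)
print(feature_maps.shape)  # (4, 6, 6): one 6x6 map per kernel
```

A real deep ConvNet stacks many such layers and learns the kernel values from large labeled datasets, which is the training cost the paragraph above refers to.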
Prior work has demonstrated massive-scale neuromorphic computing through chip-, package-, and board-level integration and clever circuit design, including IBM’s TrueNorth, Stanford’s Neurogrid, and Manchester’s SpiNNaker. Despite these impressive strides, it remains unclear how such architectures can be miniaturized for embedded platforms that must carry out multi-sensory processing on a limited power budget, or how they can be trained quickly.
In this pilot study, we plan to investigate algorithm-, architecture-, circuit-, and device-level co-optimization in designing multimodal sensory processing hardware to achieve fundamental improvements in function, efficiency, and scalability. The objective of this study is to develop general-purpose neuromorphic computing that extracts sparse representations of sensory inputs and fuses them. The approach builds on recent advances in sparse coding to learn better features, improve classification, and combine inputs from multiple sources. The sparse neuromorphic architecture will exploit sparsity to significantly reduce the workload, improving performance and energy efficiency on embedded platforms.
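As a sketch of what "extracting a sparse representation" means here, the snippet below solves the standard sparse coding problem min ||x − Da||² + λ||a||₁ with iterative shrinkage-thresholding (ISTA). The dictionary size, λ, and iteration count are illustrative assumptions, not parameters from this proposal; most coefficients of the recovered code are driven exactly to zero, which is the sparsity the hardware would exploit to skip work.

```python
import numpy as np

def ista(x, D, lam=0.05, n_iter=200):
    """Iterative shrinkage-thresholding: find a sparse code a with x ~= D @ a."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction error
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Illustrative setup: a random overcomplete dictionary and a signal built
# from only 3 of its 64 atoms.
rng = np.random.default_rng(1)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(64)
a_true[[3, 17, 42]] = [1.0, -0.8, 0.5]
x = D @ a_true

a = ista(x, D)
print(np.count_nonzero(a))                 # only a few active coefficients
```

Because soft thresholding zeroes coefficients exactly, the active set is small; a hardware pipeline can skip the multiply-accumulates associated with zero coefficients, which is the source of the performance and energy gains targeted above.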
Through this pilot study, we will create a sparse neuromorphic hardware architecture that provides three key capabilities for multi-sensory data processing: 1) extracting sparse representations of sensory inputs and fusing multiple sensory inputs; 2) exploiting sparsity for significant gains in computational performance and efficiency; and 3) being easily configured and programmed for all types of sensory inputs, as opposed to dedicating one accelerator to each sensor and function.