Neuromorphic Computing

Customers in the self-driving-car and IoT industries urgently need ultra-low-latency, ultra-low-power data processing. We provide revolutionary brain-inspired, AI-based solutions that outperform current approaches by more than 20x in both latency and power consumption.

In conventional data-processing solutions, such as CPU- or GPU-based methods, data is processed in a frame-based manner: the data within a predefined time window (e.g. 1 ms) is packaged first and then processed once every time-step cycle (e.g. 1 ms). Resource-expensive parallel architectures such as GPUs are normally needed to handle the heavy traffic at peak time. Adding to the energy expenditure, processing has to be carried out every cycle even when there is little or no data. This solution is like a big bus that arrives every 15 minutes (latency), so people (data) must queue at the bus stop waiting for the next bus to come. A large bus (GPUs) is needed to carry the crowd at peak time, while the bus runs almost empty or only partly loaded during off-peak hours (wasted energy and resources).
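The frame-based scheme above can be sketched in a few lines of Python. This is an illustrative toy, not our product's pipeline: `frame_based` and the doubling "processing" step are hypothetical names, and each inner list stands for one 1 ms window of buffered data.

```python
# Toy sketch of frame-based processing (illustrative only):
# events are buffered into fixed time windows, and a processing
# pass runs every cycle, whether or not the window holds any data.

def frame_based(events_by_cycle):
    """events_by_cycle: one inner list per 1 ms time window."""
    cycles_run = 0
    results = []
    for frame in events_by_cycle:
        cycles_run += 1                        # a pass runs even for an empty frame
        results.extend(e * 2 for e in frame)   # placeholder "processing" of the batch
    return results, cycles_run

# Five cycles, but data arrives in only three of them.
frames = [[1, 2], [], [3], [], []]
out, cycles = frame_based(frames)
# cycles is 5 although only 3 events arrived: the empty
# windows still cost a processing pass, wasting energy.
```

The point of the sketch is the counter: work is proportional to the number of cycles, not to the amount of data.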

Unlike the CPUs and GPUs on the market today, our solutions process data in an event-based manner, which means that 1) data is processed immediately and asynchronously, achieving ultra-low latency, and 2) processing is initiated only when a data event appears, achieving ultra-low power. This is like a small, low-cost, energy-efficient Uber car that provides the service only when it is needed.
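For contrast, the event-based scheme can be sketched the same way. Again this is an illustrative toy under assumed names (`event_based`, `handler`): the Python loop stands in for asynchronous event arrival, and the counter shows that work scales with the number of events, not with elapsed cycles.

```python
# Toy sketch of event-based processing (illustrative only):
# each event is handled immediately on arrival, and no work
# at all is performed while the input is idle.

def event_based(event_stream, handler):
    """Invoke the handler once per event; idle time costs nothing."""
    invocations = 0
    results = []
    for event in event_stream:      # stands in for asynchronous arrival
        invocations += 1            # processing is triggered by the event itself
        results.append(handler(event))
    return results, invocations

# Only two events arrive over the same period as before.
out, n = event_based([1, 3], lambda e: e * 2)
# n is 2: processing ran exactly as often as data appeared.
```

Compared with the frame-based sketch, the empty windows simply never exist here, which is where the latency and power savings come from.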

© Copyright. All rights reserved.