RadiumBoards’ HD Camera Cape is a high-resolution portable camera built around an Aptina MT9M114 1.3-megapixel 1/6-inch sensor, which can capture 1280 x 960 still images and 720p video at 30fps. The Cape offers a pair of 46-pin connectors, two pairs of 30-position sensor board connectors, and a 24-pin camera socket; power is supplied via the 3.3V or 5V expansion connectors, and the board sports a pair of LED indicators. RadiumBoards touts the camera’s superior low-light performance, low power consumption, and progressive-scan electronic rolling shutter (ERS).




Figure 4. Lattice’s sensAI hardware/software stack includes two low power FPGA platforms, IP cores, software tools and reference designs. (Source: Lattice Semi)

GPU Deep Learning


What about using a more specialized hardware accelerator such as a GPU? GPUs are highly optimized to perform simple operations on large amounts of data in parallel, making them an efficient choice for AI workloads. However, while GPU makers are targeting AI at the edge, their offerings are somewhat geared towards computer vision and object-recognition applications.
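To make the parallelism point concrete, here is a minimal Python/NumPy sketch (names and sizes are arbitrary illustrations, not from the article) of the kind of workload that dominates neural-network inference: a dense layer is a single large matrix multiply, in which every output neuron is an independent multiply-accumulate that a GPU can compute in parallel.

```python
import numpy as np

# A dense (fully connected) layer reduces to one matrix multiply plus a
# bias add and activation -- exactly the simple, data-parallel arithmetic
# GPUs are optimized for. Illustrative sketch only.
def dense_layer(x, w, b):
    # x @ w computes every output neuron's dot product at once.
    return np.maximum(x @ w + b, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 1024))      # 32 inputs, 1024 features each
weights = rng.standard_normal((1024, 256))   # 1024 -> 256 neurons
bias = np.zeros(256)

out = dense_layer(batch, weights, bias)
print(out.shape)  # (32, 256)
```

On a CPU this runs serially or with modest vectorization; a GPU maps the same multiply-accumulates across thousands of cores, which is why inference accelerators are judged largely on how efficiently they execute this one operation.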

Nvidia released the Jetson Nano a few weeks ago, a small board that comes in two versions: a $99 dev kit and a $129 production-ready module (see figure 5). It includes a 128-core CUDA GPU, a quad-core Arm CPU, and 4GB of memory.


While providing few details, Qualcomm executives stressed the power efficiency of the Cloud AI 100, noting that the company’s heritage as a smartphone chip designer has ingrained power efficiency into its engineers’ DNA.

“I really think the way to get the most power-efficient processor is that you have to start with a mobile mindset,” Kressin said.

The market for inference accelerators is currently dominated by Nvidia GPUs, though more specialized solutions such as Google’s Tensor Processing Unit (TPU) and Amazon’s Inferentia are available. In all, more than 30 companies are known to be developing chips purpose-built for AI inferencing, ranging from heavyweights such as Intel and Xilinx to a host of startups led by the likes of Graphcore, Wave Computing, and Mythic.

“Qualcomm will certainly have to do something impressive to differentiate from these other vendors,” said Linley Gwennap, president and principal analyst at the Linley Group. “Qualcomm executives like to wave their hand and say ‘power efficiency.’ But what you can do in a smartphone is very different from what you can do in a server.”

The processor-in-memory architectures in the works today, which use analog computing in 40-nm NOR cells, are just the start of strange beasts to come. Such designs will need to migrate to exotic phase-change, resistive, or magnetic RAM cells using FinFETs to scale, notes another researcher.
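The idea behind analog compute-in-memory can be sketched numerically. In the toy model below (a rough illustration under assumed behavior, not a model of any specific device), weights are stored as cell conductances, inputs are applied as wordline voltages, and each bitline current sums the products by Ohm’s and Kirchhoff’s laws; a crude uniform quantizer stands in for the limited precision of a flash cell.

```python
import numpy as np

# Toy model of an analog in-memory multiply-accumulate: weights live in
# the array as conductances G, inputs arrive as wordline voltages v, and
# each bitline current is a column-wise dot product G.T @ v.
# Real NOR-flash arrays add noise and nonlinearity; here a simple uniform
# quantizer approximates the cells' limited precision.
def analog_mac(G, v, levels=16):
    g_max = np.abs(G).max()
    # Snap each conductance to one of `levels` discrete values.
    Gq = np.round(G / g_max * (levels - 1)) / (levels - 1) * g_max
    return Gq.T @ v  # bitline currents = sums of cell currents

rng = np.random.default_rng(1)
G = rng.uniform(0, 1, size=(64, 8))   # 64 wordlines x 8 bitlines
v = rng.uniform(0, 1, size=64)        # input voltages

exact = G.T @ v
approx = analog_mac(G, v)
print(np.max(np.abs(exact - approx)))  # small quantization error
```

The appeal is that the multiply-accumulate happens where the weights are stored, avoiding the memory traffic that dominates digital inference; the cost is the precision loss the quantizer hints at, which is why cell technology (NOR flash today, phase-change or resistive RAM later) matters so much.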
