SEMRON

About SEMRON

SEMRON develops a 3D-scalable AI inference chip using its proprietary CapRAM™ technology, which integrates compute-in-memory architecture to enhance energy efficiency and parameter density for AI applications. This technology addresses the high costs and power consumption of traditional AI chips, enabling efficient deployment of generative AI models directly on edge devices like smartphones and wearables.



Where is SEMRON located?

SEMRON is based in Dresden, Germany.

When was SEMRON founded?

SEMRON was founded in 2020.

How much funding has SEMRON raised?

SEMRON has raised $9,710,000.

Location
Dresden, Germany
Founded
2020
Funding
$9,710,000
Employees
18 employees
Major Investors
Join Capital


SEMRON

⚠️ AI-generated overview based on web search data – may contain errors, please verify information yourself!


semron.ai
Crunchbase
Founded 2020 · Dresden, Germany

Funding

Estimated Funding

$5M+

Major Investors

Join Capital

Team (15+)

No team information available.

Company Description

Problem

Existing AI chips face rising costs and efficiency challenges as AI model complexity grows. Server-class chips are expensive and power-hungry, while mobile chips lack the performance to run generative AI models directly on edge devices. This limitation forces reliance on cloud computing, increasing latency and reducing margins for edge device manufacturers.

Solution

SEMRON addresses these challenges with CapRAM™, a 3D-scalable, compute-in-memory (CIM) technology that enhances energy efficiency and parameter density for AI inference. CapRAM™ utilizes a memcapacitive approach, achieving up to 50x greater energy efficiency compared to memristive solutions. By integrating memory and processing within the same device, CapRAM™ reduces data transfer bottlenecks and enables efficient deployment of generative AI models on devices like smartphones, wearables, and headsets. SEMRON provides a workflow for deploying AI models on its hardware, starting with Hugging Face or custom ONNX models, which are then compiled and packaged into a container to run directly on SEMRON hardware.
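The deployment workflow described above (Hugging Face or custom ONNX model → integer compilation → container → on-chip execution) can be sketched roughly as follows. This is a minimal illustrative sketch only: every function, type, and target name below is a hypothetical stand-in, since SEMRON's actual SDK and Host Library APIs are not public.

```python
# Hypothetical sketch of the described deployment workflow.
# All names here are illustrative stand-ins, not a real SEMRON API.

from dataclasses import dataclass


@dataclass
class CompiledModel:
    name: str
    precision: str  # e.g. "INT8" after the integer-conversion step


def load_onnx(path: str) -> str:
    """Stand-in for loading a Hugging Face export or custom ONNX model."""
    return path.rsplit("/", 1)[-1]


def compile_to_int8(model_name: str) -> CompiledModel:
    """Stand-in for the compiler step that converts a floating-point
    model into an efficient integer (INT8) version."""
    return CompiledModel(name=model_name, precision="INT8")


def package_container(model: CompiledModel) -> dict:
    """Stand-in for packaging the compiled model into a container
    that runs directly on the target hardware."""
    return {"model": model.name, "precision": model.precision, "target": "semron-capram"}


container = package_container(compile_to_int8(load_onnx("models/my-model.onnx")))
print(container)
```

The point of the sketch is the staging: the floating-point model never runs on the device; only the compiled, containerized integer version does.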

Features

3D-scalable CapRAM™ architecture for high parameter density and energy efficiency

Memcapacitive compute-in-memory technology, offering up to 50x greater energy efficiency than memristive solutions

Achieves 0.2-1 TOPS/mW energy efficiency with multi-bit precision (INT8)

Parameter density of 500M parameters/mm²

Compiler converts floating-point models into efficient integer versions using a Brevitas-inspired API

Embedded control software manages execution

SEMRON Host Library ensures seamless integration with the customer’s hardware and software
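To make two of the claims above concrete, here is a plain-Python sketch of the standard symmetric INT8 quantization scheme (the textbook approach to "converting floating-point models into efficient integer versions", not SEMRON's actual compiler logic), plus a back-of-envelope area estimate using the quoted 500M parameters/mm² density figure.

```python
# Symmetric INT8 quantization: the textbook scheme, shown for illustration
# only; SEMRON's Brevitas-inspired compiler internals are not public.

def quantize_int8(weights):
    """Map floats to INT8 codes [-128, 127] with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate floats from INT8 codes."""
    return [v * scale for v in q]


w = [0.42, -1.27, 0.05, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-weight quantization error is bounded by half a step (scale / 2).
assert all(abs(a - b) <= s / 2 for a, b in zip(w, w_hat))

# Back-of-envelope: at the quoted 500M parameters/mm², a 3B-parameter
# model would need roughly 3e9 / 5e8 = 6 mm² of CapRAM area.
area_mm2 = 3e9 / 500e6
print(q, area_mm2)
```

The INT8 representation is what makes the multi-bit-precision efficiency figures above meaningful: each weight is stored and computed in 8 bits instead of 32, at the cost of a bounded rounding error.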

Target Audience

The primary target audience includes manufacturers of smartphones, wearables, headsets, and other edge devices seeking to integrate generative AI capabilities directly into their products.
