Quadric

About Quadric

Quadric has developed the Chimera GPNPU, a licensable processor architecture that integrates on-device machine learning inference with the ability to run complex C++ code without requiring code partitioning across multiple processor types. The technology scales from 1 to 864 TOPs and supports a wide range of machine learning models, including classical networks and large language models, streamlining SoC design and accelerating model porting.


What does Quadric do?

Quadric develops the Chimera GPNPU, a licensable processor architecture for SoCs that runs on-device machine learning inference alongside complex C++ code on a single core, with no code partitioning across processor types. It scales from 1 to 864 TOPs and supports models ranging from classical networks to large language models.

Where is Quadric located?

Quadric is based in Burlingame, United States.

When was Quadric founded?

Quadric was founded in 2017.

How much funding has Quadric raised?

Quadric has raised $48,250,000.

Location
Burlingame, United States
Founded
2017
Funding
$48,250,000
Employees
10 employees



⚠️ AI-generated overview based on web search data – may contain errors, please verify information yourself!

Executive Summary

Quadric's Chimera GPNPU is a licensable processor architecture that unifies on-device ML inference and complex C++ execution on one core, removing the need to partition code across multiple processor types. It scales from 1 to 864 TOPs, supports models from classical networks through large language models, and aims to simplify SoC design and speed model porting.

quadric.io
Founded 2017
Burlingame, United States

Funding

Estimated Funding

$20M+

Team (10+)

No team information available.

Company Description

Problem

Modern System on Chip (SoC) designs for AI inference often require partitioning code across multiple processor types, increasing complexity and development time. Existing solutions struggle to efficiently handle both machine learning inference and complex C++ code on a single architecture, leading to performance bottlenecks and integration challenges.

Solution

Quadric offers the Chimera GPNPU, a licensable processor architecture designed to streamline SoC design by integrating on-device machine learning inference with the ability to run complex C++ code without code partitioning. The Chimera GPNPU scales from 1 to 864 TOPs and supports a wide range of machine learning models, including classical networks, vision transformers, and large language models (LLMs). This unified architecture simplifies application development and accelerates the porting of new ML models, reducing the need for multiple specialized processors.

Features

Single, unified architecture for both ML inference and complex C++ code execution

Scalable performance from 1 to 864 TOPs

Support for a wide range of machine learning models, including classical networks, vision transformers, and LLMs

Chimera DevStudio for AI software simulation and SoC design visualization

Safety-enhanced versions available for automotive applications (ASIL-ready cores)

Simplified SoC hardware design and accelerated ML model porting

Target Audience

The primary target audience includes SoC designers and system architects in markets such as automotive, edge computing, and embedded systems who require high-performance, flexible, and scalable AI inference capabilities.
