Tensil

About Tensil

Tensil provides an open-source machine learning model compiler and hardware generator that creates custom inference accelerators for edge FPGAs. This enables rapid deployment of optimized ML models on resource-constrained devices, improving performance and efficiency in edge computing applications.


What does Tensil do?

Tensil provides an open-source machine learning model compiler and hardware generator that creates custom inference accelerators for edge FPGAs. This enables rapid deployment of optimized ML models on resource-constrained devices, improving performance and efficiency in edge computing applications.

Where is Tensil located?

Tensil is based in San Francisco, United States.

When was Tensil founded?

Tensil was founded in 2018.

How much funding has Tensil raised?

Tensil has raised $150,000.

Location
San Francisco, United States
Founded
2018
Funding
$150,000
Major Investors
Y Combinator, UpHonest Capital



Website
tensil.ai

Funding

Estimated Funding
$100K+


Team

No team information available.

Company Description

Problem

Deploying machine learning (ML) models to edge devices is challenging due to the limited resources and power constraints of these devices. Existing solutions often require significant manual optimization and hardware expertise, slowing down deployment and increasing costs.

Solution

Tensil provides an open-source ML model compiler and hardware generator that automates the creation of custom inference accelerators for edge field-programmable gate arrays (FPGAs). The compiler optimizes ML models for specific FPGA architectures, while the hardware generator produces synthesizable RTL (register-transfer level) code for the accelerator. This allows developers to rapidly deploy optimized ML models on resource-constrained edge devices, improving performance and energy efficiency.
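As a concrete illustration of this flow, the sketch below drives the toolchain from Python. The Docker image name (tensilai/tensil), the `tensil rtl` and `tensil compile` subcommands, the architecture file path, and the flag names follow Tensil's public tutorials, but they are assumptions here and may have changed; treat this as a sketch, not a definitive invocation.

```python
# Sketch: generate accelerator RTL and compile a model with the Tensil
# CLI, run inside the project's Docker container. Image name, subcommand
# names, flags, and file paths are assumptions taken from Tensil's
# tutorials -- verify against the current documentation.
import os
import subprocess

ARCH = "/demo/arch/pynqz1.tarch"  # example architecture definition shipped in the image

def tensil(*args):
    """Run one Tensil CLI command inside the tensilai/tensil container."""
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}:/work", "-w", "/work",
         "tensilai/tensil", "tensil", *args],
        check=True,
    )

# 1. Emit synthesizable Verilog RTL for the chosen accelerator architecture.
tensil("rtl", "-a", ARCH)

# 2. Compile an ONNX model into the artifacts the accelerator executes.
tensil("compile", "-a", ARCH, "-m", "resnet20v2_cifar.onnx", "-o", "Identity:0")
```

From there, the generated RTL would be synthesized with the vendor FPGA toolchain for the target board, while the compiled model artifacts are loaded onto the running accelerator.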

Features

- Open-source ML model compiler and hardware generator
- Automated creation of custom inference accelerators for edge FPGAs
- Supports ONNX and TensorFlow frozen graphs as input model formats (see the export sketch after this list)
- Generates synthesizable Verilog RTL code for the accelerator
- Includes a bit-accurate emulator for functional verification
- Provides tutorials and documentation for various FPGA development platforms (e.g., PYNQ Z1, Ultra96, ZCU104)
- Docker container for easy setup and deployment
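Since ONNX is one of the accepted input formats, a typical first step is exporting a trained model to ONNX before invoking the compiler. The sketch below uses PyTorch's standard `torch.onnx.export`; the torchvision ResNet-18 is a stand-in chosen for illustration, and the file and tensor names are arbitrary.

```python
# Sketch: export a model to ONNX, one of the input formats the Tensil
# compiler accepts. The specific model, names, and opset are illustrative.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()  # export in inference mode

# A dummy input fixes the graph's input shape for the exported model.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",   # output file that would be handed to the compiler
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```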

Target Audience

The primary users are developers and engineers working on edge computing applications who need to deploy ML models on FPGAs, including those in robotics, IoT, and embedded systems.
