Find Investable Startups and Competitors
Search thousands of startups using natural language—just describe what you're looking for
Top 50 AI Accelerator Chip Startups
Discover the top 50 AI accelerator chip startups. Browse funding data, key metrics, and company insights. Average funding: $68.7M.
RaiderChip
RaiderChip designs semiconductor hardware accelerators that enhance AI performance by addressing memory bandwidth limitations. Their solutions enable efficient AI inference for both edge and cloud applications, allowing users to run complex large language models locally with full privacy and without ongoing subscriptions.
Funding: $1M+
Rough estimate of the amount of funding raised
Rebellions
Rebellions develops AI accelerators that utilize HBM3e chiplet architecture and 5nm System-on-Chip technology to enhance energy efficiency and computational performance for deep learning applications. The company addresses the need for scalable and efficient AI inference solutions in the rapidly growing generative AI market.
Funding: $200M+
BrainChip
BrainChip licenses AI accelerator hardware designs and development tools for on-device intelligence. Their Akida processor IP utilizes sparsity and event-based neural networks to deliver unmatched efficiency for real-time AI applications. This technology reduces latency and power consumption, enabling devices to detect, analyze, and respond to events without cloud dependence.
Funding: $20M+
MemryX
MemryX is developing an Edge AI Accelerator that employs specialized AI chip architecture to improve processing efficiency for edge devices. This technology enables real-time data analysis and decision-making in environments with limited computational resources.
Funding: $10M+
Habana
Habana Labs develops Intel® Gaudi® AI accelerators designed for high-performance deep learning training and inference, providing enterprises and cloud providers with efficient compute solutions. Their technology delivers up to 40% better price/performance on cloud instances, addressing the need for cost-effective and scalable AI infrastructure.
Funding: $50M+
NEUCHIPS
NEUCHIPS develops AI ASIC solutions, including the Evo Gen 5 PCIe Card and Gen AI N3000 Accelerator, specifically designed for deep learning inference in data centers. Their technology addresses the need for energy-efficient hardware that minimizes total cost of ownership (TCO) while enhancing performance for machine learning applications.
Funding: $50M+
Axelera AI
Axelera AI manufactures AI acceleration hardware, specifically the Metis AI Processing Unit (AIPU), designed for efficient edge computing with up to 214 TOPS performance and 15 TOPS per watt. The technology addresses the need for cost-effective and energy-efficient solutions in generative AI and computer vision applications across various industries, including retail and security.
Funding: $100M+
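Vendor efficiency claims like the ones in these listings can be sanity-checked with simple arithmetic: peak throughput divided by efficiency gives the implied power envelope. A rough sketch using the vendors' own published figures from this list (actual power draw depends on workload and utilization, so treat these as back-of-envelope numbers only):

```python
# Back-of-envelope power envelope implied by vendor spec sheets:
#   power (W) = peak throughput (TOPS) / efficiency (TOPS per watt)
# Figures below are the vendors' own claims as quoted in the listings.

def implied_power_watts(peak_tops: float, tops_per_watt: float) -> float:
    """Power draw implied by a peak-TOPS figure and a TOPS/W figure."""
    return peak_tops / tops_per_watt

# Axelera Metis AIPU: up to 214 TOPS at 15 TOPS/W -> roughly a 14 W
# envelope, consistent with an edge-class (not data-center) part.
print(round(implied_power_watts(214, 15), 1))  # ~14.3
```

The same check applies to the data-center entries further down: Neurophos's claimed 160,000 TOPS at 300 TOPS/W implies a budget in the hundreds of watts, i.e. a rack-scale rather than edge part.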
Mythic
Mythic provides analog compute‑in‑memory AI inference accelerators that integrate compute and weight storage on a single silicon plane, eliminating off‑chip memory traffic. Delivered as standard M.2 cards, the APUs achieve up to 25 TOPS with 3‑4× lower power than comparable digital accelerators, and are compatible with TensorFlow and PyTorch for edge devices such as robots, drones, and smart‑city cameras.
Funding: $10M+
Exa Laboratories
Exa Laboratories manufactures reconfigurable chips for AI that achieve up to 27.6 times the efficiency of traditional GPUs by dynamically adapting to various AI models through software configuration. This technology addresses the limitations of classical computing architectures, enhancing speed and energy efficiency for applications ranging from data centers to edge devices.
Untether AI
Untether AI develops high-density AI accelerators that utilize at-memory computing to enhance the speed and energy efficiency of AI inference tasks. Their technology enables real-world applications, such as autonomous vehicles and smart cities, to operate more effectively and affordably.
Funding: $100M+
EdgeCortix
EdgeCortix develops the SAKURA-II Edge AI Platform, an energy-efficient AI accelerator that delivers up to 240 TOPS for real-time inferencing in compact, low-power modules. This technology addresses the need for high-performance AI processing at the edge, significantly reducing operational costs across various sectors, including defense, robotics, and smart manufacturing.
Funding: $20M+
Panmnesia
Panmnesia manufactures a chip that uses Compute Express Link (CXL) technology to let data center operators efficiently pool and manage AI accelerators, processors, and memory. By providing adequate memory resources across diverse devices, this approach improves system performance and addresses the scalability and resource-allocation challenges of large-scale computing environments.
Funding: $50M+
Etched.ai
Etched.ai develops Sohu, the world's first ASIC specifically designed for transformer models, enabling AI computations to be executed at least ten times faster and more cost-effectively than traditional GPUs. This technology allows for real-time processing of large-scale AI models, enhancing applications such as voice agents and content generation.
Funding: $100M+
SEMRON
SEMRON develops a 3D-scalable AI inference chip using its proprietary CapRAM™ technology, which integrates compute-in-memory architecture to enhance energy efficiency and parameter density for AI applications. This technology addresses the high costs and power consumption of traditional AI chips, enabling efficient deployment of generative AI models directly on edge devices like smartphones and wearables.
Funding: $5M+
DEEPX
DEEPX develops on-device AI semiconductor solutions, including custom NPUs, SoC ASICs, and specialized modules, optimized for low power consumption and high performance in applications like video analytics, security, and robotics. By enabling real-time AI processing with support for multiple models on a single chip, DEEPX addresses the latency, privacy, and network-cost challenges of cloud-based systems. Its scalable architecture, backed by 259 patents, delivers cost-competitive, silicon-proven products for global markets.
Funding: $100M+
FuriosaAI
FuriosaAI develops the RNGD data center accelerator, utilizing a Tensor Contraction Processor architecture to enhance the efficiency of AI inference with a power profile of just 150W. This technology enables enterprises to deploy large language models and multimodal applications with low latency and high throughput, significantly reducing energy consumption and operational costs in data centers.
Funding: $100M+
Esperanto Technologies
Esperanto Technologies develops massively parallel, energy-efficient chips based on the RISC-V instruction set architecture, specifically designed for Generative AI and high-performance computing (HPC) applications. Their ET-SoC-1 chip features over a thousand low-power RISC-V cores, providing superior compute efficiency and significantly reducing total cost of ownership for AI inference and HPC workloads.
Funding: $50M+
Flux Computing
Flux Computing builds silicon‑photonic accelerator modules that perform matrix multiplications with light, delivering over ten times the performance per watt of conventional GPUs. The system uses wavelength‑division multiplexed optical interconnects and exposes standard CUDA/OpenCL and TensorFlow/PyTorch APIs, allowing seamless integration into existing AI software stacks. Modular cards can be tiled in standard racks, giving hyperscale data centers and AI labs a scalable, energy‑efficient compute solution.
Salience Labs
Salience Labs is developing a hybrid photonic-electronic chip designed to enhance the processing speed and energy efficiency of artificial intelligence applications. This technology addresses the limitations of traditional electronic chips by enabling faster data transfer and lower power consumption, crucial for scaling AI systems.
Funding: $20M+
GEMESYS
GEMESYS develops a neuromorphic chip that mimics the human brain's information-processing mechanisms to enhance artificial intelligence hardware. This technology addresses computing bottlenecks by enabling more efficient training of neural networks for AI applications.
Enflame
Enflame develops cloud-based deep learning chips specifically designed for AI training platforms, enhancing computational efficiency and speed. This technology addresses the high resource demands of AI model training, enabling faster iterations and reduced operational costs for businesses.
Funding: $200M+
Lemurian Labs
Lemurian Labs develops programmable hardware accelerators designed for edge AI and robotics, enabling efficient training and deployment of large-scale AI models. The company addresses the high costs and accessibility issues associated with current hardware solutions, providing an architecture-agnostic platform that enhances portability and reduces vendor lock-in.
Expedera
Expedera provides scalable neural processing unit (NPU) semiconductor IP with a packet-based architecture that enables parallel execution of AI workloads, achieving up to 90% processor utilization. This approach reduces memory overhead, power consumption, and latency while supporting complex AI models across edge devices in industries like mobile, automotive, and industrial automation.
Funding: $20M+
DreamBig Semiconductor
DreamBig Semiconductor offers a Chiplet platform with SMARTNIC-DPU solutions designed for low latency and high throughput in AI, data centers, and storage acceleration. Their technology addresses the need for efficient data processing and inherent security in high-demand computing environments.
Funding: $50M+
Inspire Semiconductor
Inspire Semiconductor provides the Thunderbird accelerated computing platform, a "supercomputer-cluster-on-a-chip" solution for HPC and AI workloads. Its RISC-V architecture and all-CPU programming model offer energy efficiency and simplified development, reducing datacenter TCO and carbon footprint.
Funding: $3M+
NextSilicon
NextSilicon's Maverick-2 Intelligent Compute Accelerator (ICA) utilizes software-defined hardware to dynamically optimize performance for high-performance computing (HPC) and artificial intelligence (AI) workloads. This technology eliminates the need for extensive code rewrites, significantly reducing development time and enabling faster insights across various applications.
Funding: $200M+
Lightelligence
Lightelligence develops photonic computing solutions that integrate optical and electronic components to accelerate AI workloads, addressing the limitations of traditional electronic systems, such as the "memory wall." Products like the HUMMINGBIRD optical network-on-chip and PACE photonic arithmetic engine enable exponential increases in processing speed and efficiency for domain-specific applications.
Funding: $200M+
TensorWave
TensorWave provides a cloud platform optimized for AI workloads, utilizing AMD's Instinct MI300X accelerators for enhanced training, fine-tuning, and inference capabilities. The platform offers immediate availability, lower total cost of ownership, and seamless integration with popular frameworks like PyTorch and TensorFlow, addressing the need for efficient and scalable AI compute solutions.
Funding: $20M+
Luminous Computing
Luminous Computing develops photonics chips designed to provide the necessary compute, memory, and bandwidth for advanced artificial intelligence applications. This technology addresses the limitations of current hardware, enabling instant processing of complex queries and facilitating the development of next-generation AI solutions.
Funding: $100M+
Omni Design Technologies
Omni Design Technologies provides high‑speed, low‑power analog‑to‑digital converter and front‑end semiconductor IP for heterogeneous system‑on‑chip designs, offered as reusable IP blocks, chiplet “droplets,” or hard macros. Their 64 GS/s ADC cores and multi‑channel analog front‑ends deliver >10 ENOB with sub‑100 µW power per channel, include on‑chip PVT monitoring, and support DSP‑ready interfaces such as JESD204B/C and high‑bandwidth SerDes. The IP enables fabless and in‑house design teams to accelerate development of AI accelerators, data‑center networking, automotive ADAS, telecom RF, aerospace, and quantum computing products.
Funding: $20M+
Taalas
Taalas provides a platform that transforms any AI model into custom silicon, creating Hardcore Models that are hardwired for optimal performance. This approach significantly enhances computational efficiency, achieving up to 1000 times the performance of traditional software implementations.
Funding: $50M+
Tsavorite Scalable Intelligence
Tsavorite develops composable silicon chiplets that enable scalable AI compute for enterprises, allowing for the training of trillion-parameter models and rapid fine-tuning of large language models. Their software provides a streamlined, no-code deployment process, addressing the need for efficient and accessible AI infrastructure in a resource-constrained environment.
Lumiphase
Traction Score: 10 (relative to companies in the same age group, based on online presence metrics)
Lumiphase develops silicon photonics-based optical processors for AI inference, enabling faster and more energy-efficient AI computation. Their technology replaces traditional electronic components with light-based circuits, accelerating AI workloads while reducing power consumption in data centers and edge devices.
Funding: $2M+
XMOS
XMOS provides the XCORE® Generative System‑on‑Chip (GenSoC), a programmable silicon platform that compiles natural‑language system specifications into deterministic, parallel firmware with sub‑microsecond latency. The SoC integrates audio I/O, voice‑fusion DSP, motor‑control peripherals and an on‑chip AI inference engine, allowing OEMs to replace multiple discrete chips with a single component for audio, voice, robotics and industrial automation applications. This reduces hardware bill‑of‑materials, development time and timing‑error risk while delivering guaranteed real‑time performance.
Funding: $10M+
SiPearl
SiPearl is developing a high-performance, low-power microprocessor specifically for supercomputing and artificial intelligence, designed to integrate with any third-party accelerator. This technology addresses the need for efficient processing of large volumes of data in critical fields such as medical research, energy management, and climate modeling, while minimizing carbon footprint.
Funding: $100M+
Neurophos
Neurophos develops a photonic computing architecture that utilizes ultra-dense optical modulators to achieve 160,000 TOPS at 300 TOPS per watt, significantly outperforming traditional GPUs. This technology addresses the escalating demand for AI compute power by providing a solution that replaces 100 GPUs with a single processor while consuming only 1% of the energy.
Funding: $10M+
Axelera AI
Traction Score: 10
Axelera AI develops and sells high-performance, energy-efficient AI inference hardware for edge devices. Their Metis AI Platform integrates a specialized in-memory computing architecture with a comprehensive software stack, enabling efficient deployment of deep learning models for computer vision and natural language processing applications.
Funding: $50M+
MatX
MatX manufactures specialized hardware designed for training and inference of large AI models, delivering up to 10× more computing power for workloads with over 7 billion parameters. This enables researchers and startups to efficiently train advanced models, significantly reducing the time and cost associated with developing state-of-the-art AI systems.
Funding: $100M+
Brain-CA Technologies
Brain-CA Technologies develops AI processors that mimic human brain architecture to enhance energy efficiency and reduce complexity in AI systems. By addressing the limitations of current chip technology, their solutions enable clients to achieve high performance with minimal power consumption.
Funding: $2M+
Chipletti
Chipletti designs and manufactures advanced node chiplet modules specifically for AI compute applications. Their technology enables high-performance, scalable solutions for demanding AI workloads.
AiM Future, Inc.
AiM Future develops the NeuroMosAIc Processor (NMP), an AI accelerator built on a RISC-V architecture for high-performance semiconductor applications. Its SDK, compatible with the TensorFlow, Caffe, PyTorch, and ONNX frameworks, lets clients efficiently evaluate neural network performance metrics such as accuracy, memory bandwidth, and run time.
Funding: $5M+
Graphcore
Graphcore designs and manufactures Intelligence Processing Units (IPUs) and the Poplar software stack to accelerate machine learning workloads. Their technology enables faster training and inference for complex AI models across various industries. IPUs are optimized for the parallel processing demands of deep learning, offering a distinct advantage for AI innovation.
Aligned
Aligned provides high-performance GPU platforms powered by AMD Instinct™ MI300X accelerators for custom AI model training and inference, enabling enterprises to efficiently handle large datasets and complex workloads. The company delivers tailored computing solutions that optimize speed and efficiency, addressing the need for scalable infrastructure in AI and machine learning applications.
Funding: $20M+
Cambricon
Cambricon designs and develops artificial intelligence (AI) processors and acceleration cards for cloud, edge, and terminal applications. Their products, including MLUs and IP cores, are built on advanced architectures to enhance AI computing performance. The company also provides software development platforms and systems to support AI deployment.
Axera
Axera develops high-performance AI System-on-Chips (SoCs) that utilize hybrid precision processing and pixel-level AI imaging technology to enhance edge computing applications in smart IoT, autonomous driving, and robotics. Their solutions address the need for efficient, high-quality data processing and imaging in complex environments, enabling advanced functionalities in various edge devices.
Gigantor
Gigantor provides the GigaMAACS platform that automatically transforms trained neural‑network models into custom FPGA or ASIC hardware pipelines, delivering a synthesized netlist and bitstream ready for deployment. The solution enables edge AI devices to run HD/4K object detection and multi‑object tracking at over 240 FPS with microsecond latency while respecting power and area constraints. It targets OEMs, system integrators and hardware manufacturers building real‑time AI for defense, autonomous vehicles, robotics, medical imaging and smart‑city applications.
Quadric
Quadric has developed the Chimera GPNPU, a licensable processor architecture that integrates on-device machine learning inference with the ability to run complex C++ code without requiring code partitioning across multiple processor types. This technology scales from 1 to 864 TOPS and supports all machine learning models, including classical networks and large language models, streamlining SoC design and accelerating model porting.
Funding: $20M+
LightSpeedAI Labs
LightSpeedAI Labs develops an optoelectronic processor that uses light for high-speed artificial intelligence computations and fits into standard PCIe slots in server racks. This technology enhances performance for machine learning applications while significantly lowering the cost per compute compared to traditional electronic processors.
Funding: $500K+
deepsilicon
Deepsilicon develops software and hardware solutions that optimize neural network performance on-device, achieving 8x less RAM usage, 20x higher throughput, and 100x improved power efficiency. This technology addresses the challenges of high resource consumption and slow processing speeds in running complex AI models.
HyperCIM
HyperCIM offers dedicated hardware accelerators, Load Processing Units (LPUs), to offload data preprocessing and protocol handling from CPUs and GPUs. These LPUs enable ultra-low latency processing of high-frequency data streams and financial messaging, accelerating AI inference and real-time decision-making.