Through Innovative ASIC Solutions
Our company has been dedicated to ASIC design, leveraging the power of technology to shape the future. We have a refined team, with more than 80% of our members being exceptional engineers who have experience at leading semiconductor companies both domestically and internationally. As a founding member of MLCommons, we are always at the forefront of technological innovation. Our headquarters are located in the Hsinchu Science Park, close to top academic research institutions and an advanced technology industry chain, providing our employees with an ideal environment full of challenges and growth opportunities. We stood out in the MLPerf 3.0 DLRM efficiency competition, showcasing our unparalleled technical prowess. Currently, our products have expanded into the fields of LLMs and generative AI, continuing to write a glorious chapter of technological innovation.
Neuchips' integrated software stack combines our AI ASIC hardware with a comprehensive software solution. Starting with our AI ASIC and OS drivers at the base, our stack includes optimized compilers and ML frameworks working alongside our Neuchips Engine. We support popular pre-trained AI models like Llama and Mistral, complete with user-friendly application interfaces and management tools for seamless deployment.

Designed to unleash the full potential of LLMs (Large Language Models) by offloading more than 90% of the resources generative AI requires from the CPU, delivering maximum LLM-focused performance.
Elevate your AI capabilities with our Gen AI Inferencing Cards. Engineered for high-performance AI applications, our cards offer seamless integration, exceptional reliability, and scalable solutions tailored to your needs.
Our comprehensive solution integrates our cutting-edge hardware with powerful software components, creating a complete end-to-end system designed to accelerate AI adoption. This seamless hardware-software integration breaks down implementation barriers, enabling AI applications to rapidly deploy across industries and use cases.
As AI’s growth faces energy challenges, the company is focusing on energy efficiency — with the capability to run a 14-billion parameter model on a single AI card and chip at just 45W.
Ahead of Embedded World 2025, Neuchips, a leading Artificial Intelligence (AI) Application-Specific Integrated Circuits (ASIC) provider, is announcing a collaboration with Vecow and Golden Smart Home (GSH) Technology Corp.’s ShareGuru. The partnership is aimed at revolutionizing SQL data processing using a private, secure, and power-efficient AI solution, which delivers real-time insights from in-house databases via natural language requests.
Neuchips, a leading AI Application-Specific Integrated Circuits (ASIC) solutions provider, will demo its revolutionary Raptor Gen AI accelerator chip (previously named N3000) and Evo PCIe accelerator card LLM solutions at CES 2024.
Neuchips, the leader in AI ASIC platforms for deep learning recommendation, participated in MLPerf™ v3.0 with its RecAccel™ N3000 and demonstrated industry-leading performance and power efficiency.
“There are a lot of opportunities in the AI space,” says Ken Lau, CEO of AI chip startup Neuchips. “If you look at any public data, you will see that AI, in particular, generative AI [GenAI], could be a trillion-dollar market by 2030 timeframe. A lot of money is actually being spent on training today, but the later part of the decade will see investments going to inferencing.”
Ken Lau, CEO of Neuchips, shared his views on the AI chip sector in a recent interview with DIGITIMES. He said that while Nvidia seems to be the main supplier of general-purpose GPUs, which are mostly used for AI training, there are more chip choices for AI inference.
Taiwanese startup Neuchips showed off Nvidia-beating recommendation (DLRM) power scores. Neuchips' first chip, RecAccel N3000, is specially designed to accelerate recommendation workloads.
Taiwanese startup Neuchips has taped out its AI accelerator designed specifically for data center recommendation models. Emulation of the chip suggests it will be the only solution on the market to achieve one million DLRM inferences per Joule of energy (or 20 million inferences per second per 20-watt chip). The company has already demonstrated that its software can achieve world-beating INT8 DLRM accuracy at 99.97% of FP32 accuracy.
As global AI energy demand surges, Neuchips demonstrates breakthrough power efficiency with technology capable of running a 14-billion parameter model on a single AI card at just 45W.