Meet HC1: The $200M Startup Chip That Makes AI Responses Instantaneous

 

TAALAS HC1: The AI-on-Silicon Breakthrough Delivering 17,000 Tokens per Second

The Canadian chip-design startup TAALAS has demonstrated a striking leap in AI inference technology. Its new HC1 chip has the Llama 3.1 8B model etched directly into its silicon, a "hardwired" approach that sacrifices modifiability for unprecedented speed and efficiency.

Mind-Blowing Speed: Instant Results

By embedding the model directly into the hardware, the HC1 achieves an astonishing inference speed of 16,960 tokens per second. At this rate, entire pages of text are generated almost instantaneously. To achieve this, TAALAS utilized a 3-bit quantized version of the Llama 3.1 8B model. While this compression results in a slight trade-off in output quality compared to the full-precision model, the performance gain is transformative.
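The headline numbers are easy to sanity-check. A minimal back-of-the-envelope sketch, assuming a rough figure of 500 tokens per page of text (the 16,960 tok/s rate and the 8B/3-bit figures come from the article; everything else is an assumption):

```python
# Back-of-the-envelope figures for the HC1 numbers quoted above.
PARAMS = 8e9            # Llama 3.1 8B parameter count
BITS = 3                # quantization width used by TAALAS
TOKENS_PER_SEC = 16_960 # reported HC1 inference speed

weights_gb = PARAMS * BITS / 8 / 1e9   # bytes of weight storage, in GB
page_tokens = 500                      # assumed token count for one page
page_ms = page_tokens / TOKENS_PER_SEC * 1000

print(f"~{weights_gb:.1f} GB of weights, ~{page_ms:.0f} ms per page")
# ~3.0 GB of weights, ~29 ms per page
```

At 3 bits the whole model fits in roughly 3 GB, and a full page of output lands in under 30 ms, which is why the responses feel instantaneous.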

The Power of "Hardwired" AI

The HC1 chip is a massive engineering feat, boasting 53 billion transistors and a power draw of 2.5 kW. Although the core model cannot be altered once manufactured, TAALAS has integrated support for LoRA (Low-Rank Adaptation) adapters. This allows developers to fine-tune the chip's behavior for specific tasks or domains even after the main model is locked in silicon.
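The reason LoRA works on a frozen model is that it never touches the base weights: it adds a small low-rank correction on top of them. A minimal NumPy sketch of the idea (the dimensions and rank are illustrative assumptions, not HC1 specifics):

```python
import numpy as np

# Minimal LoRA sketch: the frozen base weight W stays fixed, as it would
# in silicon, while a small low-rank update B @ A is added at inference.
d_out, d_in, rank = 256, 256, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen ("hardwired") weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, init 0

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)  # base output + low-rank correction

# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(y, W @ x)
```

Only the small A and B matrices need to live in rewritable memory, which is why an adapter can be swapped in software while the multi-billion-parameter base stays etched in place.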

With over $200 million in funding and only 2.5 years of development, TAALAS is moving fast. The company plans to evolve the HC series to support larger "Reasoning" models and has already teased the launch of the HC2 later this year.

Today's GPUs (like the Nvidia H100) are designed to run anything, but that flexibility comes at the cost of power consumption and data-transfer bottlenecks. TAALAS's concept is "Computational Storage": fusing processing and memory so that, in specialized workloads, AI runs dozens of times faster than on typical GPUs.
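The data-transfer bottleneck has a simple model behind it: for batch-1 generation, every token requires streaming all the weights from memory once, so throughput is capped at roughly bandwidth divided by model size. A sketch under that assumption (the H100 bandwidth figure is from Nvidia's public specs; the rest are illustrative):

```python
# Rough batch-1 throughput ceiling for a memory-bound GPU decoder:
# each token streams every weight once, so tok/s ≈ bandwidth / model size.
hbm_bandwidth_gb_s = 3350   # H100 SXM HBM3 bandwidth, ~3.35 TB/s
model_gb = 16               # Llama 3.1 8B at 16-bit precision

ceiling_tok_s = hbm_bandwidth_gb_s / model_gb
print(f"~{ceiling_tok_s:.0f} tokens/s batch-1 ceiling")
# ~209 tokens/s batch-1 ceiling
```

Against a ceiling in the low hundreds of tokens per second, the HC1's ~17,000 tok/s illustrates what removing the memory-traffic bottleneck buys.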

In the past, the Bitcoin mining industry shifted from using general-purpose computers to specialized mining rigs (ASICs), leading to massive wealth accumulation. TAALAS is doing the same with AI. If models like Llama become the global standard, creating chips specifically designed to run Llama will provide the most cost-effective performance-per-watt solution for large data centers.

Running at 17,000 tokens/s isn't just about "speed"; it opens the door to real-time AI agents that interact with humans seamlessly (zero latency), crucial for instant language translation or automated control systems requiring split-second decision-making.

While "embedding a model onto a chip" sounds risky given how rapidly AI changes, the LoRA support is a key advantage: users can still update the AI's "personality" or specialized knowledge through software without having to discard the original chip.

 


 

Source: TAALAS 
