
Developers have created a tiny μLM AI that runs on a Z80 CPU using only 64KB of RAM.

Unix developer Harry Reed has released Z80-μLM, a project that aims to run a conversational AI model on a Z80 CPU clocked at just 4MHz with only 64KB of RAM. The result is a language model that shows a reasonable grasp of user input while fitting in roughly 40KB, though its responses are limited to narrow topics.
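The roughly 40KB figure is consistent with 2-bit weights: four weights fit in each byte, so the 150,000 parameters cited below need about 37KB. A minimal packing sketch (function names are illustrative, not from the project):

```python
def pack_2bit(qs):
    """Pack signed 2-bit weights (values in -2..1) four per byte."""
    out = bytearray()
    for i in range(0, len(qs), 4):
        b = 0
        for j, q in enumerate(qs[i:i + 4]):
            b |= ((q + 2) & 0b11) << (2 * j)  # shift -2..1 to unsigned 0..3
        out.append(b)
    return bytes(out)

def unpack_2bit(data, n):
    """Recover n signed 2-bit weights from packed bytes."""
    return [((data[i // 4] >> (2 * (i % 4))) & 0b11) - 2 for i in range(n)]

# 150,000 parameters at 2 bits each: 150,000 / 4 = 37,500 bytes, about 37KB.
```

On a machine with 64KB of total address space, packing like this is what leaves room for the interpreter and activations alongside the weights.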

Reed's model is extremely small: only 150,000 parameters, stored at 2 bits each. Training uses Quantization-Aware Training (QAT), which keeps full floating-point master weights but simulates the 2-bit representation during the forward pass, so the knowledge the model learns survives quantization. The training process also relies on larger models to generate specialized datasets.
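The article doesn't include Reed's training code, but the core mechanism of QAT — the forward pass seeing a 2-bit rendering of the float master weights — can be sketched as follows (the straight-through gradient step is omitted, and all names and values are illustrative):

```python
def fake_quant_2bit(w, scale):
    """Simulate 2-bit storage: snap to one of four levels {-2, -1, 0, 1} * scale."""
    q = max(-2, min(1, round(w / scale)))
    return q * scale

def qat_forward(weights):
    """Forward pass uses quantized weights; float masters are kept for updates."""
    scale = max(abs(w) for w in weights) / 2  # per-tensor symmetric scale
    return [fake_quant_2bit(w, scale) for w in weights]

weights = [0.9, -0.4, 0.05, -1.0]   # float master weights (made-up values)
print(qat_forward(weights))          # -> [0.5, -0.5, 0.0, -1.0]
```

Because the loss is computed on the quantized values, the optimizer learns weights that still work after the precision is thrown away, which is what lets the final model ship as pure 2-bit data.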

The result is a pair of very small demos: tinychat, a language model that gives short responses of just a few words but demonstrates real understanding of the input, and guess, a specialized chatbot that plays 20 Questions, asking yes-or-no questions to narrow down what the player is thinking of.

