Related searches:
BitMinter Client
Bitnet
B1 58
Population Variances
Inference Video
What Is the NVIDIA Inference Server
Two Sample Inference for Proportions
Observation vs Conclusion
NVIDIA TensorRT for RTX Inference
Questions Sat
Bayesian Inference
Tree Mega X
Define Indecency
Birst Modeler Demo
Inference Engine Graph
IP to Binary Easy
1 58 Bit Model
How to Use One Proportion Applet
Permanent Weight Hypothesis
Inference Ladder Models
0:35
Microsoft researchers release bitnet.cpp, the official inference framework for 1-bit LLMs like BitNet b1.58. It has optimized kernels for fast, lossless inference on CPUs, achieving impressive speedups on ARM and x86 CPUs and significant energy reductions. https://msft.it/6180WGJjy | Microsoft Research
1.3K views
Oct 23, 2024
Facebook
Microsoft Research
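The Microsoft Research post above describes lossless CPU inference for 1-bit LLMs like BitNet b1.58. As background, the "1.58-bit" name comes from ternary weights in {-1, 0, +1}, produced by absmean quantization as described in the BitNet b1.58 paper. A minimal NumPy sketch of that scheme; the function name and example matrix are illustrative assumptions, not code from bitnet.cpp:

```python
import numpy as np

def absmean_ternary_quantize(W, eps=1e-8):
    # Scale by the mean absolute value of the weight matrix, then round
    # each entry to the nearest value in {-1, 0, +1}.
    gamma = np.mean(np.abs(W)) + eps
    Wq = np.clip(np.round(W / gamma), -1, 1)
    return Wq.astype(np.int8), gamma

W = np.array([[0.4, -1.2, 0.05],
              [2.0, -0.3, 0.9]])
Wq, gamma = absmean_ternary_quantize(W)
# Wq holds only -1, 0, +1; Wq * gamma approximates W
```

Each ternary weight carries log2(3) ≈ 1.58 bits of information, hence "b1.58"; the single float scale gamma is shared across the matrix.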
Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware… | Mariano Aloi
2 months ago
linkedin.com
Holy ****... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No… | George Spanidis | 45 comments
45 views
2 months ago
linkedin.com
13:49
Microsoft GPU Killer? Why BitNet B1.58 Runs Complex LLMs on CPUs 20x More Efficiently
326 views
7 months ago
YouTube
DeepCombinator
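The video title above asks why BitNet b1.58 runs efficiently on CPUs. The usual explanation is that with weights restricted to {-1, 0, +1}, a matrix-vector product needs no multiplications at all, only additions and subtractions. A minimal sketch of that idea; this is illustrative code, not bitnet.cpp's actual optimized kernels:

```python
import numpy as np

def ternary_matvec(Wq, gamma, x):
    # Weights are in {-1, 0, +1}: each output element is the sum of the
    # inputs where the weight is +1, minus the sum where it is -1.
    y = np.empty(Wq.shape[0])
    for i in range(Wq.shape[0]):
        row = Wq[i]
        y[i] = x[row == 1].sum() - x[row == -1].sum()
    return gamma * y  # rescale once by the shared absmean factor

Wq = np.array([[1, 0, -1],
               [0, 1, 1]], dtype=np.int8)
x = np.array([0.5, 2.0, -1.0])
y = ternary_matvec(Wq, 0.8, x)
# equals 0.8 * (Wq @ x)
```

Replacing multiply-accumulate with add/subtract (plus the high sparsity from the zeros) is what the speedup and energy claims in these results rest on.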
12:25
I Built a Jarvis AI That Runs on CPU With NO GPU — Microsoft BitNet 1-Bit LLM
251 views
1 month ago
YouTube
Jownology
10:32
"Microsoft's BitNet AI: 85% Less Power, Runs on ANY CPU!"
129 views
Apr 27, 2025
YouTube
AI Revolution
4:11
I Finetuned Microsoft's 1-Bit AI on CPU Only
41 views
1 month ago
YouTube
Jownology
5:39
I Finetuned Microsoft's 1-Bit AI on CPU Only — Then Tested It Live
32 views
1 month ago
YouTube
Jownology
14:27
Microsoft Drops New 1-Bit LLM: Bitnet b1.58 2B-4T | Install and Test Locally
7.4K views
Apr 15, 2025
YouTube
Fahd Mirza
16:40
BitNet b1.58 LOCAL Test & Install (A 1-Bit LLM!)
33.1K views
Apr 16, 2025
YouTube
Bijan Bowen
0:13
🚨 Microsoft has solved the biggest problem with AI. They open-sourced bitnet.cpp. It's a 1-bit inference framework that runs massive 100B parameter models directly on your CPU without GPUs. It uses 82% less energy. 100% open-source.
485.9K views
1 month ago
x.com
How To AI
Leading Inference Providers Achieve Lowest Token Cost With Open Source Models on NVIDIA Blackwell
3 months ago
nvidia.com
4:07
Bitnet.cpp Explained: 6.25x Faster Lossless Inference for Ternary LLMs on Edge Devices
42 views
1 month ago
YouTube
rayyy
9:15
FastAPI-BitNet: Running Microsoft's BitNet Inference Locally with 1-Bit LLM
1.7K views
Nov 19, 2024
YouTube
Fahd Mirza
1:10
[AI News.today] Microsoft's BitNet b1.58 Revolutionizes AI Energy Efficiency
676 views
Apr 20, 2025
YouTube
The AI Atelier
12:02
LLM BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf - EPYC 7B12 180 thread !!
1 month ago
YouTube
Giovanni Manzoni
What is AI Inference? | IBM
Jun 18, 2024
ibm.com
5:21
Microsoft Just Fixed Local AI (BitNet 1.58-bit LLMs)
189 views
2 months ago
YouTube
NewTechWorld
Using the ladder of inference to make better decisions
Jun 14, 2022
leadingsapiens.com
Rules of Inference in Artificial Intelligence (AI)
90.5K views
Oct 17, 2023
intellipaat.com
17:33
BitNet: A Revolution in 1-Bit AI Models: Ultra-fast Inference with Just a CPU | Massive Models Pa...
1 view
3 months ago
YouTube
지투지 - 지식에서 지혜로
0:14
You can now run 100B parameter models on your local CPU without GPUs. Microsoft finally open-sourced their 1-bit LLM inference framework called bitnet.cpp:
> 6.17x faster inference
> 82.2% less energy on CPUs
> Supports Llama3, Falcon3, and BitNet models
| Md Ismail Sojal
9.3K views
8 months ago
Facebook
Md Ismail Sojal
5:06
No GPU? No Problem — 1-Bit AI on Raspberry Pi 5 (BitNet b1.58)
3.8K views
Apr 24, 2025
YouTube
That Project
25:08
how to run microsoft bitnet-b1.58-2B-4T locally
2.2K views
Apr 18, 2025
YouTube
Total Technology Zonne
0:28
Running massive LLMs on a single CPU - actually works #AI #LLM
1.5K views
2 weeks ago
YouTube
Amplify Imagination
7:46
BitNet b1.58 Explained | The End of GPU-Dependent AI?
2.8K views
2 weeks ago
YouTube
AI Mike Labs
Microsoft's "1-bit" AI model runs on a CPU only, while matching larger systems
Apr 18, 2025
arstechnica.com
6:00
bitnet.cpp from Microsoft: Run LLMs locally on CPU! (hands-on)
4K views
Oct 29, 2024
YouTube
AI Bites
12:35
BitNet B1.58: The Most Important AI Breakthrough Since ChatGPT! #ai
340 views
1 year ago
YouTube
The AI Prosperity Hub
9:45
Microsoft Accidentally Created the Most Efficient AI Ever
99.9K views
Apr 26, 2025
YouTube
AI Revolution