It needs a "buy a card" link and a lot more architectural details. Tenstorrent is selling chips that are pretty weak, but will beat these guys if they don't get serious about sharing.
Edit: It kind of looks like there's no silicon anywhere near production yet. Probably vaporware.
Tapeout apparently completed last month, dev boards in early 2026: https://www.eetimes.eu/vsora-tapes-out-ai-inference-chip-for...
Nice wave they've been able to ride if it's vaporware, considering they've been at it for five years. Any guesses as to why no one else seemingly sees the obvious you see?
Look at the CGI graphics and the indications in their published material that all they have is a simulation. It's all there, with no anticipated release date disclosed. Even their product pages and their news page don't seem to give any indication of one.
Also, the 3D graphic of their chip on a circuit board is missing some obvious support pieces, so it's clearly not from a CAD model.
Lots of chip startups start out as this kind of vaporware, but very few of them obfuscate their chip timelines and anticipated release dates this much. Five years to tape-out is a bit long, but not unreasonable.
> Even their product pages and their news page don't seem to give any indication of one.
This seems indicative enough to me, give or take a quarter or two, from the latest news post on their website:
> VSORA is now preparing for full-scale deployment, with development boards, reference designs, and servers expected in early 2026.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
Seems they have partners too, who describe working together with a Taiwanese company.
You never know; I guess they could have gotten others to fall for their illusions too, it's not unheard of. But considering how long something like this takes to bring to market, the fact that dev boards are months rather than years away at least gives me reason enough to wait until then before judging them too harshly.
"that they have dev-boards ready is months rather than years at least gives me enough to wait until then to judge them too harshly."
So far, they just talk about it.
An FP8 performance of 3200 TFLOPS is impressive; it could be used for training as well as inference. "Close to theoretical efficiency" is a bold claim. Most accelerators achieve 60-80% of theoretical peak; if they're genuinely hitting 90%+, that's remarkable. Now let's see the price.
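For a sense of what that claim implies, here's a minimal back-of-envelope sketch in Python. The 3200 TFLOPS peak is their published figure; the "measured" throughput values are hypothetical, chosen only to illustrate the ranges involved:

    # Utilization = sustained throughput / theoretical peak.
    # 3200 TFLOPS FP8 is VSORA's published peak; the measured values
    # below are hypothetical, purely for illustration.
    PEAK_FP8_TFLOPS = 3200.0

    def utilization(measured_tflops: float, peak: float = PEAK_FP8_TFLOPS) -> float:
        """Fraction of theoretical peak achieved on a real workload."""
        return measured_tflops / peak

    print(f"{utilization(2240.0):.0%}")  # 70% -- typical accelerator territory
    print(f"{utilization(2880.0):.0%}")  # 90% -- what "close to theory" would imply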
288 GB of RAM on board, and RISC-V processors to enable offloading inference from the host machine entirely.
It sounds nice, but how much is it?
The next generation will include another processor to offload the inference from the RISC-V processors used to offload inference from the host machine.