Jordan Plawner’s Post

Nice to see a public mention of a customer moving from GPUs to Intel Xeon. I see this dozens of times a week. GPU shortages may be the initial reason to consider CPUs, but once we engage, customers realize they need the right compute fit for their AI workload's demands. Most enterprise task-specific models can be fine-tuned in minutes to hours and meet most inference latency targets on 4th Generation Intel Xeon processors with Intel's AI SW Suite. #intel #ai #artificialintelligence #artificialintelligenceforbusiness #xeon #machinelearning #deeplearning

Laszlo Fusti-Molnar

Founder, QuantumFuture Scientific Software LLC

11mo

And double-precision performance is great on the Xeon! AI programs usually do not use it, but some should; they simply never test whether switching to double precision would change their results.
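The test being suggested is straightforward: run the same computation in float32 and float64 and measure the gap. A minimal sketch of that idea, using a toy NumPy layer stack rather than any particular AI framework (the function name, network shape, and tolerance are illustrative assumptions, not from the thread):

```python
import numpy as np

def relative_gap(weights, x):
    """Run the same tanh/matmul stack in float32 and float64 and return
    the largest output difference, normalized by the float64 magnitude.
    (Toy example; a real check would run the actual model twice.)"""
    y32 = x.astype(np.float32)
    y64 = x.astype(np.float64)
    for w in weights:
        y32 = np.tanh(y32 @ w.astype(np.float32))  # single-precision pass
        y64 = np.tanh(y64 @ w.astype(np.float64))  # double-precision reference
    return np.max(np.abs(y32 - y64)) / (np.max(np.abs(y64)) + 1e-12)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 64)) * 0.1 for _ in range(8)]
x = rng.standard_normal((4, 64))
print(f"max normalized fp32 vs fp64 gap: {relative_gap(weights, x):.2e}")
```

If the gap is negligible for a given workload, single precision is clearly safe; if not, that is exactly the case where double-precision throughput on the CPU becomes relevant.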

Axel Kloth

Founder & CEO at Abacus Semiconductor Corporation & Venture Partner at Pegasus Tech Ventures

11mo

Not all workloads lend themselves to GPGPUs, not even in AI. We have been doing compute wrong for a while, forcing the problem to fit a compute architecture instead of the other way around. At Abacus Semiconductor Corporation we do things differently...

Chris B.

Senior Director Product Management @ Intel | AI, CPU, SoC, FPGA

11mo

There will be more!

And it pairs well with Intel's Ethernet products for scaled AI solutions :)
