Linus Torvalds says RISC-V will make the same mistakes as Arm and x86

Linus Torvalds interview (Image credit: Mastery Learning / YouTube)

There's a vast difference between how hardware and software developers think, and it opens up pitfalls for anyone trying to coordinate the two teams. Arm and x86 developers ran into them years ago, and Linus Torvalds, the creator of Linux, fears RISC-V development may fall into the same chasm.

“Even when you do hardware design in a more open manner, hardware people are different enough from software people [that] there’s a fairly big gulf between the Verilog and even the kernel, much less higher up the stack where you are working in what [is] so far away from the hardware that you really have no idea how the hardware works,” he said (video embedded below).

“So, it’s really hard to kind of work across this very wide gulf of things and I suspect the hardware designers, some of them have some overlap, but they will learn by doing mistakes — all the same mistakes that have been done before.”

Video: Linus Torvalds: RISC-V Repeating the Mistakes of Its Predecessors (YouTube)

RISC-V is an open-standard instruction set architecture (ISA) that is slowly gaining traction, especially in China, where some tech companies are using it to bypass America’s sanctions on the country. Companies like DeepComputing and Framework have started developing, building, and selling consumer laptops powered by these new processors.

But even though RISC-V is slowly being built up, it still can't compete on performance with current-generation x86 and Arm processors; playing AAA games on a RISC-V chip is likely still years, if not decades, away. And even though Arm, which also uses a reduced instruction set computer (RISC) architecture, has already undergone that intensive development, Torvalds fears RISC-V will still make the same mistakes.

“They’ll have all the same issues we have on the Arm side and that x86 had before them,” he says. “It will take a few generations for them to say, ‘Oh, we didn’t think about that,’ because they have new people involved.”

But even if RISC-V development is expected to repeat many mistakes, he also said it will be much easier to develop the hardware now. Linus says, “It took a few decades to really get to the point where Arm and x86 are competing on fairly equal ground because there was all this software that was fairly PC-centric, and that has passed. That will make it easier for new architectures like RISC-V to then come in.”

Jowi Morales
Contributing Writer

Jowi Morales is a tech enthusiast with years of experience in the industry. He has been writing for several tech publications since 2021, focusing on tech hardware and consumer electronics.

  • Koen1982
    "from the hardware that you really have no idea how the hardware works"
    From my experience, that's just always true for software developers, especially the ones that don't write the low-level stuff. If a SW dev writes Verilog/VHDL, the whole code ends up full of state machines; writing raw signal code with adders/multipliers/delays is something their minds can't seem to handle.
    You also notice this with the newest people graduating from school with a degree in electronics: everybody is becoming more high-level minded, and knowledge of low-level things is getting lost.
    Reply
  • vijosef
    Koen1982 said:
    From my experience, that's just always true for software developers, especially the ones that don't write the low-level stuff.
    A programmer may be writing at a high level, but if he doesn't know how the cache works, or whether a CPU register will be pushed to memory and pulled back to the CPU, the difference in performance is enormous.
    Reply
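To make vijosef's point concrete, here is a minimal C sketch (the matrix size N and the summing task are arbitrary choices for illustration): both functions do the same work, but the column-major loop jumps a full row between consecutive accesses, so it touches a new cache line on almost every iteration and typically runs several times slower on large matrices.

    #include <stdio.h>
    #include <stdlib.h>

    #define N 4096  /* arbitrary: large enough that a row exceeds the cache */

    /* Row-major traversal: consecutive accesses fall on the same cache line. */
    long sum_row_major(const int *m) {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += m[i * N + j];
        return s;
    }

    /* Column-major traversal: each access jumps N * sizeof(int) bytes,
       hitting a new cache line almost every time. */
    long sum_col_major(const int *m) {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += m[i * N + j];
        return s;
    }

    int main(void) {
        int *m = calloc((size_t)N * N, sizeof *m);
        if (!m) return 1;
        printf("%ld %ld\n", sum_row_major(m), sum_col_major(m));
        free(m);
        return 0;
    }

Timing the two functions shows the gap vijosef describes, even though neither one "does" anything differently at the source level.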
  • Conor Stewart
    Koen1982 said:
    "from the hardware that you really have no idea how the hardware works"
    From my experience, that's just always true for software developers, especially the ones that don't write the low-level stuff. If a SW dev writes Verilog/VHDL, the whole code ends up full of state machines; writing raw signal code with adders/multipliers/delays is something their minds can't seem to handle.
    You also notice this with the newest people graduating from school with a degree in electronics: everybody is becoming more high-level minded, and knowledge of low-level things is getting lost.
    That's because they are totally different skills; hardware and software are very different, and a pure software engineer should not be writing Verilog or VHDL.

    There is a very good reason software developers can't write good HDL: they lack the necessary background knowledge. A pure software developer has likely never studied digital logic, especially if they use a high-level programming language and never deal with bits directly. Unfortunately, a lot of software engineers think that if it's code, they can do it, but HDL isn't just another programming language to learn. It's also mainly software developers who don't properly understand hardware themselves who create all the high-level synthesis languages that practically no one who knows what they're doing actually uses.

    Software developers are not bad or worse than hardware developers, as your comment seems to imply; they just have different skill sets and different background knowledge for completing different tasks that happen to be interconnected.
    Reply
  • bit_user
    Conor Stewart said:
    a lot of software engineers think that if it is code then they can do it but HDL isn't just learning another programming language.
    I've definitely seen the reverse problem, where hardware engineers believe all of the hard problems are solved in the hardware domain and think software is trivial. These folks usually make a big mess. Not to say there aren't some hardware engineers who are competent software developers, but there's definitely an overconfidence problem with others.
    Reply
  • NinoPino
    vijosef said:
    A programmer may be writing at a high level, but if he doesn't know how the cache works, or whether a CPU register will be pushed to memory and pulled back to the CPU, the difference in performance is enormous.
    The difference is noticeable only with very CPU-optimized code. But nowadays, who needs that?
    99% of software development is done with high-level languages and frameworks with many abstraction layers.
    Optimization is only really needed by kernel and driver developers.
    Reply
  • bit_user
    NinoPino said:
    The difference is noticeable only with very CPU-optimized code. But nowadays, who needs that?
    99% of software development is done with high-level languages and frameworks with many abstraction layers.
    Optimization is only really needed by kernel and driver developers.
    My take is somewhat different. I'd say the problem of needing to understand the hardware is largely addressed by optimized libraries, languages, and compilers. If you need an optimized data structure, the chances of beating the ones in standard libraries and language runtimes with your own hand-coded version are slim, and mostly only to the extent that you can tailor your implementation to the constraints of your specific use case.

    I've definitely seen people get so carried away with loop-level code optimizations that they end up optimizing a bad data structure with poor scalability. The wins you can get from clever code optimizations are usually just a few X, at best, while the wins from switching to scalable algorithms and data structures can be orders of magnitude. Modern languages and libraries tend to be designed to make it easy to write code that scales well. We're finally exiting the dark ages of plain C code.
    Reply
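A small C sketch of the scale difference bit_user describes (the duplicate-counting task and the sort-based alternative are arbitrary illustrations): both functions return the same count, but no amount of loop-level tuning rescues the quadratic version on large inputs, while sorting first changes the complexity class from O(n^2) to O(n log n).

    #include <stdlib.h>

    /* O(n^2): the kind of code where clever inner-loop tuning wins "a few X" at best. */
    size_t count_dups_quadratic(const int *a, size_t n) {
        size_t dups = 0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] == a[j]) { dups++; break; }
        return dups;
    }

    static int cmp_int(const void *p, const void *q) {
        int a = *(const int *)p, b = *(const int *)q;
        return (a > b) - (a < b);  /* avoids the overflow risk of a - b */
    }

    /* O(n log n): sorting first turns duplicate detection into a scan of
       adjacent elements, an orders-of-magnitude win at scale with no
       micro-optimization at all. (Sorts in place, so the input is modified.) */
    size_t count_dups_sorted(int *a, size_t n) {
        if (n < 2) return 0;
        qsort(a, n, sizeof *a, cmp_int);
        size_t dups = 0;
        for (size_t i = 1; i < n; i++)
            if (a[i] == a[i - 1]) dups++;
        return dups;
    }

At a million elements, the quadratic version performs hundreds of billions of comparisons while the sorted one performs a few tens of millions: the "orders of magnitude" gap that no inner-loop cleverness can close.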
  • DougMcC
    vijosef said:
    A programmer may be writing at a high level, but if he doesn't know how the cache works, or whether a CPU register will be pushed to memory and pulled back to the CPU, the difference in performance is enormous.
    By the time you are writing Java/Python/C#/JavaScript, this stuff is abstracted away three layers below you. And that's almost all of the software being written today. If you are thinking about how the cache or registers work at this level, you are wasting valuable time.
    Reply
  • bit_user
    DougMcC said:
    If you are thinking about how the cache or registers work at this level, you are wasting valuable time.
    I'm not going to say it's never an issue, but a lot of these details are accounted for in best-practices guides you can find, which say things like "prefer placing variables on the stack instead of the heap", "avoid false sharing", etc.

    A few years ago, there was an interesting "debate" between a Windows game programmer who was porting a game to Linux (for Google Stadia) and Linus Torvalds. The game programmer was complaining that the spinlock code that worked well on Windows performed badly on Linux. Linus had an epic reply, but it basically boiled down to the game code trying to outsmart the operating system and why that's a really bad idea.

    TL;DR: going too low-level can hurt you!
    Reply
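For the "avoid false sharing" guideline specifically, here is a minimal pthreads sketch (the 64-byte cache-line size, thread count, and iteration count are assumptions for illustration; compile with -pthread): each thread increments its own counter, so there is no logical sharing at all, but without the padding both counters would land on one cache line and that line would ping-pong between cores on every write.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000L  /* arbitrary workload size */

    /* Pad each counter out to 64 bytes (a common cache-line size) so the two
       threads never write to the same line. Deleting the pad member puts both
       counters on one line and makes them "falsely" share it. volatile keeps
       the compiler from collapsing the loop into a single addition, so the
       memory traffic actually happens. */
    struct padded {
        volatile long value;
        char pad[64 - sizeof(long)];
    };

    static struct padded counters[2];

    static void *worker(void *arg) {
        struct padded *c = arg;  /* each thread owns exactly one counter */
        for (long i = 0; i < ITERS; i++)
            c->value++;
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &counters[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("%ld %ld\n", counters[0].value, counters[1].value);
        return 0;
    }

On typical multicore hardware, the padded layout runs markedly faster than the unpadded one even though the threads never touch each other's data, which is exactly why this item shows up in the best-practices guides bit_user mentions.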
  • Nyara
    Direct ISA (with its assemblers, etc.) vs. low-level vs. high-level just depends on your work and task. A skillful programmer can use any approach freely and adapt to the needs of the work at hand, but such programmers end up doing the low-level (or direct ISA) work more than 90% of the time, since most programmers have only basic Python 101 knowledge and little more at best. High-level languages are inherently easier and less time-consuming to learn and use, so it is natural that the workforce demographic leans toward them.

    Any reasonably sizeable project has both low-level backend and high-level frontend programmers, since that is by far the best way to use the currently available workforce, and there is nothing wrong with that, except that it keeps pushing up the energy and hardware performance required for a stable and smooth runtime delivered in a reasonable timeframe. That creates critically serious issues for the environment, with no clear way to solve them today, and they keep worsening.

    Outside of that, it really has no other inherent issue, but I personally think that one is serious, and something the whole computing field should acknowledge and handle as it takes an ever-larger share of the world's energy consumption.

    New problems in a field require new approaches. A push toward high-level languages (and their users) and tools that take low-level concerns into account, alongside hardware and low-level tools that acknowledge how the high level above them uses them in performance-relevant tasks, are both key to handling this issue.
    Reply
  • Steve Nord_
    It's good to see many of the RISC-V implementers following Si-V's lead and using formal proof systems to guarantee everything. Then, of course, you imagine hardware gals meeting that with a separable set of slow-running gate-based dope charge, homodyne, spinor, regenerative synchronous network, and greying logics to drop into semiconductor and metal; maybe that undoes enough of the logic it is fed to leak state, remap memory, and run initializers that fight type constructors. I think they'll log the warnings, if not make a graph universe of what they mean. Ah, the warning that AVX578 is suboptimal for odd-bitcount enjoyers... meh.
    Reply