...but I have to say that I don't agree with Donald Knuth. Well, to tell the truth, it's not a big deal: he simply expressed some doubts, in a (very interesting) recent interview, about the recent multicore CPU trend. But I don't want to flame; it's a very reasonable interview. He only says that for most applications multicore does not matter much, and that's probably true, because for a lot of applications the CPU does not matter at all: they are simply I/O bound.
What I can't agree with is the idea that CPU designers are just becoming lazy and so are throwing all this parallel programming burden at us instead of finding nice ways to make us happy. In fact it's quite the contrary. Moore's law talks about transistors, not performance. Performance comes from transistors and clock speed. CPU performance merely following Moore's law actually indicates that transistors were being "wasted": the "correct" speedup should be more than that, as is unsurprisingly happening with GPUs. The thing is that engineers were so concerned with not changing our habits that they avoided spending transistors on raw computing power, and instead tried to find smarter ways to decode and execute our instruction streams. Until they hit a limit that was predicted and predictable, as it's ultimately driven by physics. Multithreaded and multicore architectures are the natural evolution, and a paradigm shift towards them is surely needed. The problem is that I suspect it's not the only shift that will be needed. The limits of parallel programming lie in data access: we not only need to learn how to write threads, but also how to engineer data access in a more CPU-friendly way. Luckily, the answer to BOTH problems seems to come from data-parallel paradigms, like stream computing, which we should all be familiar with from knowing how GPUs work.
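To make the data-access point concrete, here's a minimal sketch (the struct and function names are my own, purely for illustration) contrasting the usual array-of-structures layout with a structure-of-arrays one that streams through memory the way a GPU kernel would:

```cpp
#include <cstddef>
#include <vector>

// Array-of-structures: each particle's fields are interleaved, so a pass
// that only touches 'x' still drags the whole struct through the cache.
struct ParticleAoS { float x, y, z, mass; };

void scale_x_aos(std::vector<ParticleAoS>& ps, float k) {
    for (std::size_t i = 0; i < ps.size(); ++i)
        ps[i].x *= k;  // 16-byte stride for 4 useful bytes per element
}

// Structure-of-arrays: each field is a contiguous stream, so the same pass
// reads memory sequentially and vectorizes trivially -- the access pattern
// GPU (and SIMD) hardware is built for.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

void scale_x_soa(ParticlesSoA& ps, float k) {
    for (std::size_t i = 0; i < ps.x.size(); ++i)
        ps.x[i] *= k;  // contiguous: every fetched cache line is fully used
}

int main() {
    ParticlesSoA ps;
    ps.x.assign(1000000, 1.0f);
    scale_x_soa(ps, 2.0f);
    return 0;
}
```

Same computation in both cases; only the data layout changes, and with it how friendly the loop is to caches, prefetchers and vector units.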
Read it anyway, of course it's interesting, and then maybe you could compensate by reading something about parallel programming, as it seems to be our future. I would recommend Suess' blog, where I found this nice article about OpenMP performance (and parallel programming in general). A toy example of the kind of code involved follows below.
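For a taste of what an OpenMP data-parallel loop looks like, here's a minimal sketch of my own (not taken from the article): a parallel sum where the `reduction` clause gives each thread a private accumulator and merges them at the end:

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(10000000, 0.5);

    double sum = 0.0;
    // Each thread accumulates its own partial sum over a chunk of the
    // iteration space; OpenMP combines the partials when the loop ends.
    // Compile with OpenMP enabled, e.g. g++ -fopenmp.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < (long)data.size(); ++i)
        sum += data[i];

    std::printf("sum = %f\n", sum);
    return 0;
}
```

Note how little the source changes compared to the serial version; the hard part, as usual, is making sure the loop body really is independent per iteration and that the data it touches streams well.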
P.S. It's incredible how some news sites take those very reasonable interviews and make big headlines out of them. An example is the whole raytracing versus rasterization debate: surely it was interesting, and probably there was something to say about Carmack's interview, but how can you title Yerli's one, as Slashdot did, "Crytek Bashes Intel's Ray Tracing Plans"?