I don't get this at all
CPUs & GPUs are very different
CPUs are general purpose processors
they have to be
they get asked to process all kinds of data from all manner of hardware
They cannot specialise
(this is *why* you have other dedicated processors for specific tasks)
GPUs, on the other hand, basically do one thing (massively parallel floating-point arithmetic) and little else
and because they're only doing one thing
and because of the nature of graphical calculations
they can do most of their work in parallel
I do not see how on earth a software framework can, at run-time, reliably determine how best to deal with a set of instructions
send them to the CPU?
send them to the GPU?
the only way to reliably determine this is if the programmer specifically instructs certain code to be executed on one or the other
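For what it's worth, the kind of run-time decision being doubted here would amount to a heuristic like the toy sketch below. Everything in it is invented for illustration (the function name, the threshold, the string device labels); real frameworks such as CUDA or OpenCL do largely work the way the post says, with the programmer picking the device explicitly.

```python
# Toy sketch of a run-time dispatch heuristic. All names and numbers
# here are made up; this is not how any particular framework works.

GPU_THRESHOLD = 100_000  # invented cut-off: below this, the cost of
                         # copying data to the GPU outweighs the speed-up

def choose_device(n_elements: int, data_parallel: bool) -> str:
    """Pick a device for an operation over n_elements items."""
    # Serial or branchy work, or small workloads: stay on the CPU,
    # since the GPU only wins on large, uniform, parallel jobs.
    if not data_parallel or n_elements < GPU_THRESHOLD:
        return "cpu"
    return "gpu"

print(choose_device(1_000, True))        # small job -> cpu
print(choose_device(10_000_000, True))   # big parallel job -> gpu
print(choose_device(10_000_000, False))  # big but serial -> cpu
```

Even this toy version needs the programmer to supply the `data_parallel` hint, which is really the post's point: the framework can't reliably infer that property from arbitrary instructions on its own.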
I know I'm missing something fundamental here,
but anyway....