Question

I am writing a program for the Parabolic Time/Price System based on the book by J. Welles Wilder Jr. I am halfway through the program, and it runs with an execution time of 122 microseconds, which is well above my benchmark limit. I am looking for views and tips on whether I should:

  1. write a kernel-space program to achieve the same result, implementing it through drivers, or
  2. [really keen on this method] pass instructions to a graphics driver to perform the steps and calculations (I read about this in a blog somewhere). Is this possible, and if so, how and where should I start looking?

Thanks in Advance.

Tags: c


Solution

What makes a GPU fast is that it can run thousands of threads (around 2000, depending on the card) in parallel. If your code can be divided into independent threads, moving the calculations to the GPGPU may improve performance: an average CPU delivers roughly 50-100 GFLOPS, while an average GPU delivers on the order of 1500 GFLOPS when used correctly. You should also weigh the difficulty of maintaining GPGPU code. If you have an NVIDIA GPU, I suggest you check out 'Managed CUDA', since it contains a debugger and a GPU profiler, which makes the code practical to work with.
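For illustration, here is a minimal CUDA C sketch of that idea, under stated assumptions: the kernel name `parabolic_sar`, the flat `high`/`low` array layout, and the simplified update rule are mine, not Wilder's full system. Since each SAR value depends on the previous one, a single price series gains little from a GPU; the parallelism comes from processing many independent instruments at once, one thread per series.

    #include <cuda_runtime.h>

    // Assumed layout: 'high' and 'low' hold n_series price series of
    // length 'len' stored back to back; 'sar_out' receives one SAR
    // series per instrument. Each thread walks one series sequentially,
    // so the speedup comes from the number of instruments, not from
    // the (inherently sequential) SAR recursion itself.
    __global__ void parabolic_sar(const float *high, const float *low,
                                  float *sar_out, int n_series, int len)
    {
        int s = blockIdx.x * blockDim.x + threadIdx.x;
        if (s >= n_series) return;

        const float *h = high + (size_t)s * len;
        const float *l = low  + (size_t)s * len;
        float *sar = sar_out + (size_t)s * len;

        // Wilder's constants: acceleration factor step and cap.
        const float AF_STEP = 0.02f, AF_MAX = 0.2f;

        bool  rising = true;   // simplifying assumption: initial uptrend
        float af = AF_STEP;
        float ep = h[0];       // extreme point of the current trend
        sar[0] = l[0];

        for (int t = 1; t < len; ++t) {
            // SAR(t) = SAR(t-1) + AF * (EP - SAR(t-1))
            float next = sar[t - 1] + af * (ep - sar[t - 1]);

            if (rising) {
                if (l[t] < next) {        // price pierced the SAR: reverse
                    rising = false;
                    next = ep; ep = l[t]; af = AF_STEP;
                } else if (h[t] > ep) {   // new high extends the trend
                    ep = h[t];
                    af = fminf(af + AF_STEP, AF_MAX);
                }
            } else {
                if (h[t] > next) {        // reverse to an uptrend
                    rising = true;
                    next = ep; ep = h[t]; af = AF_STEP;
                } else if (l[t] < ep) {   // new low extends the downtrend
                    ep = l[t];
                    af = fminf(af + AF_STEP, AF_MAX);
                }
            }
            sar[t] = next;
        }
    }

A hypothetical launch for many instruments might look like `parabolic_sar<<<(n_series + 255) / 256, 256>>>(d_high, d_low, d_sar, n_series, len);`, compiled with `nvcc`, after copying the price arrays to device memory with `cudaMemcpy`. Note that for a single series this kernel uses only one thread, which is the maintenance-versus-payoff trade-off mentioned above.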

TL;DR: use the GPGPU only for code that parallelizes well, and preferably use 'Managed CUDA' if possible.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow