Would it increase the speed when it comes to performing heavy simulations?
Only if the program specifically asks the GPU to perform certain tasks. The CPU and GPU are not parallel devices that share the load; the GPU is a peripheral device that can autonomously perform certain tasks, but it has to be told what they are by the main CPU. The GPU is also optimized to 'dual-port' some or all of its memory so it can be read out to the screen at the same time as it is being updated by the GPU. That is of little benefit to the program you run, other than getting the results to your eyes a little quicker.
As far as instructions go, the GPU has little concept of a conventional program and you can't program it at a low level (unless you have insider knowledge), but it can, for example, very rapidly calculate flood fills when given the boundary points of the flood region.
You could think of it this way:
To draw a 100x100 pixel blue box on the screen, the CPU alone might have to start at the top-left corner and turn the blue bits of the mapped memory location on and the red and green bits off, then move to the next location and repeat, each time checking it was still within the boundary and, if necessary, moving to the next line down the screen, repeating until it reached the bottom-right corner. That means everything is repeated 100 x 100 = 10,000 times before the CPU can move on to do something else. It would only be able to write to the video memory during the sync period (off the screen edge), so for most of the time it would be held up because the display hardware would have control of the memory.
Using a GPU, the CPU would tell it the coordinates of the box and the color to fill it, then let the GPU take over. That is just a few instructions, after which the CPU could continue with more useful tasks.
Brian.