Nephazz
Junior Member level 1
Hi eda board,
I'm an "Electrical Engineering and Information Technology" student working on my final project. Unfortunately, the man pages don't help me with my current idea.
My project is to write an MSVC++ program to handle a constant data stream of 20 MB/s. This stream is produced by custom electronics, and the goal is to process the data as fast as it comes into my machine. My machine will be a WinXP 32-bit Pro 8-core box with 8x1 GB RAM, using Intel's TBB framework for parallelization.
The data comes into my program via a library function which takes a reference to an unsigned char array and returns after it has been filled with new data. I also have to write a GUI (for which I plan to use MFC), so I plan to implement an MVC structure where the USB receiving is attached to the controller. The controller also "processes" the data and writes it to the model; plus, it dumps old data to the HDD. The model holds my current, valid data. The view is the GUI.
I'm not really processing the data. What I do with each data packet is:
- decode the header
- decide what kind of data it is (3 types) and call a corresponding put function
- the put function writes the data to the model
I told a friend of mine who studies computer science about my speed problem (comparing the already-read data against a reference array takes 3 times longer than reading it from USB) and that I'm going to solve it using massive parallelism. He told me that he'd solve it in Java with streams. Unfortunately, he isn't proficient in C++. I've read a bunch of internet pages and searched all my books, but I can't find anything about self-made streams.
Would you recommend using a stream here? If yes, what can I read to learn more? Do you have any hints or suggestions?
----
Jan
PS: My first plan was to use a bunch of STL vectors storing the received arrays. They would be protected by TBB sync objects as single-writer/multiple-reader. Imagine a linked list of arrays: at the right end, the USB receiver adds new arrays; at the left end, multiple threads take arrays away to process them in true parallel. After putting new data into the model, the "array taker" would call or notify a function to dump the old data to the HDD (which is still stored in the model, marked as old data).