Hello everybody,
I have a data buffer project on a Windows 7 x64 embedded OS (using the Visual Studio 2008 IDE) that would work like this:
One writer application will send data to my app (no network protocols, just procedure calls on the same machine) at about 20 packages per second; each package is approximately 3 MB and comes with a timestamp tag. (All data items are of the same type and size.)
My application will store each data item for 100 minutes and then remove it. (So I can calculate the total size from the beginning; no need for dynamic allocation, etc.)
Meanwhile, up to 5 reader applications will query my app by timestamp tag and retrieve data (the readers perform no updates or deletions on the data).
Since the data in my buffer app can grow over 50 GB, I don't think shared memory is going to work for my case.
I'm thinking about using memory-mapped files, but I'm not sure how to do it.
Theoretically, I will create a fixed-size file on the hard disk (let's say 50 GB) and then map some portion of it into RAM and write the data to that portion. If a reader application wants data that is currently mapped in memory, it will use it directly; otherwise it will map some other portion of the file into its own address space and read from there...
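The scheme described above maps onto the Win32 API fairly directly: `CreateFile`/`CreateFileMapping` to create the fixed-size backing file, and `MapViewOfFile` to bring just one portion into the address space. A rough sketch, assuming your setup (the file name `buffer.dat` and the mapping name `Local\MyBufferMapping` are illustrative, and error handling is minimal):

```cpp
#include <windows.h>

int main() {
    const unsigned long long kFileBytes = 50ULL * 1024 * 1024 * 1024; // 50 GB
    const DWORD kViewBytes = 3 * 1024 * 1024;                        // one package

    HANDLE file = CreateFileA("buffer.dat",
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ,            // readers open read-only
                              NULL, OPEN_ALWAYS,
                              FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    // A maximum size larger than the current file extends it to 50 GB.
    // The name lets reader processes attach with OpenFileMapping.
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                        (DWORD)(kFileBytes >> 32),
                                        (DWORD)(kFileBytes & 0xFFFFFFFF),
                                        "Local\\MyBufferMapping");
    if (!mapping) { CloseHandle(file); return 1; }

    // Map only the portion being written now, not the whole 50 GB.
    unsigned long long offset = 0; // must be a multiple of the allocation granularity
    void* view = MapViewOfFile(mapping, FILE_MAP_WRITE,
                               (DWORD)(offset >> 32),
                               (DWORD)(offset & 0xFFFFFFFF),
                               kViewBytes);
    if (view) {
        // ... memcpy the incoming package into `view` here ...
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```

Note that since this is an x64 build, the writer could in principle keep one persistent view over a much larger region; mapping and unmapping small views per package is mainly a way to bound the working set.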
I've read the tutorial called "Managing memory-mapped files", but I don't know how to search the file by the timestamp attribute, and, once the data is found, how to map just the related portion into memory automatically. Managing the views in the program is a big question for me.
Also, I'm not sure whether I need to align the data or serialize it while writing.
Any help with code snippets or sample projects would be extremely appreciated...
Thank you very much for reading and for your help...