Re: [Imgcif-l] High speed image compression
- To: The Crystallographic Binary File and its imgCIF application to image data <imgcif-l@iucr.org>
- Subject: Re: [Imgcif-l] High speed image compression
- From: Jonathan WRIGHT <wright@esrf.fr>
- Date: Fri, 29 Jul 2011 18:59:03 +0200
- In-Reply-To: <4E32E086.7030101@rayonix.com>
- Organization: ESRF
- References: <4E31AE8C.8040405@rayonix.com> <CAMkkSyn+uC4VxZpaqAhQb=ENzJYEgj+N5CCs+bPt2-JS+S_otQ@mail.gmail.com> <4E31E452.3050905@rayonix.com> <4E327491.7050502@esrf.fr> <a06240801ca58404ba1d6@[192.168.2.101]> <a06240802ca5875ce2c7f@[192.168.2.101]> <4E32D4BC.8030801@rayonix.com> <4E32E086.7030101@rayonix.com>
Thanks for your code. It gave 32 ms here. I should have mentioned: if you put your initial "packed.resize(size)" call in program startup you can gain something more in timing (8 ms). Then don't change the size, but recycle the exact same buffer whenever you need it again. Just send packed_size into write_to_file.

Cheers,

Jon

On 29/07/2011 18:32, Justin Anderson wrote:
> By the way, attached is the new code.
>
> On 7/29/11 10:41 AM, Justin Anderson wrote:
>> Thank you everyone for the great suggestions.
>>
>> Note: I am not including the time to write the compressed data to disk
>> intentionally. I want to test only the compression time and not the
>> disk speed. We will be writing these files to a PCIe solid-state drive
>> in production. These drives can write uncompressed frames in real time.
>>
>> Our goal is to be comfortably under 100 ms with the 4K (actually 1920 x
>> 1920), 2-byte images to keep up at 10 fps.
>>
>> On an Intel Core i7 940 processor the same code runs in 50 - 60 ms.
>>
>> Some new runtimes (on the Core i7):
>> Reserving the vector space for the compressed data ahead of time:
>> 40 - 50 ms
>> Adding compressed data via address instead of push_back:
>> 30 - 40 ms
>>
>> Hopefully, with the image correction time and transfer times, this will
>> work.
>>
>> ~Justin
>>
>> On 7/29/11 9:40 AM, Herbert J. Bernstein wrote:
>>> And you can gain a little more speed once you preallocate by
>>> switching internally from indexed references to vectors to
>>> indexed references to C pointers to the same vectors, e.g.
>>>
>>> const int16_t * vptr;
>>> char * pptr;
>>> vptr = &values[0];
>>>
>>> and, after you preallocate packed,
>>>
>>> pptr = &packed[0];
>>>
>>> At 6:53 AM -0400 7/29/11, Herbert J. Bernstein wrote:
>>>> I agree. On my Mac, the time also drops sharply with pre-allocation
>>>> and [] instead of push_back.
>>>>
>>>> At 10:51 AM +0200 7/29/11, Jonathan WRIGHT wrote:
>>>>> Dear Justin,
>>>>>
>>>>> Your code counts the time spent compressing, but not the time writing
>>>>> the file, which is much longer for me. As it stands, you might gain a
>>>>> little by adding "packed.reserve(size*2)" just before the call to
>>>>> compress (54 to 38 ms here on Vista 64, 3.3 GHz). That falls further
>>>>> (28 ms) if you stop using "push_back" and instead allocate something
>>>>> which is "certainly" large enough to start with and use packed[p++]=c.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Jon
>>>>>
>>>>> On 29/07/2011 00:36, Justin Anderson wrote:
>>>>>> Thanks Nicholas.
>>>>>>
>>>>>> I only made a couple of small changes to Graeme's code: 1) to load an
>>>>>> image from a file and write it back out, and 2) to pass the data
>>>>>> vectors by reference. The last change seems to have sped things up a
>>>>>> little, but it's still taking 110 - 130 ms to compress, which is too
>>>>>> slow. We are not as concerned with decompression speed, as that will
>>>>>> not need to occur in real time.
>>>>>>
>>>>>> I put it on our FTP here:
>>>>>> ftp://ftp.rayonix.com/pub/del_in_30_days/byte_offset.tgz
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Justin
>>>>>>
>>>>>> On 7/28/11 2:06 PM, Nicholas Sauter wrote:
>>>>>>> Justin,
>>>>>>>
>>>>>>> Just some comments based on our experience... first, I haven't tried
>>>>>>> the compression extensively, just the decompression. But I've found
>>>>>>> Graeme's decompression code to be significantly faster than the CBF
>>>>>>> library, first because it is buffer-based instead of file-based, and
>>>>>>> also because it hard-codes some assumptions about data depth.
>>>>>>>
>>>>>>> I'd be happy to examine this in more detail if there is some way to
>>>>>>> share your code example...
>>>>>>>
>>>>>>> Nick
>>>>>>>
>>>>>>> On Thu, Jul 28, 2011 at 11:46 AM, Justin Anderson <justin@rayonix.com> wrote:
>>>>>>>
>>>>>>>> Hello all,
>>>>>>>>
>>>>>>>> I have run Graeme's byte offset code on a 4k x 4k (2-byte depth)
>>>>>>>> Gaussian noise image and found it to compress the image in around
>>>>>>>> 150 ms (64-bit RHEL, Pentium D 3.46 GHz). Using the CBF library
>>>>>>>> with byte offset compression, I find the compression takes around
>>>>>>>> 125 ms.
>>>>>>>>
>>>>>>>> This will be too slow to keep up with our high-speed CCD cameras.
>>>>>>>> We are considering parallelizing the byte offset routine by
>>>>>>>> operating on each line of the image individually. Note that this
>>>>>>>> would mean that a given compressed image would be stored
>>>>>>>> differently than via the whole-image algorithm.
>>>>>>>>
>>>>>>>> Has anyone been thinking about this already, or does anyone have
>>>>>>>> any thoughts?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Justin
>>>>>>>>
>>>>>>>> --
>>>>>>>> Justin Anderson
>>>>>>>> Software Engineer
>>>>>>>> Rayonix, LLC
>>>>>>>> justin@rayonix.com
>>>>>>>> 1880 Oak Ave. #120
>>>>>>>> Evanston, IL, USA 60201
>>>>>>>> PH: +1.847.869.1548
>>>>>>>> FX: +1.847.869.1587
>>>>
>>>> --
>>>> =====================================================
>>>> Herbert J. Bernstein, Professor of Computer Science
>>>> Dowling College, Kramer Science Center, KSC 121
>>>> Idle Hour Blvd, Oakdale, NY, 11769
>>>>
>>>> +1-631-244-3035
>>>> yaya@dowling.edu
>>>> =====================================================

_______________________________________________
imgcif-l mailing list
imgcif-l@iucr.org
http://scripts.iucr.org/mailman/listinfo/imgcif-l
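Putting the suggestions from this thread together, a minimal sketch of a byte-offset compressor along these lines might look as follows (C++). This is not Graeme's or the CBF library's actual code; the names compress_byte_offset, write_to_file, values and packed are illustrative only, and little-endian byte order is assumed. The points it demonstrates are: preallocate the output buffer once, write through raw pointers with packed[p++] rather than push_back, and return the packed length so the exact same buffer can be recycled for every frame.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Byte-offset compression of 16-bit pixels into a preallocated buffer.
    // 'packed' must already be resized to a worst-case length; 7 bytes per
    // pixel is always sufficient for 16-bit input. Returns the number of
    // bytes actually written.
    std::size_t compress_byte_offset(const std::vector<int16_t>& values,
                                     std::vector<char>& packed)
    {
        const int16_t* vptr = &values[0];  // raw pointers instead of
        char* pptr = &packed[0];           // indexed vector access
        std::size_t p = 0;
        int32_t last = 0;

        for (std::size_t i = 0; i < values.size(); ++i) {
            const int32_t delta = static_cast<int32_t>(vptr[i]) - last;
            last = vptr[i];

            if (delta >= -127 && delta <= 127) {
                pptr[p++] = static_cast<char>(delta);      // 1-byte delta
            } else if (delta >= -32767 && delta <= 32767) {
                pptr[p++] = static_cast<char>(-128);       // 0x80 escape
                const int16_t d16 = static_cast<int16_t>(delta);
                std::memcpy(pptr + p, &d16, 2);            // 2-byte delta
                p += 2;
            } else {
                pptr[p++] = static_cast<char>(-128);       // 0x80 escape
                const int16_t esc = -32768;                // 0x8000 escape
                std::memcpy(pptr + p, &esc, 2);
                p += 2;
                std::memcpy(pptr + p, &delta, 4);          // 4-byte delta
                p += 4;
            }
        }
        return p;  // send this length, not packed.size(), to the writer
    }

With this shape, the buffer-reuse advice above amounts to doing packed.resize(7 * npixels) once at program startup, calling compress_byte_offset on every frame, and passing the returned length to write_to_file; no per-frame allocation remains, which is where the extra few milliseconds reported above come from.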
- References:
- [Imgcif-l] High speed image compression (Justin Anderson)
- Re: [Imgcif-l] High speed image compression (Nicholas Sauter)
- Re: [Imgcif-l] High speed image compression (Justin Anderson)
- Re: [Imgcif-l] High speed image compression (Jonathan WRIGHT)
- Re: [Imgcif-l] High speed image compression (Herbert J. Bernstein)
- Re: [Imgcif-l] High speed image compression (Herbert J. Bernstein)
- Re: [Imgcif-l] High speed image compression (Justin Anderson)
- Re: [Imgcif-l] High speed image compression (Justin Anderson)