Re: [Imgcif-l] High speed image compression
- To: The Crystallographic Binary File and its imgCIF application to image data <imgcif-l@iucr.org>
- Subject: Re: [Imgcif-l] High speed image compression
- From: "Herbert J. Bernstein" <yaya@bernstein-plus-sons.com>
- Date: Fri, 29 Jul 2011 10:40:14 -0400
- In-Reply-To: <a06240801ca58404ba1d6@[192.168.2.101]>
- References: <4E31AE8C.8040405@rayonix.com><CAMkkSyn+uC4VxZpaqAhQb=ENzJYEgj+N5CCs+bPt2-JS+S_otQ@mail.gmail.com><4E31E452.3050905@rayonix.com> <4E327491.7050502@esrf.fr><a06240801ca58404ba1d6@[192.168.2.101]>
And you can gain a little more speed once you preallocate by switching
internally from indexed references to vectors to indexed references
through C pointers to the same vectors, e.g.

   const int16_t * vptr;
   char * pptr;
   vptr = &values[0];

and, after you preallocate packed,

   pptr = &packed[0];

At 6:53 AM -0400 7/29/11, Herbert J. Bernstein wrote:
>I agree. On my Mac, the time also drops sharply with pre-allocation and []
>instead of push_back.
>
>At 10:51 AM +0200 7/29/11, Jonathan WRIGHT wrote:
>>Dear Justin,
>>
>>Your code counts the time compressing, but not the time writing the
>>file, which is much longer for me. As it stands, you might gain a little
>>by adding "packed.reserve(size*2)" just before the call to compress (54
>>to 38 ms here on vista64, 3.3 GHz). That falls further (28 ms) if you
>>stop using "push_back" and instead allocate something which is
>>"certainly" large enough to start with and use packed[p++]=c.
>>
>>Cheers,
>>
>>Jon
>>
>>On 29/07/2011 00:36, Justin Anderson wrote:
>>> Thanks Nicholas.
>>>
>>> I only made a couple of small changes to Graeme's code: 1) to load an
>>> image from a file and write to file, and 2) to pass the data vectors by
>>> reference. The last change seems to have sped things up a little, but
>>> it's still taking 110-130 ms to compress, which is too slow. We are not
>>> as concerned with decompression speed, as that will not need to occur
>>> in real time.
>>>
>>> I put it on our FTP here:
>>> ftp://ftp.rayonix.com/pub/del_in_30_days/byte_offset.tgz
>>>
>>> Thanks,
>>>
>>> Justin
>>>
>>> On 7/28/11 2:06 PM, Nicholas Sauter wrote:
>>>> Justin,
>>>>
>>>> Just some comments based on our experience... first, I haven't tried
>>>> the compression extensively, just the decompression. But I've found
>>>> Graeme's decompression code to be significantly faster than the CBF
>>>> library, first because it is buffer-based instead of file-based, and
>>>> also because it hard-codes some assumptions about data depth.
>>>>
>>>> I'd be happy to examine this in more detail if there is some way to
>>>> share your code example...
>>>>
>>>> Nick
>>>>
>>>> On Thu, Jul 28, 2011 at 11:46 AM, Justin Anderson
>>>> <justin@rayonix.com> wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I have run Graeme's byte offset code on a 4k x 4k (2-byte depth)
>>>>> Gaussian noise image and found it to compress the image in around
>>>>> 150 ms (64-bit RHEL, Pentium D 3.46 GHz). Using the CBF library with
>>>>> byte offset compression, I find the compression takes around 125 ms.
>>>>>
>>>>> This will be too slow to keep up with our high-speed CCD cameras. We
>>>>> are considering parallelizing the byte offset routine by operating
>>>>> on each line of the image individually. Note that this would mean
>>>>> that a given compressed image would be stored differently than via
>>>>> the whole-image algorithm.
>>>>>
>>>>> Has anyone been thinking about this already, or does anyone have any
>>>>> thoughts?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Justin
>>>>>
>>>>> --
>>>>> Justin Anderson
>>>>> Software Engineer
>>>>> Rayonix, LLC
>>>>> justin@rayonix.com
>>>>> 1880 Oak Ave. #120
>>>>> Evanston, IL, USA 60201
>>>>> PH: +1.847.869.1548
>>>>> FX: +1.847.869.1587

--
=====================================================
Herbert J. Bernstein, Professor of Computer Science
Dowling College, Kramer Science Center, KSC 121
Idle Hour Blvd, Oakdale, NY, 11769
+1-631-244-3035
yaya@dowling.edu
=====================================================
_______________________________________________
imgcif-l mailing list
imgcif-l@iucr.org
http://scripts.iucr.org/mailman/listinfo/imgcif-l
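[Editor's note: Herbert's preallocate-then-pointer suggestion and Jon's reserve/packed[p++] point can be sketched as below. This is an illustrative toy, not Graeme's actual code: pack_deltas, values, and packed are assumed names, and the packing is a simplified single-escape variant of byte-offset encoding (one byte per small delta, an 0x80 escape plus a little-endian int16 otherwise). The output buffer is sized for the worst case up front so no push_back reallocations occur, and all writes go through raw pointers.]

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified byte-offset-style delta packing with a preallocated output
// buffer and raw pointers instead of vector::push_back.  Assumes a
// little-endian host for the memcpy of escaped deltas.
static std::size_t pack_deltas(const std::vector<int16_t>& values,
                               std::vector<char>& packed)
{
    // Worst case: every delta needs 1 escape byte + 2 data bytes.
    packed.resize(values.size() * 3);

    const int16_t* vptr = &values[0];   // indexed refs -> raw pointers
    char* pptr = &packed[0];
    char* out = pptr;

    int32_t prev = 0;
    for (std::size_t i = 0; i < values.size(); ++i) {
        int32_t delta = vptr[i] - prev;
        prev = vptr[i];
        if (delta >= -127 && delta <= 127) {
            *out++ = static_cast<char>(delta);     // small delta: 1 byte
        } else {
            *out++ = static_cast<char>(0x80);      // escape marker
            int16_t d16 = static_cast<int16_t>(delta);
            std::memcpy(out, &d16, 2);             // little-endian int16
            out += 2;
        }
    }
    std::size_t used = static_cast<std::size_t>(out - pptr);
    packed.resize(used);                // shrink to the bytes written
    return used;
}
```

Because the buffer is resized once to the worst case and then shrunk, the inner loop does no bounds checking, no capacity checks, and no reallocation, which is where the push_back-based version loses time.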
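[Editor's note: Justin's per-line parallelization idea could look something like the sketch below, assuming each row's delta predictor restarts at zero; that reset is why the stored stream differs from the whole-image algorithm, as he notes. pack_row, pack_rows_parallel, and the round-robin row assignment are all illustrative assumptions, not the actual Rayonix code.]

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <thread>
#include <vector>

// Minimal per-row packer: 1 byte per small delta, an 0x80 escape plus a
// little-endian int16 otherwise.  The predictor resets at each row, so
// rows are independent and can be packed concurrently.
static std::vector<char> pack_row(const int16_t* row, std::size_t cols) {
    std::vector<char> out;
    out.reserve(cols * 3);              // worst case, so no reallocations
    int32_t prev = 0;
    for (std::size_t i = 0; i < cols; ++i) {
        int32_t d = row[i] - prev;
        prev = row[i];
        if (d >= -127 && d <= 127) {
            out.push_back(static_cast<char>(d));
        } else {
            out.push_back(static_cast<char>(0x80));
            int16_t d16 = static_cast<int16_t>(d);
            char b[2];
            std::memcpy(b, &d16, 2);    // assume little-endian host
            out.push_back(b[0]);
            out.push_back(b[1]);
        }
    }
    return out;
}

// Pack rows in parallel: worker t handles rows t, t+nthreads, ...
// Each thread writes only its own out[r] slots, so no locking is needed.
static std::vector<std::vector<char>> pack_rows_parallel(
    const std::vector<int16_t>& image, std::size_t rows, std::size_t cols,
    unsigned nthreads)
{
    std::vector<std::vector<char>> out(rows);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) {
        pool.emplace_back([&, t] {
            for (std::size_t r = t; r < rows; r += nthreads)
                out[r] = pack_row(&image[r * cols], cols);
        });
    }
    for (auto& th : pool) th.join();
    return out;
}
```

One design consequence worth noting: storing each row's compressed bytes separately also gives random access to individual rows on decompression, at the cost of slightly worse compression at row starts (the first delta of every row is taken from zero rather than from the previous pixel).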
- Follow-Ups:
- Re: [Imgcif-l] High speed image compression (Justin Anderson)
- References:
- [Imgcif-l] High speed image compression (Justin Anderson)
- Re: [Imgcif-l] High speed image compression (Nicholas Sauter)
- Re: [Imgcif-l] High speed image compression (Justin Anderson)
- Re: [Imgcif-l] High speed image compression (Jonathan WRIGHT)
- Re: [Imgcif-l] High speed image compression (Herbert J. Bernstein)