Timestamps are in GMT/BST.
[3:56] * Oak (~arslan@unaffiliated/alreadygone) has joined #duraspace
[6:31] -card.freenode.net- *** Looking up your hostname...
[6:31] -card.freenode.net- *** Checking Ident
[6:31] -card.freenode.net- *** Found your hostname
[6:31] -card.freenode.net- *** No Ident response
[6:31] * DuraLogBot (~PircBot@atlas.duraspace.org) has joined #duraspace
[6:31] * Topic is '[Welcome to DuraSpace - This channel is logged - http://irclogs.duraspace.org/]'
[6:31] * Set by cwilper!ad579d86@gateway/web/freenode/ip.173.87.157.134 on Fri Oct 22 01:19:41 UTC 2010
[9:50] * helix84_ (~ctenar@195.178.95.132) Quit (Remote host closed the connection)
[10:06] * kshepherd2 (~kim@121-99-149-36.bng1.nct.orcon.net.nz) has joined #duraspace
[10:20] * kshepherd2 (~kim@121-99-149-36.bng1.nct.orcon.net.nz) Quit (Ping timeout: 265 seconds)
[10:56] * kshepherd2 (~kim@121-99-149-36.bng1.nct.orcon.net.nz) has joined #duraspace
[11:15] * kshepherd2 (~kim@121-99-149-36.bng1.nct.orcon.net.nz) Quit (Ping timeout: 244 seconds)
[12:04] * fasseg_ (~ruckus@HSI-KBW-091-089-022-149.hsi2.kabelbw.de) has joined #duraspace
[12:05] * fasseg_ (~ruckus@HSI-KBW-091-089-022-149.hsi2.kabelbw.de) has left #duraspace
[12:19] * Oak (~arslan@unaffiliated/alreadygone) Quit (Remote host closed the connection)
[13:02] * misilot (~misilot@p-body.lib.fit.edu) has joined #duraspace
[13:20] * mhwood (mwood@mhw.ulib.iupui.edu) has joined #duraspace
[13:38] * helix84 (~ctenar@195.178.95.132) has joined #duraspace
[13:38] <helix84> mhwood: would you agree with my conclusion in DS-1938 ?
[13:38] <kompewter> [ https://jira.duraspace.org/browse/DS-1938 ] - [DS-1938] Replace insecure MD5 checksum calculation - DuraSpace JIRA
[13:39] <mhwood> Looking now.
[13:42] <mhwood> Yes, I think so. A security hash *might* be a good choice for detecting corrupt storage, but I don't actually know that, and in any case there are methods specifically designed for that purpose which probably serve better.
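(For context on DS-1938: in Java a checksum of this kind is typically computed through java.security.MessageDigest, so moving off MD5 is essentially a change of algorithm name. A minimal, hypothetical sketch of such a routine, not DSpace's actual checksum code:)

    // Sketch only: illustrates that swapping MD5 for a stronger digest
    // is a one-line change to the algorithm name passed to MessageDigest.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class ChecksumSketch {
        // algorithm could be "MD5" (legacy) or "SHA-256" (the DS-1938 direction)
        static String checksum(String path, String algorithm)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            try (InputStream in = new DigestInputStream(new FileInputStream(path), md)) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* digest is updated as we read */ }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(checksum(args[0], "SHA-256"));
        }
    }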
[13:43] <helix84> speaking of which, do you see an easy way for us to use some ECC?
[13:44] <mhwood> The case of BitstreamStorageManager is different yet again. There the hash is used to generate a unique (we hope) name for a file, as well as to distribute thousands or millions of files across filesystem structures in a statistically even pattern. Again, a security hash might serve well but I don't really know that, and we might consider other methods to see if they are better suited.
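(A minimal sketch of the naming/distribution idea mhwood describes, illustrative only and not the real BitstreamStorageManager: a hash-derived hex string doubles as a hopefully-unique file name and as a source of subdirectory levels, producing the aa/bb/cc/... layout mentioned later in the discussion:)

    // Illustrative only: spread files evenly across directories by
    // deriving the path from a hash of some bitstream identifier.
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class AssetPathSketch {
        static String pathFor(String bitstreamId) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256"); // any evenly-distributed hash would do
            byte[] digest = md.digest(bitstreamId.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            // First three byte pairs become directory levels, e.g. aa/bb/cc/<name>,
            // so millions of files end up statistically evenly spread.
            return hex.substring(0, 2) + "/" + hex.substring(2, 4) + "/"
                    + hex.substring(4, 6) + "/" + hex;
        }

        public static void main(String[] args) throws Exception {
            System.out.println(pathFor("bitstream-12345"));
        }
    }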
[13:44] <helix84> or is this one of those things that we should better leave for the layers below and wait for ZFS and Btrfs :)
[13:45] <helix84> mhwood: spreading the values evenly is the goal of all hash functions, so either such a test has already been done or we can treat them all equally.
[13:45] <mhwood> I would let storage experts handle most of that stuff. At the level that DSpace operates, I think it's best to support detection and replication, so we can discover that it's time to recover from another copy.
[13:47] <mhwood> Crypto hashes have other goals, so the distribution of the outputs might be lumpy. I simply don't know how well they serve as general hashing functions. Likely well enough; my point is simply that I don't *know* this. Somewhere at home I have an entire book on error-detecting and -correcting codes (which I still intend to read someday! :-) so I think it is not a simple subject.
[13:50] <mhwood> Anyway, if someone gets interested in reviewing BitstreamStorageManager, he should consider whether the goals are sensible, and then select methods that match those goals. Switching it to SHA2 might not buy us much, compared to the effort required.
[13:51] <mhwood> Likewise we should use error-detecting codes to detect errors. I think it unlikely that any crypto hash would serve *better* than purpose-built EDCs.
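(As an example of a purpose-built error-detecting code already in the JDK, java.util.zip.CRC32 is cheap and designed for catching accidental corruption, though it offers no security guarantees. A minimal sketch:)

    // Sketch of a purpose-built error-detecting code, as contrasted
    // with a crypto hash: CRC32 detects accidental corruption cheaply.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.zip.CRC32;

    public class Crc32Sketch {
        static long crcOf(String path) throws IOException {
            CRC32 crc = new CRC32();
            crc.update(Files.readAllBytes(Paths.get(path)));
            return crc.getValue();
        }

        public static void main(String[] args) throws Exception {
            System.out.printf("CRC32 of %s = %08x%n", args[0], crcOf(args[0]));
        }
    }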
[13:52] <helix84> btw a +1 on my decision on the ticket won't hurt ;)
[13:53] <helix84> there should be a place where we write down these ideas
[13:53] <helix84> probably in the class comments: if you are rewriting this class, consider this and that
[13:53] <mhwood> You've sparked a thought, though: I wonder whether any of those fancy new filesystems have methods for reporting errors to *applications*? So the filesystem could send DSpace a message: /var/lib/dspace/assetstore/aa/bb/cc/ddddddddddddddddddddddddddd is corrupt.
[13:54] <mhwood> Commentary is probably the best place we have for this. It already exists and is close to the code.
[13:54] <helix84> mhwood: no, they auto-heal the data, they're designed in such a way that the application should never get corrupt data
[13:55] <mhwood> There's always the possibility of an error with one more bit than any given algorithm can recover.
[13:55] <mhwood> Highly unlikely but not probability zero.
[13:56] <mhwood> Oh, well, if the incidence is low enough then manual methods would be the most economical.
[13:56] <helix84> mhwood: so during a read (requested by the app or otherwise), when the FS detects corruption by comparing the data to the checksum in FS metadata, it will either use an ECC or fetch the data from a replica on another stripe, correct the original copy or reallocate blocks, and return the correct data to the application.
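(A toy model of the read-verify-heal cycle helix84 describes, purely illustrative; this is not how ZFS or Btrfs are actually implemented:)

    // Toy model: verify a block against a stored checksum on read; on
    // mismatch, repair from a replica and return good data to the caller.
    import java.util.zip.CRC32;

    public class SelfHealSketch {
        byte[] primary;             // stand-in for the on-disk block
        final byte[] replica;       // stand-in for a copy on another stripe/mirror
        final long storedChecksum;  // checksum kept in "filesystem metadata"

        SelfHealSketch(byte[] data) {
            this.primary = data.clone();
            this.replica = data.clone();
            this.storedChecksum = crc(data);
        }

        static long crc(byte[] data) {
            CRC32 c = new CRC32();
            c.update(data);
            return c.getValue();
        }

        byte[] read() {
            if (crc(primary) != storedChecksum) {   // corruption detected on read
                primary = replica.clone();          // "self-heal" from the good copy
            }
            return primary;                         // application never sees bad data
        }

        public static void main(String[] args) {
            SelfHealSketch fs = new SelfHealSketch("hello".getBytes());
            fs.primary[0] = 0;                          // simulate bit rot
            System.out.println(new String(fs.read())); // prints "hello"
        }
    }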
[13:57] * tdonohue (~tdonohue@c-50-179-112-246.hsd1.il.comcast.net) has joined #duraspace
[13:57] <helix84> of course, if you do a "dd if=/dev/zero of=/assetstore/whatever_bitstream" on it, it will consider _that_ valid data and maintain it
[13:58] <helix84> that is what we're protecting against
[13:58] <helix84> and yeah, no ECC would help us there, so that confirms your thinking
[13:59] <mhwood> DSpace is already doing far more than most applications, to detect damaged files.
[14:00] <helix84> well, it should, if we argue that preservation is its use case
[14:01] <mhwood> Yes. And, come to think of it, having multiple layers checking up on each other is what preservation needs.
[14:02] <mhwood> Fixity of a video stream for 100 years is a bit different from ensuring that a 4096-byte disk block is the same today and tomorrow.
[14:13] <mhwood> I've briefly commented on DS-1938.
[16:00] * edInCo (~smuxi@seta.coalliance.org) has joined #duraspace
[18:40] * withper (~withper@91.207.117.168) has joined #duraspace
[18:47] * withper (~withper@91.207.117.168) Quit (Ping timeout: 264 seconds)
[20:22] * peking (~peking@195.88.190.128) has joined #duraspace
[20:27] * peking (~peking@195.88.190.128) Quit (Ping timeout: 252 seconds)
[21:12] * kshepherd2 (~kim@121-99-149-36.bng1.nct.orcon.net.nz) has joined #duraspace
[21:51] * lusico (~lusico@46.148.31.232) has joined #duraspace
[22:02] * mhwood (mwood@mhw.ulib.iupui.edu) Quit (Remote host closed the connection)
[22:15] * lusico (~lusico@46.148.31.232) Quit (Ping timeout: 264 seconds)
[22:51] * tdonohue (~tdonohue@c-50-179-112-246.hsd1.il.comcast.net) has left #duraspace
[23:02] * edInCo (~smuxi@seta.coalliance.org) Quit (Ping timeout: 240 seconds)
[23:05] * edInCo (~smuxi@seta.coalliance.org) has joined #duraspace
[23:26] * awoods (~awoods@c-67-165-245-76.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[23:35] * narval (~narval@82.146.59.132) has joined #duraspace
[23:48] * awoods (~awoods@c-67-165-245-76.hsd1.co.comcast.net) has joined #duraspace
[23:57] * edInCo (~smuxi@seta.coalliance.org) Quit (Remote host closed the connection)
These logs were automatically created by DuraLogBot on irc.freenode.net using the Java IRC LogBot.