#duraspace IRC Log

IRC Log for 2015-11-18

Timestamps are in GMT/BST.

[0:10] * peterdietz (uid52203@gateway/web/irccloud.com/x-tacvxoyyibwldotw) Quit (Quit: Connection closed for inactivity)
[1:03] * terry-b (~chrome@71-212-69-224.tukw.qwest.net) Quit (Ping timeout: 276 seconds)
[6:53] -hobana.freenode.net- *** Looking up your hostname...
[6:53] -hobana.freenode.net- *** Checking Ident
[6:53] -hobana.freenode.net- *** Found your hostname
[6:53] -hobana.freenode.net- *** No Ident response
[6:54] * DuraLogBot (~PircBot@webster.duraspace.org) has joined #duraspace
[6:54] * Topic is '[Welcome to DuraSpace - This channel is logged - http://irclogs.duraspace.org/]'
[6:54] * Set by cwilper!ad579d86@gateway/web/freenode/ip.173.87.157.134 on Fri Oct 22 01:19:41 UTC 2010
[12:25] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) has joined #duraspace
[13:28] * mhwood (mwood@mhw.ulib.iupui.edu) has joined #duraspace
[14:10] * tdonohue (~tdonohue@c-98-226-113-104.hsd1.il.comcast.net) has joined #duraspace
[14:51] * KevinVdV (~kevin@85.234.195.109.static.edpnet.net) has joined #duraspace
[15:01] <tdonohue> Hello all, welcome to our weekly DSpace Developers Mtg. Today's agenda is mostly just 6.0 topics: https://wiki.duraspace.org/display/DSPACE/DevMtg+2015-11-18
[15:01] <kompewter> [ DevMtg 2015-11-18 - DSpace - DuraSpace Wiki ] - https://wiki.duraspace.org/display/DSPACE/DevMtg+2015-11-18
[15:01] <tdonohue> Since we look to be a smaller than normal group, I'm going to ping the Committers awake... helix84, KevinVdV, mhwood, pbecker
[15:02] <helix84> hi
[15:02] <tdonohue> As mentioned though, today is really about 6.0. Our "official" PR deadline is now past (end of last week), but we do have a few PRs still being rebased by pbecker (by end of this week).
[15:03] <mhwood> Hi
[15:03] <tdonohue> All that means we have a *ton* of PRs ready for review / analysis.
[15:03] <tdonohue> As noted in our 6.0 Release Timeline, today's meeting is supposed to be devoted towards PR reviews/assignment: https://wiki.duraspace.org/display/DSPACE/DSpace+Release+6.0+Status#DSpaceRelease6.0Status-ReleaseTimeline
[15:03] <kompewter> [ DSpace Release 6.0 Status - DSpace - DuraSpace Wiki ] - https://wiki.duraspace.org/display/DSPACE/DSpace+Release+6.0+Status#DSpaceRelease6.0Status-ReleaseTimeline
[15:04] <KevinVdV> hi
[15:04] <tdonohue> But, before we do some PR reviews, I wanted to pause to be sure there's no other topics of high importance that folks feel we *need* to discuss first
[15:06] <tdonohue> Ok, not hearing anything
[15:06] <mhwood> Just noting that FindBugs called attention to over a thousand concerns in dspace-api alone.
[15:07] <tdonohue> mhwood: that is a good point. If there are one or two folks willing to do some "findbugs" work on even just dspace-api, that'd be a huge benefit to all of us
[15:07] <mhwood> I'm slogging through the "incorrect serialization" hits.
[15:07] <tdonohue> and thanks for that. The serialization issues have been a thorn in our side on master
[15:07] <KevinVdV> I’ll add a small note on my list to check that out
[15:08] <mhwood> There's little to say about this except that there's plenty to look at.
[15:08] <KevinVdV> IF I can find the time between reviews
[15:08] <helix84> we may want to discuss the strategy of merging the dynamic config PR, as there's high likelihood of conflicts
[15:08] <tdonohue> mhwood: Perhaps we should create a JIRA ticket just as a "placeholder" / reminder, with the quick notes on how to run the findbugs reports (which I know you sent to dspace-devel)
[15:08] <mhwood> Will do.
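
For anyone who wants to help with the FindBugs work, a minimal way to generate a report is via the findbugs-maven-plugin (a sketch; the exact invocation mhwood posted to dspace-devel may differ):

    cd dspace-api
    # analyze the compiled classes and write target/findbugsXml.xml
    mvn compile findbugs:findbugs
    # browse the results in the FindBugs GUI
    mvn findbugs:gui
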
[15:09] <tdonohue> helix84: I was just going to bring that up first, but thanks for the prompt
[15:09] <helix84> obviously, dynamic configs are highly attractive, so we may get a vote on the feature out of the way first, then focus on the timeline
[15:09] * cknowles (uid98028@gateway/web/irccloud.com/x-gjjexcredeykexdm) has joined #duraspace
[15:09] <mhwood> DS-2222 already exists.
[15:09] <kompewter> [ https://jira.duraspace.org/browse/DS-2222 ] - [DS-2222] Fix DSpace code via FindBugs - DuraSpace JIRA
[15:10] <tdonohue> Jumping in on the PRs, this one is definitely the *largest* out there (with a very high probability of creating conflicts): DSPR#1104
[15:10] <kompewter> [ https://github.com/DSpace/DSpace/pull/1104 ] - DS-2654: Enhanced Configurations via Apache Commons Configuration by tdonohue
[15:10] <tdonohue> mhwood: thanks for the note on that
[15:11] <tdonohue> regarding 1104, I'll say that it all now works (at least from my tests, but it needs more eyes). It took much longer than I anticipated, but it turned out to be a major refactor
[15:11] <tdonohue> I've documented it now over at: https://wiki.duraspace.org/display/DSPACE/Enhanced+Configuration+Scheme
[15:11] <kompewter> [ Enhanced Configuration Scheme - DSpace - DuraSpace Wiki ] - https://wiki.duraspace.org/display/DSPACE/Enhanced+Configuration+Scheme
[15:12] <tdonohue> A few warnings here to be aware of... (1) it does touch a LOT of files (high likelihood of conflicts), (2) It changes the build/install process too (no more build.properties)... but in the end, the Configuration is SIMPLIFIED in many ways
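
As a concrete illustration of the simplification (a sketch based on the Enhanced Configuration Scheme wiki page above; the hostname and credentials are placeholders), site-specific settings move into a single local.cfg that overrides dspace.cfg:

    # local.cfg -- local overrides, read on top of dspace.cfg
    dspace.dir = /opt/dspace
    dspace.hostname = repo.example.edu
    db.url = jdbc:postgresql://localhost:5432/dspace
    db.username = dspace
    db.password = secret
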
[15:13] <helix84> tdonohue: regarding your FAQ 1) I think this should be a supported use case
[15:13] <helix84> (Can I have different local.cfg files for different environments)
[15:13] <tdonohue> So, the big question at hand is what do we want to do with this feature? Do we work to move it into master early on (which may break other PRs)? Do we wait on it and get some other things in first?
[15:14] <mhwood> Gonna have to fix what's broken sooner or later. And the longer it's unmerged, the more broken things we write.
[15:14] <tdonohue> helix84: it *is* supported. You just need to tweak the config-definition.xml or your local.cfg as documented in that FAQ.
[15:15] <tdonohue> helix84: Unfortunately, Commons Config doesn't allow for a dynamically named "local.cfg" (similar to how we could have [env].properties with the build.properties concept). But, it supports it through the config-definition.xml
[15:15] <helix84> I suggest calling a vote early to make sure we have buy-in. But it might be better to merge only after most other PRs are tackled, because I'm sure Tim will be around to make the config PR work, but others may not be so quick to update their PRs (and we should be fair to them and not break their work).
[15:16] <mhwood> So what sort of breakage will we see? And do we have a good idea of where to look?
[15:17] <tdonohue> helix84: the one downside to merging late is I fear we may limit our testing of the PR. I will admit, Commons Config has a *slightly different* Properties format for cfg files. This means it's *possible* that some configs may be broken for some DSpace features. I've tried my best to test a lot of them, but the only way to get to all of them would be with more help
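
The format difference mentioned here is mainly that Commons Configuration treats an unescaped comma in a property value as a list delimiter, so single values containing commas must now be escaped (the key below is a hypothetical example):

    # old format: one value containing a comma
    #   citation.format = Lastname, Firstname
    # Commons Configuration would read that as a 2-element list;
    # escape the comma to keep it a single value:
    citation.format = Lastname\, Firstname
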
[15:17] <helix84> re env: This use case is something virtually everyone will want to use. So we should make it available out of the box - the -Ddspace.env=dev approach seems good enough for that.
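
A sketch of how config-definition.xml could support that -Ddspace.env idea (the element names follow Commons Configuration's definition-file format, but the file names and the ${sys:...} lookup here are illustrative and should be verified against the PR):

    <configuration>
      <!-- optional per-environment overrides, e.g. -Ddspace.env=dev
           loads local-dev.cfg with highest priority -->
      <properties fileName="local-${sys:dspace.env}.cfg" config-optional="true"/>
      <properties fileName="local.cfg" config-optional="true"/>
      <properties fileName="config/dspace.cfg"/>
    </configuration>
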
[15:19] <tdonohue> So, it's obvious we need to vote on the Enhanced Configuration Scheme (on dspace-devel). That's one TODO here
[15:19] <helix84> tdonohue: I understand that concern. We have a lot on our plate, but first order of business is merging PRs, then we'll be focusing on making them work. I know it's far from ideal. Take it only as a suggestion.
[15:20] <helix84> s/making them work/fixing bugs/
[15:20] <kompewter> helix84 meant to say: tdonohue: I understand that concern. We have a lot on our plate, but first order of business is merging PRs, then we'll be focusing on fixing bugs. I know it's far from ideal. Take it only as a suggestion.
[15:20] <tdonohue> So, maybe we look to the results of that vote, before we decide on when to merge it? Obviously, I'd personally like to have it merged quickly (as I'm excited to use it instead of the old hacky build.properties process)
[15:22] <helix84> right, first things first
[15:23] <tdonohue> Ok. Are we satisfied enough with that for now? I'll call a vote on the Enhanced Configuration Scheme feature, and we'll see what happens there? (Are there any more major concerns or questions still outstanding regarding this feature?)
[15:25] <tdonohue> (I'll take the silence as approval on these next steps)
[15:25] * roeland (~roeland@85.234.195.109.static.edpnet.net) has joined #duraspace
[15:25] <tdonohue> Are there other 6.0 PRs that we feel need to be discussed here early on (especially if they are large, or may require a vote)?
[15:26] <helix84> well, let's take the opportunity of roeland and KevinVdV being here and ask about their import framework
[15:26] <helix84> DS-2876
[15:26] <kompewter> [ https://jira.duraspace.org/browse/DS-2876 ] - [DS-2876] Framework to better support metadata import from external sources - DuraSpace JIRA
[15:27] <tdonohue> Sounds good. roeland & KevinVdV, you've likely seen the comments on 2876 from helix84 & I
[15:27] * dyelar (~dyelar@31.158.45.66.cm.sunflower.com) has joined #duraspace
[15:27] <roeland> not yet
[15:28] <tdonohue> We'd like to better understand the perceived goals/relationship between 2876 & the existing BTE feature.
[15:29] <tdonohue> Namely, one of the key strategic goals of DSpace in the coming years is to *reduce* duplicative functionality...and 2876 seems to be duplicating BTE to some extent, and it's not clear what the plan is here
[15:29] <helix84> in my opinion, the import functionality is something we want; it's just the perceived duplication and next steps that deserve more explanation
[15:29] <tdonohue> helix84++
[15:30] <tdonohue> I think my concern is that I don't know if 2876 is aligning itself as a *replacement* for BTE (in which case we need to look towards backwards compatibility) OR if there's some perceived difference between it and BTE
[15:32] <tdonohue> If this is not something you all are prepared to discuss today, that's OK too. It's possible we just need more explanation (in docs or emails) of the relationship between BTE and 2876, and future plans
[15:33] <tdonohue> (Without that explanation, it'd be hard to approve 2876, even though it looks like a useful feature to add to 6.0)
[15:35] <tdonohue> Ok, in the interest of time, it sounds like we may just want to move on here?
[15:35] <helix84> seems so, let's have that discussion in the comments
[15:36] <roeland> I’m sorry, I can’t give this meeting the concentration it deserves
[15:36] <tdonohue> Ok. are there any other PRs/tickets we want to move to the front of the queue here? Otherwise we'll just start from the list of PRs
[15:36] <tdonohue> roeland: no problem. We put you on the spot here. Just feel free to add your thoughts to the 2876 ticket later on. thanks!
[15:37] <helix84> tdonohue: we might want to skip the bugfixes and concentrate on new features / improvements
[15:37] <helix84> tdonohue: because bugfixes can go in later, too
[15:37] <tdonohue> helix84: true. unless anyone *really* needs a particular bugfix in (i.e. it's really broken their ability to test part of master)
[15:38] <helix84> tdonohue: also, what about scheduling another backlog review between regular DevMtgs focused on 6.0?
[15:38] <tdonohue> helix84: you mean on a different day of the week?
[15:39] <helix84> tdonohue: yes
[15:39] <tdonohue> (We also can use the backlog hour post-meeting to do 6.0 reviews as well)
[15:40] <tdonohue> I'd be fine with scheduling additional backlog mtgs, but I'd want to find a day/time that'd work well for others.
[15:40] <tdonohue> For me, it tends to be that Mondays and Thursdays are the most "open". I can find some time on other days, but those generally have the fewest meetings for me
[15:41] <tdonohue> Anyone else interested in this idea & willing to participate in additional PR review meetings?
[15:41] <mhwood> I will try.
[15:42] <tdonohue> So, we could do a 15UTC or 20UTC Thurs (tomorrow)... or a 15UTC or 20UTC Mon (23rd)
[15:43] <helix84> tdonohue: Let's bring it up in an email to -devel or -committers. I suggest making it regular until the feature freeze (currently Dec 11).
[15:44] <mhwood> I have a commitment at 1500 Thu and again Mon.
[15:44] <tdonohue> helix84: right, I was just hoping to narrow down some days of week / times ;)
[15:45] <tdonohue> But, it's really quiet here today, so perhaps we'll have to do that via email too
[15:45] <tdonohue> So, let's just jump into PRs. We'll concentrate on features/improvements...but if someone sees a bugfix of high importance, just let us know: https://github.com/DSpace/DSpace/pulls?q=is%3Aopen+is%3Apr+milestone%3A6.0
[15:45] <kompewter> [ Pull Requests · DSpace/DSpace · GitHub ] - https://github.com/DSpace/DSpace/pulls?q=is%3Aopen+is%3Apr+milestone%3A6.0
[15:45] <tdonohue> (we'll continue PR reviews for JIRA backlog today too, if folks can stick around)
[15:46] <tdonohue> As the bottom of the list is mostly merge conflicts (which we'll need resolved), I'm going to just start at the top
[15:46] <tdonohue> DSPR#1173
[15:46] <kompewter> [ https://github.com/DSpace/DSpace/pull/1173 ] - DS-2894:REST Call to return Optimized Hierarchy by terrywbrady
[15:47] <tdonohue> this is brand new...looks like hardy & peter have been pinged on it
[15:48] <mhwood> All new code.
[15:48] <tdonohue> I'm going to assume they will respond on this one, as they were interested in this (in IRC discussions). The idea/concept of this PR seems good to me though. It's more improvement than feature
[15:49] <helix84> I haven't had time to deeply look at this yet. It concerns me a little that there's a separate endpoint for something that we perhaps could already do using the expand parameter; and that it's bypassing regular authorizations (why not just log in as admin?)
[15:49] <tdonohue> helix84: yes, this config seems odd to me too: https://github.com/DSpace/DSpace/pull/1173/files#diff-a9f8bb21850852f7ed98e33220c0876cR10
[15:49] <kompewter> [ DS-2894:REST Call to return Optimized Hierarchy by terrywbrady · Pull Request #1173 · DSpace/DSpace · GitHub ] - https://github.com/DSpace/DSpace/pull/1173/files#diff-a9f8bb21850852f7ed98e33220c0876cR10
[15:49] <tdonohue> Not sure I like the idea of bypassing Auth in REST
[15:49] <KevinVdV> I need to run sadly, I’ll try to review what I can, until next week
[15:50] * KevinVdV (~kevin@85.234.195.109.static.edpnet.net) Quit (Quit: KevinVdV)
[15:50] <helix84> it looks very ad-hoc, but it's not my final judgement
[15:50] <mhwood> Nor I. We have too much that depends on ad-hoc switches rather than permissions already.
[15:51] <helix84> I'll copy this discussion into the PR. Perhaps we should move on?
[15:52] <tdonohue> yep, I added a comment to the ticket asking about the "bypassing auth" need
[15:52] <tdonohue> next up: DSPR#1171
[15:52] <kompewter> [ https://github.com/DSpace/DSpace/pull/1171 ] - DS-2796 Solr core auto-discovery (static cores) by helix84
[15:53] <tdonohue> oh wait, that's a code task
[15:53] <tdonohue> we'll skip this, sounds like it needs more work
[15:53] <helix84> yes
[15:53] <tdonohue> next up, DSPR#1166
[15:54] <kompewter> [ https://github.com/DSpace/DSpace/pull/1166 ] - DS-2888: JSPUI: Add language tags to submission edit metadata step. by pnbecker
[15:55] <helix84> The idea is good. I don't feel qualified to consider the implementation.
[15:55] <tdonohue> Sounds reasonable. I'm worried this might affect XMLUI and need to be changed there as well....as it changes a fair bit of shared configs/code
[15:55] <mhwood> That was my concern as well.
[15:56] <pbecker> hi
[15:56] * hpottinger (~hpottinge@mu-160086.dhcp.missouri.edu) has joined #duraspace
[15:57] <helix84> hi
[15:57] <tdonohue> I added a comment to 1166
[15:57] <pbecker> I only tested XMLUI briefly, but as far as I could tell, it did not affect XMLUI at all.
[15:57] <tdonohue> oh, hi pbecker. I didn't know you were here :)
[15:57] <pbecker> just came in. ;-)
[15:57] <pbecker> But I'm not able to stay long. We just started our big migration today. But I'll be here whenever you ping me...
[15:58] <hpottinger> early meetings always throw me for a loop, sorry to wander in at the end
[15:58] <helix84> Since we're now going by PRs, rather than Jira issues, we're not assigning issues to committers for review. We might want to start doing that (so far we've only been skipping them).
[15:58] <tdonohue> So, I think 1166 sounds good, but it might need more testing before we can think of merging. Anyone willing to test/try this one out? It also might be rather simple to port to XMLUI (in all honesty)
[15:59] <tdonohue> helix84: yes, it's easier to go at PRs..but I agree, we should assign volunteers where we can
[15:59] <mhwood> PRs can also be assigned.
[15:59] <helix84> just to set good example, I'll take this one :)
[15:59] <tdonohue> thanks helix84
[16:00] <pbecker> thanks helix84 :-)
[16:00] * tdonohue notes that we are at one hour. But, as the next hour is JIRA Backlog Review, I'm going to just continue the PR Review here in #duraspace. (If you need to leave, no worries)
[16:01] <tdonohue> Next feature/improvement: DSPR#1165
[16:01] <kompewter> [ https://github.com/DSpace/DSpace/pull/1165 ] - DS-2886: Making sort option for metadata browse indexes configurable by pnbecker
[16:01] <pbecker> I hope it is clear what we want to achieve with this pr.
[16:01] <tdonohue> This seems to reference the wrong ticket? DS-2886 doesn't look related?
[16:02] <kompewter> [ https://jira.duraspace.org/browse/DS-2886 ] - [DS-2886] REST API documentation data types are wrong - DuraSpace JIRA
[16:02] <tdonohue> What ticket does this go with?
[16:02] <pbecker> yes it does, sorry. I'm looking for the right one.
[16:02] <tdonohue> aha DS-2885
[16:02] <kompewter> [ https://jira.duraspace.org/browse/DS-2885 ] - [DS-2885] Making sort option for metadata browse indexes configurable - DuraSpace JIRA
[16:02] <pbecker> thanks tdonohue
[16:03] <pbecker> tdonohue: you're fast today, even fixed the PR. thanks!
[16:05] <tdonohue> hmm... remind me again, this BrowseIndex stuff is used by Discovery? Or is this just for Lucene?
[16:05] <tdonohue> s/Lucene/DB-Browse/
[16:05] <kompewter> tdonohue meant to say: hmm... remind me again, this BrowseIndex stuff is used by Discovery? Or is this just for DB-Browse?
[16:06] <pbecker> we use it with discovery in JSPUI (on a highly patched DSpace 4)
[16:06] <pbecker> works fine.
[16:06] <pbecker> we never used the old lucene/db browse
[16:06] <tdonohue> Ok. Yea, I'm now realizing that master has DB Browse removed, so...BrowseIndex must work with Discovery. :)
[16:07] <helix84> I configured additional indexes with this and it still works on 5.x
[16:07] <helix84> (I don't remember the details _how_ it works with Discovery, though)
[16:08] <tdonohue> This idea/concept sounds reasonable enough. I don't know the code in this area well... there may be some conflicts here with Commons Config PR (since this changes configs some), but nothing we cannot work around
[16:08] <pbecker> the only important thing is to run index-discovery -b after changing the browse configuration (don't forget that parameter!)
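
For anyone testing this PR, the rebuild pbecker refers to is the standard CLI call (run from the DSpace installation directory):

    # -b forces a full (re)build of the Discovery index
    [dspace]/bin/dspace index-discovery -b
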
[16:08] <tdonohue> Anyone willing to give 1165 a test run / trial?
[16:09] <pbecker> I did already. ;-)
[16:10] <tdonohue> Ok, well, I don't have objections to this. We'll just need to ensure the configuration change is documented, so that anyone using it knows to update their configs
[16:12] <pbecker> I didn't dare add documentation before having at least one +1. The comments in dspace.cfg are already changed by this PR.
[16:12] <tdonohue> (it's really quiet here... are folks still looking at this?)
[16:12] <pbecker> And yes, we'll need some documentation in the wiki.
[16:14] <tdonohue> I think with 1165, I don't have strong opinions on this. I'd still +1 it though, but it'd be nice to have a sanity check from another Committer (just to get two committers having tested this). Beyond that it looks fine to me
[16:15] <tdonohue> No one else has an opinion here on 1165? (or did everyone get pulled away on something more pressing) :)
[16:17] <mhwood> 1165 looks reasonable. I'd guess that the code is quite similar to that for the other index.
[16:19] <pbecker> The only thing that is a little bit odd is the order of the configuration keys.
[16:19] <pbecker> but I did it this way to be backward compatible, so no one has to change their existing config.
[16:19] <tdonohue> pbecker: yea, I'd prefer backwards compatibility here, even if the order of configs is a bit odd for now
[16:19] <hpottinger> re 1165, I like the idea, and I trust pbecker
[16:20] <pbecker> for the item index you define the default sort option first and then the order; for the metadata indexes it is the opposite (IIRC).
[16:20] <pbecker> tdonohue: me too.
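
For context, the stock dspace.cfg syntax that DS-2885 extends looks like this; the sort-option suffix on the metadata index in the last line is a sketch of the new, configurable part and its exact format should be checked against the PR:

    # item index: <name>:item:<default sort option>
    webui.browse.index.1 = dateissued:item:dateissued
    # metadata index: <name>:metadata:<fields>:<value type>
    webui.browse.index.2 = author:metadata:dc.contributor.*:text
    # DS-2885 (sketch): a metadata index with an explicit sort option appended
    webui.browse.index.3 = title:metadata:dc.title:title:dateissued
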
[16:21] <tdonohue> ok, so sounds like 1165 has general approval here to me
[16:21] <pbecker> great, so it just needs a tester.
[16:21] <pbecker> But we can leave this open for now, there are so many PRs to test in the next weeks.
[16:22] <tdonohue> yea, unfortunately I'm not gonna have time myself to test this (rapidly trying to get my UI Prototype entry done by Dec 4). But, may have time in later weeks
[16:22] <hpottinger> since 1165 is an improvement, it needs +2 before it can be merged, yes?
[16:23] <tdonohue> I'm going to move along now..testers welcome for 1165. I added a comment on what is left for 1165
[16:23] <pbecker> hpottinger: yes, but I would say I have tdonohue's and your vote if someone tested it. ;-)
[16:24] <tdonohue> hpottinger: yes, but it sounds like it'll get to +2 as soon as it's tested by someone else
[16:24] * pbecker goes back to his migration until someone pings him.
[16:24] <tdonohue> moving along, DSPR#1164
[16:24] <kompewter> [ https://github.com/DSpace/DSpace/pull/1164 ] - [DS-2883] Resurrect TabFileUsageEventListener by mwoodiupui
[16:24] <mhwood> I like this one. :-)
[16:25] <hpottinger> me too, it has my +1 already, and it's just a revert, right?
[16:25] <mhwood> Well, I did polish it up a bit.
[16:25] <tdonohue> "someone found a use for this"? What's the use?
[16:25] <hpottinger> OK, so a revert + polish
[16:25] <pbecker> mhwood: could you please add a better description to the ticket?
[16:26] <mhwood> The use is not relying on a cache (Solr) for permanent storage.
[16:26] <tdonohue> I'm ok with restoring it, provided we understand what the use case is
[16:26] <hpottinger> tdonohue: it could be the basis of a permanent usage statistics solution
[16:26] <mhwood> I will look over the ticket and try to describe it more fully.
[16:26] <tdonohue> mhwood: don't we already have Solr Statistics backup tools from aschweer though? How's this different?
[16:26] <roeland> with some added fields it may be a nice disaster recovery tool
[16:26] <hpottinger> I can chime in on the ticket, since Peter and I egged you on to do this
[16:27] <helix84> this one sounds good to me (haven't run it, though)
[16:27] <pbecker> now I get the idea and I like it!
[16:27] <hpottinger> roeland: yes... disaster recovery, or even handing off the job of stats analysis to something outside of DSpace altogether
[16:28] <mhwood> +1 using a real stat package.
[16:28] <tdonohue> Yes, more explanation on the ticket is warranted. It probably should also be (re-)Documented (it's not anywhere in our docs)
[16:29] <hpottinger> I'd prefer to document it if we choose to enable it by default
[16:29] <tdonohue> But once that's in place, it seems like it has the votes to be merged immediately
[16:29] <mhwood> The ticket notes that documentation is needed, and I'll work on that.
[16:30] <pbecker> +1 by code inspection, haven't tested it.
[16:30] <tdonohue> hpottinger: it needs some sort of documentation to let folks know it *exists*. Currently there's nothing in the Wiki Docs, and very minimal info in the Class comments itself.
[16:30] <pbecker> hpottinger: did you test it?
[16:30] <tdonohue> So, sounds like we can move along here, as this looks to just need docs/JIRA ticket cleanup
[16:30] <hpottinger> pbecker: I didn't, it's a revert + polish
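
For readers unfamiliar with the listener being resurrected, usage event listeners in DSpace follow roughly this shape (a from-memory sketch of the pattern, not the actual TabFileUsageEventListener source; class and method names should be checked against the PR):

    package org.dspace.usage;

    import org.dspace.services.model.Event;

    // Sketch: write one tab-separated line per view/download event,
    // instead of posting the event to the Solr statistics core.
    public class TabFileUsageEventListener extends AbstractUsageEventListener {

        @Override
        public void receiveEvent(Event event) {
            if (!(event instanceof UsageEvent)) {
                return; // only usage (view/download) events are of interest
            }
            UsageEvent ue = (UsageEvent) event;
            // e.g. timestamp, action, object type, object ID
            System.out.printf("%d\t%s\t%s\t%s%n",
                    System.currentTimeMillis(),
                    ue.getAction(),
                    ue.getObject().getType(),
                    ue.getObject().getID());
        }
    }
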
[16:31] <tdonohue> moving along to DSPR#1163
[16:31] <kompewter> [ https://github.com/DSpace/DSpace/pull/1163 ] - DS-2880: Pubmed integration into XMLUI submission by rradillen
[16:32] <tdonohue> oh, this is an example of the new Import API (which was mentioned briefly in our mtg)
[16:34] * peterdietz (uid52203@gateway/web/irccloud.com/x-jhtieupaxwtqqtvu) has joined #duraspace
[16:35] <tdonohue> So, until we hear more about the Import API (DS-2876) vs. BTE, this might need to be set aside for now. The concept sounds fine, but we need to understand the goals here with respect to BTE
[16:35] <kompewter> [ https://jira.duraspace.org/browse/DS-2876 ] - [DS-2876] Framework to better support metadata import from external sources - DuraSpace JIRA
[16:35] <mhwood> There is some discussion of that in one of the commits here, as a starting place.
[16:36] <tdonohue> Skipping the next PR (1162), as it's also related to 2876
[16:37] <tdonohue> mhwood: PR 1163 seems to include the README / code from DSPR#1160 (which is the one for 2876). The discussion of BTE in that README is a bit vague and doesn't fully explain why we need duplicative import frameworks
[16:37] <kompewter> [ https://github.com/DSpace/DSpace/pull/1160 ] - DS-2876 Framework for importing external metadata by jonas-atmire
[16:37] <tdonohue> next up, DSPR#1161
[16:37] <kompewter> [ https://github.com/DSpace/DSpace/pull/1161 ] - DS-2879: Adds configuration to suppress notifications on returned tasks. by pnbecker
[16:40] <tdonohue> This idea seems reasonable to me. Though I guess I'd wonder why folks wouldn't want to be notified about a returned task?
[16:40] <mhwood> What he said.
[16:40] <tdonohue> (i.e. the code seems fine. The use case is slightly vague)
[16:41] <pbecker> It was a highly requested feature here.
[16:41] <pbecker> tdonohue: what is unclear at the use case?
[16:41] <pbecker> I made it configurable because the default behavior was to send those mails.
[16:41] <mhwood> Did they say why it was highly requested?
[16:41] <pbecker> You even got a mail for a task you returned yourself. ;-)
[16:41] <tdonohue> pbecker: I guess I don't understand why it was "highly requested"? What made them want to *not* be notified if a task was unclaimed by someone else
[16:42] <pbecker> mhwood: I don't understand your question, sorry. My colleagues requested this again and again until they got it.
[16:42] <tdonohue> pbecker: that implies a different solution though...maybe you shouldn't get emails for tasks *you* return, but you should get emails for tasks returned by *others*
[16:43] <tdonohue> Again, the code is perfectly fine. It just seems odd to me to allow tasks to go back into a general "pool" without notifying *anyone*. I worry those tasks would just sit around in the pool waiting for someone to finally stumble on them
[16:43] <mhwood> I was just hoping that one of them said "we want this *because* the current behavior causes such-and-such a problem for us."
[16:44] <pbecker> If someone claims a task, it cannot be unclaimed by anyone else, for good reason. If someone claims a task and is then out of office, no one can work on the task. So we put tasks back in the pool if, for example, we are waiting for an answer to an email to the submitter, or for other reasons.
[16:44] <pbecker> So tasks are often put back into the pool without any need for a new notification.
[16:44] <pbecker> The people working on tasks here know which open tasks exist, and claim a task only to be sure that no one else changes the metadata at the same time they do.
[16:45] <pbecker> the current behavior creates a lot of emails they don't need. They want to know if there is a *new* task, in the sense of "a task that no one has ever looked at yet."
[16:46] <pbecker> does this explanation helps?
[16:46] <tdonohue> Ok, that's a bit clearer. I still do wonder if there's a different solution to turning these notifications off completely (system wide). But, this seems OK for now.
[16:46] <pbecker> s/helps/help
[16:46] <kompewter> pbecker meant to say: does this explanation help?
[16:46] <pbecker> imagine two people sharing one office.
[16:47] <tdonohue> pbecker: yea, that's a unique scenario in my mind. Often they are not all in the same office, but I can see how this would help :)
[16:47] <pbecker> :-)
[16:47] <mhwood> I think there are two personality types at work here. One likes to go and check things on his own schedule; the other wants to be notified without having to remember to go look for work.
[16:47] <pbecker> to be honest we have three offices with 6 people working on tasks. ;-) but every office has its own specific case (research data, theses, articles)
[16:47] <tdonohue> If you could paste in your more detailed explanation into the ticket, I'd appreciate it. Beyond that though, the code looks good to me +1
[16:48] <pbecker> tdonohue: will do.
[16:49] <mhwood> I have no actual objection. I do wonder if, in the longer term, we should not pull a lot of things like this out into a much richer, finer-grained set of curation tasks that can be triggered by workflow transitions *if attached*.
[16:49] <tdonohue> yea, I agree mhwood. I think our Workflow system might need an overhaul at some point
[16:50] <tdonohue> I added a +1 to the PR though. Seems small & obvious once the use case was explained
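
For reference, the switch DS-2879 adds is a single boolean in dspace.cfg; the key name below follows the ticket's wording and should be verified against the merged PR:

    # When true (the previous behaviour), pool members are e-mailed every
    # time a claimed task is returned to the workflow pool.
    workflow.notify.returned.tasks = false
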
[16:51] <tdonohue> moving along to peterdietz's big one... DSPR#1159
[16:51] <mhwood> Same here.
[16:51] <kompewter> [ https://github.com/DSpace/DSpace/pull/1159 ] - DS-79 Assetstore to support different implementations, including S3 by peterdietz
[16:52] <mhwood> Good to see this again. I wrote an XAM backend for Richard's original patch so long ago that we no longer have the hardware to test it with. :-(
[16:52] <pbecker> tdonohue: https://jira.duraspace.org/browse/DS-2879?focusedCommentId=46953&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-46953
[16:52] <tdonohue> ooh..nice...also includes a "bitstore-migrate" command to migrate your bitstreams between *any* assetstores
[16:52] <kompewter> [ https://jira.duraspace.org/browse/DS-2879 ] - [DS-2879] Make it configuarble if reviewers should be notified about tasks returned to the review pool - DuraSpace JIRA
[16:52] <kompewter> [ [DS-2879] Make it configuarble if reviewers should be notified about tasks returned to the review pool - DuraSpace JIRA ] - https://jira.duraspace.org/browse/DS-2879?focusedCommentId=46953&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-46953
[16:55] <tdonohue> I like what I see out of 1159. Obviously needs some testers
[16:56] <hpottinger> sorry, wandered off a second, re: workflow overhaul, there was a nice discussion session at OR15 in Indianapolis, led by Andrea Schweer, with lots of good input about what works and what needs improvement with workflows
[16:57] <mhwood> Well, the front-end will be tested by *everyone* as soon as it is in master, and S3 testers are likely to be scarce.
[16:57] <tdonohue> mhwood: right, I'm more concerned immediately about front-end, backwards compatibility.
[16:59] <hpottinger> re 1159, I think it might help if peterdietz could say whether this code is already in production?
[17:01] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) Quit (Remote host closed the connection)
[17:01] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) has joined #duraspace
[17:02] <tdonohue> I added a comment to 1159 of some other minor things to do around this PR. Overall it looks good to me though.
[17:02] <peterdietz> hi all
[17:02] <mhwood> I just line-commented one small change: a second (S3) store is configured by default.
[17:02] <peterdietz> Yes, we have the 5.x version of this in production for 3 clients
[17:03] <peterdietz> I'm going to spend time looking at Kevin's fixes to checksum, break that off, and hope to merge that in first
[17:04] <peterdietz> Bitstore Migrate could be useful to move from /dspace/assetstore1 to /dspace/assetstore2
[17:05] <peterdietz> This could use some testing with registering bitstream, in both local, and S3. It passes the unit test
[17:05] <peterdietz> SRB support is yanked out. Nobody responded to a call-for-users. Though searching the mailing list has shown some people asking "hey, who uses SRB"
[17:05] <peterdietz> ...one hit per year or so
[17:06] <peterdietz> oops, yes. S3 shouldn't be autowired
[17:06] <tdonohue> peterdietz: I think SRB should be removed. It's unsupported as-is. I also like the new "bitstore-migrate" CLI tool as a feature in itself
[17:07] <hpottinger> that "hey, who uses SRB" could be from devs looking at old docs at the request of management, and dutifully asking if SRB really exists before they attempt to implement it
[17:07] <mhwood> We needed it moved out of the stock config anyway.
[17:07] <peterdietz> bitstore-migrate -p, prints out your current number of bitstreams. store[1] == S3BitStore, which has 2 # of bitstreams.
[17:07] <peterdietz> I english speak not well
[17:08] <hpottinger> wat?
[17:08] <tdonohue> peterdietz: KevinVdV's notes about wiring this up via Spring (instead of "bitstore.*.class") are worth thinking about. It might be a ton easier to just ship with a Spring config for an "s3Bitstore" and a "dsBitstore"
[17:08] <peterdietz> 2 # of bitstream, reads awkward today
[17:08] <mhwood> Two pounds of bitstreams. Is that a lot?
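
A sketch of the migration CLI discussed above: the -p option is as peterdietz describes, while the source/destination flags are hypothetical placeholders (check the command's help output in the PR):

    # print each configured store and its bitstream count
    [dspace]/bin/dspace bitstore-migrate -p
    # hypothetical: move all bitstreams from store 0 to store 1
    [dspace]/bin/dspace bitstore-migrate -a 0 -b 1
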
[17:08] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) Quit (Quit: Leaving)
[17:08] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) has joined #duraspace
[17:09] <tdonohue> (It seems unlikely that anyone should/would ever change the values of "bitstore.*.class", which implies they could just be in Spring)
[17:09] <mhwood> Yeah, any place where we configure a classname probably ought to be done in Spring.
[17:10] <peterdietz> Well... one of the use cases I was allowing to continue was having multiple instances of an S3Bitstore, and multiple instances of a DSBitstore
[17:10] <peterdietz> Also, I'm tongue-in-cheek calling DSBitstore "Directory Scatter Bitstore"
[17:10] <mhwood> :-)
[17:11] <tdonohue> peterdietz: why can't that use-case be achieved in Spring?
[17:11] <peterdietz> yeah, probably should get on the services boat... though it might be awkward, since 900 other services are all factory, impl, dao
[17:11] <mhwood> Yes. Autowire a List of stores, each one configured.
[17:11] <mhwood> Eek, forget Auto.
[17:12] <hpottinger> mhwood: o.O autowiring assetstore...
[17:12] <mhwood> Fingers faster than brain. *Wire* a List of stores, explicitly.
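
A minimal sketch of the explicit wiring mhwood describes, with multiple store instances as peterdietz wants (bean and class names here are hypothetical, following the discussion rather than the actual PR):

    <!-- two local directory-scatter stores plus one S3 store -->
    <bean id="localStore1" class="org.dspace.storage.bitstore.DSBitStore">
      <property name="baseDir" value="/dspace/assetstore1"/>
    </bean>
    <bean id="localStore2" class="org.dspace.storage.bitstore.DSBitStore">
      <property name="baseDir" value="/dspace/assetstore2"/>
    </bean>
    <bean id="s3Store" class="org.dspace.storage.bitstore.S3BitStore">
      <property name="bucketName" value="my-dspace-assets"/>
    </bean>
    <!-- the storage service gets an explicit, ordered list of stores -->
    <bean id="bitstreamStorage"
          class="org.dspace.storage.bitstore.BitstreamStorageService">
      <property name="stores">
        <list>
          <ref bean="localStore1"/>
          <ref bean="localStore2"/>
          <ref bean="s3Store"/>
        </list>
      </property>
    </bean>
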
[17:13] <peterdietz> I'm not sure how widely adopted S3 will get, but if you're going to have your instances in AWS, or some other cloud that offers a simple storage service, then this is the way to go
[17:13] <tdonohue> peterdietz: yea, it'd be nice to move this slightly in the direction of Services. It seems that it shouldn't be *hard* to do... they seem like minor changes (unless I'm overlooking something). I'm sure KevinVdV (or others) could be conscripted to help if you need it.
[17:14] <tdonohue> peterdietz++ (agreed that S3 might not get "wide" adoption...but it's a good feature to have. And this PR is much more than just S3 support..it's a nice refactoring of the Storage layer + remove SRB + bitstream-migrate CLI + S3)
[17:14] <tdonohue> Overall, I'm very much +1 this feature. Just some minor cleanup would be nice
[17:14] <mhwood> Ditto.
[17:14] <hpottinger> I'm also +1 this feature
[17:15] <hpottinger> (though my institution won't ever put bitstreams "in the cloud")
[17:15] <mhwood> Writing an iRODS store might be a fun weekend project. If anyone is pining for SRB-like-ness.
[17:16] <tdonohue> As we are now well over our hour for PR Reviews, we'll stop there for today. I'll still be around if anyone needs me, but need to concentrate on other tasks for today :)
[17:16] <mhwood> I'd like to tinker with an HPSS store for low-use, large-sized stuff.
[17:17] <peterdietz> There was some group in the UK that I had a phone call with a few months ago. They sold some appliance that you add to your data center, which then takes care of replicating all of your content to their 100.000000% guarantee data center. I was thinking... you could put a web service on your storage center, and then write a DSpace implementation like this to connect
[17:22] <hpottinger> mhwood: low use large size stuff... how do you get it in and out?
[17:25] <mhwood> HPSS takes care of that. It's a giant tape robot, a small disk farm, and some boxes to shuffle stuff back/forth and grant access. I need to study the available protocols some more -- so far I've only used it via scp in some overnight cron jobs. But I think it should be possible to give reasonable performance for large unpopular objects.
[17:25] <mhwood> Or did you mean, in/out between DSpace and the end user?
[17:26] * hpottinger points at his nose, yup.
[17:26] <hpottinger> doesn't matter the size of the pipes in the house, but the size of the straw in the cup
[17:27] <hpottinger> HTTP is a tiny straw
[17:27] <mhwood> HTTP 2 to the rescue, one of these days....
[17:28] <hpottinger> ...
[17:28] <hpottinger> gotta go, catch you all later, if anyone figures out how to hide facets on the home page, e-mail me?
[17:29] * hpottinger (~hpottinge@mu-160086.dhcp.missouri.edu) Quit (Quit: Leaving, later taterz!)
[17:31] * terry-b (~chrome@75-165-57-218.tukw.qwest.net) has joined #duraspace
[19:03] * cknowles (uid98028@gateway/web/irccloud.com/x-gjjexcredeykexdm) Quit (Quit: Connection closed for inactivity)
[19:35] * hpottinger (~hpottinge@mu-160086.dhcp.missouri.edu) has joined #duraspace
[20:56] * pbecker (~pbecker@ubwstmapc098.ub.tu-berlin.de) Quit (Quit: Leaving)
[22:08] * mhwood (mwood@mhw.ulib.iupui.edu) has left #duraspace
[22:52] * tdonohue (~tdonohue@c-98-226-113-104.hsd1.il.comcast.net) has left #duraspace
[23:25] * hpottinger (~hpottinge@mu-160086.dhcp.missouri.edu) Quit (Quit: Leaving, later taterz!)
[23:50] * peterdietz (uid52203@gateway/web/irccloud.com/x-jhtieupaxwtqqtvu) Quit (Quit: Connection closed for inactivity)

These logs were automatically created by DuraLogBot on irc.freenode.net using the Java IRC LogBot.