Timestamps are in GMT/BST.
[1:48] * helix84 (~firstname.lastname@example.org) Quit (*.net *.split)
[1:54] * helix84 (~email@example.com) has joined #duraspace
[6:47] -sendak.freenode.net- *** Looking up your hostname...
[6:47] -sendak.freenode.net- *** Checking Ident
[6:47] -sendak.freenode.net- *** Found your hostname
[6:47] -sendak.freenode.net- *** No Ident response
[6:47] * DuraLogBot (~PircBot@ec2-107-22-210-74.compute-1.amazonaws.com) has joined #duraspace
[6:47] * Topic is '[Welcome to DuraSpace - This channel is logged - http://irclogs.duraspace.org/]'
[6:47] * Set by cwilper!ad579d86@gateway/web/freenode/ip.220.127.116.11 on Fri Oct 22 01:19:41 UTC 2010
[6:57] * kostasp (5e45fcdf@gateway/web/freenode/ip.18.104.22.168) has joined #duraspace
[7:28] * kostasp (5e45fcdf@gateway/web/freenode/ip.22.214.171.124) Quit (Quit: Page closed)
[9:22] * pbecker (~firstname.lastname@example.org) has joined #duraspace
[12:26] * tdonohue (~email@example.com) has joined #duraspace
[12:53] * peterdietz (~firstname.lastname@example.org) Quit (Quit: peterdietz)
[12:56] * mhwood (email@example.com) has joined #duraspace
[14:03] * kdweeks (~Adium@2001:468:c80:a103:b44d:7913:fbad:51c9) has joined #duraspace
[16:39] * hpottinger (~firstname.lastname@example.org) has joined #duraspace
[17:00] * misilot (~email@example.com) Quit (*.net *.split)
[17:00] * misilot (~firstname.lastname@example.org) has joined #duraspace
[18:31] * hamslaai1 (~email@example.com) has joined #duraspace
[19:13] * terryb (~firstname.lastname@example.org) has joined #duraspace
[19:36] * peterdietz (~email@example.com) has joined #duraspace
[19:48] * KevinVdV (~KevinVdV@d5153D041.access.telenet.be) has joined #duraspace
[19:50] * KevinVdV (~KevinVdV@d5153D041.access.telenet.be) Quit (Client Quit)
[19:51] * KevinVdV (~KevinVdV@d5153D041.access.telenet.be) has joined #duraspace
[20:00] * robint (5eaf588c@gateway/web/freenode/ip.126.96.36.199) has joined #duraspace
[20:01] <tdonohue> Hi all, it's time for our weekly DSpace Devel Mtg. Rough Agenda: https://wiki.duraspace.org/display/DSPACE/DevMtg+2014-09-17
[20:01] <kompewter> [ DevMtg 2014-09-17 - DSpace - DuraSpace Wiki ] - https://wiki.duraspace.org/display/DSPACE/DevMtg+2014-09-17
[20:01] <tdonohue> Again, we'll kick off with the usual reminders...
[20:02] <tdonohue> 1) Our 5.0 Feature PR deadline is rapidly approaching! Just about 2 weeks, on October 6.
[20:03] * mdiggory (~firstname.lastname@example.org) has joined #duraspace
[20:03] <tdonohue> 2) If we have anyone who would like to jump onto the 5.0 Release Team, there are still spots available...I know they'll need our help more and more in the coming weeks anyhow
[20:03] <hpottinger> please?
[20:03] <hpottinger> :-)
[20:04] <tdonohue> Basically, 5.0 is starting to "take final shape" in the coming weeks here. If you have anything still outstanding, be sure to get a PR created soon and/or talk with the 5.0 RT (and us) about what is still coming
[20:04] <peterdietz> Can I join?
[20:05] <tdonohue> join? you're leading! :)
[20:05] <tdonohue> in any case, the rest of today's meeting is set aside to talk about 5.0 Features (especially any with PRs already)
[20:05] <tdonohue> or any other 5.0 related topics/questions you may have
[20:05] <hpottinger> oh, yes, can *I* joint the RT, too?
[20:06] <hpottinger> s/joint/join/g
[20:06] <kompewter> hpottinger meant to say: oh, yes, can *I* join the RT, too?
[20:06] <robint> That makes 4 on the team now - Peter, Peter, Hardy, and Hardy
[20:06] <tdonohue> nice! we've got a great team shaping up now!
[20:06] <hpottinger> excellent, and we also have all the former RCs
[20:06] <mhwood> But each is only half-time.
[20:08] <tdonohue> So, all kidding aside, I figured it'd be good to revisit DS-1582 / DSPR#629 today, as it has the potential to affect other PRs coming in the pipeline
[20:08] <kompewter> [ https://github.com/DSpace/DSpace/pull/629 ] - Support Metadata On All DSpaceObjects by KevinVdV
[20:08] <kompewter> [ https://jira.duraspace.org/browse/DS-1582 ] - [DS-1582] All DSpaceObjects should have metadata support - DuraSpace JIRA
[20:09] <tdonohue> So, the first question is, has anyone been able to pull away some time to do a review/test? Looks like mhwood did add some minor code comments to the PR
[20:09] <tdonohue> And I actually just noticed that KevinVdV asked that we not merge the PR quite yet.
[20:10] <mhwood> He's addressed all of my noise about isOracle() too.
[20:10] <tdonohue> nice.
[20:11] <KevinVdV> Yeah the pull request requires a rebase & the unit tests for oai were still failing.
[20:11] <KevinVdV> But it is good to test
[20:11] <tdonohue> I'll admit, this is still on my "todo" to give it a test run (by trying to upgrade an existing DB to one with Metadata on all)...but I haven't gotten to it yet
[20:12] <robint> Likewise, just struggling to find testing time
[20:13] <tdonohue> Anyone else out there able to help try and give this a look in the next few days/week?
[20:13] <KevinVdV> Has anybody else tested it yet ? Because I really would like some other people confirming it doesn’t break anything
[20:14] <mhwood> Hm, actually there are still three instances of db.name.
[20:15] <tdonohue> I'm guessing that silence = nobody has tested it yet.... which means we really need testers
[20:15] <tdonohue> mhwood: yea, I see those too...there's still three instances that need to change to DatabaseManager.isOracle()
[20:15] <KevinVdV> Indeed but these changes I left because I didn’t alter those calls.
[20:16] <KevinVdV> Don’t want to invalidate the pull request from mwood
[20:16] <mhwood> Ah, I see, thanks!
[20:17] <mhwood> I'm trying to think how to actually test this properly.
[20:17] <KevinVdV> The metadata for all ?
[20:17] <mhwood> Yes.
[20:18] <KevinVdV> Well the best way is to set up a simple test instance, add some communities, collections, items, ....
[20:18] <KevinVdV> & migrate the data with the pull request & test the dspace
[20:18] <tdonohue> My idea for testing would be: (1) Take a copy of an existing DB, (2) Pull down the PR#629, (3) Upgrade/Migrate the existing DB, (4) Rebuild with PR in place....(5) Redeploy and test that everything still seems to work
[20:18] <mhwood> It's that "everything" that worries me.
[20:19] <mhwood> But, basically just a smoke test.
[20:19] <KevinVdV> A good start would be to check the current metadata
[20:19] <KevinVdV> IS everything still there
[20:19] <KevinVdV> Next create a new community/collection/item/bitstream/group/eperson
[20:19] <tdonohue> Yea...a smoke test...browse around, make sure the data is there. Try editing a few things, try creating a new Item, etc.
[20:19] <KevinVdV> & see if that still works
[20:20] <tdonohue> basically make sure nothing *obvious* breaks. It'll get an even more thorough test during Testathon, but we want to ensure that it doesn't massively break the codebase all over
[20:20] <hpottinger> if you're so moved to actually script those tests...
[20:20] <tdonohue> I would *love* it if we could script such tests ;)
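[Editor's note: the five manual steps tdonohue lists above could be sketched as a script along these lines. This is a hypothetical outline, not anything from the meeting; `DSPACE_SRC`, the branch name, and the database name are placeholders you would adjust locally, and the final deploy/click-through step is inherently manual.]

```shell
#!/bin/sh
# Hypothetical smoke-test outline for DSPR#629 (metadata on all DSpaceObjects).
set -e

DSPACE_SRC="${DSPACE_SRC:-$HOME/dspace-src}"   # placeholder path

run_smoke_test() {
    cd "$DSPACE_SRC"
    # (1) back up the existing database so the migration can be reverted
    pg_dump -Fc dspace > /tmp/dspace-before-pr629.dump
    # (2) fetch and check out the PR branch
    git fetch origin pull/629/head:pr-629-metadata
    git checkout pr-629-metadata
    # (3)+(4) rebuild against the existing DB so the migration is exercised
    mvn -q clean package
    # (5) redeploy, then browse/edit/create in the UI to verify nothing broke
    echo "deploy the new build, then smoke-test in the UI"
}

# Guard so the sketch is a harmless no-op where no DSpace checkout exists.
if [ -d "$DSPACE_SRC" ] && command -v mvn >/dev/null 2>&1; then
    run_smoke_test
    result="ran"
else
    echo "prerequisites missing; dry run only"
    result="skipped"
fi
```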
[20:21] <KevinVdV> I’m afraid nobody can guarantee that EVERYTHING is bugfree, but that is what a Testathon is for
[20:21] <mhwood> I will try to clear some time within a week to spin up a patched instance and try it.
[20:21] <robint> KevinVdv: agreed, this should just be a sanity check
[20:21] <tdonohue> thanks mhwood!
[20:22] <hpottinger> VagrantDSpace spins up PRs pretty quickly, and can populate itself from a pile of AIPs
[20:22] <tdonohue> I'll try to do the same...although I still have several tasks on my plate before this one that I need to try and get through quickly
[20:23] * hamslaai1 (~email@example.com) has left #duraspace
[20:23] <hpottinger> REST-API has a nice machine-readable interface...
[20:23] * hpottinger is sure he can goad *someone* into writing a better smoke test...
[20:25] <tdonohue> If we can find a few other folks to help give it a test run, that'd be great. Another option here would be to spin up a central test (perhaps even on demo.dspace.org if there's enough memory) which we can all bang on a bit
[20:26] <tdonohue> In any case, we have at least a few folks stepping forward, so hopefully we can get this reviewed/tested sooner rather than later
[20:27] <tdonohue> (and if I can get to it, I might see if demo.dspace.org has enough space for a test version up there to bang on...we'll see)
[20:27] <KevinVdV> Agreed, all feedback is welcome
[20:27] <tdonohue> Are there any other 5.0 PRs to discuss today?
[20:27] <KevinVdV> I will be finishing the pull request
[20:28] <tdonohue> thanks KevinVdV...definitely let us know when the final PR is up too...even though the existing should be testable
[20:28] <tdonohue> Oh, another thing to note in general... We are still having Travis CI issues (not sure if anyone has noticed much)
[20:29] <tdonohue> They popped back in place with DS-2133
[20:29] <kompewter> [ https://jira.duraspace.org/browse/DS-2133 ] - [DS-2133] Mirage2 and Travis CI build - DuraSpace JIRA
[20:29] <tdonohue> I've since tried to fix them with DSPR#648, but even that one fails randomly
[20:29] <kompewter> [ https://github.com/DSpace/DSpace/pull/648 ] - More Travis CI build fixes related to DS-2133 by tdonohue
[20:30] <robint> tdonohue: How are you initiating the Maven build? I mean, what Maven profiles?
[20:30] <tdonohue> From what I can tell, TravisCI doesn't like our final "assembly" (where the 'dspace' module actually creates the 'dspace-installer' directory).... I'm suspecting it uses too much memory or CPU, especially with the number of separate modules we are now combining/assembling
[20:31] <tdonohue> robint: Take a look at the changes here. https://github.com/DSpace/DSpace/pull/648/files#diff-354f30a63fb0907d4ad57269548329e3R29
[20:31] <kompewter> [ More Travis CI build fixes related to DS-2133 by tdonohue · Pull Request #648 · DSpace/DSpace · GitHub ] - https://github.com/DSpace/DSpace/pull/648/files#diff-354f30a63fb0907d4ad57269548329e3R29
[20:32] <robint> tdonohue: yes, looking at the PR, bit slow off the mark today :)
[20:32] <tdonohue> it's two steps...(1) "mvn install -Dmaven.test.skip=false" (which Always works..our Unit tests always succeed)
[20:33] <tdonohue> Followed by (2) "mvn package -Dmirage2.on=true -Dmirage2.deps.included=false" (from [src]/dspace/).... (This is what randomly fails on the building of /target/dspace-installer/)
[20:34] <tdonohue> I've tried a million different options to try and encourage Travis along. But, I'm starting to suspect that we may not be able to run the full "assembly" (of /target/dspace-installer) on Travis CI...instead, we can just do Unit Tests and individual module assemblies
[20:34] <mhwood> I wonder if there is a problem with m-assembly-p itself, perhaps addressed by a newer version.
[20:34] <tdonohue> we're on the latest version of m-assembly-p. Tried that too :)
[20:36] <tdonohue> I *haven't* tried reverting to older versions of m-assembly-p. But, yes, the problem here is EITHER m-assembly-p itself, OR m-assembly-p just uses a lot of memory/CPU when packing up 11 separate modules (which is what we have)
[20:36] <hpottinger> do you have a link for the fail message?
[20:37] <tdonohue> hpottinger: here's an example of a failed build...it just shows a "Killed" message after 'dspace-installer': https://travis-ci.org/DSpace/DSpace/builds/35161660#L6400
[20:37] <kompewter> [ Travis CI - Free Hosted Continuous Integration Platform for the Open Source Community ] - https://travis-ci.org/DSpace/DSpace/builds/35161660#L6400
[20:38] <tdonohue> Even if Travis CI will do a "retry" after that failed build....all the retries will also fail with "Killed" messages (at other random points)
[20:38] <tdonohue> According to TravisCI docs...that "Killed" message means you are exceeding the bounds of their memory (3GB): http://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
[20:38] <kompewter> [ Travis CI: Common Build Problems ] - http://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
[20:39] <tdonohue> The initial "Killed" message *always* occurs on the "dspace-installer" build step. Any additional ones just occur randomly elsewhere.
[20:40] <tdonohue> So...I mostly just wanted to make everyone aware of this issue. It's affecting all PRs. I'm planning to merge DSPR#648 (as it splits out the Unit Tests to show they succeed, even if the "dspace-installer" fails)...but even that PR won't solve the problem entirely
[20:40] <kompewter> [ https://github.com/DSpace/DSpace/pull/648 ] - More Travis CI build fixes related to DS-2133 by tdonohue
[20:41] <robint> So it is just coincidence that it is the Mirage2 build then? We were already sailing close to the wind in terms of memory?
[20:41] <tdonohue> robint: yes, I think it's coincidence that it started with Mirage2. I think Mirage2 was just "one more module to package up", and became the straw that broke the camel's back.
[20:42] <tdonohue> I've since tried disabling the Mirage2 build, and it still happens even without Mirage 2 being built/compiled. So, I'm 80-90% certain that it's just related to m-assembly-p and our building of 'dspace-installer'
[20:43] <tdonohue> But, none of this is very easy to "test/debug" in Travis CI...so there's always the chance I've overlooked something
[20:43] <hpottinger> I just ran du -sh . in my working copy, it weighs in at 3.5GB
[20:43] <tdonohue> it's not 3GB of space....it's 3GB of memory (to do the build/test)
[20:44] <tdonohue> According to TravisCI docs if your build/test exceeds 3GB of memory, it will just be "Killed". But I'm not sure if there's other ways to see that same "Killed" message, or if it *always* means you've exceeded 3GB of memory
[20:45] <robint> Any other alternative CI sites out there?
[20:46] <tdonohue> None that are free and integrate as awesomely with GitHub :) Honestly, I love Travis CI for its GitHub integration, but we definitely seem to be breaking the bounds of it
[20:47] <tdonohue> but, if anyone else comes up with something, I'm honestly all ears. Until then, I may end up going as far as *disabling* the final "dspace-installer" step, and simply running Unit Tests & individual module builds in Travis CI
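[Editor's note: the fallback tdonohue describes, running unit tests and per-module builds on Travis while skipping the final assembly, might look roughly like this `.travis.yml` fragment. This is a hypothetical sketch in the Travis configuration style of that era, not the project's actual file.]

```yaml
# Hypothetical .travis.yml sketch: keep unit tests and module builds,
# drop the memory-hungry dspace-installer assembly step.
language: java
jdk: oraclejdk7
install: true
script:
  - mvn install -Dmaven.test.skip=false
# (the "cd dspace && mvn package ..." assembly step is intentionally omitted)
```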
[20:47] <robint> For testing purposes could we try dropping some modules eg LNI, to see if it then builds?
[20:48] <robint> I realise this is dependent on finding time
[20:48] <tdonohue> yea, we could disable some modules from building ("-P !dspace-lni" does that)...and see what happens
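[Editor's note: a small caveat on the `-P !dspace-lni` syntax tdonohue quotes: in interactive shells the `!` can trigger history expansion, so it is commonly quoted. The command below is only echoed, not executed, since it needs a DSpace checkout.]

```shell
#!/bin/sh
# Sketch: skipping the LNI module via profile negation.
# Quote the "!" so interactive shells don't history-expand it.
cmd="mvn package -P '!dspace-lni'"
echo "$cmd"
```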
[20:48] <peterdietz> we can shrink DSpace. Major refactoring in these last few weeks, Remove everything except for dspace-api, dspace-services, and dspace-rest. Kick out the UI's, and refactor OAI/sword to live within one of the other modules.
[20:50] <tdonohue> haha...Good luck with that ;) In all seriousness though, I do suspect longer term we may wish to avoid our massive proliferation of modules... It'd be nice to start to merge some. As a basic example even, could dspace-api/dspace-services/dspace-rest just become ONE module (which builds a JAR and a WAR for REST)
[20:51] <tdonohue> There may be some situations where separate modules are warranted...but the concept of "lets just throw in a new module" may need to be rethought if we want to be able to keep our build/test process more reasonable
[20:51] <robint> We could drop one of the UI's!
[20:51] <robint> :)
[20:52] <hpottinger> notice: robint is joking, do not be alarmed
[20:52] <mhwood> Maven does not like producing more than one artifact per project.
[20:52] <hpottinger> I wonder if we might be able to ask the Travis folks for help?
[20:53] <hpottinger> they seem to want to be helpful for open source tools
[20:53] <tdonohue> In any case, I'll keep tinkering around (when I find time), to see if I can find a way to fix Travis issues.
[20:53] <hpottinger> maybe we can get special dispensation to let our builds cross the line?
[20:53] <hpottinger> because we're just really friendly folks?
[20:54] <tdonohue> hpottinger: yea, I probably should put in a ticket to see if they can give more feedback on what "Killed" means. But, I doubt we'll get special dispensation, as I've seen other tickets where their response was "sorry, our limit is 3GB...you seem to be going over that"
[20:55] <tdonohue> But, I'll log a ticket with them to see what they say. Just need to spin up another breaking build or two as good examples.
[20:55] <hpottinger> I wonder if we capped Maven's memory at, say, 3GB if Maven could "figure it out?"
[20:55] <tdonohue> I've tried setting MAVEN_OPTS=-Xmx1024M ... and even 2GB...doesn't seem to matter
[20:55] <hpottinger> that's odd
[20:56] <KevinVdV> I need to run, until next time !
[20:56] <mhwood> Assembly doesn't fork a separate process, does it?
[20:56] <tdonohue> It's unclear though whether m-assembly-p obeys those memory limits or not
[20:57] <tdonohue> mhwood: dunno. I *do* know that m-surefire-p forks a separate process (and therefore ignores MAVEN_OPTS)...but I don't know about m-assembly-p
[20:57] <hpottinger> http://stackoverflow.com/questions/23322984/how-to-fix-maven-permgen-out-of-memory-error-on-travis-ci
[20:57] <kompewter> [ java - How to fix Maven PermGen out of memory error on Travis-ci? - Stack Overflow ] - http://stackoverflow.com/questions/23322984/how-to-fix-maven-permgen-out-of-memory-error-on-travis-ci
[20:58] <hpottinger> looks like you can specify memory usage for tests
[20:59] <tdonohue> hpottinger: yep, but tests aren't the problem. They always succeed. Memory failure happens on m-assembly-p step.
[20:59] <robint> Likewise I had to specifically allocate extra memory for the Javadoc step
[20:59] <robint> It didn't care about MAVEN_OPTS
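[Editor's note: the two memory knobs discussed above can be illustrated as below. `MAVEN_OPTS` governs the main Maven JVM; forked processes such as Surefire test JVMs ignore it and take their own `argLine` instead (robint's Javadoc experience is the same pattern). Whether maven-assembly-plugin forks is left open in the discussion, so whether `MAVEN_OPTS` reaches it is an assumption to verify, not a fact. The flag values are placeholders.]

```shell
#!/bin/sh
# Sketch of the two memory knobs discussed in the meeting.
# MAVEN_OPTS: heap for the main Maven JVM (placeholder values).
export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m"
# argLine: heap for forked Surefire test JVMs, which ignore MAVEN_OPTS.
SUREFIRE_ARGS="-DargLine=-Xmx512m"
echo "MAVEN_OPTS=$MAVEN_OPTS"
echo "would run: mvn install $SUREFIRE_ARGS"
```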
[21:00] <tdonohue> In any case...we're nearly out of time here. I just wanted to let you all know that Travis CI is still having issues with our build. I'll still tinker with it a bit, and likely also reach out to Travis staff too.
[21:00] <mhwood> 25-30 Issues in Needs Code Review state.
[21:00] <tdonohue> Until then, the only thing I can recommend is to "restart" a build if you see a "Killed" message on your PRs. Sometimes you need to rerun it 2-3 times before it'll finally succeed.
[21:01] * KevinVdV (~KevinVdV@d5153D041.access.telenet.be) Quit (Ping timeout: 255 seconds)
[21:01] <tdonohue> mhwood: Good point. Next week perhaps we can just start to go down the list of any tickets which are "scheduled for 5.0 and in 'Needs Code Review'"....we have plenty to still review
[21:02] <robint> Got to run. Cheers all
[21:03] * robint (5eaf588c@gateway/web/freenode/ip.188.8.131.52) Quit (Quit: Page closed)
[21:03] <tdonohue> yes, since we are now overtime, we'll close up the meeting for today.
[21:04] <tdonohue> I'll still be around for a bit, as needed. Sorry for stealing a good portion of the meeting talking about Travis CI (but it's stolen a bit of my time over the last week, and just wanted to make sure you all were aware it's still not working right)
[21:09] * mhwood (firstname.lastname@example.org) has left #duraspace
[21:11] * terryb (~email@example.com) Quit (Ping timeout: 272 seconds)
[21:16] * hpottinger (~firstname.lastname@example.org) has left #duraspace
[22:13] * tdonohue (~email@example.com) has left #duraspace
These logs were automatically created by DuraLogBot on irc.freenode.net using the Java IRC LogBot.