#duraspace IRC Log

IRC Log for 2010-09-22

Timestamps are in GMT/BST.

[0:52] * ksclarke (~kevin@adsl-235-49-191.clt.bellsouth.net) has joined #duraspace
[5:41] * Tonny_DK (~thl@130.226.36.117) has joined #duraspace
[5:43] * Tonny_DK (~thl@130.226.36.117) Quit (Client Quit)
[5:45] * Tonny_DK (~thl@130.226.36.117) has joined #duraspace
[6:45] * DuraLogBot (~PircBot@duraspace.org) has joined #duraspace
[6:45] * Topic is 'Welcome to DuraSpace - This channel is logged - http://duraspace.org/irclogs/'
[6:45] * Set by cwilper on Tue Jun 30 20:32:05 UTC 2009
[9:07] * grahamtriggs (~graham@62.189.56.2) has joined #duraspace
[12:21] * mhwood (~mhwood@2001:18e8:3:171:218:8bff:fe2a:56a4) has joined #duraspace
[13:25] * Tonny_DK (~thl@130.226.36.117) Quit (Quit: Leaving.)
[13:25] * ksclarke (~kevin@adsl-235-49-191.clt.bellsouth.net) Quit (Ping timeout: 240 seconds)
[13:32] * tdonohue (~tdonohue@c-98-228-50-55.hsd1.il.comcast.net) has joined #duraspace
[13:36] * ksclarke (~kevin@adsl-235-49-191.clt.bellsouth.net) has joined #duraspace
[15:36] * scottatm (~scottatm@adhcp136.evans.tamu.edu) Quit (Ping timeout: 276 seconds)
[16:04] * grahamtriggs (~graham@62.189.56.2) Quit (Quit: Leaving.)
[18:52] * grahamtriggs (~grahamtri@cpc2-stev6-2-0-cust333.9-2.cable.virginmedia.com) has joined #duraspace
[18:57] * robint (5229fd08@gateway/web/freenode/ip.82.41.253.8) has joined #duraspace
[19:15] * robint (5229fd08@gateway/web/freenode/ip.82.41.253.8) Quit (Ping timeout: 252 seconds)
[19:56] * cccharles (~user@131.104.62.55) has joined #duraspace
[19:57] <tdonohue> Hi all -- will be starting the DSpace Devel mtg in a few minutes. Today's agenda: https://wiki.duraspace.org/display/DSPACE/DevMtg+2010-09-22
[19:58] * hardy_pottinger (80ce8627@gateway/web/freenode/ip.128.206.134.39) has joined #duraspace
[19:59] * cbeer (~chris_bee@198.147.175.203) has left #duraspace
[19:59] * robint (5229fd08@gateway/web/freenode/ip.82.41.253.8) has joined #duraspace
[20:00] <tdonohue> For those just arriving, here's today's agenda: https://wiki.duraspace.org/display/DSPACE/DevMtg+2010-09-22 We'll start off with some time reviewing JIRA issues
[20:01] <tdonohue> Here's a list of recent issues (we'll be starting our review at DS-616): http://jira.dspace.org/jira/secure/IssueNavigator.jspa?reset=true&&pid=10020&resolution=-1&created%3Aprevious=-18w&assigneeSelect=&sorter/field=created&sorter/order=ASC
[20:01] <tdonohue> Make it possible to guarantee config file values are filled thru the ConfigurationManager API : http://jira.dspace.org/jira/browse/DS-616
[20:01] <mhwood> Looks like 616 was discussed. Needs more?
[20:01] <tdonohue> whoops -- you are right mhwood. My mistake :)
[20:01] <tdonohue> So, we'll start with the next one
[20:02] <tdonohue> Recommended versions of prerequisites becoming outdated : http://jira.dspace.org/jira/browse/DS-618
[20:02] <tdonohue> DS-618 is already "in progress" for 1.7.0 I know. any other comments on it?
[20:02] <mhwood> What criteria do we want to use in making recommendations?
[20:03] <PeterDietz> +1 Oracle Java 6
[20:03] <tdonohue> I'd say we obviously want to avoid recommending anything that is no longer supported
[20:03] <PeterDietz> jk, I would say the latest should be part of our recommendations
[20:04] <tdonohue> so, we need someone to do a quick review of all our recommendations, and make a suggestion (which we can then vote on or discuss)
[20:04] <mhwood> Some distros *never* have the latest. Testing takes time.
[20:05] <tdonohue> I wasn't recommending we say the latest -- we just need to not be recommending outdated, obsolete versions of software
[20:05] <mhwood> Since I brought it up, I can write something up on the Jira ticket for discussion/voting.
[20:05] <tdonohue> thanks mhwood
[20:06] <tdonohue> assign DS-618 to mhwood -- we will re-review once some suggestions are made
[20:06] <tdonohue> Item exports not deleted on deletion of eperson : http://jira.dspace.org/jira/browse/DS-619
[20:06] * mdiggory (~mdiggory@ip72-199-217-116.sd.sd.cox.net) has joined #duraspace
[20:07] <tdonohue> DS-619 comes with a patch (that's always a good thing) -- any volunteers to test/review & commit?
[20:07] <robint> ok, i will take it
[20:08] <tdonohue> assign DS-619 to robint for testing/reviewing etc
[20:08] <tdonohue> Exceed maximum while uploading files got the user stuck should lead to a friendly error page : http://jira.dspace.org/jira/browse/DS-620
[20:10] <PeterDietz> The bad answer is that the admin will be the person uploading large bitstreams, and should know the limit
[20:10] <mhwood> Why would the admin necessarily be involved, or even aware of nascent submissions?
[20:11] <mhwood> This should be fixed, I just don't know how at the moment.
[20:11] <tdonohue> Yea -- might need more review from someone working with JSPUI -- not sure how easy or hard this is for JSPUI specifically
[20:11] <grahamtriggs> Unless you've set the limit stupidly small, users are more likely to get bored, or kicked off by a networking device, than ever see an error message
[20:12] <PeterDietz> if it's simple, in that dspace internals does a check on a file that has come in over the limit, then the resolution to this doesn't seem too difficult; just displaying the check that java found in a friendly jsp view doesn't seem too bad
[20:12] <tdonohue> any volunteers to look at this & make a suggestion, or should we just leave DS-620 open & unassigned for now?
[20:13] <PeterDietz> If thats the case, then I can do DS-620
[20:14] <tdonohue> Ok -- Assign DS-620 to PeterDietz for review & suggestions.
[20:14] <kshepherd> hi all
[20:14] <tdonohue> Export cleanup : http://jira.dspace.org/jira/browse/DS-621
[20:14] <tdonohue> hmm.. DS-621 is related to DS-619 (though they aren't linked in JIRA right now)
[20:15] <kshepherd> same issue basically..
[20:15] <tdonohue> thoughts? Should we really be cleaning up exports via a cron job (thus making DS-619 obsolete before it's even committed)?
[20:15] <mhwood> Yes
[20:15] <robint> Yes, I wish I hadn't volunteered earlier now, doh ! assign to me.
[20:15] <mhwood> Oops, yes they seem related.
[20:16] <grahamtriggs> 619 specifically cleans up when the eperson is removed, but 621 is the general 'do we really want this stuff hanging around even when the eperson remains?'
[20:16] <tdonohue> Right, but if we implement DS-621, we don't need DS-619, correct?
[20:16] <mhwood> OTOH if 621 is done by cron then it can handle 619 as well?
[20:17] <tdonohue> robint -- are you volunteering to implement DS-621 instead of DS-619? (just want to clarify)
[20:17] <grahamtriggs> depends... the fix for 621 may depend on the eperson still existing.
[20:17] <mdiggory> why not just cache the exports in the temp directory and let a stereotypical tmp cleanup manage them?
[20:17] <robint> I'll review and come back with a suggestion. Is that ok?
[20:18] <grahamtriggs> don't we need to guarantee that the exports will be available for a certain period?
[20:18] <tdonohue> robint -- sounds good to me
[20:18] <tdonohue> grahamtriggs -- yes, but we probably should just let institutions decide on that period, and schedule their cron job appropriately
[20:18] <robint> mdiggory: Thats topical for me anyway as that is what Sword does.
[20:19] <tdonohue> ok -- assign DS-621 to robint for review/suggestions. This may or may not make DS-619 unnecessary
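The cron-driven cleanup discussed for DS-621 could be sketched roughly as follows. This is an illustrative sketch only, not DSpace's actual export code: the class name, directory argument, and retention policy are all hypothetical, and as tdonohue notes above, the retention period would be each institution's own choice.

```java
import java.io.File;

// Hypothetical sketch of a cron-style export cleanup (DS-621 discussion):
// delete export archives older than a locally configured retention period.
public class ExportCleaner {

    /** Delete regular files in dir older than maxAgeMillis; returns how many were deleted. */
    public static int cleanExports(File dir, long maxAgeMillis) {
        int deleted = 0;
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        File[] files = dir.listFiles();
        if (files == null) {
            return 0; // not a directory, or an I/O error listing it
        }
        for (File f : files) {
            // only touch plain files, and only those last modified before the cutoff
            if (f.isFile() && f.lastModified() < cutoff && f.delete()) {
                deleted++;
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        if (args.length < 2) {
            System.out.println("usage: ExportCleaner <dir> <maxAgeMillis>");
            return;
        }
        // e.g. from cron: java ExportCleaner /dspace/exports 604800000   (7 days)
        int n = cleanExports(new File(args[0]), Long.parseLong(args[1]));
        System.out.println("Deleted " + n + " stale export(s)");
    }
}
```

Scheduling this via cron, as suggested, would also cover the DS-619 case (exports left behind by deleted epersons), since stale files are removed regardless of who created them.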
[20:19] <tdonohue> text direction in templates : http://jira.dspace.org/jira/browse/DS-622
[20:20] <mhwood> Does Java language support not already handle that? (Haven't dealt with it yet.)
[20:20] <mdiggory> text direction is something that should be theming-driven in xmlui
[20:21] <tdonohue> No idea -- never dealt with this either
[20:21] <mdiggory> meaning, its something you place in your html/css
[20:21] <mhwood> Why in the theme? Sometimes a block of text combines RTL and LTR text, as in English quoting Arabic or v.v.
[20:22] <mdiggory> right, but in that case its still html centric
[20:22] <PeterDietz> it also modifies the site layout somewhat, as arabic/hebrew sites have the navigation all on the right hand side
[20:22] <mdiggory> why in the language file... it's language and fragment specific
[20:23] <mdiggory> I.E. if you create html descriptions that are in mixed languages, you'd encode the html with rtl and ltr attributes
[20:24] <tdonohue> so, mdiggory, do you want to respond to this & just close it immediately? The suggestion from the reporter seems odd to me and it seems like you have a good idea of the alternative
[20:24] <mdiggory> if the page in general were in Arabic vs English... then you would want to write that logic into the theming template so that when it detected that language, the proper direction was also added as an attribute to the html
[20:25] <tdonohue> is this a candidate to just close immediately? thoughts?
[20:26] <mdiggory> I wouldn't close it...
[20:26] <mdiggory> because directionality is important.
[20:26] <kshepherd> here's the talk from the folks who did the original RTL work for the Persian language at OR09, if it helps: http://smartech.gatech.edu/handle/1853/28464
[20:26] <mhwood> I think we want to make certain that we support all scripts properly.
[20:26] <kshepherd> i specifically remember they talked about phrases with both LTR and RTL
[20:26] <tdonohue> kshepherd -- aha -- thanks!
[20:27] <mdiggory> yea, but such phrases should be properly coded (in html) by the author.
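For reference, Java's standard library already implements the Unicode bidirectional algorithm in java.text.Bidi, so mixed-direction phrases of the kind discussed here can at least be detected programmatically. The sketch below is purely illustrative and is not something DSpace itself does:

```java
import java.text.Bidi;

// Illustrative only: detect whether a phrase mixes LTR and RTL runs --
// the case discussed above, e.g. an English title displayed under an RTL locale.
public class DirectionCheck {

    /** True if the text contains both left-to-right and right-to-left runs. */
    public static boolean isMixedDirection(String text) {
        // base direction falls back to LTR when the first strong character is neutral
        Bidi bidi = new Bidi(text, Bidi.DIRECTION_DEFAULT_LEFT_TO_RIGHT);
        return bidi.isMixed();
    }

    public static void main(String[] args) {
        // an English label followed by an Arabic word ("title: عنوان")
        System.out.println(isMixedDirection("title: \u0639\u0646\u0648\u0627\u0646"));
    }
}
```

A theme could use a check like this to decide when to emit dir="rtl"/dir="ltr" attributes in the generated HTML, which is essentially the HTML-centric approach mdiggory describes.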
[20:28] <mdiggory> I'm trying not to bite... but suppose I'm someone who knows about this stuff...
[20:28] <mdiggory> I'll respond
[20:29] <kshepherd> ok, i'm just thinking of common cases like title: "the title", where the field is displayed in RTL because the locale is Persian, but "
[20:29] <tdonohue> ok -- DS-622 : mdiggory will respond. Leave issue open & add link to Persian Language talk from OR09
[20:29] <kshepherd> the title" is the english version of the title being viewed, or so on
[20:29] <tdonohue> (One last one) multi-calendar : http://jira.dspace.org/jira/browse/DS-623
[20:30] <tdonohue> very broad request -- support for all calendars?
[20:30] * sandsfish (~sandsfish@dhcp-18-111-13-229.dyn.mit.edu) has joined #duraspace
[20:31] <kshepherd> yeah.. actually, this was another thing the EIAH customised
[20:31] <kshepherd> (persian calendar)
[20:31] <mhwood> A representative range of contemporary calendars, anyway. I wish the ticket gave examples of where this is not happening.
[20:31] <tdonohue> aha -- is it in the same talk then kshepherd? (you have a good memory!)
[20:31] <mhwood> It is.
[20:31] <kshepherd> yep, the PDF covers it too
[20:32] <mdiggory> URI: http://purl.org/dc/elements/1.1/date
[20:32] <mdiggory> Label: Date
[20:32] <mdiggory> Definition: A point or period of time associated with an event in the lifecycle of the resource.
[20:32] <mdiggory> Comment: Date may be used to express temporal information at any level of granularity. Recommended best practice is to use an encoding scheme, such as the W3CDTF profile of ISO 8601 [W3CDTF].
[20:32] <mdiggory> References: [W3CDTF] http://www.w3.org/TR/NOTE-datetime
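The W3CDTF profile of ISO 8601 that mdiggory quotes can be produced with plain Java. This is an illustrative sketch of the second-granularity form only (W3CDTF also allows coarser granularities such as YYYY or YYYY-MM), and is not necessarily how DSpace's own date handling works; converting to non-Gregorian calendars such as the Persian one would need an extra library (Java SE of that era had no Persian calendar), which is a separate problem from the encoding itself.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Illustrative W3CDTF (ISO 8601 profile) formatting, as Dublin Core recommends
// for dc.date; not DSpace's actual implementation.
public class W3CDTF {

    /** Format a date at second granularity in UTC, e.g. 2010-09-22T20:32:05Z. */
    public static String format(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // W3CDTF uses the 'Z' UTC designator
        return fmt.format(date);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date(0L))); // epoch -> 1970-01-01T00:00:00Z
    }
}
```

Storing dates in one canonical encoding like this, and converting only at ingest and display time, matches the approach PeterDietz describes below.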
[20:32] <tdonohue> ok -- DS-623. Send a similar response as DS-622. Add link to OR09 talk which also covered persian calendar. (volunteer to respond?)
[20:32] <PeterDietz> I remember helping someone in #dspace who had an institutional mandate to store items in roman, and in their local system, and items would be input in one or the other date system. And they wanted it all to be output in both roman and local time.
[20:32] <mdiggory> can't much deviate from that without examples
[20:33] <PeterDietz> My idea to them was to store it all in the DB as roman, and convert in ingest, and on output
[20:34] <tdonohue> Ok -- we'll stop there. I can always paste in our comments to DS-623 & send along that OR09 link
[20:34] <tdonohue> shall we move on to DSpace 1.7 updates?
[20:35] <PeterDietz> This is a good half way point. Any hot issues regarding 1.7?
[20:35] * keithg (~keith-noa@lib-kgilbertson.library.gatech.edu) has joined #duraspace
[20:35] <tdonohue> (note: just 4 weeks till 1.7 feature freeze)
[20:36] * kshepherd needs to get a move-on with virtual sets/collections but is suffering a lack of time to do it in!
[20:36] <PeterDietz> So far 1.7 has been the time-driven features / fixes / etc. As far as arch cleanup, I don't have much input for that
[20:36] <robint> I need to speak to Richard Jones about Sword Client stuff. Should have more to report next week.
[20:36] <tdonohue> kshepherd -- you can always ask for help or see if anyone else can volunteer to take over
[20:37] <sandsfish> I'm patching our 1.6.0 instance to avoid the White-Screen issue. I'm also following up on the actual Cocoon bug that is still open in the Cocoon JIRA to see if they'll actually cut a new version with that fix in it before we put out 1.7.
[20:38] <sandsfish> mdiggory's patch for this issue in the JIRA ticket patches cocoon-servlet-service-impl but it's the 1.1.1 version and current DSpace uses 1.2 so I'm redoing that patch now.
[20:38] <tdonohue> sandsfish -- glad you were finally able to track that one down!
[20:38] <sandsfish> But I did have a question. Didn't the patch to Cocoon end up causing other issues in the end?
[20:38] <mhwood> I think it uncovered a worse problem.
[20:38] <mdiggory> hey whats this?
[20:39] <kshepherd> speaking of patches to Cocoon... graham/anyone else: did you have contributions to submit re: the memory leaks and so on mentioned in the "tomcat reporting memory leak" dspace-tech thread?
[20:39] <kshepherd> i confess i'm not enough of an xmlui/cocoon expert to understand most of those problems
[20:39] <mdiggory> sandsfish: is this the one I published for us
[20:39] <sandsfish> yes, you named it 1.1.1-DS-SNAPSHOT
[20:40] <sandsfish> i can dig up the link
[20:40] <sandsfish> http://maven.dspace.org/snapshot/org/dspace/cocoon/cocoon-servlet-service-impl/1.1.1-DS1-SNAPSHOT/
[20:40] <mdiggory> Yes, and I did this because the release 1.2 version completely breaks the cocoon bundle behavior
[20:40] <sandsfish> So #1, I want to make sure the patch is just synchronizing that LinkedList, and #2, what was the worse problem, if any.
[20:41] <mdiggory> what I just said
[20:41] <sandsfish> (Also, in addition to this fix, I will patch DSpaceCocoonServletFilter in the same patch to report errors better.)
[20:41] <grahamtriggs> kshepherd: yes, I have some things to submit - but I also haven't yet completely 'cleaned' xmlui, so it might be that what's hidden is also in Cocoon
[20:41] <mdiggory> sorry bundle = block
[20:41] <sandsfish> hm, ok.
[20:42] <mdiggory> With 1.2, Cocoon Blocks (which we utilize) cannot be used / deployed
[20:42] <sandsfish> it wasn't clear on the ticket if there were still other side-effects from this patching.
[20:42] <sandsfish> http://jira.dspace.org/jira/browse/DS-253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
[20:43] <mdiggory> the formal release was here
[20:43] <mdiggory> http://maven.dspace.org/release/org/dspace/dependencies/cocoon/dspace-cocoon-servlet-service-impl/
[20:43] <sandsfish> ok, well if there are no other serious changes between 1.1.1 and 1.2, it would seem that it would be ok to use that regressed version.
[20:43] <tdonohue> mdiggory & sandsfish -- is this something you two can work together on (it sounds like you both know part of the story and need to learn the other part?)
[20:44] <sandsfish> sounds like it. will confer offline and maybe ask other questions if we need community input.
[20:44] <mdiggory> can you point me to that 1.1.1
[20:45] <sandsfish> http://svn.apache.org/repos/asf/cocoon/tags/cocoon-servlet-service/cocoon-servlet-service-impl/cocoon-servlet-service-impl-1.1.0/ (perhaps you incremented the last digit for your change)
[20:45] <mdiggory> yes, I did...
[20:45] <mdiggory> that happens in release
[20:46] <sandsfish> Google Scholar functionality is being tested here at MIT, and we're adding some logic to select a representative PDF for the metadata (Google would rather not bother if they don't have a PDF to point to). I will share with the community as soon as I can.
[20:46] <mdiggory> which threw the numbering off
[20:46] <mdiggory> next should have been 1.0.2
[20:47] <tdonohue> FYI around 1.7.0 AIP work -- I'll be "tagging" a version of this code later this week for the DuraCloud Pilot. So the AIP export/import will get some heavy testing from DuraCloud pilot partners in coming weeks (an "early testathon" of sorts)
[20:47] <sandsfish> and i have a few other minor patches I found while digging for the white-screen, so i'll submit those in the next week. that's all i've got for 1.7
[20:48] <tdonohue> jumping back to kshepherd's earlier comment: did we have any other thoughts re: memory leaks discussions on dspace-tech? Anyone have time to try and help start tackle/analyze the issues (or even help us come up with a better testing solution)?
[20:50] <mhwood> Can we document a recommendation to place the JDBC and pooling JARs in the servlet container's common library (assuming it has one)? That would be a cheap start.
[20:51] <mdiggory> I would love to see that tied into the database service
[20:51] * stuartlewis (~stuartlew@gendiglt02.lbr.auckland.ac.nz) has joined #duraspace
[20:51] <tdonohue> If anyone has test data that'd also be appreciated -- I'd be willing to try and find time to help with this, but I have a complete lack of test data to work with (maybe we need to have a 'donate your data' request)
[20:52] <sandsfish> i don't think i'll have time to do that type of testing before the freeze, but i'm trying to improve detection of database connection leaks, so i'll have a patch of DatabaseManager out too.
[20:53] <grahamtriggs> test data is not the issue - the problems that I've found and worked through can be determined without any data in the repository (in fact, that is the environment I've been using)
[20:54] <tdonohue> grahamtriggs -- well then, I'd ask for what your tests are that you're running -- so that we can all replicate them and investigate when we have the time :)
[20:54] <grahamtriggs> it's very simple - hook a profiler up to Tomcat, access the dspace application, stop the dspace application - do the classes unload.
[20:54] <tdonohue> (i.e. the more we all become aware of the issues, the quicker we can hopefully come to a resolution)
[20:55] <grahamtriggs> if it doesn't (which it won't without a bit of care and attention), then you are leaking strong references, and that starts flagging up all sorts of potential issues.
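One well-known source of the "classes don't unload" symptom grahamtriggs describes is a JDBC driver that the webapp registers with DriverManager but never deregisters, pinning the webapp classloader. A hedged sketch of the usual countermeasure follows; the class name is hypothetical, and in a webapp this would be invoked from a ServletContextListener's contextDestroyed() (the alternative, as mhwood suggests, is moving the JDBC jar to the container's shared library so the driver never belongs to the webapp classloader at all).

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

// Illustrative sketch: deregister the JDBC drivers visible to this classloader
// so the webapp's classes can be unloaded when the application is stopped.
public class DriverCleanup {

    /** Deregister every driver DriverManager will show us; returns the count. */
    public static int deregisterAll() {
        int count = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            try {
                DriverManager.deregisterDriver(d);
                count++;
            } catch (SQLException e) {
                // log and continue; one failure shouldn't block the rest
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("Deregistered " + deregisterAll() + " driver(s)");
    }
}
```

Note that DriverManager only returns drivers the calling classloader can see, which is exactly why this cleanup has to run from within the webapp that loaded the driver.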
[20:58] <tdonohue> ok. well, we are unfortunately running short on time here. One thing I wanted to get in quickly is that in coming weeks (hopefully even next week) I'm going to investigate migrating our DSpace JIRA to the central DuraSpace JIRA. I'll send an email out at least a few days before I actually do the migration (just so you know when it may be temporarily down)
[20:58] <stuartlewis> I think there is also a PR exercise here, and we need to be careful. We need to turn this effort into a positive thing, not a "dspace doesn't scale - go elsewhere" message.
[20:59] <mhwood> "DSpace pursuing improved scalability"
[20:59] <tdonohue> stuartlewis -- yea makes perfect sense.
[20:59] <stuartlewis> We've been harmed by that message in the past - and we don't want to go down that route again.
[20:59] <stuartlewis> Could we move it off of tech and onto devel to start?
[21:00] <PeterDietz> was the last scale route when stuartlewis was throwing prunes at a server?
[21:00] <tdonohue> sure -- feel free to do so -- I think we've just continued discussion on tech cause that's where it started
[21:01] <tdonohue> it'd probably make sense to move to devel anyways, as we really need to start sharing more specific tech notes so that we can start to narrow down the issues (and what is causing them)
[21:01] <grahamtriggs> I would argue it's not even a scalability issue - there are some fundamental questions of 'does it even work' at stake here. And scalability works in both directions too - you can design an architecture that will scale to millions of items (with enough hardware thrown at it), but its footprint and requirements will probably make it impossible for anyone just starting out
[21:01] <kshepherd> yeah, i found the tone of the thread very negative too, and with no solutions being openly offered, i think it implies "dspace is broken. stay on 1.4 or find a new product"
[21:02] <robint> stuartlewis: I agree but I also think we need to be careful that we don't just sweep it under the carpet.
[21:03] <kshepherd> nobody wants to sweep it under the carpet, just try and frame it more constructively so that we're actually trying to fix/improve the software instead of telling people not to use it
[21:03] <mhwood> Well, there were solutions for some of the leaks. And every leak plugged keeps the system up a bit longer. Now we can chase down the other leaks.
[21:05] <grahamtriggs> kshepherd: well, potential can of worms labelling anything broken... but, if it was broken, we aren't going to have any solutions unless we accept that it's broken.
[21:06] <tdonohue> I'm all for switching to a more positive tone/discussion :) (and if my response seemed a negative tone I apologize. didn't mean it that way. I was just pointing out why sometimes things don't end up happening right away in our DSpace world, and that we really need to find some dedicated volunteers to work towards resolutions to these issues.)
[21:06] <stuartlewis> tdonohue: No - your response was spot on (as always!). It was responses from some other parties that were negative. (no one here!)
[21:07] <kshepherd> grahamtriggs: yeah, agreed.. maybe i'm not explaining myself well...
[21:07] <mhwood> I think there are a couple of volunteers already working on it, no?
[21:08] <tdonohue> mhwood -- I'm not sure there are actual developer volunteers? Although obviously grahamtriggs has been looking into many of the issues
[21:08] <mhwood> One thing that sites need to understand is that, with the new testing framework and new reporting, we are going to be discovering "broken" things all over which were formerly more obscure, so lots of bugs will turn up in a short time. That means we know what to fix, not that quality has declined.
[21:09] <stuartlewis> Perhaps we need to be proactive, and invite the interested parties to form a working group to look at this specifically?
[21:09] <tdonohue> mhwood +1 (agreed!)
[21:09] <kshepherd> yes, mhwood++
[21:10] <tdonohue> stuartlewis -- good idea, and maybe that could be the reason behind moving the thread to 'dspace-devel' (and inviting folks to take part there in more detailed discussions)
[21:10] <stuartlewis> If we form a scalability working group (and say that it is for those users such as BMC and Cambridge who specially run large and complex installations) then we put the emphasis on those people to tackle the problems specific to them.
[21:11] <stuartlewis> So we're being proactive, positive, and it tells good stories (we have big complex users, we facilitate them to discuss their unique set of problems etc)
[21:11] <mhwood> Hmmm. It *looks* like scalability but what it really is is correctness. (I think someone already said that.)
[21:12] <tdonohue> yea -- only one big question -- in order for that group to actually "happen", we probably need someone to volunteer to lead it or at least help them to semi-organize themselves :)
[21:12] <stuartlewis> Yes - but the correctness problem only shows itself when pushed at scale.
[21:12] <PeterDietz> So the problem is memory leaks, which can be shown by an unclean start/stop of tomcat with dspace deployed. Is this due to us having lots of code, or did the introduction of backported code contribute to the memory issues?
[21:12] <mhwood> We need attention to scalability anyway, and if the WG wants to include "hours before it falls over" then so be it.
[21:13] <mhwood> Lots of correct code will not cause leaks.
[21:13] <robint> I would be concerned that setting up a separate group appears to give the rest of us a pass to develop without concern for these issues.
[21:13] <hardy_pottinger> I'm not "an actual developer" :-) but am interested in helping, as we're looking at running multiple instances in a single container, and have to do the proactive reboot every night; would love an excuse to dig into the testing aspect more
[21:14] <tdonohue> hi hardy_pottinger -- glad you could join us (and thanks for your comments in the thread).
[21:14] <stuartlewis> hardy_pottinger: That would be great - one of our biggest problems is lack of data, and lack of profiling information. You'd be a great source of that info if you're willing to work with us on it.
[21:14] <hardy_pottinger> happy to help
[21:15] <grahamtriggs> PeterDietz: it doesn't entirely prove there are no memory leaks (you can still leak heap memory during operation, even with clean app start/stop), but unclean applications point to leaked data, which also hints at potential cross-request, cross-session and even cross-application contamination
[21:15] <tdonohue> well, I can put out a call to try and "organize" (even informally) around this issue. But, I agree with robint that this shouldn't give us all a "free pass". Mostly, we need someone(s) to get a better grasp on the issues here and bring back analysis/explanation that we can all work to resolve
[21:16] <stuartlewis> Based on experience around the docs, probably a direct contact with the interested parties would be more effective than a generic call.
[21:16] <stuartlewis> (no one ever volunteers to open calls :( )
[21:17] <tdonohue> true true true. I just don't have the time to end up "leading" this right now (unfortunately, though I am interested) :)
[21:18] <stuartlewis> grahamtriggs: Interested in becoming our 'performance and scalability czar', and leading a group around that?
[21:18] <PeterDietz> We're migrating to xmlui at the end of the year, and my biggest worry about that, due to our size, and amount of traffic, is that we could stall out. Is there a main place to look at for improvements? dspace-api / dspace-xmlui
[21:19] <mhwood> Things like memory leaks tend to crop up here and there all over, before becoming troublesome enough to prompt a bug-hunt to squash them.
[21:19] <grahamtriggs> stuartlewis: do I get some official robes with that? And a staff. (I mean the kind I can whack round people's heads when I need to)
[21:20] <robint> I think some of the testing that Graham has undertaken needs to become standard practice for all of us.
[21:21] <sandsfish> PeterDietz: along those lines, perhaps a page on the wiki to collect tuning / performance tips would be a good documentation move. There are plenty of things I think sites do individually based on their instance(s) that could be good advice, but that would be maybe too specific for the official docs.
[21:22] <sandsfish> For instance, we've been playing with our Postgres config + max_connections in dspace.cfg recently.
[21:22] <mhwood> Sorry, but I have to go. 'bye all!
[21:22] <robint> Once some major development is committed it's too late in the process to be identifying significant performance problems.
[21:22] <tdonohue> robint +1 -- yea, I'd like to see some example process(es) get documented to the Wiki -- so, that at the very least we can follow them during testathons
[21:22] <sandsfish> "Scalability Tips" or somesuch.
[21:22] * mhwood (~mhwood@2001:18e8:3:171:218:8bff:fe2a:56a4) has left #duraspace
[21:23] <tdonohue> sandsfish -- good idea, feel free to start up a page and start sharing (and we can direct others towards it to add their own hints/tips) :)
[21:23] <kshepherd> yep, sounds good
[21:24] <tdonohue> sandsfish -- fyi there is some rather old, possibly very outdated content here: https://wiki.duraspace.org/display/DSPACE/SystemAdministrators
[21:24] <robint> Got to go. Pity, good conversation:)
[21:24] <sandsfish> tdonohue: ok, will do.
[21:25] * robint (5229fd08@gateway/web/freenode/ip.82.41.253.8) Quit (Quit: Page closed)
[21:26] <PeterDietz> I'd like some documentation refresh. Toss in best practices, tips, tricks, features in development, articles on projects in motion at your institution that maybe don't/won't get kicked back into the main dspace due to them being too local of interest
[21:27] <tdonohue> So, it sounds like we still need to find more "highly motivated parties" to help tackle this (and someone to lead it, unless graham is interested?) -- seems like so far we have grahamtriggs, hardy_pottinger, possibly PeterDietz (since they are worried if it will affect them soon), and obviously Tom De Mulder (from email thread).
[21:27] <kshepherd> PeterDietz: yeah, i had a go at doing some of that around xmlui theming/templating a while back (a "cookbook" on the wiki), but eventually ran out of time and enthusiasm :/
[21:28] <sandsfish> tdonohue: i'll lend a hand in testing if i can, but i've got a lot to get in for 1.7 and this white screen of death to prevent immediately in production, so i may be good for little to none. but, you know, i'll try. :)
[21:28] * achelous_ (~tom@216.160.210.8) has joined #duraspace
[21:29] <PeterDietz> re: tips documenting. With confluence we get commenting, so hopefully non-publicly-active-repository-admins will kick in their two cents as well. I find myself constantly referencing a personal blog post, or an old dspace-tech thread, when it should be more in line with community documentation
[21:29] <kshepherd> yep
[21:29] <tdonohue> sandsfish -- understandable! we all have a lot going on (and I also want to lend a hand, but don't have immediate time to do so)
[21:30] <PeterDietz> ok, Assign all community feedback + performance tweaks to jtrimble
[21:31] <tdonohue> haha :) jtrimble is going to show up at your home and/or workplace PeterDietz (he's not that far from you geographically, do you really want to get him mad at you?)
[21:32] <hardy_pottinger> I was actually thinking, you might get more performance tips if you frame it in a question to the dspace-tech list... wikis are great, but they can present a bit of a barrier to information (one more step, pressure to make it look right, etc.)
[21:33] <sandsfish> hardy_pottinger: perhaps querying the lists and shuttling that information over to the wiki page will net the information and get it out there for those searching for it in the future.
[21:35] <sandsfish> gtg all. cheers!
[21:35] <tdonohue> Since discussion is slowing down, here's what we could try. (1) I'll send a call (with targeted message to our 'highly motivated parties') for volunteers to help, (2) we'll try and keep things active and encourage everyone to share what they have found or are experiencing, so that we can try to find more interested parties and common issues out there (Thoughts on whether to continue this on a new thread on dspace-tech, or move to dspace-devel?)
[21:36] * ksclarke (~kevin@adsl-235-49-191.clt.bellsouth.net) Quit (Ping timeout: 264 seconds)
[21:37] <PeterDietz> I wonder if the Global Outreach crew might be of assistance to help figure out what users will find especially valuable in the docs. (HowTo's, ...)
[21:38] * sandsfish (~sandsfish@dhcp-18-111-13-229.dyn.mit.edu) Quit (Quit: sandsfish)
[21:39] <tdonohue> PeterDietz -- you are welcome to ask them -- it's possible you could find someone interested (I can also try to send a note to Val to see what she thinks)
[21:40] <tdonohue> Ok -- I think most everyone else has gone off to do their work. I'll stick around here a bit, just in case discussion starts back up. Thanks for sticking through a much longer than normal mtg -- discussion was great so I didn't want to stop it :)
[21:43] <hardy_pottinger> PeterDietz: I actually have on my todo list to send a note out to the dspace-tech list to ask people to show and tell their Apache mod_proxy (those that use it) and Tomcat context configs. What we have works, but I'm willing to bet it could be a lot better.
[21:44] <tdonohue> hardy_pottinger -- sounds like a great idea. Whatever you discover would be worth capturing on the wiki as a future reference for "best practices around using mod_proxy"
[21:45] * hardy_pottinger needs to figure out how to make the wiki go
[21:45] * kshepherd is gone now
[21:46] * keithg (~keith-noa@lib-kgilbertson.library.gatech.edu) has left #duraspace
[21:47] <tdonohue> hardy_pottinger it shouldn't be too hard for you :) You just need to click the "Signup" link in the top right to create a login, and then create/update content. If need be though any of us can always paste your mod_proxy discoveries into a new wiki page for future reference.
[21:52] <hardy_pottinger> tdonohue: already signed up, have new content gathered and ready to go (RHEL install instructions update)... will try to get to it by the end of the week, will send note out to dspace-tech earlier
[21:53] * ksclarke (~kevin@adsl-235-49-191.clt.bellsouth.net) has joined #duraspace
[22:00] <tdonohue> hardy_pottinger -- sounds great. any additions/updates you can make are welcome! :)
[22:05] * hardy_pottinger (80ce8627@gateway/web/freenode/ip.128.206.134.39) Quit (Quit: Page closed)
[22:10] * tdonohue (~tdonohue@c-98-228-50-55.hsd1.il.comcast.net) has left #duraspace
[23:19] * grahamtriggs (~grahamtri@cpc2-stev6-2-0-cust333.9-2.cable.virginmedia.com) Quit (Quit: grahamtriggs)
[23:26] * stuartlewis (~stuartlew@gendiglt02.lbr.auckland.ac.nz) Quit (Quit: stuartlewis)
[23:58] * achelous_ (~tom@216.160.210.8) Quit (Ping timeout: 240 seconds)

These logs were automatically created by DuraLogBot on irc.freenode.net using the Java IRC LogBot.