What is your lab's "Data Management" workflow?


A number of groups, from libraries and universities to academic projects, are striving to implement flexible data management systems that harness the latest semantic web technologies to integrate data and facilitate breakthrough interdisciplinary analysis.

It is obvious that every lab, every individual research group (regardless of the discipline), has developed internal data management systems that “work” (i.e. literature & data collection > Excel > stats > graphing > word processor), but what has your lab found useful, and what are your biggest frustrations?
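
For concreteness, here is a minimal Python sketch of that kind of ad hoc pipeline. It is only an illustration: the file name "results.xlsx" and its "group"/"value" columns are hypothetical stand-ins for whatever a given lab's spreadsheet actually holds.

# Ad hoc pipeline: spreadsheet -> stats -> figure, as described above.
# "results.xlsx" and its columns are hypothetical examples.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Data collection step: read the spreadsheet produced at the bench.
df = pd.read_excel("results.xlsx")

# Stats step: compare two groups with an independent-samples t-test.
control = df[df["group"] == "control"]["value"]
treated = df[df["group"] == "treated"]["value"]
t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.3f}, p = {p:.4f}")

# Graphing step: a quick boxplot destined for the word processor.
df.boxplot(column="value", by="group")
plt.savefig("figure1.png", dpi=300)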

Please feel free to comment below, join the discussion on ResearchGate, or visit the NIF Blog @ http://blog.neuinfo.org/index.php/essays/lab-data-management-practices

Comments

 

"It is obvious that every lab... has developed internal data management systems" -- perhaps with scarce quotes around "systems." The tissue engineering lab I work in performs only ad hoc data management and none of the several other academic labs I've worked in at this and other institutions have been much better. Judging from Justin Kiggins' tweeted reaction (http://twitter.com/neuromusic/status/328983928683773953) I think I'm not alone.

I worked at a small company that had a system that was considerably more laborious but not considerably more useful: every file was to be tucked away in a deep and broad hierarchy of folders with assigned numeric prefixes. Trying to find someone else's data, or to understand what process they had worked through in Excel to produce an analysis, tended to be an exercise in frustration. Institutional memory was "managed" by archiving PowerPoint slides from lab meetings; the slides were encouraged to be reasonably comprehensive and tended to include relevant parameters, so they were considerably better than nothing.

My own personal Shangri-La contains a cross-platform network file system with robust support for tagging (manually or by introspection) files and directories, but maybe I'm speccing too narrowly.
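
Something close to that tagging layer can be approximated today with sidecar metadata files. A minimal Python sketch, assuming tags live in a ".tags.json" file next to the data; all paths and tag names are hypothetical:

# Tag files via a sidecar JSON file per directory, as a stand-in for
# native filesystem tagging. Paths and tag names are hypothetical.
import json
from pathlib import Path

TAG_FILE = ".tags.json"

def tag(path: Path, *tags: str) -> None:
    """Attach tags to a file, stored in a sidecar JSON in its directory."""
    sidecar = path.parent / TAG_FILE
    db = json.loads(sidecar.read_text()) if sidecar.exists() else {}
    db[path.name] = sorted(set(db.get(path.name, [])) | set(tags))
    sidecar.write_text(json.dumps(db, indent=2))

def find(root: Path, wanted: str):
    """Yield every file under root carrying the wanted tag."""
    for sidecar in root.rglob(TAG_FILE):
        db = json.loads(sidecar.read_text())
        for name, tags in db.items():
            if wanted in tags:
                yield sidecar.parent / name

# Usage (assumes data/run42/ already exists):
tag(Path("data/run42/traces.csv"), "patch-clamp", "mouse", "2013-04")
print(list(find(Path("data"), "patch-clamp")))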

 

Hey Tim - I appreciate the response.
 
As I said in a tweet back to @neuromusic & @carlystrasser, the idea of throwing this out there is to get a handle on how crazy/divergent these workflows actually are & whether any ongoing attempts to standardize have a fighting chance (e.g. ISA Tools http://isatab.sourceforge.net/ or BioScholar https://code.google.com/p/bioscholar/).
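
For anyone who hasn't seen ISA Tools, here is a rough sketch of the kind of tab-delimited study metadata that ISA-Tab standardizes. The column headers follow ISA-Tab conventions; the rows are invented for illustration:

# Write a minimal ISA-Tab-style study sample table (tab-delimited).
# Headers follow ISA-Tab conventions; sample rows are invented.
import csv

rows = [
    ["Source Name", "Characteristics[organism]", "Protocol REF", "Sample Name"],
    ["mouse_01", "Mus musculus", "dissection", "mouse_01.hippocampus"],
    ["mouse_02", "Mus musculus", "dissection", "mouse_02.hippocampus"],
]

with open("s_study.txt", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)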
 
In my opinion, the most important consideration is integrating into the research group's workflow as seamlessly as possible - without introducing additional burden, while adding value with knowledge discovery/semantic relationship building - but the question is how this can best be approached. Does anyone have thoughts on this?
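
To make "semantic relationship building" concrete, here is a minimal sketch using rdflib. The namespace and terms are hypothetical, not an actual NIF vocabulary:

# Record lab provenance as RDF triples so cross-lab queries can
# traverse the relationships later. Vocabulary below is hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

LAB = Namespace("http://example.org/lab/")
g = Graph()

g.add((LAB.dataset42, RDF.type, LAB.Dataset))
g.add((LAB.dataset42, LAB.generatedBy, LAB.experiment7))
g.add((LAB.experiment7, LAB.usedProtocol, LAB.patchClamp))
g.add((LAB.experiment7, LAB.studiedOrganism, Literal("Mus musculus")))

print(g.serialize(format="turtle"))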

Essentially, data management technology refers to (generally large) databases, together with software that encapsulates business definitions of that data (a data dictionary), and specialized access architectures such as business intelligence tools and data warehouses.
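
A minimal illustration of that database-plus-data-dictionary pairing, using Python's built-in sqlite3; the table and column names are invented:

# Pair a data table with a data dictionary that records the business
# meaning of each column. All names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (sample_id TEXT, value REAL)")
con.execute("CREATE TABLE data_dictionary (column_name TEXT, definition TEXT)")
con.executemany(
    "INSERT INTO data_dictionary VALUES (?, ?)",
    [
        ("sample_id", "Lab-assigned identifier for the tissue sample"),
        ("value", "Peak fluorescence intensity, arbitrary units"),
    ],
)
for row in con.execute("SELECT * FROM data_dictionary"):
    print(row)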