FORCE2015

Session Abstracts

Valuing the Diversity of Scholarly Impact in a Networked World

"Reconciling between novel and traditional ways to publish in the Social Sciences "

Session: Valuing the Diversity of Scholarly Impact in a Networked World

Presenter: Kayla Ueland, University of Lethbridge

Presenter CV:

Interest in novel ways to disseminate research that are more responsive to new technologies is not shared by all disciplines.

Abstract:

There is a push to develop new ways to disseminate research beyond the traditional journal model and to make scholarly communication more responsive to new technologies; however, this goal does not seem to be shared by all disciplines. In a unique opportunity as a graduate student, I have been hired as the managing editor for an up-and-coming journal through the Journal Incubator at the University of Lethbridge. My tasks have involved researching the possible issues in establishing a low-resource, open access, Social Sciences journal. To help the editors of the journal choose possible models, I have examined the webpages of several top-ranking Sociology journals, on the assumption that journals market themselves in ways that reflect the discipline's publishing priorities. My research has found that Sociology journals do not value novel communication forms. This finding is consistent with evidence from open access debates that not all disciplines attach the same importance to innovative ways of disseminating research. For example, I have found that many of the journals value innovative ideas, rigorous double-blind peer review, prestigious editorial boards, and open access. While the latter does appear to be a shift towards new ways of publishing, further research revealed that these journals do not conceptualize open access in a way that offers free dissemination among scholars. This resistance to novel communication forms leads me to wonder how to reconcile traditional and contemporary ways of publishing within the Social Sciences.


"Citizen Science and Education in the Kyrgyz Mountains"

Session: Valuing the Diversity of Scholarly Impact in a Networked World

Presenter: Aline Rosset, University of Central Asia

Presenter CV:

The challenges of scholarly communication and Open Science approaches in Kyrgyzstan and Central Asia.

Abstract:

From the perspective of a project realized within the framework of the Open and Collaborative Science in Development Network (OCSDNet), the presentation will highlight the specific challenges of scholarly communication in a widely unknown and poorly connected region of the world. In line with the "Kyrgyz Mountains Environmental Education and Citizen Science project", the presentation will also address pathways to involve remote audiences within Kyrgyzstan in scientific communication processes, highlighting that one dimension of scholarly communication responsibilities also lies in communicating with local audiences who are often left behind by global networking efforts. Through the introduction of citizen science-based courses into school curricula, the Kyrgyz project intends to involve local students in environmental data generation, on the one hand to improve environmental education and on the other to address knowledge gaps in local-level environmental data. As such, the project aims to build a bridge between local citizen scientists and national, regional and international scholars, which requires the establishment of a specific set of communication channels. At the same time, being part of the global OCSDNet offers many opportunities to pool resources and to use an international platform for communicating local-level results.


"Open and collaborative science (OCS); a tool for the conservation and development of local ecosystems"

Session: Valuing the Diversity of Scholarly Impact in a Networked World

Presenter: Najat Saliba, American University of Beirut

Presenter CV: http://www.aub.edu.lb/fas/chemistry/faculty/Pages/ns30.aspx

Abstract:

Priorities of international funders have shaped temporary local actions that are limited by the duration and the amount of funding. These temporary fixes have created, among local citizens, a short-term view of development strategies and a sense of detachment from local challenges. In response, several bottom-up approaches have been initiated by active local researchers, one of which is the public participatory, or open and collaborative, approach. Public participation has the advantage of encompassing the scale, contexts, heterogeneity, management intensities, and other social and economic co-benefits, tradeoffs, and costs affecting local citizens.

The team at the American University of Beirut - Nature Conservation Center (AUB-NCC) has experienced the benefits of the open and collaborative methodology in developing a village "Green Map" database. With this approach, the advantages and limitations of the method for taking stock of baseline information for landscape planning were evaluated, and significant sustainable management systems of communal lands, together with their spatial associations in relation to the mapped landscape components, were identified. In addition, scholarly produced communication materials were translated into local languages and presented in the form of brochures and maps, with translated icons used to identify all landscape assets in the villages.

Considering the success of the developed model and the trust that was built between the Center and the communities, we continue to build the local "Green Map" database by adding two new components: air and water pollution. At this conference, a summary of the communication models adopted in the proposed micro-scale methodology will be presented.


"Inclusive and Active Openness in Local Communities of Latin America: what does it mean in practice?"

Session: Valuing the Diversity of Scholarly Impact in a Networked World

Presenter: Josique Lorenzo, CATIE

Presenter CV: http://www.bosquesmodelo.net/wp-content/uploads/2014/06/Short-CV-Josique-v.2-eng.pdf

Abstract:

As in other regions of the Global South, there is a strong need in Latin America to find new pathways to communicate with local communities and to advance more inclusive tools in order to increase the impact and practical application of scholarly outputs. This is even more the case when it comes to research for "development" and to data related to grand challenges such as climate change, which call for a greater number of smaller-scale, locally crafted solutions. The knowledge and data produced within scientific projects can sometimes be effectively translated, communicated and 'tied' to local problem-solving via already active networks and platforms created from the bottom up, such as Model Forests, which are legitimate, inclusive, voluntary, multidisciplinary and generally build on existing social capital.

This 5-minute talk is based on a project which adheres to the principles of the Open and Collaborative Science in Development Network (OCSDNet), with the aim of creating a collaborative culture and promoting new modes of scholarly communication within academic institutions and among local communities. The project explores the challenges and motivations involved in opening up (democratizing) research processes related to climate change adaptation for non-expert citizens within Model Forests, by increasing participation in agenda-setting, data collection and interpretation and by establishing a more interactive set of communication tools. The goal is ultimately to improve the impact of research by making it more meaningful and sustainable, while speeding up the pace of innovation.


Vision Session & Great Ideas!

"Signposting the Scholarly Web"

Session: Vision Session & Great Ideas!

Presenter: Herbert Van de Sompel, Los Alamos National Laboratory

Presenter CV: http://public.lanl.gov/herbertv/

Explores minimal, yet meaningful, interoperability across diverse web-based scholarly assets aligned with HATEOAS principles.

Abstract:

Increasingly, the scholarly process, its outcomes, and its actors (creators and consumers) exist on the web. The wide range of artifacts used or created in the scholarly endeavor are resources distributed across the web, under the custodianship of a multitude of parties. Actors, both human and machine, are entities with web identities. The result of this perspective is a scholarly web that consists of interlinked artifacts and actors, embedded within the web at large. To an extent, this scholarly network exists. But, where it does, it is largely geared towards human navigation. Efforts aimed at improving machine access to this network have been conducted in recent years. Many of those have appropriately followed semantic web and linked data principles. But, somewhat unexpectedly, adoption of the guidelines resulting from these efforts has been slower than anticipated. This presentation explores a complementary approach to make this scholarly network friendlier to machines, as a means to increase the value of its constituents and to support the emergence of novel applications geared towards both humans and machines. The approach is based on "follow your nose" principles, more formally known as "Hypermedia as the Engine of Application State" (HATEOAS, see http://en.wikipedia.org/wiki/HATEOAS). The approach yields less expressiveness than Linked Data approaches but is more straightforward to implement and can result in a meaningful level of interoperability across the scholarly network. The presentation explores this approach to address common requirements such as identification, description, aggregation, versioning, notification, and authorship.
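
A minimal sketch of what "follow your nose" navigation could look like for a machine client, assuming the typed links are exposed in HTTP Link headers; the relation names used here are illustrative assumptions, not a specification drawn from the abstract:

    # Minimal "follow your nose" client: start from a scholarly resource's URI
    # and discover related resources by following typed links in HTTP Link
    # headers. The relation names below ("describedby", "author", "cite-as")
    # are illustrative assumptions.
    import requests

    def follow_nose(start_url, rels=("describedby", "author", "cite-as")):
        """Return a map of link relation -> target URL found on the resource."""
        response = requests.head(start_url, allow_redirects=True, timeout=10)
        found = {}
        for rel, link in response.links.items():  # requests parses Link headers
            if rel in rels:
                found[rel] = link["url"]
        return found

    if __name__ == "__main__":
        # Hypothetical landing page of a scholarly artifact.
        print(follow_nose("https://example.org/article/123"))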


"Publishing and crediting different shaped research objects the GigaScience way"

Session: Vision Session & Great Ideas!

Presenter: Scott Edmunds, GigaScience/BGI Hong Kong

Presenter CV: http://orcid.org/0000-0001-6444-1436

Moving beyond static, dead-tree publications, GigaScience demonstrates the publishing of differently shaped research objects.

Abstract:

Traditional methods of disseminating research through opaque, static journal publications are bottlenecks in systems that are struggling to cope with increasing data volumes, a widening reproducibility gap, and growing numbers of retractions. New, more interactive platforms for disseminating data, results, computational tools and workflows, and for rewarding their release, are required to maximize knowledge discovery from these precious resources. These need to be made freely and conveniently available to the global community, and incentive systems need to be redirected to credit these efforts. GigaScience is an open-access, open-data journal utilizing the data handling infrastructure of the BGI. It links manuscript publication with an integrated platform that hosts all associated data and provides data analysis tools and computing resources, such as the popular Galaxy workflow and OMERO image management systems. Publishing is made more transparent, accountable and open by also integrating open review, and all of the supporting data, workflows and methods are independently discoverable and citable with DOIs. Utilising ISA-TAB, curators assist data producers in enriching metadata and making it interoperable, and have also experimented with working alongside researchers in a "Bring Your Own Data" hackathon format. Many large biological datasets have been released to the global community, and software, workflows and virtual machines from a number of scientific papers have been archived and shared in as open, reproducible, transparent and usable a form as possible. Pre-publication release of data from novel technologies such as Oxford Nanopore, and from disease outbreaks, has also been experimented with to enable crowdsourcing and more rapid scientific communication.


"Building blocks: existing scholarly infrastructure as a base for new tools"

Session: Vision Session & Great Ideas!

Presenter: Euan Adie, Altmetric

Presenter CV: http://www.altmetric.com

A look at how the existing infrastructure is crucial to building systems that help researchers get credit where credit is due.

Abstract:

New indicators of the attention and wider engagement surrounding research outputs have become a hot topic in the last few years. Their promise is that we may be able to leverage new, online data sources that help us understand the impact of both articles and non-traditional outputs. But these indicators and the tools that provide them didn't appear out of thin air. They've been made possible by the continuing investment of the entire community in scholarly communications infrastructure: in common identifiers, metadata standards, repositories and indexing services. In this talk we'll discuss how the existing infrastructure is crucial to building systems that help researchers get credit where credit is due, how initiatives like ORCID and FundRef will fit in, and what systems we're still missing.


"Goodbye static, it’s time for interactive and dynamic publishing"

Session: Vision Session & Great Ideas!

Presenter: Rebecca Lawrence, F1000 Research Ltd

Presenter CV: https://uk.linkedin.com/pub/rebecca-lawrence/3/787/732

Using digital technology to change the concept of an article from a static object to a living, up-to-date document.

Abstract:

Articles published in traditional journals are still based around the restrictions of the print era. Articles and data are static, figures and text are often restricted, and data are regularly omitted or filed away as supplementary files in an unusable format. This is both unnecessary in the digital era and highly problematic for scientific progression. Research doesn't conclude with article publication; small updates are often discovered post-publication but may be too small to warrant a new article and are therefore lost forever. Figure limits lead researchers to cherry-pick the 'best' figures, leading to publication bias, and text limits usually shorten methods sections, where detail is crucial for potential replication of the findings. Finally, the lack of (reusable) data makes replication and data reuse difficult. F1000Research has developed a new way of publishing that actively encourages inclusion of all figures, underlying data, and detailed protocols, and allows articles to be frequently updated, enabling them to remain up-to-date. Data are viewable in-article through embedded widgets, and we have several tools that enable simple in-article data manipulation. These tools allow referees and readers to quickly assess the data and check that they match the paper's conclusions. We have now started publishing next-generation figures, including real-time figures that are generated on-the-fly from the data plus code, and 'living' figures that can be updated with new data from other labs as they attempt to replicate the paper's findings. This presentation will discuss these new opportunities and how they could change what it means to publish an article.
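
The "figure generated on-the-fly from the data plus code" idea can be illustrated with a minimal sketch; this is not F1000Research's widget implementation, and the file and column names are hypothetical:

    # Illustration of a figure generated on-the-fly from data plus code:
    # re-running this script against an updated CSV regenerates the figure,
    # so the "figure" is never a frozen image detached from its data.
    # File and column names are hypothetical.
    import csv
    import matplotlib.pyplot as plt

    def plot_from_data(csv_path="underlying_data.csv", out_path="figure1.png"):
        xs, ys = [], []
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                xs.append(float(row["dose"]))
                ys.append(float(row["response"]))
        plt.plot(xs, ys, marker="o")
        plt.xlabel("dose")
        plt.ylabel("response")
        plt.savefig(out_path)

    if __name__ == "__main__":
        plot_from_data()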


"The simplest credit of all? Sharing and spreading the value generated by researchers."

Session: Vision Session & Great Ideas!

Presenter: Dan Morgan, University of California Press

Presenter CV: http://www.linkedin.com/pub/dan-morgan/2/9a3/6a4/

The next attribution challenge: UC Press is proposing to involve researchers in deciding what happens to the value they generate

Abstract:

The presentation will provide an overview of, and highlight the questions raised by, the following: University of California Press will launch a broad-scope Open Access journal in 2015 with unique operational and financial features. The journal is a new concept aiming to operationally manifest and spread the value donated by the academic and professional research community back to that same community. Editors and reviewers generate tokens for journal tasks completed. A proportion ($250) of the $875 APC is paid into an account, and these funds are then made available to editors and reviewers in arrears for all work on the journal, matched to their tokens, regardless of the ultimate accept/reject decision. Editors and reviewers will decide whether to receive a cash payment, pay this sum forward to the journal's APC "waiver fund", or pay it forward to another Open Access fund, e.g. at their institution. By assigning a certain percentage of the APC to reviewers and editors, UC Press aims to tangibly and operationally show the value of this work. UC Press wishes to enable the research community to decide what to do with this value that it generates. This is only the first step for an established publisher and new journal whose missions are to create a true partnership with the research community and to drive progressive change in e-scholarship business models. (Please see: http://blog.scholasticahq.com/post/100004713143/a-pay-it-forward-approac...) [Formal URL will follow after launch in December/January]
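
A rough sketch of the accounting described above, using the figures given in the abstract ($875 APC, $250 set aside per article); how tokens map to payout shares is an assumption made purely for illustration:

    # Sketch of the pay-it-forward accounting: $250 of the $875 APC is set
    # aside per article and split across editors and reviewers in proportion
    # to their tokens. The token-to-dollar mapping is an assumption.
    APC = 875
    SET_ASIDE = 250  # paid into the editor/reviewer account per article

    def payout(tokens_by_person, pool):
        """Split a pool across contributors in proportion to their tokens."""
        total = sum(tokens_by_person.values())
        return {person: round(pool * t / total, 2)
                for person, t in tokens_by_person.items()}

    if __name__ == "__main__":
        # One handled submission: an editor and two reviewers earned tokens,
        # regardless of the accept/reject decision. Each recipient then
        # chooses cash, the journal's waiver fund, or another OA fund.
        tokens = {"editor": 2, "reviewer_a": 1, "reviewer_b": 1}
        print(payout(tokens, SET_ASIDE))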


"Multimedia Interactive Policy Research: What's That?"

Session: Vision Session & Great Ideas!

Presenter: Javier Guillot, Hertie School of Governance - Berlin

Presenter CV: http://www.linkedin.com/pub/javier-guillot/4b/73b/a48

Bridging the gap between researchers and practitioners in the field of public policy: A vision and a prototype

Abstract:

Authors: Javier Guillot, Bruno Paschoal, Caio Werneck

A large number of research projects in the interdisciplinary field of public policy analyze pressing challenges currently faced by human societies and suggest feasible policy strategies to address them. Taken together, research in this field has the potential to shape and even provoke social change -- but only if it effectively reaches the ears (and minds) of the people who can bring about that change. These are policy makers in government to begin with, but also leaders in the private and social sectors and, in more than a few cases, citizens at large. In the broadest sense of the word, these are the practitioners: those capable of transforming research into practice.

Our project aims to reduce the distance between policy researchers and practitioners -- a distance we believe is now unacceptably large. The goal is to broaden the audience of policy research by (1) combining and developing existing technologies to reduce the costs of access, dissemination, and use of policy research objects; and (2) exploring how 'research leftovers' -- the huge amounts of raw and processed data that are never released -- can be recycled by enabling truly open access.

To achieve this, we are now conducting a qualitative study on sustainable agrarian communities and Brazilian land reform, based on innovative multimedia methods. This will become the first prototype for an online platform where users will visualize and interact with a variety of policy research objects via multimedia, and also interact with each other to give feedback on, and support to, research projects they like.


"Research Cases"

Session: Vision Session & Great Ideas!

Presenter: Chris Chapman, Pentandra Research Solutions, Inc.

Presenter CV:

Research cases make knowledge creation explicit on the Web.

Abstract:

Research is a process--it's not the papers or other outputs that are published at the end of that process. Research is made of little building blocks that are put together in various ways as the process progresses. Right now we don't have a way to preserve this process. It's difficult to share how and why all the pieces fit together the way they do in our heads. We can write about it, but how can we explain it to a machine? Or another researcher?

Research Cases preserve the history of knowledge creation and allow research to be shared openly across platforms and research disciplines. They're designed for the researcher first, machine second. A research case represents the research context, starting from the question and keeping track of evidence and analysis as the researcher works towards a conclusion. Research Cases are not just a digitalization of the traditional publication process. They change the paradigm of research, allowing knowledge to be born right on the Web. Research Cases give direction for future research and stand as a unified model for peer review as well.
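
One way to picture the structure described here is as a simple data model: a question plus an ordered trail of evidence and analysis leading towards a conclusion. This is a hypothetical sketch with invented field names, not Pentandra's actual model:

    # Hypothetical sketch of a research case as a data structure. Field names
    # are illustrative only.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Step:
        kind: str          # e.g. "evidence" or "analysis"
        description: str
        source_url: Optional[str] = None  # link to data, code, or literature

    @dataclass
    class ResearchCase:
        question: str
        steps: List[Step] = field(default_factory=list)
        conclusion: Optional[str] = None

        def add(self, kind, description, source_url=None):
            self.steps.append(Step(kind, description, source_url))

    if __name__ == "__main__":
        case = ResearchCase("Does treatment X reduce symptom Y?")
        case.add("evidence", "Trial dataset", "https://example.org/data/42")
        case.add("analysis", "Mixed-effects model of symptom scores")
        case.conclusion = "Modest reduction, wide confidence interval."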


"Why are we so attached to attachments? Let's ditch them and improve publishing"

Session: Vision Session & Great Ideas!

Presenter: Kaveh Bazargan, River Valley Technologies

Presenter CV: http://www.rivervalleytechnologies.com

Online platforms can improve the speed and the quality of scholarly publishing. So it's time to ditch attachments.

Abstract:

There is a lot of truth in the cliché that the basic model for the production of scholarly publishing has not changed in decades (or even centuries). This becomes clear if we regard email attachments as analogous to physical manuscripts, author proofs, figures, and the final printed copy. I argue that it is precisely the ubiquitous use of attachments that has held up progress in publishing. We have the technology right now to allow the author to write online and have the file saved automatically as XML. All subsequent work on the "manuscript" (e.g. copy editing, QC, etc.) can also be done online. At the end of the process the XML is automatically "rendered" to PDF, EPUB, etc., and delivered to the end user, on demand.

This system is quicker, as there are no emails or attachments to hold it up; cheaper, as there is no admin involved; and more accurate, as there is only one definitive file (the XML), which is the "format of record".
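
A minimal sketch of the "one definitive XML file, rendered on demand" idea: the same XML source is turned into HTML here, and a PDF or EPUB renderer would consume the identical file. The element names are illustrative, not a specific production schema:

    # Sketch: render the single XML "format of record" to HTML on demand.
    # Element names are illustrative only.
    import xml.etree.ElementTree as ET

    ARTICLE_XML = """
    <article>
      <title>Why attachments hold up publishing</title>
      <para>Authoring, copy editing and QC all happen online.</para>
      <para>Rendering to PDF or EPUB happens only on demand.</para>
    </article>
    """

    def render_html(xml_text):
        root = ET.fromstring(xml_text)
        parts = ["<h1>%s</h1>" % root.findtext("title")]
        parts += ["<p>%s</p>" % p.text for p in root.findall("para")]
        return "\n".join(parts)

    if __name__ == "__main__":
        print(render_html(ARTICLE_XML))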


Credit where Credit is Due

"Roll the Credits: Valuing Differential Contributions to Knowledge in the Big Data Era"

Session: Credit where Credit is Due

Presenter: Eric Meyer, Oxford Internet Institute, University of Oxford

Presenter CV: http://www.oii.ox.ac.uk/people/meyer/

A discussion of how contributors can be recognized at all stages of the knowledge creation process in the big data era.

Abstract:

The emergence of big data as a resource for research forces us to consider how systems of attribution and academic credit might be remodelled to suit a more complex, networked scholarly environment. For many years, rates of co-authorship and the mean number of collaborators have been on the increase across all branches of academia, and big data is likely to accelerate this trend still further, for at least three reasons. First, big data research demands sophisticated computational skill in conjunction with traditional disciplinary training, encouraging collaboration between, for example, computer scientists and social scientists. Second, in many cases big data is not originally collected for research purposes, but more often obtained for secondary (and tertiary) use, which provokes the question as to whether those who originally collect, curate and prepare data should be credited alongside those who analyse it. Third, and relatedly, many big datasets of potential scholarly interest are held by non-academic bodies - whether this is social network posts, supermarket loyalty card records or online petition activity. The involvement of non-academic proprietors in the analysis of and even experimentation with these forms of data suggests a need for more diverse forms of attribution, above and beyond traditional academic metrics like publications. In this talk I will discuss these trends in more depth, and propose alternative approaches to attribution and credit - drawing on ideas from domains as diverse as cinema and online communities.


"The Building Blocks of Science"

Session: Credit where Credit is Due

Presenter: Derek Groen, University College London

Presenter CV: https://drive.google.com/file/d/0B_sOoPr_ZcN2ZmtISFNqREZfNjA/view?usp=sharing

I will propose 7 building blocks that contribute to good scientific work, and a diversified credit system based on them.

Abstract:

As an active researcher in a range of computation-driven disciplines, including computer science, physics, astrophysics, biomedicine and materials science, I have observed distinct differences between these disciplines in how scholarly contributions are valued and in which contributions are deemed to be particularly important in one's career. In particular, there are major inconsistencies in the way publications and software are rewarded between these fields, with software frequently being undervalued across the board. In this talk I will first show a few archetypical examples of how a credit system can influence the careers of scientists, how some scientists are able to make the system work for them, and how other major contributions seem to elude the existing metrics systems. It is widely known that the existing dual-axis credit system (primarily geared towards gaining citations and money) has major flaws, and that researchers diminish their actual scientific contributions by adapting to it. In conclusion, I will propose to break scientific merit down not into 2, but into 7 building blocks of contribution: science resources, theoretical methods, scrutiny, organisation, vision, public exposure and academic exposure. In addition, I will argue for diversifying the mechanisms for attracting funding, positions and awards, taking into account a subset of these building blocks and introducing weighting if need be.
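
A sketch of what a diversified, weighted credit score over the seven named building blocks could look like; the weights and example scores are invented purely for illustration, and the talk itself argues that any weighting should be adjustable:

    # Sketch of a diversified credit score over the seven building blocks
    # named above. Weights and example scores are invented for illustration.
    BUILDING_BLOCKS = [
        "science resources", "theoretical methods", "scrutiny",
        "organisation", "vision", "public exposure", "academic exposure",
    ]

    def credit_score(scores, weights=None):
        """Weighted sum of per-block scores; equal weights by default."""
        if weights is None:
            weights = dict.fromkeys(BUILDING_BLOCKS, 1.0)
        return sum(weights.get(b, 0.0) * scores.get(b, 0.0)
                   for b in BUILDING_BLOCKS)

    if __name__ == "__main__":
        # A software-focused researcher might score highly on resources and
        # scrutiny even with modest "academic exposure" (citations).
        example = {"science resources": 0.9, "scrutiny": 0.7,
                   "academic exposure": 0.3}
        print(credit_score(example))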


"We are the 92% - valuing the contribution of research software"

Session: Credit where Credit is Due

Presenter: Neil Chue Hong, Software Sustainability Institute

Presenter CV: http://www.linkedin.com/in/neilchuehong

Reexamining the evidence for research software credit and software as a first-class research output

Abstract:

In a recent survey conducted by the Software Sustainability Institute of UK research-intensive universities, 92% of researchers said they used research software and 68% said their research would be impossible without software. Yet only 4% of the jobs advertised were software related, and we have seen many issues in trying to establish career paths that recognise the development of research software as a valued contribution to the community. In this session, I will summarise some of the various initiatives that the Software Sustainability Institute has participated in to raise the profile of software, from public campaigns to better citations, and from training to software journals. Ultimately, I will argue that it is only by reexamining the way that credit is currently given for scholarly communication that we can change the culture around software, by showing that reuse is as significant as novelty or journal impact factor.


"The Open Science Framework: Infrastructure to Enhance Research Transparency"

Session: Credit where Credit is Due

Presenter: Courtney Soderberg, Center for Open Science

Presenter CV:

The OSF is a free, open-source web application aimed at facilitating transparency around the research process and products.

Abstract:

Researchers produce a variety of materials during their research process: data, code, and other materials that may never actually appear in the research "product" (publication) itself. Sharing those materials - and being transparent about the research process and its contributors - is desirable but not often incentivized or facilitated. The non-profit Center for Open Science (COS) seeks to both facilitate and incentivize these practices by building infrastructure, fostering communities, and conducting metascience research on the overall process. This talk focuses on the infrastructure - the Open Science Framework (OSF; http://osf.io): a free, open-source web application that manages the entire research lifecycle and enhances transparency in the process. The OSF facilitates the sharing of data, code, and materials, and provides persistent, unique, citable identifiers so that credit can be given to researchers. Version control and logging ensure transparency about the research process and the researchers' contributions. Through add-on connections with services that researchers already use, like Dropbox, GitHub, and Dataverse, day-to-day workflow challenges are simplified and all research products (code, data, etc.) can be presented together in one place - enhancing discoverability and value to others looking to build on the work. The OSF provides a place for researchers to deposit research products and to discover research products that may not fit the scope of publications - and properly give or be given credit.
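
As a hedged sketch of how a citable OSF project might be looked up by a machine, the snippet below assumes the public OSF JSON API at https://api.osf.io/v2/ with JSON-API-style responses; the endpoint and field names are assumptions and should be checked against the current API documentation:

    # Hedged sketch: fetch basic metadata for a public OSF project so it can
    # be cited alongside a paper. Endpoint and response shape are assumptions.
    import requests

    def osf_project_metadata(guid):
        url = "https://api.osf.io/v2/nodes/%s/" % guid
        payload = requests.get(url, timeout=10).json()
        attrs = payload["data"]["attributes"]
        return {"title": attrs.get("title"),
                "public": attrs.get("public"),
                "citable_url": "https://osf.io/%s/" % guid}

    if __name__ == "__main__":
        # "abc12" is a placeholder GUID, not a real project.
        print(osf_project_metadata("abc12"))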


"First thing we do, let's kill all the authors. On subverting an outmoded tradition."

Session: Credit where Credit is Due

Presenter: Daniel O'Donnell, University of Lethbridge

Presenter CV: https://www.dropbox.com/s/w5ls8bazw317yyl/CV.pdf?dl=0

The author has long been dead in literary studies. It is time for "him" to go in science and scholarship as well.

Abstract:

Outside of academia, the definition of authorship is quite straightforward. As the OED puts it, an author is "the writer of a book or other work." Things get a little complicated with ghost authors, perhaps, but on the whole there isn't much ambiguity. Authors are people who write. Within academia, however, things are more complicated. There are authors who don't write and writers who are not authors. Determining exactly who gets to be called an author and who doesn't has become something of a cottage industry for ethicists, administrators, and hiring committees. This is because, in academia, authorship carries real rewards: it is the main currency in determining promotion, bonuses, status, and accountability. The problem is that this repurposes a historical term for ends it cannot possibly support. In a world of large team-based science (and, increasingly, scholarship), the use of "authorship" as a career token is impossible to maintain, especially when it is being used to distinguish those who deserve credit and rewards from those who do not. The solution is to get rid of authors altogether. Placing the debate in the context of contemporary literary theory on authorship and textual creation, this paper examines what would happen if we replaced the current binary system of "authors" and "contributors" and, in particular, how we could hack current attribution practices in order to more fairly represent actual responsibility and intellectual credit.


"Clarifying Contributorship through Digital Badges"

Session: Credit where Credit is Due

Presenter: Amye Kenall, BioMed Central

Presenter CV:

This session explores author contributorship using Mozilla digital badges.

Abstract:

Research is increasingly reliant on a wide variety of skills, ranging from data curation to programming and writing. Unfortunately, the current reward system in academia often fails to give credit for contributions beyond authorship, and the link between authorship and role within a study is rarely transparent. Recently, there has been work to move toward a system that recognises the diversity of roles in a research group. On the technical side, the Mozilla Open Badge Infrastructure is a system to recognise expertise or accomplishments that lack traditional means of recognition. In terms of taxonomy, Project CRediT--led by Digital Science, the Wellcome Trust and others--has developed a working taxonomy of research contributorship. In partnership with ORCID, BMC, PLOS, the Center for Open Science, and the Mozilla Science Lab, we have been exploring a way to implement this taxonomy through a digital badge infrastructure. More transparency around study contributorship through digital credentials, like badges, might help researchers demonstrate their skills and employers find the right candidate. This project has appeared in blogs and in Science, and we hope to present a rough prototype at FORCE11 to collect community feedback.
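
A hypothetical sketch of issuing a contributorship badge that pairs an Open Badges-style assertion with a CRediT role; the field names follow the general shape of an Open Badges assertion but are illustrative only, the role list is a small subset of the taxonomy, and the ORCID iD, DOI and URLs are placeholders:

    # Hypothetical sketch: pair an Open Badges-style assertion with a CRediT
    # role. Field names, URLs and identifiers are placeholders.
    import json
    import time

    CREDIT_ROLES = {"Conceptualization", "Data curation", "Formal analysis",
                    "Software", "Writing - original draft"}

    def issue_badge(orcid, role, article_doi):
        if role not in CREDIT_ROLES:
            raise ValueError("Unknown CRediT role: %s" % role)
        return {
            "recipient": {"type": "orcid", "identity": orcid},
            "badge": "https://example.org/badges/%s" % role.replace(" ", "-"),
            "evidence": "https://doi.org/%s" % article_doi,
            "issuedOn": int(time.time()),
        }

    if __name__ == "__main__":
        print(json.dumps(issue_badge("0000-0002-1825-0097",
                                     "Data curation",
                                     "10.1234/example.5678"), indent=2))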


"Who's Sharing with Who? Acknowledgements-driven identification of resources"

Session: Credit where Credit is Due

Presenter: David Eichmann, University of Iowa

Presenter CV: http://www.icts.uiowa.edu/Loki/research/browseResearch.jsp?browse=E&id=950712

This presentation explores creation of semantic data on resource sharing between non-coauthor investigators.

Abstract:

While ontologies characterizing biomedical research have developed substantially in recent years, the population of those ontologies has for the most part remained a manual task. This is particularly true for the relationships between investigators that do not bubble up to the level of coauthorship. This presentation describes my recent work in semantic analysis of the acknowledgement sections of biomedical research articles, specifically the sharing of resources (instruments, reagents, model organisms, etc.) between the articles' authors and other, non-author investigators. The resulting semantic graph complements the knowledge currently captured by research profiling systems, which primarily focus on investigators, publications and grants. My approach results in much finer-grained information, at the level of individual author contributions and of the specific resources shared by external parties. The long-term goal for this work is unification with the VIVO-ISF-based CTSAsearch federated search engine, which currently contains research profiles from 60 institutions worldwide.
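
The general idea can be illustrated with a deliberately simplified sketch (not the presenter's actual pipeline): scan an acknowledgements section for resource-sharing phrases and emit statements linking the authors to the named provider and resource. A real system would rely on proper NLP and the relevant ontologies rather than the toy patterns below:

    # Illustrative sketch only: naive pattern matching over an
    # acknowledgements section to extract provider/resource statements.
    import re

    PATTERN = re.compile(
        r"(?:thank|grateful to)\s+(?P<provider>[A-Z][\w.\s]+?)\s+for"
        r"(?: providing| sharing)?\s+(?:the\s+)?(?P<resource>[\w\s-]+?)[.,]",
        re.IGNORECASE)

    def extract_sharing(acknowledgements):
        triples = []
        for m in PATTERN.finditer(acknowledgements):
            triples.append((m.group("provider").strip(),
                            "shared_resource_with_authors",
                            m.group("resource").strip()))
        return triples

    if __name__ == "__main__":
        text = ("We thank Dr. J. Smith for providing the transgenic mouse "
                "line, and are grateful to the Flow Cytometry Core for "
                "sharing instrument time.")
        print(extract_sharing(text))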


Sponsors

Digital Science, CrossRef, Royal Society of Chemistry, Europe PMC, River Valley Technologies, Overleaf, Elsevier and Mendeley, Science Exchange, National Science Foundation (NSF), Gordon and Betty Moore Foundation, Alfred P. Sloan Foundation, Thomson Reuters, Pensoft Publishers Ltd, SSI, Oxford University Press, Wiley, Oxford e-Research Centre, PLOS, International Society for Biocuration