A Framework for Understanding Online Training for Open Science/Research


Why would REPO need “a framework”?

The REPO project is concerned with understanding how open science and research training can be best provided in an online environment. 

During the COVID-19 pandemic an unprecedented number of organizations have scrambled to provide online programming. This move to virtual meetings has sparked a renewed and pressing interest in digital pedagogy and an outpouring of experimentation in online learning.

The REPO project aims to take stock of this experimental moment and to consider how available resources and best practices in virtual facilitation can be leveraged to support the open science community through this transition. The project also aims to do this efficiently: ideally we will capture as much of the experience of these shifts as we can, before it is lost in the continuing scramble.

Given the scale of these changes, the goal is not just to mitigate problems, but to seize opportunities for better, more effective, and more inclusive training that can reach more people. What works, why, and for whom are therefore central questions. We believe a framework or theory can help us answer those questions more rapidly and gather information on a larger scale.

What we mean by “a framework” is a way of organizing, collating, and integrating information. Case study approaches have a lot of value in exploring a space, but they are limited in scale (each one is time-consuming to do) and in how readily their lessons can be applied in new contexts. A framework of some sort is necessary to compare situations and cases with each other, to organise insights from different contexts, and to gather information effectively from many settings. It can also provide a basis for comparing and evaluating case studies and for integrating survey data from different disciplines.

Our goal is to apply an existing framework to the issues that REPO is tackling, and to use this to help us design surveys and evaluation tools for instructors and participants to improve the future design of training.

Introducing the Community Participation Model from the CSCCE

We examined a range of theoretical and applied frameworks and identified the Community Participation Model (CPM) developed by the Center for Scientific Collaboration and Community Engagement (CSCCE) as a starting point. The CPM is appealing because it addresses the question of how members within a given community can act. In particular, it focuses on “modes” of interaction: which ones are possible for a specific community, and therefore what “scaffolding” might need to be provided to develop a capacity for different modes. These modes of interaction mirror the goals of activities we often pursue in training, and the provision of “scaffolding” sounds a lot like good pedagogy. The model also has strong links to the concept of “situated learning”, an important theoretical background for thinking about training and communities, with clear relevance to open science groups.

 

The CSCCE Community Participation Model © Copyright 2020, Center for Scientific Collaboration and Community Engagement; used with permission. From: Center for Scientific Collaboration and Community Engagement (2020). The CSCCE Community Participation Model – A framework to describe member engagement and information flow in STEM communities. Woodley and Pratt, doi:10.5281/zenodo.3997802, licensed under a Creative Commons CC BY-NC-ND 4.0 license.

There are two ways we may need to adapt or enrich the CPM. First, the CPM starts from the assumption that the community of interest is already defined. We need to ask which communities are relevant, and how “community” is being defined. The participants in a specific training process are a group, but not yet a community themselves. We believe that one of the distinctive characteristics of “open science training” is that it is about forming or enhancing communities. But to apply the framework we need to know which community. For example, while the Carpentries programs train individual people, a central goal of the programs is to create a community that shares basic principles in scientific coding or data management. Not all participants in training programs will go on to become a part of the relevant community, but its growth and development is a central goal of Carpentries activities.

This brings us to a second difference from the initial framing of the CPM. The CSCCE is quite specific that the model is not “normative”: it does not take the view that a community further to the left or right of the diagram is in any sense “better”. Instead, the emphasis is on matching the modes supported in a given community to the community’s goals and its members’ capacity to engage in those modes, which is achieved by providing “multi-modal programming” or activities. In the context of REPO, we think the intent of training is to teach individuals skills which in turn enhance, grow, or develop communities of practice that are intentionally moving towards the right of the model. In fact, this might be what defines open science training: that it is focussed on building a community of contributors rather than exclusively on individual skills.

What we teach within the Carpentries, and at schools and training events run by the Research Data Alliance, the FORCE11 Scholarly Communications Institute, DARIAH, and the Australian Research Data Commons along with other organisations, are the various capacities required to contribute to communities and to the shared assets that define them. If those communities are more capable of co-creation and co-design because of training, then we have been successful.

Applying the Community Participation Model in REPO

How will we use the CPM? That is a work in progress, but part of our goal is to use the model to shape a set of relevant questions, starting with “what is the relevant community of practice for this training?”

Defining the community that will benefit is a key first step in determining what success looks like. Is success a bigger community of people able to contribute? Is it providing the additional programming and scaffolding that helps a known group of people move from contributing to co-creating? How are those activities modelled in the training itself? How would we tell whether the training is successful?

We can use the framework to look at technology options and to assess how well they support learners in achieving a particular mode of interaction. How can a video-based lecture be improved to provide options for contribution? Which tools work well for supporting co-creation, for a group that knows each other or for one that does not? Do the tools the training is about help with this scaffolding? What can we learn from training to improve the tools and systems themselves (co-design)?

Using the model helps us draw on a large pool of knowledge about social learning. We can also use it to help instructors understand what they are ready to do. What level of mastery of a tool is needed before using it in training? How do you know when to abandon one tool in favour of another?

Next steps and goals

Right now we have lots of questions: questions we want to ask of instructors, participants, and tools, and questions for the framework itself. Our next goal is to see how those questions fit into the framework and how we might need to expand its dimensions or scope to apply it specifically to training. The experiences and inputs from WG1 and WG2 in the project will be key here.

The concrete deliverable specified in the project proposal was a survey instrument. We think this is possible, and alongside it we hope to produce some tools for evaluation. These might include a course evaluation; tool evaluation, instructor self-evaluation, and planning guides may also be possible. We will scope that out within the current project.

Overall we expect to end with more questions than answers. That’s not a bad thing. Good frameworks help us ask better questions.

 


About Fiona Murphy

Fiona is one of the Co-founders of MoreBrains Cooperative Consulting, a scholarly communications and research consultancy that aims to model the values of openness and transparency while supporting other organizations to build capacity in Open Research. 

After completing a DPhil in English Literature, Fiona held a range of scholarly...
