Objectives

We are developing a set of software quality guidelines to support the ESIP Technology Evaluation Framework. Our starting point is the criteria developed by the Software Sustainability Institute, which give us some 200 criteria across 20 categories. We’d like to update these criteria (code moves fast!) so they can support ESIP software evaluation efforts as a mentoring tool for science software developers as well as PIs and project managers. Additionally, we want to consider how these criteria fit into readiness and reuse levels for a variety of stakeholders.

We would like input from domain experts and stakeholders within ESIP, including:

  • PIs and project managers interested in how their projects may be assessed;
  • Project managers interested in improving the development practices of their group;
  • Developers interested in improving the development practices of their group;
  • CS researchers interested in developing assessment metrics.

We encourage participation from current developers who don’t mind a bit of meta-thinking; we’re looking for a balanced overview of research/science software.

How to Participate

There are two ways to participate in the sprint:

  • Provide feedback on one or more software-evaluation categories.
  • Provide high-level feedback on the complete draft guidelines.

We’re asking that you pick one or more categories to review, depending on your time and interest, at some point during April. We’ll do another round in May to cover any categories left unreviewed or unresolved.

Please pass this along to anyone who may be interested in software evaluation. This is the kind of effort that benefits from the diversity of the ESIP community.

If you’d like to help draft the guidelines document in June or help organize a remote code sprint activity (we’re interested in learning from the process, too!), let me know.

Registration: Please sign in to this Google Form by Monday, April 11.
Questions? Please contact: sorenscott@gmail.com