Introduction
As part of its core collaboration areas, the ESIP Data Stewardship Committee (the DS Committee) focuses on developing and fostering best practices that help ensure the continued and reliable information content, quality, accessibility, and usability of Earth Science data for as long as the data are deemed to be of value. To that end, the “Scientific Data Stewardship Maturity Matrix” (Data Maturity Matrix) and the “Provenance and Context Content Standard” (PCCS) are two new standards currently being evaluated by the DS Committee to assess their applicability and to determine the next development steps. Ultimately, the DS Committee would like to promote the use and adoption of these standards across a wide community, so that they can help facilitate long-term preservation, stewardship, and access to Earth Science data in various formats and from diverse sources.
As the current student fellow for the DS Committee, I am very interested in understanding the details and usability of these standards, as well as in contributing to the development of the DS Committee’s efforts. Consequently, I volunteered to evaluate these standards, and I summarize the standards and my evaluation experience below.
Scientific Data Stewardship Maturity Matrix
The “Scientific Data Stewardship Maturity Matrix” (Data Maturity Matrix) is a framework for measuring the stewardship practices applied to individual digital datasets of Earth Science data products. The Data Maturity Matrix follows an approach similar to the product maturity assessment model previously described by Bates and Privette (2012), and it was developed jointly by the National Oceanic and Atmospheric Administration (NOAA)'s National Climatic Data Center (NCDC), now the National Centers for Environmental Information (NCEI), and NOAA’s Cooperative Institute for Climate and Satellites, North Carolina (CICS-NC). The current revision of the Data Maturity Matrix evaluates a dataset against nine key categories: preservability, accessibility, usability, production sustainability, data quality assurance, data quality control/monitoring, data quality assessment, transparency/traceability, and data integrity. Under each category, the dataset receives a rating that reflects its maturity level. Each of the five maturity levels has an assigned value and name: Ad Hoc (1), Minimal (2), Intermediate (3), Advanced (4), and Optimal (5). Additionally, each maturity level has specific, defined criteria for each of the nine evaluation categories. As a result, a dataset could achieve the optimal stage of stewardship maturity in all nine categories, thereby earning the highest possible total score of 45. In contrast, a dataset could have only an ad hoc stewardship maturity level in every category, resulting in the minimum possible score of 9 (assuming no category is marked N/A, or not applicable).
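To make the scoring scheme concrete, the following is a minimal Python sketch of how a set of ratings might be tallied. The category and level names are taken from the matrix as described above; the data structures and the total_score helper are illustrative assumptions, not part of the published standard.

```python
# A minimal sketch of the Data Maturity Matrix scoring scheme, assuming a simple
# dictionary of per-category ratings. Category and level names come from the
# matrix itself; the data structures and total_score() are illustrative only.

CATEGORIES = [
    "preservability", "accessibility", "usability", "production sustainability",
    "data quality assurance", "data quality control/monitoring",
    "data quality assessment", "transparency/traceability", "data integrity",
]

LEVELS = {"Ad Hoc": 1, "Minimal": 2, "Intermediate": 3, "Advanced": 4, "Optimal": 5}


def total_score(ratings: dict) -> int:
    """Sum the assigned level values, skipping categories marked N/A."""
    return sum(LEVELS[level] for level in ratings.values() if level != "N/A")


# Optimal in every category gives the maximum score of 45;
# Ad Hoc in every category gives the minimum score of 9.
best = {category: "Optimal" for category in CATEGORIES}
worst = {category: "Ad Hoc" for category in CATEGORIES}
print(total_score(best), total_score(worst))  # prints: 45 9
```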
To facilitate the evaluation process and the recording of evaluation results, a template for the Data Maturity Matrix has also been developed. The template provides the evaluation categories and the maturity level definitions, as well as space for noting questions or justifications behind the maturity rating decisions. The template also provides a metadata section for documenting basic provenance information, such as the evaluator’s name and affiliation, the dataset name and identifier, and the evaluation revision. Details of the background and the category definitions for the Data Maturity Matrix can be found in the paper by Peng, Privette, Kearns, Ritchey, and Ansari (2015). A direct link to this paper, along with the Data Maturity Matrix template and answers to frequently asked questions, can also be found on the following wiki page created and maintained by the DS Committee: http://wiki.esipfed.org/index.php/Data_Maturity_Matrix
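As a rough illustration of what an evaluation record based on the template might contain, the sketch below pairs the template’s metadata fields with per-category ratings and justifications. All class and field names here are hypothetical; the template itself defines sections of a document, not a data model.

```python
# A hypothetical record layout for one template-based evaluation: the metadata
# section described above plus per-category ratings and justifications. All class
# and field names are assumptions; the template defines sections, not a schema.
from dataclasses import dataclass, field


@dataclass
class CategoryRating:
    level: str           # e.g. "Intermediate", or "N/A" if not applicable
    justification: str   # notes or questions behind the rating decision


@dataclass
class MaturityEvaluation:
    evaluator_name: str
    evaluator_affiliation: str
    dataset_name: str
    dataset_identifier: str
    evaluation_revision: str
    ratings: dict = field(default_factory=dict)  # category name -> CategoryRating


# Hypothetical example of recording a single category rating.
evaluation = MaturityEvaluation(
    evaluator_name="A. Evaluator",
    evaluator_affiliation="Example Institution",
    dataset_name="Example reanalysis dataset",
    dataset_identifier="example-dataset-id",
    evaluation_revision="1.0",
)
evaluation.ratings["accessibility"] = CategoryRating(
    level="Intermediate",
    justification="Data are available online, but access requires registration.",
)
```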
To evaluate and apply the Data Maturity Matrix, I used two different types of datasets: a climate reanalysis dataset from the National Center for Atmospheric Research (NCAR) and a water sample dataset from the Long Term Ecological Research (LTER) Santa Barbara Coastal (SBC) site. I decided to use these two datasets because they represented contrasting dataset characteristics: while the NCAR dataset consisted of a large collection of netCDF files (more than 180,000 files) covering a long temporal range (21 years), the LTER SBC dataset was a single CSV file capturing data from one specific sampling. In addition, although the Data Maturity Matrix is intended to assess stewardship maturity at the dataset level, because NCAR and LTER SBC have different data management practices shaped by their organizational structures, the evaluation could also reveal differences in the organizational stewardship practices applied to their respective datasets. Finally, since these two datasets fall outside the original scope of data types for which the Data Maturity Matrix was created, evaluating them could also help provide feedback on the current revision of the Data Maturity Matrix and help expand its scope.
For the evaluation process, I used the descriptions in the template as my main reference for the evaluation criteria and the template’s default space for recording the evaluation results. However, whenever necessary, I also referred back to the paper for further clarification on the definitions of the evaluation categories and their associated maturity level criteria.
From the evaluation process, my main observation was that while the definitions for each category and their associated maturity level criteria seemed straightforward, the evaluation decisions could be quite difficult to make when the definitions were applied to datasets outside the originally intended data types. In other words, since the definitions in the current revision of the template were constructed with NOAA as the initial use case, some referenced items, such as the Algorithm Theoretical Basis Document (ATBD) and the Operational Algorithm Description (OAD), might not be readily applicable to dataset types from other research disciplines or organizations. In addition, since some of the evaluation categories and terms used, such as “community,” “accessibility,” and “usability,” could be viewed differently depending on the evaluator’s background, it might also be necessary to refine the definitions further in order to capture more stewardship granularity and specificity during the evaluation. These two observations reflect the difficulty in developing a set of best practices that must be applicable to a broad community: it is a challenging process of balancing the need to be relevant to actual situations against the need to remain a general framework. Finally, although the Data Maturity Matrix evaluation should be a collaborative process involving scientists, data managers, and additional experts who might have detailed knowledge of the dataset, some evaluation categories and terms could still be unfamiliar to the evaluators. An example would be “quality metadata,” which refers to the metadata documenting the quality of the data. In such cases, it would be difficult for evaluators to provide effective and accurate maturity ratings without a solid understanding of the evaluation category or good examples of it. As a result, feedback was provided to the Data Maturity Matrix creators to add high-level descriptions and examples for each key category in the template to improve its effectiveness and usability.
Provenance and Context Content Standard
The “Provenance and Context Content Standard” (PCCS) is a guideline for reviewing and determining which data and related items from missions, projects, or investigations should be preserved. By assisting in the identification of specific content items and the recording of the rationales for preserving them, the PCCS focuses on elucidating “what” should be preserved rather than “how” it should be preserved.
The PCCS is being developed by the DS Committee with input from NOAA and the National Aeronautics and Space Administration (NASA), and, similar to the Data Maturity Matrix, the current version of the PCCS is presented in table form. The table contains eight major categories of content items that should be reviewed: Preflight/Pre-Operations, Products (Data), Product Documentation, Mission Calibration, Product Software, Algorithm Input, Validation, and Software Tools. Within each of these categories, additional content types are defined. Moreover, the table provides information on the criteria describing how good the content should be, the preservation priority of each content item, the item’s source within the data lifecycle, and the project phase in which the item should be captured. Additional background and further details regarding the eight categories can be found in the conference paper by Ramapriyan, Moses, and Duerr (2012). As the evaluation is performed on the content items from a mission, project, or investigation, the table can be used both as a reference and as the document for recording the preservation decision for each content item type, just like the Data Maturity Matrix. The PCCS table, along with presentations and documents providing further details, can be found on the following wiki page created and maintained by the DS Committee: http://wiki.esipfed.org/index.php/Provenance_and_Context_Content_Standard
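To illustrate the table’s structure, below is a hypothetical Python schema for a single PCCS content item. The eight category names come from the standard as listed above; the PCCSItem fields and the example entry are assumptions made for illustration, not part of the PCCS itself.

```python
# A hypothetical schema for one row of the PCCS table. The eight category names
# come from the standard; the PCCSItem fields and the example entry below are
# illustrative assumptions, not part of the PCCS itself.
from dataclasses import dataclass

PCCS_CATEGORIES = [
    "Preflight/Pre-Operations", "Products (Data)", "Product Documentation",
    "Mission Calibration", "Product Software", "Algorithm Input",
    "Validation", "Software Tools",
]


@dataclass
class PCCSItem:
    category: str       # one of the eight PCCS categories
    content_type: str   # content type defined within the category
    criteria: str       # how good the content should be
    priority: str       # preservation priority
    source: str         # where the item originates in the data lifecycle
    capture_phase: str  # project phase in which the item is captured
    preserve: bool      # recorded preservation decision
    rationale: str      # justification for the decision


# Hypothetical example of recording a preservation decision for one item.
item = PCCSItem(
    category="Product Documentation",
    content_type="User guide",
    criteria="Sufficient for an independent user to interpret the data",
    priority="High",
    source="Data producer",
    capture_phase="Operations",
    preserve=True,
    rationale="Needed for long-term usability of the product files",
)
```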
Since the PCCS involves many more categories and details to evaluate than the Data Maturity Matrix, I used only the NCAR dataset for the PCCS evaluation so that I could provide better detail for each content category. During the evaluation, since the PCCS table provides the category definitions and documentation space, I could use the table as my main reference as well as the place to record my evaluation results, just as with the Data Maturity Matrix. However, because the PCCS table provides much more detail on the definitions of each content category and the related content types, I found that I did not have to refer as often to the other PCCS resources, such as the introductory notes and previous presentations, for clarification.
From the evaluation process, I noticed that, similar to the Data Maturity Matrix, because the PCCS was created with specific user communities as the initial baseline, the syntax and terminology used also inherit the context of those communities. As the PCCS continues to develop, it will therefore be important to determine how this community-specific language can be revised so that the PCCS is described in a way that is applicable to a broader community.
Looking Forward
Despite the user-specific scope that I experienced with the current versions of the Data Maturity Matrix and the PCCS, I found both standards to be very helpful in organizing and revealing the areas that might need further attention when providing data stewardship and preserving data provenance. As many have noted, digital scientific data have been increasing rapidly in format, source, and volume. In order to optimize and sustain the value of scientific data, it is important to support and participate in the development of best practices such as the Data Maturity Matrix and the PCCS. As I continue to participate in the DS Committee’s activities, I look forward to engaging in and contributing to the discussion and enhancement of these standards.
For additional information about the DS Committee and its activities, including the Data Maturity Matrix and the PCCS, please visit the DS Committee’s wiki page: http://wiki.esipfed.org/index.php/Preservation_and_Stewardship
References
Bates, J. J., & Privette, J. L. (2012). A maturity model for assessing the completeness of climate data records. EOS, 93(44), 441. doi:10.1029/2012EO440006
Peng, G., Privette, J. L., Kearns, E. J., Ritchey, N. A., & Ansari, S. (2015). A unified framework for measuring stewardship practices applied to digital environmental datasets. Data Science Journal, 13, 231-253. doi:10.2481/dsj.14-049
Ramapriyan, H. K., Moses, J., & Duerr, R. (2012). Preservation of data for Earth System Science – Towards a content standard. In Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International (pp. 5304-5307). doi:10.1109/IGARSS.2012.6352411