Abstract. Previous studies have shown that hydrological models can be parameterised using a limited number of streamflow measurements. Citizen science projects can collect such data for otherwise ungauged catchments, but an important question is whether these observations are informative given that crowdsourced streamflow estimates are uncertain. We assess the value of inaccurate streamflow estimates for the calibration of a simple bucket-type runoff model for six Swiss catchments. We assumed that only a few observations were available and that these were affected by different levels of inaccuracy. The levels of inaccuracy were based on a log-normal error distribution that was fitted to streamflow estimates of 136 citizens for medium-sized streams; two additional levels, for which the standard deviation of the error distribution was divided by two and four, were used as well. Based on these error distributions, random errors were added to the measured hourly streamflow data. New time series with different temporal resolutions were created from these synthetic streamflow time series. These included scenarios with one observation per week or per month, as well as scenarios that are more realistic for crowdsourced data, which generally have an irregular distribution of data points throughout the year or focus on a particular season. The model was then calibrated for the six catchments using the synthetic time series for a dry, an average and a wet year. The performance of the calibrated models was evaluated based on the measured hourly streamflow time series. The results indicate that streamflow estimates from untrained citizens are not informative for model calibration. However, if the errors can be reduced, the estimates become informative and useful for model calibration. As expected, model performance increased when the number of observations used for calibration increased. Model performance was also better when the observations were more evenly distributed throughout the year. This study indicates that uncertain streamflow estimates can be useful for model calibration, but that the estimates of citizen scientists need to be improved by training or more advanced data filtering before they become informative.
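To make the synthetic-observation procedure concrete, the sketch below applies multiplicative log-normal errors to a measured hourly streamflow series and keeps one value per week. The error standard deviations, the dummy streamflow series and the function names are placeholders chosen for illustration; they are not the values fitted to the 136 citizen estimates in the study.

```python
# Minimal sketch of the synthetic-observation procedure, assuming multiplicative
# log-normal errors; sigma values and the input series are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

def perturb_streamflow(q_obs: pd.Series, sigma: float) -> pd.Series:
    """Apply multiplicative log-normal errors to an hourly streamflow series."""
    errors = rng.lognormal(mean=0.0, sigma=sigma, size=len(q_obs))
    return q_obs * errors

def subsample(q_synth: pd.Series, freq: str = "W") -> pd.Series:
    """Keep one synthetic 'citizen observation' per period (e.g. weekly)."""
    return q_synth.groupby(pd.Grouper(freq=freq)).apply(
        lambda grp: grp.sample(1, random_state=1).iloc[0]
    )

# Dummy hourly streamflow for one year (stand-in for the measured data)
index = pd.date_range("2019-01-01", "2019-12-31 23:00", freq="h")
q_obs = pd.Series(1.0 + rng.random(len(index)), index=index)

# Full, halved and quartered error levels (placeholder sigmas)
for sigma in (1.0, 0.5, 0.25):
    q_weekly = subsample(perturb_streamflow(q_obs, sigma), freq="W")
```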
Abstract Crowd‐based hydrological observations can supplement existing monitoring networks and allow data collection in regions where otherwise no data would be available. In the citizen science project CrowdWater, repeated water level observations using a virtual staff gauge approach result in time series of water level classes (WL‐classes). To investigate the quality of these observations, we compared the WL‐class data with “real” (i.e., measured) water levels from the same stream at a nearby gauging station. We did this at nine locations where citizen scientists reported multiple observations using a smartphone app and at 12 locations where signposts were set up to ask citizens to record observations on a paper form that could be left in a letterbox. The results indicate that the quality of the data collected with the app was better than that of the data collected with the forms. A possible explanation is that at each app location a single person submitted the vast majority of the observations, whereas at the form locations almost every observation was made by a different person. On average, there were more contributions between May and September than during the other months. Observations were submitted for a range of flow conditions, with a higher fraction of high flow observations for the locations where data were collected with the app. Overall, the results are encouraging for citizen science approaches in hydrology and demonstrate that the smartphone application and the virtual staff gauge are a promising approach for crowd‐based water level class observations.
Abstract While hydrological models generally rely on continuous streamflow data for calibration, previous studies have shown that a few measurements can be sufficient to constrain model parameters. Other studies have shown that continuous water level or water level class (WL‐class) data can be informative for model calibration. In this study, we combined these approaches and explored the potential value of a limited number of WL‐class observations for the calibration of a bucket‐type runoff model (HBV) for four catchments in Switzerland. We generated synthetic data to represent citizen science data and examined the effects of the temporal resolution of the observations, the number of WL‐classes, and the magnitude of the errors in the WL‐class observations on the model validation performance. Our results indicate that on average one observation per week for a 1‐year period can significantly improve model performance compared to the situation without any streamflow data. Furthermore, the validation performance for model parameters calibrated with WL‐class observations was similar to the performance of the calibration with precise water level measurements. The number of WL‐classes did not noticeably influence the validation performance when at least four WL‐classes were used. The impact of typical errors in citizen science‐based estimates of WL‐classes on the model performance was small. These results are encouraging for citizen science projects in which citizens observe water levels for otherwise ungauged streams using virtual or physical staff gauges.
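As an illustration of how a handful of WL‐class observations can enter a calibration, the sketch below discretizes water levels into classes and scores a simulated series against sparse class observations. The class boundaries, the mean-absolute-class-difference score and all numbers are assumptions for this example, not the objective function or data of the study.

```python
# Illustrative scoring of simulated water levels against sparse WL-class
# observations; boundaries, error model and the score itself are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def to_wl_class(levels: np.ndarray, boundaries: np.ndarray) -> np.ndarray:
    """Map continuous water levels to integer WL-classes (0 .. len(boundaries))."""
    return np.digitize(levels, boundaries)

def class_objective(sim_levels, obs_classes, obs_idx, boundaries):
    """Mean absolute class difference, evaluated only at observation times."""
    sim_classes = to_wl_class(sim_levels[obs_idx], boundaries)
    return np.mean(np.abs(sim_classes - obs_classes))

boundaries = np.array([0.2, 0.5, 0.9])      # four WL-classes (arbitrary units)
sim_levels = rng.random(8760)               # stand-in for simulated hourly levels
obs_idx = np.arange(0, 8760, 168)           # roughly one observation per week
obs_classes = to_wl_class(
    sim_levels[obs_idx] + rng.normal(0, 0.05, obs_idx.size),  # small observation error
    boundaries,
)
print(class_objective(sim_levels, obs_classes, obs_idx, boundaries))
```

During calibration, a parameter set with a lower score would be preferred; a score of zero means every simulated class matches the corresponding observation.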
Streamflow data are important for river management and the calibration of hydrological models. However, such data are only available for gauged catchments. Citizen science offers an alternative data source and can be used to estimate streamflow at ungauged sites. We evaluated the accuracy of crowdsourced streamflow estimates for 10 streams in Switzerland by asking citizens to estimate streamflow either directly or based on the estimated width, depth and velocity of the stream. Additionally, we asked them to estimate the stream level class by comparing the current stream level with a picture that included a virtual staff gauge. To compare the different estimates, the stream level class estimates were converted into streamflow. The results indicate that stream level classes were estimated more accurately than streamflow and represented high and low flow conditions more accurately. Based on this result, we suggest that citizen science projects focus on stream level class estimates instead of streamflow estimates.
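For illustration, the two indirect estimates can be written out as follows: streamflow from estimated width, mean depth and velocity ($Q = w \cdot d \cdot v$, assuming a roughly rectangular cross-section), and a conversion of a stream level class to streamflow via a power-law rating curve. The rating-curve coefficients, the class width and the example numbers are hypothetical placeholders, not values from the study.

```python
# Hypothetical example of the two estimation routes; all coefficients and
# numbers are placeholders for illustration only.
def streamflow_from_geometry(width_m: float, depth_m: float, velocity_ms: float) -> float:
    """Q = width * mean depth * mean velocity (rectangular approximation), in m^3/s."""
    return width_m * depth_m * velocity_ms

def streamflow_from_level_class(level_class: int, a: float = 0.4, b: float = 1.6) -> float:
    """Convert a stream level class to streamflow with a power-law rating curve,
    using the class mid-point as a stand-in for stage (placeholder a, b and
    an assumed class width of 0.1 m)."""
    stage_m = 0.1 * (level_class + 0.5)
    return a * stage_m ** b

print(streamflow_from_geometry(4.0, 0.3, 0.5))   # -> 0.6 m^3/s
print(streamflow_from_level_class(3))            # -> ~0.07 m^3/s with these placeholders
```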
Citizen scientists keep a watchful eye on the world's streams, catching intermittent streams in action and filling data gaps to construct a more complete hydrologic picture.
Hydrological observations are crucial for decision making for a wide range of water resource challenges. Citizen science is a potentially useful approach to complement existing observation networks and obtain these data. Previous projects, such as CrowdHydrology, have demonstrated that it is possible to engage the public in contributing hydrological observations. However, hydrological citizen science projects related to streamflow have, so far, been based on the use of different kinds of instruments or installations; in the case of stream level observations, this is usually a staff gauge. While it may be relatively easy to install a staff gauge at a few river sites, the need for a physical installation makes it difficult to scale this type of citizen science approach to a larger number of sites because these gauges cannot be installed everywhere or by everyone. Here, we present an alternative approach: a smartphone app that allows the collection of stream level information at any place without any physical installation. The approach is similar to geocaching, with the difference that instead of hiding and finding treasures, hydrological measurement sites can be created by anyone at any location, and these sites can later be revisited by the initiator or other citizen scientists to add further observations. The app is based on a virtual staff gauge approach: a picture of a staff gauge is digitally inserted into a photo of a stream bank or a bridge pillar, and the stream level during a subsequent field visit to that site is compared to the staff gauge in the first picture. The first experiences with the use of the app by citizen scientists were largely encouraging but also highlighted a few challenges and possible improvements.
Macroplastic pollution ($>$ 0.5 cm) negatively impacts aquatic life and threatens human livelihoods on land, in oceans and in river systems. Reliable information on the origin, fate and pathways of plastic in river systems is required to optimize prevention, mitigation and reduction strategies. Yet, accurate and long-term data on plastic transport are still lacking. Current macroplastic monitoring strategies involve labor-intensive sampling methods, require investment in infrastructure, and are therefore infrequent. Crowd-based observations of riverine macroplastic pollution may potentially provide frequent, cost-effective data collection over a large geographical range. We extended the CrowdWater citizen science app for hydrological observations with a module for observations of plastic in rivers. In this paper, we demonstrate the potential of crowd-based observations of floating macroplastic and macroplastic on riverbanks. We analyzed data from two case studies: (1) floating plastic measured in the Klang (Malaysia), and (2) plastic on riverbanks along the Rhine (the Netherlands). Crowd-based observations of floating plastic in the Klang yielded similar estimates of plastic transport (2,000-3,000 items hour$^{-1}$), cross-sectional distribution (3-7 percentage point difference) and polymer categories (0-6 percentage point difference) as the reference observations. They also highlighted the high temporal variation in riverine plastic transport. The riverbank observations provided the first data on macroplastic pollution for the most downstream stretch of the Rhine, revealing peaks close to urban areas and an increasing plastic density towards the river mouth. The mean riverbank density estimates were also similar for the crowd-based and reference methods (573-1,033 items km$^{-1}$). These results highlight the value of including crowd-based riverine macroplastic observations in future monitoring strategies. Crowd-based observations may provide reliable estimates of plastic transport, density, spatiotemporal variation and composition for a larger number of locations than conventional methods.
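As a purely hypothetical illustration of how such counts translate into the reported metrics, the sketch below converts a timed count over part of the river width into a transport rate (items hour$^{-1}$) and a riverbank count into a density (items km$^{-1}$). The segment-based scaling and all numbers are assumptions and do not reproduce the study's protocol or results.

```python
# Hypothetical conversions of crowd-based counts; all values are made up.
def floating_transport(items_counted: int, minutes_observed: float,
                       segments_covered: int, total_segments: int) -> float:
    """Scale a timed count over part of the river width to the full
    cross-section, in items per hour."""
    items_per_hour_segment = items_counted * 60.0 / minutes_observed
    return items_per_hour_segment * total_segments / segments_covered

def riverbank_density(items_counted: int, stretch_length_m: float) -> float:
    """Items per kilometre of surveyed riverbank."""
    return items_counted / (stretch_length_m / 1000.0)

print(floating_transport(items_counted=10, minutes_observed=10,
                         segments_covered=2, total_segments=8))    # items hour^-1
print(riverbank_density(items_counted=45, stretch_length_m=250))   # items km^-1
```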
Data quality control is important for any data collection program, especially in citizen science projects, where errors due to the human factor are more likely. Ideally, data quality control in citizen science projects is also crowdsourced so that it can handle large amounts of data. Here we present the CrowdWater game as a gamified method to check crowdsourced water level class data that are submitted by citizen scientists through the CrowdWater app. The app uses a virtual staff gauge approach, which means that a digital scale is added to the first picture taken at a site and this scale is used for water level class observations at later times. In the game, participants classify water levels based on a comparison of a new picture with the picture containing the virtual staff gauge. By March 2019, 153 people had played the CrowdWater game and 841 pictures had been classified. The average water level class of the game votes for each classified picture was compared to the water level class submitted through the app to determine whether the game can improve the quality of the data submitted through the app. For about 70% of the classified pictures, the water level class was the same for the CrowdWater app and the game. For a quarter of the classified pictures, there was disagreement between the value submitted through the app and the average game vote. Expert judgement suggests that for three quarters of these cases, the game-based average value was correct. The initial results indicate that the CrowdWater game helps to identify erroneous water level class observations from the CrowdWater app and provides a useful approach for crowdsourced data quality control. This study thus demonstrates the potential of gamified approaches for data quality control in citizen science projects.
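The comparison between app submissions and game votes can be sketched as follows; treating the rounded mean vote as the game consensus and flagging mismatches for expert review are assumptions made for this illustration.

```python
# Minimal sketch: compare the app-submitted WL-class with the mean game vote.
from statistics import mean

def compare_app_and_game(app_class: int, game_votes: list[int]) -> str:
    """Return 'agreement' if the rounded mean game vote equals the app
    submission, otherwise 'disagreement' (candidate for expert review)."""
    return "agreement" if round(mean(game_votes)) == app_class else "disagreement"

print(compare_app_and_game(app_class=2, game_votes=[2, 2, 3, 2]))  # agreement
print(compare_app_and_game(app_class=4, game_votes=[2, 3, 2, 2]))  # disagreement
```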