Overall system architecture
The Secchi3000 water quality measurement system consists of the following parts: 1) a mobile phone application for acquiring and sending observations, 2) the Secchi3000 measurement device, 3) algorithms for analysing water quality values from the received images, and 4) a database where all the data, together with the analysed values, are stored. The conceptual architecture of the Secchi3000 system is presented in Figure 1.
The mobile phone application used to gather observations is called EnviObserver (Kotovirta et al. 2012). It is a participatory sensing tool that utilizes people as sensors by enabling them to report environmental observations with a mobile phone. The current version of the Secchi3000 measurement device consists of a container and two measurement tags that are used in the analysis. The tags are located at different depths in order to derive water turbidity. After filling the container with water, the user takes a picture looking inside the container through a hole in the lid. The GPS location and timestamp are retrieved automatically during the measurement. In addition to the picture, users can enter supplementary information, e.g. the ID of the measurement site.
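As an illustration of the data collected per measurement, a single observation record could be represented as follows; this is a minimal sketch, and the field names are ours rather than those of the actual EnviObserver implementation.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class SecchiObservation:
        """One Secchi3000 observation sent from the phone (illustrative fields)."""
        image_path: str                 # picture taken through the hole in the lid
        latitude: float                 # GPS position, retrieved automatically
        longitude: float
        timestamp: datetime             # retrieved automatically during the measurement
        site_id: Optional[str] = None   # optional user-supplied measurement site ID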
Finally, the user sends the collected data to a central server for automatic water quality analysis. The water quality analysis consists of two separate algorithms. After receiving a picture, the first algorithm automatically detects the locations of the tags in the picture and extracts pixel RGB values for the black, grey, and white areas of the two tags. The second algorithm carries out the actual water quality analysis based on the RGB values extracted by the tag recognition algorithm. After the automatic analysis, the result is sent back to the user and stored in the database.
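The interface between the two algorithms can be pictured as a small record of extracted colour values, for example as below; the layout and the numbers are purely illustrative and not taken from the actual system.

    # Illustrative structure of the tag recognition output that is passed on to
    # the water quality algorithm: mean RGB values for the black, grey and white
    # areas of the upper and lower tag (numbers are placeholders).
    extracted_rgb = {
        "upper": {"black": (12, 14, 15), "grey": (98, 101, 99), "white": (210, 214, 209)},
        "lower": {"black": (25, 30, 28), "grey": (90, 96, 92), "white": (162, 170, 158)},
    }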
Data collection
The system was tested in Finland during the summer of 2012. The test users were recruited by the Finnish Environment Institute and included people who collect water quality samples professionally, as well as companies, water protection associations, and private citizens. In the tests, 100 users sent 1,146 pictures for analysis. The samples were collected from lakes and coastal areas of Finland. The professional water quality experts took the Secchi3000 measurements during their routine measurement and sample collection trips. The other participants took measurements during their leisure activities, mostly out of personal interest.
The results of the analysis are described in the next chapter. As the automatic analysis was not completely ready during the test period, the received images and their metadata were only stored in the database, and the analyses were carried out afterwards. The Secchi3000 devices were designed by the Finnish Environment Institute and VTT Technical Research Centre of Finland, and manufactured by a Finnish company. The system is currently being prepared for larger-scale activities, including commercial manufacture of the devices.
Target detection
Detecting the accurate positions of the white, grey, and black targets in the images is challenging because of many factors unrelated to water quality:
- Varying illumination conditions create over-exposed areas, reflections, and shadows in the inner structure of the measurement container. In the images, this translates into artefacts and undesired shapes that can occlude, extend, or overlap with the target rectangles. A purely shape-based approach will not always be robust to such artefacts.
- Standard cameras in average mobile phones deliver mid- to low-end image quality, with noise, blur, and varying (often rather poor) resolution. This affects the sharpness of the target rectangles in the image and therefore their detectability.
- Variability between acquisitions made by different human operators using various devices introduces undesired discrepancies; in particular, the acquisition angle of the camera with respect to the scene varies all the time. Because the acquisition system is used in situ and not in a controlled environment, one cannot assume a priori either the position of the targets in the image or their orientation. The target rectangles are not always oriented vertically or horizontally; they appear arbitrarily rotated in each image. Unlike in typical industrial computer vision applications, the relative position and orientation of the camera and the scene cannot be known precisely here.
On top of all these challenges, the lower the Secchi depth of the water, the more difficult the target detection in the image becomes. Because the water properties affect the reflectance and thus the pixel values, especially in humic or very turbid waters, a target detection method based solely on colour or reflectance values in the image would most likely fail. Control points could be used to delineate the target areas; however, in turbid waters it is more reliable to detect larger homogeneous targets than control points.
For method development, a panel dataset of 20 images was manually chosen from a dataset of hundreds of water quality images acquired by users. The panel represents a wide variety of illumination conditions, water turbidity, and target detectability from the most favourable cases to the most difficult ones. Once fine-tuned on the panel, the method was applied to the whole image dataset for accuracy assessment.
The first step of the detection process is to locate the areas surrounding the lower and upper tags by template matching. All subsequent processing is done within these two local tag areas (cf. Figure 2; the red rectangles are the areas detected using template matching).
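As an illustration of this first step, the sketch below shows how such a tag search could be carried out with OpenCV's normalised cross-correlation template matching; the use of OpenCV, the file names, and the pre-cropped tag template are assumptions made here for illustration, not a description of the actual implementation.

    import cv2

    def locate_tag_area(image_gray, template_gray):
        """Locate the region around one tag by normalised cross-correlation
        template matching; returns the bounding box and the match score."""
        result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        h, w = template_gray.shape[:2]
        x, y = max_loc
        # Bounding box of the matched area (the "red rectangle" in Figure 2)
        return (x, y, w, h), max_val

    image = cv2.imread("observation.jpg", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("upper_tag_template.png", cv2.IMREAD_GRAYSCALE)
    upper_box, score = locate_tag_area(image, template)

In practice the best-scoring location would only be accepted above some match-score threshold, and the same search would be repeated with a lower-tag template.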
The second step is to detect the white rectangle using a contour-based approach within the red rectangle area (the blue rectangle in Figure 2); the upper and lower tags are processed in the same way. The grey and black rectangle positions are then obtained through the similarity transformation: by translating the rotated blue rectangle leftward and rightward, we obtain the grey and black rectangles.
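One possible realisation of this step with OpenCV is sketched below (Otsu thresholding, contour extraction, and a rotated-rectangle fit); the thresholding choice and the sideways offset used to reach the grey and black rectangles are assumptions about the tag layout, not the published algorithm.

    import cv2
    import numpy as np

    def detect_white_rectangle(tag_area_bgr):
        """Find the white rectangle inside a tag area with a contour-based
        approach; returns a rotated rectangle ((cx, cy), (w, h), angle)."""
        gray = cv2.cvtColor(tag_area_bgr, cv2.COLOR_BGR2GRAY)
        # Threshold bright pixels; Otsu keeps the cut-off data-driven
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # OpenCV 4.x return convention: (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)
        return cv2.minAreaRect(largest)

    def shifted_rectangle(rect, offset_in_widths):
        """Translate a rotated rectangle sideways along its own axis; shifting
        the white rectangle left and right gives the grey and black rectangles."""
        (cx, cy), (w, h), angle = rect
        dx = offset_in_widths * w * np.cos(np.deg2rad(angle))
        dy = offset_in_widths * w * np.sin(np.deg2rad(angle))
        return ((cx + dx, cy + dy), (w, h), angle)

The actual offsets and directions depend on how the black, grey, and white areas are laid out on the printed tag.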
A mobile phone camera looking at the water tank through a hole approximates a central projection in geometric computer vision. A central projection of a plane (tag) onto a parallel plane (image) is also called a similarity transformation, which preserves parallelism, concurrence, the ratio of division, etc. (Hartley and Zisserman 2004). That is, angles between lines are not affected by rotation, translation, or isotropic scaling; in particular, parallel lines are mapped to parallel lines. The ratio of two lengths is an invariant, and similarly the ratio of two areas is an invariant.
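To make the length-ratio invariance concrete (the notation below is ours, not taken from the cited text): a planar similarity maps a point x to x' = sRx + t, with isotropic scale s > 0, rotation R, and translation t. Any difference of two points is therefore rotated and scaled by the same factor s, which cancels in a ratio of lengths:

    \[
      \mathbf{x}' = s\,R\,\mathbf{x} + \mathbf{t},
      \qquad
      \frac{\lVert \mathbf{x}'_1 - \mathbf{x}'_2 \rVert}
           {\lVert \mathbf{x}'_3 - \mathbf{x}'_4 \rVert}
      = \frac{s\,\lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert}
             {s\,\lVert \mathbf{x}_3 - \mathbf{x}_4 \rVert}
      = \frac{\lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert}
             {\lVert \mathbf{x}_3 - \mathbf{x}_4 \rVert}.
    \]

The same cancellation, with the factor s squared, shows why a ratio of areas is also invariant.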
When the image is extremely blurred and the camera is rotated, the white rectangle of the lower tag cannot be detected, but four corner image features can still be extracted. Using the rectangle formed by these four corner feature points, the positions of the other two tags can be found, based on the similarity invariant.
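A minimal sketch of such a fallback is given below, assuming a Shi-Tomasi corner detector applied to the grey-scale tag area; the detector choice and parameter values are placeholders, not the actual method.

    import cv2
    import numpy as np

    def detect_corner_features(tag_area_gray, n_corners=4):
        """Extract the strongest corner features in a blurred tag area; the
        rectangle they form can stand in for the undetectable white rectangle."""
        corners = cv2.goodFeaturesToTrack(
            tag_area_gray, maxCorners=n_corners,
            qualityLevel=0.01, minDistance=10)
        if corners is None:
            return np.empty((0, 2))
        return corners.reshape(-1, 2)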