Today, the development and testing of fault detection and identification methods in wastewater treatment research rely on two important assumptions: (i) that sensor faults appear at distinct times in different sensors and (ii) that any given sensor functions near-perfectly for a significant period following installation. In this work, we show that these assumptions are unrealistic, at least for sensors built around an ion-selective measurement principle. Indeed, long-term exposure of such sensors to treated wastewater shows that fault symptoms appear simultaneously in multiple sensors and with similar intensity. This suggests that future research should be reoriented towards methods that do not rely on the assumptions above. This study also provides the first empirically validated sensor fault model for wastewater treatment simulation, which enables effective benchmarking of both fault detection and identification methods and advanced control strategies. Finally, we assess the value of redundancy for remote sensor validation in decentralized wastewater treatment systems.