Right, but my point is that we're not quite at the point where a computer can reasonably process that much visual input, determine whether or not someone is exhibiting the signs of drowning, and alert a lifeguard with a minimum of errors. False positives will lead to personnel confusion, and false negatives will lead to dead swimmers.
And that's to say nothing of the fact that automation encourages complacency. According to investigations, one of the primary factors in the recent Malaysia Air jet disappearance was an excessive reliance on automated systems to fly the plane, combined with insufficient skill at flying without those tools. I reckon a similar situation could be a real danger here as well, if not considerably amplified: most lifeguards, at least where I've lived, were high school students doing it as an extracurricular, a volunteer gig, a summer job, etc., and teenagers aren't exactly known for above-average attention spans.
That's what I meant by "personnel confusion", yes. A.k.a. the "cried 'wolf'" effect; how can a lifeguard be expected to treat this alarm seriously when all the other ones have been false positives?
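To put some rough numbers on the "cried wolf" problem: here's a quick back-of-envelope sketch in Python. Every figure in it is an assumption I made up for illustration (detection rates, how many swimmer observations a system classifies per day, how rare real drownings are), not anything measured, but it shows why even a seemingly accurate detector can bury the one real alarm under hundreds of false ones.

```python
# Back-of-envelope: why a rare event plus an imperfect detector cries wolf.
# All numbers below are illustrative assumptions, not measured figures.

swimmer_observations_per_day = 50_000   # assumed: swimmer snapshots classified per day
drowning_events_per_day = 0.001         # assumed: genuine drownings are very rare
true_positive_rate = 0.95               # assumed detector sensitivity
false_positive_rate = 0.01              # assumed: 1% of non-events still trigger an alert

true_alerts = drowning_events_per_day * true_positive_rate
false_alerts = (swimmer_observations_per_day - drowning_events_per_day) * false_positive_rate

# Fraction of alerts that correspond to a real drowning (precision)
precision = true_alerts / (true_alerts + false_alerts)

print(f"False alerts per day: {false_alerts:.0f}")
print(f"Chance a given alert is real: {precision:.5%}")
# With these assumptions: ~500 false alarms a day, and on the order of
# 1 alert in 500,000 is genuine -- exactly the "cried wolf" situation above.
```

Obviously you'd tune thresholds and debounce alerts in practice, but the underlying base-rate math is why the lifeguard stops trusting the siren.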