General Discussion
Homeland Security's 'Pre-Crime' Screening Will Never Work
http://www.theatlantic.com/technology/archive/2012/04/homeland-securitys-pre-crime-screening-will-never-work/255971/

Pre-crime prevention is a terrible idea.
Here is a quiz for you. Is predicting crime before it happens: (a) something out of Philip K. Dick's Minority Report; (b) the subject of a Department of Homeland Security research project that has recently entered testing; (c) a terrible and dangerous idea which will inevitably be counter-productive and which will levy a high price in terms of civil liberties while providing little to no marginal security; or (d) all of the above?
If you picked (d) you are a winner!
The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology, a crazy straight-out-of-sci-fi pre-crime detection and prevention system that may come to an airport security screening checkpoint near you someday soon. Yet again the threat of terrorism is being used to justify the introduction of super-creepy invasions of privacy, leading us one step closer to a turn-key totalitarian state. This may sound alarmist, but in cases like this a little alarm is warranted. FAST will remotely monitor physiological and behavioral cues, like elevated heart rate, eye movement, body temperature, facial patterns, and body language, and analyze these cues algorithmically for statistical aberrance in an attempt to identify people with nefarious intentions. There are several major flaws with a program like this, any one of which should be enough to condemn attempts of this kind to the dustbin. Let's look at them in turn.
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, that turns out to be an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are probably much, much rarer than that, or we would see far more acts of terrorism, given the daily throughput of the global transportation system. Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. And given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty would be a non-trivial and invasive task.
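The arithmetic behind the paradox is easy to check. A minimal sketch, using the article's illustrative numbers (1 terrorist per 1,000,000 travelers, a hypothetical 99.99 percent accurate classifier, and assuming that figure applies equally to terrorists and innocents):

```python
population = 1_000_000
terrorists = 1                       # the article's assumed base rate: 1 in 1,000,000
innocents = population - terrorists
accuracy = 0.9999                    # assumed to be both sensitivity and specificity

true_positives = terrorists * accuracy          # ~1 terrorist correctly flagged
false_positives = innocents * (1 - accuracy)    # ~100 innocents wrongly flagged

print(f"True positives:  {true_positives:.2f}")
print(f"False positives: {false_positives:.2f}")
# Roughly 100 falsely accused innocents per real terrorist -- the
# article rounds this to 99-to-1. The share of flagged people who
# are innocent:
print(f"Flagged but innocent: "
      f"{false_positives / (false_positives + true_positives):.1%}")
```

So even a near-perfect detector would be wrong about 99 percent of the people it flags, purely because the thing being detected is so rare.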
Of course, FAST has nowhere near a 99.99 percent accuracy rate. Much of the work being done here is presumably classified, but a write-up in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means, and there are a couple of ways to interpret it, since both the write-up and the DHS documentation (all pdfs) are unclear. It might mean that the current iteration of FAST correctly classifies 70 percent of the people it observes -- which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other interpretation is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but that rate would likely be quite high. In either case, the false-positive paradox would likely be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
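The two readings of the reported 70 percent figure lead to very different numbers, which a quick sketch makes concrete (hypothetical figures; the source material does not say which reading is correct):

```python
population = 1_000_000
innocents = population - 1   # same 1-in-1,000,000 base rate as before

# Reading 1: 70% of ALL observations are classified correctly,
# so roughly 30% of innocents get flagged as false positives.
fp_reading_1 = innocents * 0.30
print(f"Reading 1: ~{fp_reading_1:,.0f} false positives per million screened")

# Reading 2: 70% of actual terrorists are flagged (sensitivity only).
# This says nothing whatsoever about the false-positive rate.
tp_reading_2 = 1 * 0.70
print(f"Reading 2: ~{tp_reading_2} terrorists caught per million screened, "
      f"false-positive rate unknown")
```

Under the first reading, FAST would flag hundreds of thousands of innocent travelers for every terrorist; under the second, the headline number simply omits the statistic that matters most.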
hobbit709
(41,694 posts)
I doubt this idiotic idea will even be that accurate.
broiles
(1,370 posts)
TalkingDog
(9,001 posts)
And they want to know why the Atlantic is being such a "Debbie Downer" about the whole thing.
zipplewrath
(16,646 posts)
It's not that the article is wrong, but it operates under some unstated assumptions. The false-positive problem is only a "problem" if one responds to the filter in the wrong way. So you get a false positive: now what do you do? Secondary screening? A background check? Put a marshal in the seat next to them? The false positives only become a problem if the response is to start treating these people as "guilty" of something.
The real problem here is more along the lines of the Heisenberg uncertainty principle: trying to detect something alters it. Suppose you have a real, live terrorist on your hands. Singling him out, or handling him differently, may very likely cause him to abandon his plans. So you get into the problem of identifying potential terrorists but not being able to do much about it except wait. He just has to keep trying until he isn't detected. Without a "perfect" system, he's not going to have to try too many times before he succeeds in getting through undetected. He might even run a few "dry runs" to establish his own patterns, so he can sense if he is being singled out.
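The "keep trying until he isn't detected" point can be sketched numerically. A minimal illustration, assuming a fixed per-attempt detection probability and independent attempts (both simplifications I am introducing, not claims from the thread):

```python
def prob_evades_within(n_attempts: int, p_detect: float) -> float:
    """Chance of slipping through undetected at least once in n attempts."""
    return 1 - p_detect ** n_attempts

# Even at the reported 70% detection rate, a determined attacker's
# odds of getting through improve rapidly with repeated attempts.
for n in (1, 3, 5):
    print(f"{n} attempt(s): {prob_evades_within(n, 0.70):.1%} chance of evasion")
```

With a 70 percent per-attempt detection rate, five attempts already give better than an 80 percent chance of at least one undetected pass, which is the commenter's point: an imperfect filter mostly teaches a patient adversary when to try again.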
The real problem here is that there are precious few terrorist acts AT ALL. You are trying to detect something that functionally "never" happens. No matter how "perfect" a system you create, it will very likely never get the "chance" to work. And the data upon which it was built will most likely become obsolete before it ever gets that chance.
Bruce Wayne
(692 posts)Ellipsis
(9,124 posts)
Bad idea. Period.