Those people who will not be governed by God will be ruled by tyrants.
– William Penn
In mid-2013, the Equal Employment Opportunity Commission launched a lawsuit against Pennsylvania-based Consol Energy Inc., after an employee at one of the company’s West Virginia mines refused to use a biometric hand scanner to log his work hours and was subsequently pressured to take an early retirement.1
The employee argued that using the scanner would be tantamount to accepting the mark of the biblical beast, a violation of his Christian beliefs rooted in Revelation 13:16–17, in which the beast “forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark” (NIV).
Despite the fact that the employee had worked at the company for more than thirty-five years, the company refused to budge on its biometric scanning rule, characterizing it as a necessary means of tracking employee attendance and work habits – a major feed into the company’s bottom line.
Head south a few hundred miles to Muscogee County, Georgia, and there students at one school district in August 2013 were given a new option for buying lunch: hand scanners that read the web of veins on their palms. Students making a purchase were told to simply hold one of their hands over a reader with infrared technology and wait a second or two. The scanner then analyzed the vein patterns and searched the database to identify and match the person to his or her account. Forget picture identification cards and cash. The process of checking out and paying took fewer than four seconds – the lunch line barely needed to come to a halt.2
Head northwest to Michigan and the creepiness factor increases.
In October 2011, the community of Farmington Hills took federal grant money and invested it in new “Intellistreets” technology from the creative minds of a company called Illuminating Concepts that placed specially designed lampposts with features right from Homeland Security’s top spy wish list – and, simultaneously, from the dreams of environmentalists. The lamps are a curious mix of energy conservation and counterterrorism technology. On one hand, they automatically adjust to natural light patterns, thereby saving on city electricity costs. On the other, they monitor traffic, track pedestrians, and videotape and record passersby and their conversations, frequently without the pedestrians’ knowledge.3
The idea behind the streetlamps – which contain sophisticated radio and speaker systems and high-tech video technology – is to alert authorities to impending dangers via breaking, real-time televised or broadcast information. But the fear among privacy advocates, of course, is that government might use the technology for more sinister purposes.4 First, the justification for the spy lamps is to track terrorists. Then, it’s to root out high-profile and violent criminal suspects. And soon, the lampposts are recording and collecting video and conversation snippets on everyone who passes by, all in an ever-encroaching government quest to secure the citizens from attack and harm – for the good of the nation.
The line between security for all and freedom for one has certainly grown dim in recent years. At what point do we say, “Enough is enough; we have lost enough personal privacies”?
At what point do we admit that government cannot guarantee our security and save us from all harm?
The use of some of this technology has really gotten out of hand.
Parents in Polk County, Florida, were shocked to learn in May 2013 that school administrators had been scanning their children’s irises, absent consent or even awareness. The cited reason? Administrators said they needed to provide students and their families added security on the school buses.5
So they hired a private security company to install the program, which involved taking a quick scan of students’ irises as they entered the school bus, then sending an alert via text or e-mail to parents to let them know their children had safely arrived at their destination. Orwellian factor aside, parents were also outraged that they weren’t given a heads-up that the program was being considered or implemented. One parent recalled to a local newspaper the complete shock she felt after learning her eight-year-old child was told to stare into a blue light until it changed color as he boarded his bus. An estimated 750 other students were subjected to the same order.6
School administrators said they meant to provide the parents an opt-out notice, but the staffer who was in charge of mailing those letters missed work due to a medical emergency, so the scanning went forth absent any resistance.7 That’s a pretty big snafu. The company in charge of scanning the irises has since said it’s destroyed the database of information, but still – the story is chilling. For parents, it’s a bit scary to realize how quick and easy it is for a government body to take control of children’s privacies. It’s doubly outrageous when that government body is a supposedly trusted source, like a school.
But iris scanning is a growing technology in schools around the nation.
A South Dakota company named Blinkspot has made a whole business out of marketing eye-scan technology to school buses. One product targeted at the elementary level has children boarding a school bus peer into a scanner that’s fashioned to look like a pair of binoculars. If they’re on the right bus, the scanner sounds a beep. If they’re on the wrong bus, the scanner honks. Blinkspot also syncs its scanners to a mobile app for parents to track their children’s whereabouts while riding the bus. When the student boards, the parent receives an e-mail, complete with the child’s photograph, a Google map of the child’s entrance on or exit from the bus, and the time and date of bus boarding and departure.8
The nation’s places of higher learning are starting to catch the fever for tracking technology too.
In July 2013, Winthrop University in South Carolina tested out a new scanning technology program on its incoming freshman class, requiring that all new students take part in an eye scan as part of their identification card application process. The school also tested iris scanners at its Macfeat Early Childhood Laboratory School, where students who major in education receive training, and mulled installing scanners at the likes of the Lois Rhame West Health, Physical Education and Wellness Center. The reason for the technology there? University officials said it was inconvenient for students who want to work out or swim to carry their identification cards with them.9
Add to the list of justifications for spy technology: Stop terrorists. Catch criminals. And now, keep inconveniences at bay.
America’s next generation won’t have any concept of what constitutes personal privacy at all.
In 2007, a school system in Nashville, Tennessee, made waves with a first-in-the-nation data program that was billed as remembering the faces of staffers and students, and sounding the alarm on strangers on campus. The school district, which at the time served seventy-five thousand students, installed cameras equipped with facial recognition technology.10 The program required that first, all students and staff, and others with cause to enter school grounds on a near-daily basis, have their photographs taken and uploaded into a database for storage. Whenever one of the many cameras that scoured the school property picked up a face that it could not match in the database, an alert would sound to security officials, who would then launch a query into the reasons for that person’s presence.11
In addition to sounding alarms for would-be criminals, the face-remembering technology was also credited with identifying and alerting school officials to the presence of fired employees and suspended or expelled students. But as civil rights activists decried: What’s to keep the technology from being used to watch students all the time and alert teachers and administrators to those who cut class?
Schools around the nation may be pushing the technology buttons a little too hard. It’s a shame when even those who have nothing to hide are treated to schools that seem little more than jails, overseen by teachers and administrators with access to secret spy cameras and recording devices.
The technology’s not going away anytime soon, though. Police, private entities, federal agencies – they’re all on board the biometric bandwagon.
Facebook in August 2013 announced an update to its user policy to bring almost all of its one billion stored profile photographs into the company’s facial recognition database as a means of speeding up the process of “tagging” friends. Facebook executives assured users that they could opt out of the system, which was intended to basically match faces in pictures with public profile features.12 But privacy advocates were still concerned, and Federal Trade Commission officials announced an investigation into the policy change to see if Facebook was in violation of a 2011 agreement that was forged between the government and the company.13 Why?
The agreement came at the end of a government finding that Facebook had violated the privacy of its users and shared personal data without first obtaining their permission.14
In other words, Facebook – despite its assurances that it has never given any federal entity complete access to its company databases and has supplied private information to intelligence or government agencies only as required by law – does not have a clean track record of upholding individual privacies. While the company may have promised its facial recognition photo database would never be used for purposes other than those known by users, a smart gambler wouldn’t bet all the chips on that claim.
After all, once privacy is compromised, it’s tough to get it returned. If Facebook lost control of its photo database to hackers, or worse, entered a partnership to grant government access to the data, what good is the company’s promise to protect user privacy in the end?
Meanwhile, for the government sector, especially police, facial recognition data is proving a treasure chest.
Some of its use is valid – and necessary. For instance, facial reading technology on the battlefield has proven a substantial aid to US forces in Afghanistan and Iraq trying to identify insurgents and terrorist leaders. It’s also a proven technology for fighting crime on the domestic front, helping police sift through blurry store videotape images and pull up better-quality matches from facial photograph databases.15
But this same tool is ripe for civil rights infractions and privacy invasions.
In Pinellas County, Florida, alone, the sheriff’s office has built a database of searchable facial photographs that includes more than 120 million individuals. Law enforcement’s stated reason for assembling the database was to root out driver’s license fraudsters – but in the mix are plenty of photographs of innocent bystanders. And that’s the problem with the technology: it relies on storage of individual data that’s very often collected absent the provider’s knowledge and without notification to the provider of its intended use. Most people providing photographs think they’re just taking a driver’s license snapshot. But as of June 2013, fully thirty-seven states allowed facial recognition technology to be used in the driver’s license permitting process – and that number is sure to grow in the coming years.16
For those with no criminal backgrounds, that’s an alarming trend. After all, why should an innocent person with no arrest record be used in photograph form in a police lineup? Yet that’s what police now have the technology to perform – digital lineups – in their own patrol vehicles, using portable laptops and a nationwide database of millions of photographs.
The bigger issue, of course, is what happens to that innocent person’s photograph. As detractors of facial recognition technology have argued, these photograph databases are rapidly becoming a de facto national identification card system for government, law enforcement – anybody with a just cause, it would seem – to tap. And all unbeknownst to most Americans.
Against this backdrop, the Department of Homeland Security has been quietly bolstering local police agencies’ ability to access the technology.
In August 2013, Homeland Security officials said they were developing spy technology capable of scanning huge crowds of people and searching for a specific individual based on inputted facial features, then alerting police to the person’s presence. The technology – the Biometric Optical Surveillance System – was first a program of the Pentagon, but was then transferred to Homeland Security for eventual deployment to state and local law enforcement personnel.17 A month later, federal officials sent their BOSS technology into the field for live testing – at the Tri-Cities Toyota Center’s hockey game in Kennewick, Washington. The test was to see if Homeland Security officials could deploy BOSS and successfully locate and identify a handful of preselected faces among the sea of six thousand or so the rink holds. Hockey fans were told not to worry – if their photographs were taken, nobody would have access to them but government officials.18
That hardly seems a comfort.
Are those with privacy concerns to don hoodies or bandannas every time they enter a public space? Another simple way of beating facial technology is to, believe it or not, smile – thus, the no-smile rule while getting photographed for passports and driver’s licenses.19
But even a beauty queen can’t smile past all the Big Brother technology that’s out there. Not all of it is image-based.
Voice biometrics that measure the tone, pitch, and rhythm of speech have been making gains in the security industry as a means of countering theft and fraud.20 Like other touted security systems that depend on biometrics, the bigger issue is what becomes of the data that’s collected. Biometric data that’s hacked – including individual speech identification patterns – is not so easily recalled or replaced. Unlike a PIN, a voice, a set of palm veins, or an iris can’t simply be reissued.
Other big-data advances in recent years: fingerprint technology from an Alabama company called IDair that can actually capture prints from feet away.21 That means workers entering a secure employment facility need only wave at a scanner to obtain entry. And another emerging field: backscatter vans carrying military-grade technology that’s been used by Transportation Security Administration officials to rifle through individuals’ bags and clothing to search for banned items. Now that technology is spreading among local law enforcement – giving police departments equipped with American Science & Engineering’s Z Backscatter Vans, or with the more mobile backscatter radiation X-ray machines, the ability to simply look through the scope at nearby cars and individuals and conduct a quick and quiet, secret surveillance operation.22
The Electronic Privacy Information Center sued the Department of Homeland Security to stop TSA’s usage of the technology, which revealed subjects’ bodies beneath their clothes, calling it an abject violation of civil rights.23 Bringing that same technology to the neighborhoods and streets of small-town America can’t be good news for privacy protectionists.
But by far the most egregious technological advance to hit the nation for the purpose of ravaging personal privacies is a piece of video surveillance software from BRS Labs called AISight – a tame-enough-sounding name for a program with downright eerie capabilities and an even eerier mission.
The system itself is intelligence software that’s equipped with behavior recognition technology that actually teaches itself, in time, to recognize deviations in a given sphere of coverage. As BRS Labs puts it, the technology measures structures, shapes and sizes, the speed of objects and the paths they pursue, and a dozen other factors, and records them for constant comparison and evaluation. The software is used in conjunction with video cameras, creating a system that’s ultimately capable of predicting behaviors – and alerting when those behaviors don’t meet expectations.24 Moreover, AISight can be affixed to thousands of surveillance cameras at once. In a short period of time, each of those cameras is able to determine what’s normal for the area it scopes, and what’s not.25
The possibilities for its use are endless. If an AISight-equipped camera at a busy downtown city intersection picks up that at a certain time each day thousands of pedestrians walk through the area covered by its lens without stopping, and then suddenly a man stops that flow by pausing and pointing a gun at a subject, the software will flag that movement as abnormal.
Authorities can be quickly alerted and dispatched to the scene.
But AISight goes even further than issuing alerts. It’s really a law enforcement dream come true – it offers the ability to practically predict crime before it even happens. Since the computer attached to the cameras constantly analyzes behaviors and compares current findings to actions recorded during identical time periods on different days, the software can also flag behaviors the human eye might dismiss as mundane but that could really be step one of a crime in progress or a tip-off to a crime being planned.
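For readers curious what that kind of baseline-and-deviation logic looks like under the hood, here is a deliberately simplified sketch – a toy illustration with made-up pedestrian counts, not the vendor’s actual algorithm: the software learns what a “normal” count is for each hour of the day, then flags anything that strays too far from it.

```python
# Toy anomaly detection: learn a per-hour baseline from past days'
# pedestrian counts, then flag counts far outside the normal range.
# (Illustrative only; real behavior-recognition systems are far richer.)
from statistics import mean, stdev

def build_baseline(history):
    """history: dict of hour -> list of counts seen on past days.
    Returns hour -> (average, standard deviation)."""
    return {h: (mean(c), stdev(c)) for h, c in history.items() if len(c) > 1}

def is_anomalous(baseline, hour, count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from normal."""
    mu, sigma = baseline[hour]
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical history: typical foot traffic at 8 a.m. and noon.
history = {8: [950, 1010, 980, 1005], 12: [1500, 1480, 1530, 1510]}
baseline = build_baseline(history)

print(is_anomalous(baseline, 12, 1495))  # ordinary lunchtime flow -> False
print(is_anomalous(baseline, 12, 40))    # crowd suddenly stops -> True
```

The catch, as the surrounding passage notes, is that a statistical deviation says nothing about intent – the same spike or lull can be a crime, a street performer, or a dropped sandwich.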
They could also be innocent behaviors – and that’s part of the problem with the technology.
For instance, in the aforementioned situation, where the AISight technology is attached to a camera that records a well-walked section of a city street, what if a man in the midst of the crowd paused for a split second to drop something on the ground? That simple act could mean radically different things: The man could actually be a terrorist, planting a packaged bomb. Or, he could be a guy on his way to work who dropped a piece of trash. There are dozens of other scenarios that could explain the act as innocent – and dozens more that are laced with criminal elements.
The point is, though, that regardless of advances in artificial intelligence, AISight can’t read minds or intentions. It can only detect deviations in behavior based on past predictors and past data, leaving open a real potential for civil rights infractions. Police say the technology is working for them. San Diego and other California cities tout the system as a solid crime-fighting tool, while dozens of other law enforcement departments around the nation have applied for Homeland Security grants to get their own versions.26 And AISight isn’t the only option for so-called predictive policing technology.
A program called CompStat, which tracks crime figures in real time, alerts police to the next likely target, and maps crime patterns for a larger look at a community’s hot spots, has been making a splash among law enforcement agencies from Los Angeles to New York City.27 The computer programming is based on the same type of model used to predict where earthquake aftershocks will hit, and several cities say it’s been a tremendous aid, lowering burglary rates and other property crime statistics by significant margins in recent years.
IBM, meanwhile, has jumped into the policing game with its own algorithm-based analysis program that helps police in Memphis, Tennessee, and Charleston, South Carolina, narrow down their likely crime targets and stop the acts before they occur.28 Legal minds have questioned predictive policing policies, pondering their potential to violate the Fourth Amendment and wondering how court testimony might proceed for suspects who are arrested after a computer red-flagged them for crimes, and the police, in turn, initiated the arrests. How do you cross-examine a computer?
But AISight takes even these computer modeling programs to a new level that includes videotaping and analyzing behaviors of people on the street who aren’t even guilty of a crime or suspected of an illegal act. Moreover, like other surveillance technology, it’s rapidly moving from the cause of antiterrorism and anticrime to become yet another data tracking and collection tool – albeit even more intrusive – for government that turns citizens into suspects.
AISight has been installed by the San Francisco Municipal Transportation Agency to monitor twelve of its stations. Louisiana authorities contracted with BRS to use it for port security. And El Paso officials have pursued the technology to keep a constant computer eye on water treatment plants located near the border with Mexico.29
Talk about the meeting of science fiction fantasy with reality. It won’t be long before innocent American citizens are ensnared in a prison of our own making – a cell built by well-meaning individuals who hyped our fears and sold us all on the need for layer upon layer of security. Between technology and politics, America’s beacon of freedom is being slowly snuffed out.