The U.S. military has the hardest job in human resources: evaluating hundreds of thousands of people for their ability to protect the nation’s secrets. Central to that task is a question at the heart of all labor relations: how do you know when to extend trust or take it away?
The Defense Security Service, or DSS, believes artificial intelligence and machine learning can help. Its new pilot project aims to sift and apply massive amounts of data on people who hold or are seeking security clearances. The goal is not just to detect employees who have betrayed their trust, but to predict which ones might — allowing problems to be resolved with a calm conversation rather than punishment.
That data will join information from what’s called continuous evaluation, an existing effort to monitor life events related to clearance holders: getting married or divorced, taking on heavy debt or receiving a sudden windfall, tax filings, arrests, abrupt foreign travel, and so on. More than a million military personnel are currently enrolled in the continuous evaluation system.
In other instances, where managers choose to use monitoring to raise the costs of bad performance without offering rewards, future employee monitoring will probably feel truly Orwellian. And there are other concerns. It’s not clear how wide-scale adoption of user activity monitoring would affect, for instance, officials’ ability to speak with the media on background, engage in whistleblowing, or undertake other behavior that is morally and ethically justified but may have an immediate negative effect on the organization.
But none of those outcomes is a direct result of the technology so much as of the discretion of future managers. Some bosses are good. Some are bad. In the future, each will be empowered to be more truly what they are.
But that, too, will be impossible to conceal.