Citing "confidential corporate documents and interviews with many of the technologists involved in developing the software," The Intercept reports that IBM used footage from New York Police Department (NYPD) camera systems installed after the 9/11 attacks to train its image recognition systems. According to the report, the company had "secret access" to the video footage, which allowed it to train artificial intelligence (AI) systems to search for people by hair color, facial hair, and skin tone.
The lengthy report includes statements from IBM and the NYPD, as well as from individuals concerned about the lack of transparency and the problems raised by training AI in this way. Several noted that the report raises the specter of potential racial profiling.
Rick Kjeldsen, a former IBM researcher, summarized the concerns: “Are there certain activities that are nobody’s business no matter what? Are there certain places on the boundaries of public spaces that have an expectation of privacy? And then, how do we build tools to enforce that? That’s where we need the conversation. That’s exactly why knowledge of this should become more widely available — so that we can figure that out.”