If you build only one leg of a three-legged stool, it’s going to fall over. Yesterday’s FTC Big Data conference confirmed this by focusing mostly on the potential future harms of big data, missing an important opportunity to dig into whether big data is causing real harms today, and whether any such harms, to the extent they exist, fall outside the scope of existing laws.
It certainly had the chance to build a stable platform for discussion. Chairwoman Ramirez opened the workshop by setting out three goals:
- Identify where data practices violate existing law, and identify gaps in current law
- Build awareness of possible discriminatory practices
- Encourage businesses to guard against bias
Unfortunately, most panels focused only on the second goal, dwelling on “possible” discriminatory practices and contrived Orwellian futures, and ignored the rest of the conversation.
Perhaps I should have seen this coming, as the title suggested confrontation rather than conversation: “Big Data: A Tool for Inclusion or Exclusion?” None of the Republican-appointed Commissioners appeared, and the panels were heavily weighted toward consumer groups rather than the businesses actually using this new technology.
With battle lines drawn, the event devolved into a discussion of discriminatory practices that might result from the use of big data. We heard stories of how data might be used to show lower-income consumers ads for high-interest credit cards, or to deny credit to those living in less-affluent zip codes. But panelists avoided identifying specific examples of these bad practices, relying instead on the specter of possible misuse. [pullquote]With all the focus on possible harms, there was little discussion about the benefits of big data.[/pullquote]
Moreover, few panelists conceded that even if these specters materialized, existing laws likely address them. Take, for example, a panelist’s comment that “businesses might use data for purposes other than the ones listed when the data was collected.” Well, that is how data innovation works. And if a company makes a material promise not to reuse data and breaks that promise, it violates Section 5 of the FTC Act. Similarly, if any use of big data is “unfair,” the FTC can take action under Section 5 when consumers are significantly and unavoidably harmed and there is no countervailing benefit. Finally, Chairwoman Ramirez and Commissioner Brill both recognized that the Fair Credit Reporting Act may already prevent misuse of big data in credit scoring.
With all the focus on possible harms, there was little discussion about the benefits of big data. The City of Boston uses big data from smartphones to identify potholes. Google uses big data to identify flu epidemics. And big data can help establish credit history for the 50 million Americans unable to get a FICO score.
Perhaps the greatest irony of the day came when I realized the best way to avoid the potentially discriminatory practices raised by privacy advocates is to collect more data. With more data and better aggregation, businesses could avoid incorrectly lumping people into categories and instead better identify them as individuals.
Big data is a tool. Like any tool, its use requires striking a balance between benefits and harms. That balance is impossible to strike until we have a mature conversation that avoids bombast, identifies real problems, and searches for solutions.
I hope the next discussion achieves that maturity, so that we build every leg of the stool and create a platform on which we all stand taller.