Impact of Algorithms on Society
Mukund has expertise in working with product development teams and startups, building teams that take a product vision to market, and mentoring team members for technical and professional growth.
A common dystopian science-fiction future is one where robots take over the world. A more realistic future is one where human lives are governed by computer algorithms that have undergone no real validation for fairness or neutrality. Unfortunately, this scenario is quietly becoming reality in many areas. While the degree of impact varies from case to case, the number of scenarios where algorithms are used for decision making keeps increasing.
The most widely known use case, targeted advertising based on tracking user behaviour on the internet, is now under widespread scrutiny. Laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are being enacted to address this problem, which people are generally aware of.
However, many other scenarios exist where more awareness, scrutiny and regulation are required. Here are some prominent examples; the practices involved are not exclusive to the entities mentioned below.
Amazon’s warehouse worker tracking system can automatically fire employees who aren’t productive enough, without a human supervisor’s involvement. Amazon has said the number of such terminations has come down and that people are coached when they are flagged as not being productive enough. However, the increased productivity and reduced terminations appear to have more to do with behaviour change driven by coercion. The pervasive focus on extracting productivity also extends to its delivery drivers, who regularly break road-safety rules and resort to extreme measures to avoid bio breaks. It should be noted that Amazon isn’t the only company doing this.
China has come up with a social credit system that uses a mixture of scoring algorithms based on people’s activities online and offline, as well as the activities of others in the person’s network. This social credit system determines automated rewards and punishments for citizens. While it was slated to come into full effect by 2020, the system has already been used to ban people and their children from certain schools, prevent low scorers from renting hotels or using credit cards, and blacklist individuals from being able to procure employment.
This social credit system, combined with China’s mass surveillance through internet monitoring and camera surveillance (with facial recognition), raises human rights concerns. While human intervention is possible, given the sheer volume involved, intervention is likely to happen only in the rarest of cases.
Social media manipulation by state actors to influence people’s opinions gives politicians the ability to provoke people’s emotions to win elections. Facebook, YouTube and Twitter decide what shows up on a user’s home feed, regardless of who their friends are. The algorithms match users with content that fits their preferences, irrespective of whether those preferences are good or bad. For example, an extremely radicalized individual is likely to get friend recommendations and feed items that agree with their radicalized views. This makes the problem worse rather than addressing the core issue. Such positive reinforcement polarizes society into ‘right wing’ and ‘left wing’ camps, to the point that people begin to vehemently dislike those with opposing viewpoints.
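To make this feedback loop concrete, here is a minimal, hypothetical Python sketch of an engagement-driven recommender. It is purely illustrative and is not any platform’s actual algorithm; the extremity bonus, learning rate and one-dimensional opinion scores are invented assumptions standing in for the observation that provocative content tends to earn more engagement.

```python
# Toy model of an engagement-driven recommender feedback loop.
# Purely illustrative; not any platform's actual algorithm.
import random

# Content items scored on one opinion axis, from -1.0 to +1.0 in steps of 0.1.
ITEMS = [i / 10.0 for i in range(-10, 11)]

def predicted_engagement(item, preference):
    # Content similar to the user's current leaning engages them; the (assumed)
    # extremity bonus reflects that provocative content tends to engage more.
    similarity = 1.0 - abs(item - preference)
    extremity_bonus = 0.3 * abs(item)
    return similarity + extremity_bonus

def recommend(preference, k=2):
    return sorted(ITEMS, key=lambda it: predicted_engagement(it, preference),
                  reverse=True)[:k]

def simulate(steps=50, learning_rate=0.2, seed=42):
    random.seed(seed)
    preference = random.uniform(-0.2, 0.2)  # user starts near the centre
    for _ in range(steps):
        consumed = random.choice(recommend(preference))
        # Feedback: the user's leaning shifts toward what the feed showed them.
        preference += learning_rate * (consumed - preference)
    return preference

if __name__ == "__main__":
    # A user who starts near the centre typically drifts toward one extreme.
    print(f"leaning after 50 steps: {simulate():+.2f}")
```

Even with this crude model, a small initial lean is amplified step by step, which is the polarization dynamic described above.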
Several initiatives are currently underway to improve the situation. In some cases, a knee-jerk reaction from policy makers triggers events that lead to change. California, for example, has banned the use of facial recognition software by police and other agencies. Other actors are looking for active solutions to the problem of unaccountable algorithms and data manipulation.
Research institutes like Data & Society focus on the social and cultural issues arising from data-centric technological development, raising awareness and debate around algorithmic accountability, media manipulation, online disinformation, and more.
While deep learning algorithms are relatively inscrutable, traditional machine learning algorithms continue to provide the right solution for many problems. There is a focused effort on ensuring algorithmic accountability in traditional machine learning systems in order to certify a machine learning model as fit for purpose.
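As one concrete example of what such a certification check can involve, here is a minimal Python sketch of a demographic parity test on a model’s predictions. The group labels, sample data and the 0.1 flagging threshold are illustrative assumptions, not an industry standard.

```python
# Minimal sketch of one accountability check: the demographic parity gap.
# The group labels, sample data and 0.1 threshold are illustrative assumptions.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs; groups: matching group labels.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical model decisions
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(f"positive rates: {rates}, gap: {gap:.2f}")
    # One possible certification rule: flag the model if the gap exceeds 0.1.
    print("flag for review" if gap > 0.1 else "within tolerance")
```

A real certification process would look at several such metrics together, since no single fairness measure captures every concern.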
Companies are now focusing on data lineage to ensure the integrity of data that is being used to build machine learning models. Data lineage plays a central role in data warehouses for establishing data integrity and trust. Netflix has built a centralized lineage service to better understand the movement and evolution of data and related data artefacts within the company’s data warehouse, from the initial ingestion of trillions of events through multistage ETLs, reports, and dashboards.
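To illustrate the core idea of lineage (this is not Netflix’s actual service, whose architecture is far richer), here is a minimal Python sketch that records which jobs and upstream datasets produced each artefact, so a model can be traced back to its raw sources. The dataset and job names are hypothetical.

```python
# Minimal sketch of lineage tracking: recording which upstream datasets and
# jobs produced each artefact. Dataset and job names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    name: str                       # e.g. "events_raw", "features_v3"
    produced_by: str = "source"     # the job or ETL step that created this artefact
    inputs: list = field(default_factory=list)  # upstream LineageNode objects

def trace_to_sources(node, depth=0):
    """Walk the lineage graph upstream, printing every ancestor artefact."""
    print("  " * depth + f"{node.name} (via {node.produced_by})")
    for parent in node.inputs:
        trace_to_sources(parent, depth + 1)

if __name__ == "__main__":
    raw = LineageNode("events_raw")
    features = LineageNode("features_v3", produced_by="feature_etl", inputs=[raw])
    model = LineageNode("churn_model", produced_by="training_job", inputs=[features])
    trace_to_sources(model)  # every dataset the model's integrity depends on
```

With such a graph in place, a questionable source dataset can be traced forward to every model and report it touched, which is what makes lineage useful for integrity and trust.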
Companies are also actively developing monitoring solutions for machine learning systems in production, to ensure that deployed models don’t decay over time due to data poisoning or become biased due to bad data.
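One common way to detect such decay is to compare the live distribution of a model input against its training-time baseline. The sketch below implements the Population Stability Index (PSI), a standard drift metric; the 0.2 alert threshold is a widely used rule of thumb, assumed here for illustration.

```python
# Minimal sketch of production monitoring via the Population Stability Index,
# comparing a feature's live distribution against its training baseline.
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between two samples of a numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # eps avoids log(0) for empty bins
        return [c / len(sample) + eps for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

if __name__ == "__main__":
    baseline = [i % 100 for i in range(1000)]      # training-time distribution
    live = [(i % 100) + 30 for i in range(1000)]   # shifted production data
    score = psi(baseline, live)
    print(f"PSI = {score:.3f}")
    # 0.2 is a common rule-of-thumb alert threshold, assumed for illustration.
    print("ALERT: distribution shift" if score > 0.2 else "stable")
```

In practice a monitoring system would run checks like this per feature and per prediction window, alerting before model quality visibly degrades.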
Fairness, accountability, and privacy are critical parts of our society and social ecosystem. Solutions like machine learning keep increasing in importance and reach, and are being used for purposes both good and bad. Rules and expectations around fairness and accountability will grow as more of our daily lives are impacted by algorithms that work purely from data. People, companies and governments need to engage in debates on ethics and fairness to evolve legal frameworks and social norms that build fairness and accountability into the systems that affect people’s lives like never before. If industry, government and the public don’t take the initiative, the future may very well be ruled by algorithmic overlords, with no easy recourse.