The Perils of Algorithmic Hiring and Title VII

by Brian Mosich

 

Abstract

Algorithmic hiring operates at all levels of employment, from entry level to CEO. When applicants submit a resume, an algorithm mines all available data about them and compares it to information gathered from the company's top performers to determine whether to offer an interview. Where one lives, what types of websites one visits, and whether one has been purchasing unscented lotion could have more of an impact than where one went to school or one's work history. Algorithmic hiring gives companies a way to inadvertently exclude applicants based on classes protected by Title VII. This paper discusses how the protections offered by Title VII, as written, are unprepared for the realities of Big Data, and how these protections may be completely circumvented if the EEOC does not act soon.

Keywords

Title VII, Algorithmic Hiring, Big Data

 

Introduction


Without Swift Action from the EEOC, Algorithmic Hiring is Poised to Undermine 50 Years of Title VII Protections

In the first episode of the series Futurama, Philip J. Fry awakens in the future. After he is subjected to a series of tests, a computer mechanically displays his permanent career assignment: delivery boy. This humorous moment may yet prove prophetic about how technology will impact our daily lives. Hiring quality new employees is historically one of the most difficult problems in maintaining a business. Even if an employer spends only a couple of seconds reviewing each resume, the labor quickly adds up. Employers do not want to spend months training new employees only for them to take positions with other companies. Often applicants look perfect on paper but are not a great fit for the employer's corporate culture. Modern technology has created a solution: algorithmic hiring. Harvest data from your top-performing employees, and use it to hire more people like them.

Title VII of the Civil Rights Act of 1964 makes it an unlawful employment practice for an employer to fail or refuse to hire because of such individual's race, color, religion, sex, or national origin.[1] These legal protections rest on the principle that if an employer is going to exclude an applicant on one of these grounds, it must show the exclusion is justified by business necessity.[2] Aside from these codified categories, the layman might assume everything else is fair game. That "everything else" is where the dangers of algorithmic hiring begin to grow. Can an employer use distance from the workplace to disqualify workers?[3] How about favoring applicants who visit websites that provide Japanese Manga?[4] Facially, these qualities may seem safe. The Supreme Court has held that not only is overt discrimination proscribed by the Civil Rights Act of 1964, but so are practices that are fair in form but discriminatory in operation.[5] The problem is that algorithmic hiring practices demonstrably work, and if a practice produces proven business results, it may be impossible for a court challenge against it to succeed.

Algorithmic hiring starts when an employer decides to collect information from its most successful employees and use it to find similar applicants. However, if it fails to select a diverse pool of employees to emulate, the employer runs the very real risk of opening its business to a disparate impact claim.[6] Even vague qualities can open a business to a potential lawsuit. For instance, employers may fear that local traffic conditions will frequently cause employees to be late for work, and respond by excluding applicants who live more than a certain distance from the workplace.[7] In one case, this variable was responsible for the removal of almost all African-American applicants, who lived in a nearby suburb and were completely excluded from the final applicant pool. The problem was only noticed when the algorithm's programmers themselves tested the data to find out how the final results were created. This one variable alone culled African Americans from the applicant pool entirely. Had an applicant had access to this data, whether a potential employer would be liable for failing to notice would ultimately depend on the courts.[8]

When a hiring algorithm can take into account hundreds if not thousands of seemingly unrelated variables, the employer or employment agency bears the ultimate responsibility for ensuring a nondiscriminatory outcome. If an unscrupulous employer is actively seeking a discriminatory outcome, it can easily hide behind these same variables. The benefits of hiring algorithms can be massive, both in reducing hiring costs and in minimizing turnover costs. Algorithmic hiring programs offer targeted searches, but their vendors do not make employers aware of the perils of relying on these time-saving methods. A potential employer may be left with questions about real-world, long-term legal risks, and the companies that offer these services are quick to dismiss the legal liability.

Currently, algorithmic hiring remains a legal grey area. It can be impossible for a potential employee to know why their resume was excluded from an interview. Even if they could determine that they were excluded due to algorithmic hiring, and had access to the exact algorithm, the legal requirements necessary to win such a case would be almost impossible to meet. Victims of discrimination simply have no way of knowing whether their resume was excluded by a live person or by a computer. Even if it was excluded by a computer, there is often no means short of a lawsuit to force companies to reveal the criteria used. Worse, even with the data and the criteria used, the data analysis necessary to prove discriminatory hiring practices is often out of reach.

Ultimately, the Equal Employment Opportunity Commission (EEOC) should be our best resource, both by giving guidance to employers on how to properly utilize these tools and as the best source of hope for victims of discrimination due to the systematic disparate impact caused by improperly designed and tested algorithms. By forcing companies that provide or use algorithmic hiring to publicly disclose their criteria and to perform rigorous checks on how each data point affects the makeup of the applicant pool, the Commission may be able to ensure that the decades of progress made possible by Title VII of the Civil Rights Act of 1964 are not lost.

 

History of the Problem

“Knowledge is power,” wrote Sir Francis Bacon.[9] It was true in 1597, and it is true now.

In 2013, it was estimated that worldwide we had produced over 4 zettabytes of data.[10] A zettabyte is the equivalent of a trillion gigabytes. This is expected to rise to over 44 zettabytes by the year 2020.[11] People produce a large amount of information every day. From snapping photos on their phones to sending tweets or browsing the Internet, little pieces of data add up to create a detailed picture of individuals. Internet marketers have been tapping into this for years. When a person searches for a product on Amazon, they will later see advertisements for that product show up on other websites they visit.[12] A person's shopping preferences and browsing habits are remembered and used to market to them directly. These same data points are also available for harvesting and use by companies offering algorithmic hiring services.

The digital age and the breadth of information it has made available are both creating new problems and eroding the protections enacted to combat old ones. Former President Obama requested a study on how "Big Data" would transform the lives of Americans.[13] Exploring the dangers and impacts in the public sector, the study reflected President Obama's forward-thinking recognition of a problem that would transcend his presidency. Key areas under government control, such as healthcare, education, homeland security, law enforcement, and privacy law,[14] are all expected to undergo rapid changes due to these tools in the coming years. In the private sector, the benefits and risks of Big Data warned about in the study are quickly coming to pass. The complete secrecy under which algorithms operate has left almost no meaningful way to identify harm, or to hold a discriminating decision maker responsible.[15] Moreover, individuals impacted by algorithms are unable to understand or contest either the information that has been gathered or what the algorithm suggests about them.[16]

One of the key limiting factors in sorting information had been computational power. Data scientists needed a large amount of data and were limited to very specific questions. Now, however, computational capabilities have expanded to the point where "finding a needle in a haystack" is not only possible, but practical.[17] With computational power no longer an issue, the value of large data pools is becoming more apparent. A researcher at the Broad Institute discovered that a genetic variant linked to schizophrenia was invisible when analyzing 3,500 cases, weakly identifiable when using 10,000 cases, but statistically significant with 35,000 cases.[18]

This information explosion is quickly reinventing a large number of industries, especially employment. Kelly Trindel, Chief Analyst for the Office of Research, Information and Planning at the EEOC, cautioned that big data in the employment context means different things to different people, noting that big data is more than simply very large data sets with a significant number of rows and columns.[19] What defines big data is not the size of the datasets, but their nature and source, and how they are collected, merged, transformed, and utilized. In her testimony, Ms. Trindel suggested that "[i]n the employment context, I would define big data as follows: big data is the combination of nontraditional and traditional employment data with technology-enabled analytics to create processes for identifying, recruiting, segmenting and scoring job candidates and employees."[20]

Employers have long used traditional employee data to make hiring decisions. Applications and resumes that lead to interviews can only tell an employer so much about a potential employee. Employers browsing candidates' social media has become an accepted reality, and now the further rise of nontraditional employee data is raising concerns. Nontraditional employment data are a collection of information "maintained by the employer, public records, social media activity logs, sensors, geographic systems, internet browsing history, consumer data-tracking systems, mobile devices, and communications metadata systems."[21] Other sources of data, such as combinations of words on resumes, personality test results, facial recognition software, and individual performance ratings on tests, can also be considered.[22] Companies may also choose to include "internal company information such as frequency of meetings, locations of meetings, recipients and content of employee emails, and records of employee participation in wellness programs."[23]

This list of data sources continues to expand with information such as a person’s face and voice being reduced to a string of code.[24] All of this information is gathered, quantified, and combined with other sources for use by employers. Potential employers may gather the data themselves or purchase it from information brokers. From there, the data can be used to uncover underlying patterns for use in predicting outcomes for similarly profiled groups of employees or applicants.[25]

Facial recognition and voice analysis data are likely to prove especially problematic because they may allow companies to sidestep Title VII's racial protections by favoring datasets that match certain ethnicities and exclude others. Distilling a person's facial features down into data points would allow for comparison against an "ideal." A potential employer could ask an algorithm to produce only candidates whose facial features indicate Asian-American descent, or whose voice or speech patterns fit a preferred profile. These data points would look innocuous amongst a large number of other criteria. Similarly, facial features could be used to spot potential genetic disorders in applicants and exclude them.[26] While this possibility may run directly into the protections offered under the Genetic Information Nondiscrimination Act, no court has yet faced this issue.[27] While the true possibilities of facial recognition programs in hiring have yet to be fully explored, this data is in use today and the dangers are real.[28]

Employers collect applicant data and compare it against data gathered from existing or former employees, looking for factors that emerge as strong predictors of future success. One algorithm might have over 100,000 individual data points that are potentially scorable.[29] However, for many candidates, there will be missing data. An algorithm may score each candidate based on only 500 data points, and those data points may not be the same from candidate to candidate.[30] Once the traditional and nontraditional data are combined, employers can begin to screen passive or active job applicants, or even target how employee training resources are allocated. Once employers develop a profile of an ideal candidate, they can begin to search for current or potential applicants who fit that mold, as in the prior example of a company favoring applicants for a programming position who visited websites that provide Japanese Manga.[31] This also allows employers to exclude applicants due to their potential for absenteeism, safety incidents, or probability of turnover.[32] Applicants may be excluded not because they would actually have a high rate of absenteeism, but because they fit the company's profile of someone who might. As an example, the distance an applicant lives from the workplace has been used to disqualify applicants.[33] Losing out on an employment opportunity because a person's social media history included a large number of posts about caring for sick relatives during work days is becoming a reality.
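
To make the scoring step concrete, the following is a minimal sketch in Python of how a system like the one described above might rank candidates using only the data points available for each of them. The feature names and weights are invented for illustration and do not reflect any vendor's actual model.

    # Hypothetical sketch: score candidates on whatever data points are available.
    # Feature names and weights are invented for illustration only.
    FEATURE_WEIGHTS = {
        "years_experience": 0.40,
        "visits_manga_sites": 0.15,          # the kind of incidental signal discussed above
        "distance_from_office_miles": -0.02,
        "typing_speed_wpm": 0.01,
        # ... a real system might draw on tens of thousands of possible data points
    }

    def score_candidate(data_points):
        """Score a candidate using only the data points present for them."""
        available = {k: v for k, v in data_points.items() if k in FEATURE_WEIGHTS}
        if not available:
            return 0.0
        return sum(FEATURE_WEIGHTS[k] * v for k, v in available.items())

    candidates = {
        "A": {"years_experience": 6, "distance_from_office_miles": 4},
        "B": {"visits_manga_sites": 1, "typing_speed_wpm": 85},  # scored on different variables
    }

    ranked = sorted(candidates, key=lambda c: score_candidate(candidates[c]), reverse=True)
    print(ranked)  # candidates end up ranked against each other on different evidence

Note that candidates A and B are ranked against one another even though their scores rest on entirely different variables, which is precisely what makes the basis of any individual decision so difficult to reconstruct after the fact.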

The initial push for the use of big data analytics in employment has been spearheaded largely by departments comfortable with data analysis, such as marketing and operations.[34] Marketing departments have long been familiar with segmenting the population and identifying groups of people for targeted advertisements. An employer wants to know its target audience and find effective ways of reaching it.[35] The most famous example is Target's "Pregnancy Prediction Score," which Target used to predict whether a customer was pregnant and to estimate the due date within a small window.[36] This allowed Target to send coupons to customers at specific stages of pregnancy using nothing more than their shopping habits for items like lotion and cotton balls.[37] Applied to a potential applicant pool, these same innocuous criteria, easily buried in the noise of 100,000 or more possible factors, could be used to completely exclude potentially pregnant women from consideration if a company were worried about an applicant taking maternity leave shortly after being hired. In terms of bringing a claim, it would be difficult for an applicant to prove they were discriminated against based on pregnancy at the time they applied, especially if they did not themselves know they were pregnant at the time of application.

Employers will want to use this algorithmic power to improve their hiring, retention, and promotion decisions. However, employers must be mindful not to discriminate against employees, or potential employees, based on protected characteristics. Discrimination based on race, color, religion, sex, national origin,[38] disability,[39] genetic information,[40] or pregnancy[41] is forbidden by law. How each criterion applied by the algorithm affects each of these protected classes cannot be ignored, and those coming from a marketing background may not understand the legal ramifications of certain criteria. Imagine a potential employer is heading into a very busy year and does not want to risk losing any of its new hires to maternity leave. Adding a variable to exclude any applicants who have mentioned pregnancy in their social media posts, or who have searched for fertility treatments, could quickly lead to a Title VII violation if discovered. What if criteria similar to those used in the Target Pregnancy Prediction Score were used to exclude applicants? Applicants could be excluded from consideration due to pregnancy who either had no idea they were pregnant, or had no intention of becoming pregnant but had preferences that corresponded to the algorithm's expectations.

People living with disabilities are especially at risk. Many innocuous behaviors can show a correlation with mental illness. For instance, academic research has shown that certain patterns of social media usage can be related to mood disorders.[42] In 2014, The Samaritans, a British suicide-prevention group, developed an application that would notify users when someone they followed on Twitter posted certain phrases indicating they might be at risk of killing themselves.[43] While the application was later disabled over privacy and stalking concerns, it highlighted the ability of a potential employer to review a person's entire social media history and develop a profile of an employee's potential mental illness. If a machine can determine, or thinks it can determine, that certain social media posting patterns correlate with heightened absenteeism or a desire to change jobs,[44] an employee could lose a promotion or face preemptive disciplinary action without any understanding of why. For existing employees, the EEOC has offered guidance on Employer Wellness Programs,[45] but these same criteria could be applied against potential applicants.

In her testimony, Kelly Trindel addressed these risks. She noted: “As an example of the type of EEO problems that could arise with the use of these algorithms, imagine that a Silicon Valley tech company wished to utilize an algorithm to assist in hiring new employees who ‘fit the culture’ of the firm. The culture of the organization is likely to be defined based on the behavior of the employees that already work there, and the reactions and responses of their supervisors and managers. If the organization is staffed primarily by young, single, white or Asian-American male employees, then a particular type of profile, friendly to that demographic, will emerge as ‘successful.’ Perhaps the successful culture-fit profile is one of a person who is willing to stay at the job very late at night, maybe all night, to complete the task at hand. Perhaps this profile is one of a person that finds certain perks in the workplace, such as free dry cleaning, snacks, and a happy hour on Fridays preferable to others like increased child-care, medical and life insurance benefits. Finally, perhaps the successful profile is one of a person who does not own a home or a car and rather appears to bike or walk to work. If the decision-makers at this hypothetical firm look to these and other similar results to assist in the recruiting of passive candidates, or to develop a type of screen, giving preference to those future job-seekers who appear to ‘fit the culture,’ the employer is likely to screen out candidates of other races, women, and older workers. In this situation, not only would the algorithm cause adverse impact, but it would likely limit the growth of the firm.”[46]

Ultimately, the relationships among the variables are exclusively correlational in nature. There is no certainty that an individual employee's distance from work will increase their absenteeism, or that a love of Manga will make an employee a better coder.[47] There is no guarantee that the ideal employee is not among the large percentage of resumes discarded by the system. Algorithms are, at their basic level, opinions, biases, and prejudices embedded in code.[48] While they can mimic human decision making, if the decision-making criteria are fundamentally flawed, those flaws will be present in the final output. The old programmer truism remains as true as ever: computers make very fast, very accurate mistakes.

While algorithmic hiring can be a powerful tool for employers, it carries the very real risk of inadvertently or intentionally discriminating against those protected by Title VII. Data analysis can also reveal existing bias inside a company's hiring practices. Companies have used machine learning and language analysis to analyze job postings and, in doing so, have identified phrases that indicate existing gender bias.[49] "Language like 'top-tier,' 'aggressive,' and sports or military analogies like 'mission critical' decrease the proportion of women who apply for a job. Language like 'partnerships' and 'passion for learning' attract more women."[50]
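
As a rough illustration of what such a posting audit involves, the short Python sketch below simply flags phrases of the kind identified in the research cited above. The phrase lists are taken from the quoted examples and are illustrative only, not a validated lexicon.

    # Illustrative sketch of a job-posting language audit.
    # Phrase lists come from the examples quoted above; they are not a validated lexicon.
    PHRASES_DISCOURAGING_WOMEN = ["top-tier", "aggressive", "mission critical"]
    PHRASES_ATTRACTING_WOMEN = ["partnerships", "passion for learning"]

    def audit_posting(text):
        """Return the flagged phrases found in a job posting."""
        lowered = text.lower()
        return {
            "may_discourage_women": [p for p in PHRASES_DISCOURAGING_WOMEN if p in lowered],
            "may_attract_women": [p for p in PHRASES_ATTRACTING_WOMEN if p in lowered],
        }

    posting = "Seeking an aggressive, top-tier engineer for mission critical systems."
    print(audit_posting(posting))
    # {'may_discourage_women': ['top-tier', 'aggressive', 'mission critical'], 'may_attract_women': []}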

By hiring employees with qualities and skillsets similar to those of existing employees, employers enjoy a better chance of hiring the right person for the role. On the surface, algorithmic hiring can help companies translate stacks of resumes into qualified hires. This can significantly reduce both the time and the cost of identifying which applicants to hire. A poor hiring decision can result in the sunk costs of hiring and training an employee who may later be fired for being unable to perform their duties. Companies that have used algorithmic hiring have reported positive results, such as a 38 percent reduction in employee turnover.[51]

Studies have shown that a properly designed analysis of potential employee metadata outperforms human decisions by at least 25%.[52] This effect holds in any situation with a large number of candidates and across all levels of hiring, including front line, middle management, and upper management.[53]

Unfortunately, there are downsides to algorithmic hiring practices that carry serious legal complications. In order to create the hiring algorithms, companies rely on their existing worker pool as well as the qualities of the "ideal worker" provided by management.[54] If an employer's worker pool has already been tainted by previously discriminatory hiring practices, the employer is likely to inadvertently continue those discriminatory practices.[55] A workplace composed largely of a single sex, or largely of a single race, may skew the results toward those groups.[56] Algorithms may end up targeting minorities through facially neutral parameters that lead to disparate impacts. These disparate impacts may take significant time to be revealed, meaning that either accidental or intentional racism in the programming can have wide-ranging and far-reaching consequences.

Reputable companies that design algorithms for algorithmic hiring are aware of the potential legal consequences of violating the law and do take steps to locate and correct any unintentional side effects of their data points. If a particular variable is found to be excluding an abnormally large number of members of a single group, it can be refined or discarded. However, even the most innocuous of variables can end up having a disparate impact. For instance, one company noticed that excluding potential employees who lived more than a certain distance from the workplace resulted in the loss of a disproportionate number of African-American candidates. This led the company to discover that most of the African-American candidates lived in the outlying suburbs of the area. By trying to exclude the employees most likely to be late or to miss work due to traffic or transportation problems, this variable had, in effect, excluded a large number of African-American applicants.
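
The kind of internal check that surfaces this sort of problem can be sketched very simply: before adopting a screening variable, compare the demographic makeup of the applicant pool with and without the screen. The Python sketch below uses fabricated data purely for illustration.

    # Hypothetical audit of a single screening rule (fabricated data for illustration).
    from collections import Counter

    applicants = [
        {"id": 1, "race": "Black", "distance_miles": 22},
        {"id": 2, "race": "White", "distance_miles": 5},
        {"id": 3, "race": "Black", "distance_miles": 25},
        {"id": 4, "race": "White", "distance_miles": 30},
        {"id": 5, "race": "White", "distance_miles": 8},
    ]

    def passes_screen(applicant, max_distance=20):
        """The proposed screening rule: keep only applicants living within max_distance."""
        return applicant["distance_miles"] <= max_distance

    def composition(pool):
        """Share of the pool belonging to each group."""
        counts = Counter(a["race"] for a in pool)
        return {race: round(n / len(pool), 2) for race, n in counts.items()}

    print("before screen:", composition(applicants))
    print("after screen: ", composition([a for a in applicants if passes_screen(a)]))
    # before screen: {'Black': 0.4, 'White': 0.6}
    # after screen:  {'White': 1.0}   -- the screen removed every Black applicant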

Algorithmic hiring practices, while continuing to prove their time and cost savings to human resource managers across the country, carry with them an ever-increasing legal risk. In the past, the benefits of "Big Data" have proved invaluable, but legal challenges and regulatory changes may be forthcoming. As technology advances, the disparate impact of algorithmic hiring built on past discriminatory hiring practices will become easier to prove, at a time when companies may be under greater scrutiny from the EEOC. The real-world risks of disparate impact have drawn the attention of the EEOC and, in a recent open meeting, were discussed at length. Finally, the EEOC stands ready to step into the issue. On October 13, 2016, the EEOC held a meeting titled "Big Data in the Workplace: Examining Implications for Equal Employment Opportunity Law." The insights of the speakers, experts in their fields, offer a glimpse of the information presented directly to the Commission. The EEOC's examination of the problems of algorithmic hiring indicates that guidance on these issues might be coming.

Were a disparate impact claim based on algorithmic hiring to reach the Supreme Court, it is difficult to predict how such a claim would be resolved. Traditionally, the Court has offered "Chevron deference" to the interpretations of regulatory agencies: if Congress has not directly addressed the precise question at issue, the court will defer to the agency's answer so long as it is based on a permissible construction of the statute.[57] Chevron deference has been a powerful tool for regulatory agencies, as it has allowed for the enforcement of the spirit of the law in the face of emerging technological and social changes. However, Chevron is not above criticism, and current Supreme Court Justice Neil Gorsuch has expressed serious doubts about the legitimacy of the legislative branch delegating such power to the executive branch.[58] In terms of the disparate impact of algorithmic hiring, whether the final answer on how to handle these cases comes from the EEOC or the courts, it is an issue that will someday have to be addressed.

Unfortunately, the EEOC guidelines on hiring practices can be rather nebulous as applied to algorithmic hiring. Hiring procedures that have an adverse impact constitute discrimination unless they are justified.[59] However, when two or more selection procedures are available that serve the user's legitimate interest and are substantially equally valid for a given purpose, the user should use the procedure demonstrated to have the lesser adverse impact.[60] With algorithmic hiring, a company could run millions of permutations on the data, leading to very different results as to whom to interview. The algorithms are also constantly evolving, and at each stage could prove more or less discriminatory toward some combination of protected applicants.
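
One widely used benchmark from the Uniform Guidelines cited here is the "four-fifths rule": a selection rate for any group that is less than four-fifths of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The Python sketch below applies that benchmark to two hypothetical selection procedures; all figures are invented for illustration.

    # Four-fifths rule check on two hypothetical selection procedures (invented figures).

    def selection_rates(selected, applicants):
        """Selection rate per group: number selected divided by number of applicants."""
        return {g: selected[g] / applicants[g] for g in applicants}

    def adverse_impact_ratio(rates):
        """Lowest group's selection rate divided by the highest group's rate."""
        return min(rates.values()) / max(rates.values())

    applicants = {"group_a": 200, "group_b": 100}

    procedure_1 = selection_rates({"group_a": 60, "group_b": 12}, applicants)  # 0.30 vs 0.12
    procedure_2 = selection_rates({"group_a": 50, "group_b": 22}, applicants)  # 0.25 vs 0.22

    for name, rates in [("procedure_1", procedure_1), ("procedure_2", procedure_2)]:
        ratio = adverse_impact_ratio(rates)
        flag = "possible adverse impact" if ratio < 0.8 else "passes four-fifths benchmark"
        print(name, round(ratio, 2), flag)
    # procedure_1 0.4 possible adverse impact
    # procedure_2 0.88 passes four-fifths benchmark

Under this framework, if both hypothetical procedures served the employer's purpose about equally well, the guidelines would expect the employer to use the second one, which shows the lesser adverse impact.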

The EEOC does mandate that records be kept, especially with regard to potential impact. Employers should maintain, and have available for inspection, records or other information that will disclose the impact their tests and other selection procedures have upon employment opportunities by race, sex, or ethnic group.[61] However, with algorithmic hiring there are very likely to be a large number of applicants, and the procedures are likely to be administered frequently.[62] Running hundreds or thousands of permutations on the data in an attempt to cull an employer's applicant pool to a manageable number means that an employer would only need to retain such information on a sample basis.[63] An employer would not need to maintain the exact algorithm, or the exact dataset used for comparison. This further limits discovery options, as a plaintiff may have no means of examining the exact qualifications used to disqualify them from employment.

Employers should be running tests on the results to ensure the selection methods themselves are not producing a disparate impact, and performing other validation studies.[64] Even if an individual selection criterion produces a disparate impact, the EEOC's guidelines provide that where "the total selection process does not have an adverse impact, the Federal enforcement agencies, in the exercise of their administrative and prosecutorial discretion, in usual circumstances, will not expect a user to evaluate the individual components for adverse impact, or to validate such individual components, and will not take enforcement action based upon adverse impact of any component of that process, including the separate parts of a multipart selection procedure or any separate procedure that is used as an alternative method of selection."[65] This effectively negates the holding in Teal, which rejected a "bottom line" defense, and disregards the fact that Title VII's protections are supposed to allow the individual to compete.[66]

Validating algorithmic hiring practices through validity studies will prove deeply problematic. Validity studies should be based on a review of information about the job.[67] The study must be technically feasible,[68] and with the modern computing power backing algorithmic hiring, a large number of tests become feasible. However, the criteria used should be relevant to the job, or group of jobs, in question.[69] They must represent critical or important job duties and work behaviors developed from a review of job information.[70] Bias must also be considered in all areas of criteria selection, application, and subjective evaluation.[71] For companies used to working with marketing data, the legal requirements for using similar tools in employment contexts can be full of potential pitfalls. Attempting to ensure the testing procedure is in full compliance with all the technical standards for validity studies[72] is daunting. Moreover, even a selection procedure fully validated against job performance cannot be imposed upon members of a race, sex, or ethnic group where other applicants have not been subjected to that standard, or where it denies them the same employment, promotion, membership, or other employment opportunities as have been available to other employees or applicants.[73]

Distilled down to its essence, the problem of algorithmic hiring is very similar to the issue the Supreme Court faced in Griggs v. Duke Power Co. In Griggs, African-American workers challenged the discriminatory effect of requiring a high school education, as well as satisfactory scores on two professionally prepared aptitude tests. Existing employees could seek promotion into certain departments only after passing these aptitude tests. In the opinion, Chief Justice Burger famously wrote: "The objective of Congress in the enactment of Title VII is plain from the language of the statute. It was to achieve equality of employment opportunities and remove barriers that have operated in the past to favor an identifiable group of white employees over other employees. Under the Act, practices, procedures, or tests neutral on their face, and even neutral in terms of intent, cannot be maintained if they operate to 'freeze' the status quo of prior discriminatory employment practices." Despite this ruling, the reality is that modern algorithms designed around data points cultivated from existing star employees do freeze the status quo, while simultaneously making legal challenges to their use almost impossible for a potential employee to win.

 

Conclusion

Addressing the legal concerns of algorithmic hiring practices in terms of disparate impacts can theoretically come from several sources. It is possible for the legislature to pass laws offering individuals more protections in the modern digital world. However, Congress recently passed a bill allowing Internet Service Providers to sell browsing data, indicating that in the short term little help is likely to be forthcoming from the legislature. As expected, the President signed this bill, indicating that, in terms of digital privacy and information misuse, the new administration is unlikely to offer the same protections and priorities as the prior administration.

In 2016, the UN passed a nonbinding resolution declaring internet access a basic human right. President Obama attempted to increase the availability of internet access to the poor, minorities, and rural areas. Sadly, the FCC disagreed in 2015, arguing that Internet access is not a basic human right. This leaves all uses of the internet subject to the whims of the Internet Service Providers and the private companies managing the websites themselves. How this information is ultimately collected and used is going to be a complicated battle in the years to come.

Employers are already mining all public information about potential applicants. While the EEOC held a meeting in 2014 to discuss future guidance,[74] checking an applicant's social media continues to be an accepted practice. Traditionally, a potential employer simply has a live person review the material. Now, all of that public information is downloaded onto the servers of companies' algorithmic hiring divisions and resold to other employment agencies looking to perform searches. Algorithms are complicated beasts manufactured out of hopes, dreams, and past results. They are as unreliable as economic models, throwing correlations at the wall to see what sticks. But they work, so employers are going to use them. Google already has the capability to sell all of an individual's search information. Amazon and other sites use an individual's search data to feed customer ads. Facial recognition systems and voice analysis can already turn a person's look and sound into data points to be fed into a system. A phone using Google Maps data is already tracking its location, making a person's driving habits potentially public information.

The best source of change is going to be the EEOC itself. While the EEOC was looking into the issue as of October 2016, it remains to be seen whether the priorities of this regulatory agency survive the change in administration. Under the new administration, regulations requiring more transparency in employer hiring practices, the kind of transparency that makes disparate impact lawsuits possible, will probably be rolled back rather than promulgated. The EEOC would need to promulgate new guidelines forcing employers and employment agencies to ensure transparency with their algorithms. Even with complete transparency, there is little solace in finding out that a potential employee lost a job because they did not read certain websites or eat at certain restaurants.

The EEOC needs to act, and quickly, to establish clear rules on how algorithmic hiring can and cannot be used to comply with Title VII. A hands-off or half-hearted approach could serve only to eviscerate 50 years of progress. Employers and employment agencies need to be required to run, document, and maintain data analysis at every stage of the algorithmic hiring process. Variables that are found to exclude a large number of members of a protected class must be documented and either discarded or individually justified as directly related to performance of the job. Attempts to conceal the true impact of certain variables should be harshly discouraged. The onus is already on both employers and employment agencies to comply with Title VII; guidance that codifies how that is to be done properly in the age of big data is vital, and delays will only make the problem worse. Requiring companies to maintain specific data sets and document how their algorithms affected the applicant pool in terms of protected classes will be vital both in allowing companies to identify disparate impacts before the hiring process proceeds and in allowing potential plaintiffs the evidence they need to bring lawsuits if companies fail to act upon this data.

 

Works Cited

[1] 42 U.S.C. § 2000e-2 (2012).

[2] El v. Se. Pennsylvania Transp. Auth. (SEPTA), 479 F.3d 232, 245 (3d Cir. 2007).

[3] Written Testimony of Kelly Trindel, PhD, Chief Analyst Office of Research, Information and Planning, EEOC, Equal Employment Opportunity Commission (13 Oct. 2016), https://www.eeoc.gov/eeoc/meetings/10-13-16/trindel.cfm.

[4] Don Peck, They're Watching You at Work, ATLANTIC (Dec. 2013), http://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/.

[5] Griggs v. Duke Power Co., 91 S. Ct. 849, 853 (1971).

[6] 42 U.S.C. Supra note 1.

[7] Trindel, Supra note 3.

[8] 42 U.S.C. Supra note 1.

[9] Sir Francis Bacon, Meditationes Sacrae and Human Philosophy (1597).

[10] Mary Meeker and Liang Yu, Internet Trends, Kleiner Perkins Caufield & Byers (29 May 2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013/.

[11] Mikal Khoso, How Much Data Is Produced Every Day? (13 May 2016), http://www.northeastern.edu/levelblog/2016/05/13/how-much-data-produced-every-day/.

[12] Drew Barton, How Do Some Banner Ads Follow Me from Site to Site (29 Mar. 2013), https://southernweb.com/2013/03/how-do-some-banner-ads-follow-me/.

[13] Executive Office of The President, Big Data: Seizing Opportunities, Preserving Values (May 2014), https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf [hereinafter Executive Office].

[14] Executive Office, Supra note 13, at 22–32.

[15] Executive Office, Supra note 13, at 46.

[16] GAO, Information Resellers: Consumer Privacy Framework Needs to Reflect Changes in Technology and the Marketplace, GAO-13-663, 2013, http://www.gao.gov/assets/660/658151.pdf.

[17] Executive Office, Supra note 13, at 6.

[18] Manolis Kellis, Importance of Access to Large Populations, Big Data Privacy Workshop: Advancing the State of the Art in Technology and Practice, Cambridge, 3 Mar. 2014, http://web.mit.edu/bigdatapriv/ppt/ManolisKellis_PrivacyBigData_CSAIL-WH.pptx.

[19] Trindel, Supra note 3.

[20] Id.

[21] Id.

[22] Written Testimony of Kathleen K. Lundquist, PhD, Organizational Psychologist, President and CEO, APTMetrics, Inc. Equal Employment Opportunity Commission (13 Oct. 2016), https://www.eeoc.gov/eeoc/meetings/10-13-16/lundquist.cfm.

[23] Lundquist, Supra note 22.

[24] Trindel, Supra note 3.

[25] Id.

[26] Kate Sheridan, Facial-recognition software finds a new use: Diagnosing genetic disorders, Stat News (10 Apr. 2017), https://www.statnews.com/2017/04/10/facial-recognition-genetic-disorders/.

[27] 42 U.S.C. § 2000ff-1 (2008).

[28] Lundquist, Supra note 22.

[29] Id.

[30] Id.

[31] Don Peck, Supra note 4.

[32] Trindel, Supra note 3.

[33] Id.

[34] Trindel, Supra note 3.

[35] Id.

[36] Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did, Forbes, 16 Feb. 2012, https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did.

[37] Id.

[38] 42 U.S.C. Supra note 1.

[39] 42 U.S.C. § 12112 (2012).

[40] 42 U.S.C. § 2000ff (2012).

[41] 42 U.S.C. § 2000e (k) (2012).

[42] Lin, L. y., Sidani, J. E., Shensa, A., Radovic, A., Miller, E., Colditz, J. B., Hoffman, B. L., Giles, L. M. and Primack, B. A., “Association between social media use and depression among U.S. young adults.” Depress Anxiety, vol. 33, 2016, pp. 323–331. http://onlinelibrary.wiley.com/doi/10.1002/da.22466/.

[43] Natasha Singer, "Risks in using social media to spot signs of mental distress." New York Times, 26 Dec. 2014, https://www.nytimes.com/2014/12/27/technology/risks-in-using-social-posts-to-spot-signs-of-distress.html.

[44] Trindel, Supra note 3.

[45] EEOC Issues Final Rules on Employer Wellness Programs, Equal Employment Opportunity Commission (16 May 2016), https://www.eeoc.gov/eeoc/newsroom/release/5-16-16.cfm.

[46] Trindel, Supra note 3.

[47] Don Peck, Supra note 4.

[48] Gideon Mann and Cathy O’Neil, “Hiring algorithms are not neutral.” Harvard Business Review, 9 Dec. 2016, https://hbr.org/2016/12/hiring-algorithms-are-not-neutral

[49] Claire Cain Miller, “Can an algorithm hire better than a human.” New York Times, 25 Jun. 2015, https://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human.html.

[50] Id.

[51] John Rossheim, Algorithmic Hiring: Why Hire By Numbers?, Monster, https://hiring.monster.com/hr/hr-best-practices/recruiting-hiring-advice/strategic-workforce-planning/hiring-algorithms.aspx.

[52] Nathan R. Kuncel, Deniz S. Ones, and David M. Klieger, “In hiring, algorithms beat instinct.” Harvard Business Review, May 2014, https://hbr.org/2014/05/in-hiring-algorithms-beat-instinct.

[53] Id.

[54] Lauren A. Rivera, “Guess who doesn’t fit in at work.” New York Times, 30 May 2015, https://www.nytimes.com/2015/05/31/opinion/sunday/guess-who-doesnt-fit-in-at-work.html.

[55] Trindel, Supra note 3.

[56] Written Testimony of Michael Housman, Workforce Scientist, hiQ Labs, Equal Employment Opportunity Commission (13 Oct. 2016), https://www.eeoc.gov/eeoc/meetings/10-13-16/housman.cfm.

[57] Chevron, U.S.A., Inc. v. Nat. Resources Def. Council, Inc., 467 U.S. 837, 842 (1984).

[58] Gutierrez-Brizuela v. Lynch, 834 F.3d 1142, 1148 (10th Cir. 2016).

[59] 29 CFR § 1607.3 A (2017).

[60] 29 CFR § 1607.3 B (2017).

[61] 29 CFR § 1607.4 A (2017).

[62] Id.

[63] Id.

[64] 29 CFR § 1607.5 (2017).

[65] 29 CFR § 1607.4 C (2017).

[66] Connecticut v. Teal, 457 U.S. 440, 451 (1982).

[67] 29 CFR § 1607.14 A (2017).

[68] 29 CFR § 1607.14 B (2017).

[69] 29 CFR § 1607.14 B (2) (2017).

[70] Id.

[71] Id.

[72] 29 CFR § 1607.14 (2017).

[73] 29 CFR § 1607.11 (2017).

[74] Social Media Is Part of Today’s Workplace but its Use May Raise Employment Discrimination Concerns, Equal Employment Opportunity Commission (12 Mar. 2014), https://www.eeoc.gov/eeoc/newsroom/release/3-12-14.cfm.