Published on January 15th, 2020
Written by Zahra Thabet, UROP Student in the GMTaC Lab
Last semester I conducted research on Freedom of Information Act (FOIA) requests submitted to U.S. Immigration and Customs Enforcement, or ICE, a federal law enforcement agency that operates under the purview of the Department of Homeland Security. The goal of this project is to understand what is publicly known about ICE’s surveillance practices and to uncover further information through FOIA requests. The Freedom of Information Act, enacted in 1966, mandates that if a member of the public requests information from a federal agency, the agency must share that information, barring certain exemptions.1 FOIA requests are a useful site for uncovering information on ICE’s surveillance practices because both their content and the process for obtaining them (length of request processing, subject of request, etc.) can be made available to the wider public through sites like MuckRock, an online repository and filing assistant for FOIA requests. By gathering existing FOIA requests on MuckRock, I was able to create a dataset of requests and their resultant documents and perform both quantitative and qualitative analyses of their content. This subset of documents offered multiple routes to further explore ICE’s surveillance practices, by means of either further textual analysis or filing additional FOIA requests.
This project gave me an opportunity not only to examine specific documents released, but also to explore how ICE’s processing of FOIA requests has evolved and stayed the same since 2014. Entering this project, I wanted to find out how many FOIA requests have been submitted to ICE in recent years, and, of the documents released, how many contain records related to surveillance technologies and practices used by the agency.
I worked with documents archived on MuckRock, focusing on this set of files because they are available to almost anyone with access to the internet. In this sense, MuckRock acts as an agent for the democratization of this information, enabling accessible research and analysis. MuckRock is a nonprofit; while requests filed through the site are available to the public, the filing-assistant portion of the service can be used either with a one-time payment or on a paid-subscription basis.2 I gathered documents that were released in response to FOIA requests fulfilled by ICE from January 1st, 2014 to November 3rd, 2019. Of the 346 requests to ICE registered on MuckRock during that period, 119 have been tagged as “completed” on the site, which usually indicates that ICE responded to the requester with documents relevant to the request.3 In the period from January 1st, 2014 to December 31st, 2018, the 263 FOIA requests to ICE on MuckRock that are not pending account for 0.08 percent of all 314,171 requests processed by ICE, as indicated by DHS’s FOIA annual reports.4, 5 It is clear that even though FOIA is ostensibly an act to benefit the public, its execution effectively limits the dissemination of those benefits, as no single clearinghouse exists.6 MuckRock is one of the few nongovernmental resources to that end.7
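The 0.08 percent figure is straightforward to reproduce; a minimal sketch, using only the counts reported in this paragraph:

```python
# Share of ICE's processed FOIA requests (FY2014-FY2018) that appear
# on MuckRock, using the counts reported above.
muckrock_non_pending = 263
ice_processed_total = 314_171

share = muckrock_non_pending / ice_processed_total
print(f"{share:.2%}")  # prints "0.08%"
```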
To process the data from the completed requests, I classified each document from each request with various document- and request-level identifiers. Using these identifiers, I was able to further sort the ICE-related documents available on MuckRock and extract data on how long it took for requests to be fulfilled. A FOIA request on MuckRock can consist of multiple documents, including documents related to the processing of the FOIA request itself as well as documents that are responsive to the request.
The software I used to format the document-based dataset was NVivo, a data-analysis package for qualitative and mixed-methods content analysis. It allowed me to perform content analysis across a large and varied set of documents.8
The average time it took ICE to fulfill a FOIA request on MuckRock during the period of January 1st, 2014 to November 3rd, 2019 was 253 days. The three longest fulfillment times for requests tagged as completed on MuckRock were 1,041, 914, and 878 days. Respectively, these requests produced a list of the detention and deportation facilities to open in 2014 and 2015;9 documents related to ‘Operation Torrent Divide,’ an ICE mission referred to in a statement by DHS Secretary Jeh Johnson that seems to pertain to unaccompanied minors crossing the border;10 and FY2013 contracts between ICE and Ammunition Accessories Inc, Federal Cartridge Company, and Colt Defense LLC.11 Conversely, the three shortest periods for FOIA requests to be “completed” on MuckRock were 7, 8, and 16 days. In these, ICE provided a referral to a publicly available document with the number of detainee deaths in 2016,12 a contract between ICE and Digital Receiver Technology Inc,13 and documents on ICE’s Secure Communities deportation program.14 These three requests were most likely fulfilled so quickly because the information being requested either had been requested before or was already publicly available.
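Figures like these come from simple date arithmetic on each request’s filing and completion dates. A minimal sketch of that computation, with made-up dates standing in for the actual MuckRock data:

```python
from datetime import date

# Hypothetical (filed, completed) date pairs standing in for the
# MuckRock request data; the real dataset covers 2014-2019.
requests = [
    (date(2014, 3, 1), date(2014, 9, 27)),
    (date(2015, 6, 15), date(2017, 1, 2)),
    (date(2016, 2, 10), date(2016, 2, 17)),
]

# Days elapsed between filing and completion for each request.
durations = [(completed - filed).days for filed, completed in requests]
average = sum(durations) / len(durations)
print(sorted(durations, reverse=True), round(average))
```

The same per-request durations also feed the longest/shortest rankings reported above.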
The data I gathered from MuckRock spans almost five years, and more when the date the initial request was filed is considered rather than just the date it was fulfilled, as that variable stretches back to 2012. Given this time span, I decided to check whether the amount of time it took ICE to deliver on FOIA requests was correlated with when the request was placed. One could imagine, for example, a correlation with changes in ICE’s processing structure, or with the agency being unprepared for a change in the number of requests stemming from current events. Included below is a visual representation of this relationship. At first glance, there does not seem to be such a relationship. Further analysis of the number of requests filed on a specific date, and of that number’s correlation with either a political event or changes to MuckRock’s own processing structure, could be beneficial.
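One way to quantify such a relationship is a Pearson correlation between filing date and processing time; a sketch, assuming the two variables have been extracted into parallel lists (the numbers below are illustrative, not the actual dataset):

```python
from datetime import date

# Hypothetical filing dates and processing times (in days); the real
# values would come from the MuckRock dataset described above.
filed = [date(2014, 5, 1), date(2015, 8, 12), date(2017, 3, 3), date(2019, 1, 20)]
days_to_complete = [610, 200, 95, 330]

# Express filing dates as ordinal day numbers so they can be correlated.
x = [d.toordinal() for d in filed]
y = days_to_complete

# Pearson correlation coefficient, computed directly.
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
var_x = sum((a - mean_x) ** 2 for a in x)
var_y = sum((b - mean_y) ** 2 for b in y)
r = cov / (var_x * var_y) ** 0.5
print(round(r, 2))
```

An r close to zero would be consistent with the “no apparent relationship” reading of the plot.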
A central question we are investigating is: what are the surveillance methods that ICE employs? To begin answering it, I searched for every appearance of the word “surveillance” in the responsive documents. This search returned 607 instances, each of which I then coded according to theme. Most of these instances fell under the Quality Assurance Surveillance Plan (QASP), a document used across government agencies, and modified to fit each contract’s context, to monitor and regulate contractor performance. The QASP was mostly present in FOIA requests related to ICE’s contracts. This use of the term surveillance was not as relevant to the type of surveillance we aimed to identify; most of its use in this context pertained to enforcing standards for businesses contracted by ICE. However, the term in this context had the most widespread deployment across all completed requests to ICE on MuckRock: while most other themes were relegated to one or two files, uses of the word in the QASP and related contexts spanned 22 files.
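Outside of NVivo, this kind of word search can be approximated with a short script; a sketch assuming the responsive documents have already been converted to plain-text files (the directory layout and function name are hypothetical):

```python
import re
from pathlib import Path

def count_term(docs_dir: str, term: str) -> tuple[int, int]:
    """Return (total occurrences, number of files containing the term)."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    total = files_with_term = 0
    for path in Path(docs_dir).glob("*.txt"):
        hits = len(pattern.findall(path.read_text(errors="ignore")))
        total += hits
        files_with_term += hits > 0
    return total, files_with_term
```

Run against the 607 “surveillance” hits described above, the second number would correspond to the 22-file spread of the QASP-related uses.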
Upon further reflection, it was interesting that most of the documents related to the QASP pertained to contractors’ roles in detention facilities. These documents thus provided details about surveillance practices within the facilities, as ICE’s standards for contractors in this context are tied to the agency’s expectation of surveillance by the company. For instance, there were a couple of mentions of contractors needing to institute telephone surveillance, and, in a document similar to the QASP, of a contractor’s maintenance of a video surveillance system covering a facility’s perimeter.15, 16
The presence of these contracts and their links to ICE’s surveillance practices offers a couple of potential leads to investigate. Though the facilities are administered by ICE, I have identified multiple instances where localities as legal bodies were involved in or mentioned within a contract. Since many states and localities have enacted their own laws modeled on FOIA, it is sometimes possible to request documents from these bodies in a manner similar to filing a FOIA request. Requesting documents from non-federal governmental bodies has its benefits, as local agencies tend to have less of a FOIA burden to begin with.
Moving forward, I aim to do further content analysis of the document set I assembled, continue to use NVivo to search the dataset for other surveillance-related terms such as “monitoring,” “lawful interception,” and “aerial observation,” and explore the possibility of submitting records requests to the aforementioned localities to gather more information on ICE’s surveillance practices.
Appendix 1: MuckRock Tags
MuckRock assigns an identifier to all FOIA requests to indicate their status. The possible status tags include completed, awaiting response, no responsive documents, fix required, withdrawn, rejected, awaiting appeal, awaiting acknowledgement, partially completed, processing, payment required, and in litigation. Before discussing the dataset that I compiled, I want to mention a few issues related to these status tags. According to MuckRock, the tag of completed means that “Responsive records have been released” for the request at hand.17 However, I found a couple of exceptions to this definition. Specifically, a couple of requests marked as “completed” actually had no responsive documents,18 when I would expect them to fall under the status of “no responsive documents.” I included the documents from these requests, although the merit of categorizing them as complete is questionable. Another exception was requests that were ultimately transferred to another agency but were marked as “completed” in MuckRock’s repository of requests to ICE. I did not include documents from other agencies in my dataset; see Appendix 3 for further elaboration.
Another factor to consider in the designation of “completed” requests is that this status ultimately depends on whether the requester decides to appeal ICE’s decision. Appealing a request previously tagged as “completed” places it into another category and keeps it active. Indeed, during a review and verification of the FOIA requests to ICE in my database, I found that the most recently completed request had moved from the “completed” category to the “awaiting appeal” status.
Appendix 2: Limited Processing Capabilities
NVivo proved to have some difficulties handling Excel spreadsheets. The software is designed to read spreadsheets exclusively as documentation of survey respondents or other systematized collections of data. However, since identifying data is usually redacted in the Excel files from the requests I uploaded, bending the documents to fit the software has not been a profitable endeavor. One major hurdle is that each row in these spreadsheets may or may not concern a unique incident, so it does not make sense to apply an ID system that simply gives each row a unique identifier, thereby declaring the rows unique with no evidence that they actually are; NVivo would then treat them as separate entities. Grouping rows by the non-unique identifiers that remain, such as date or location, is not helpful either, and it takes a lot of time for NVivo to create these groupings and for me to remove them manually. Additionally, NVivo refuses to import multi-sheet spreadsheets. It insists on importing one sheet at a time, which creates confusion by separating documents that were not separate on MuckRock. This complicates any additional analysis, as NVivo would treat the different sheets of a workbook as different documents, which could, for example, overemphasize the overall document relevance of a term.
For the reasons above, I have stopped importing Excel spreadsheets from MuckRock for the time being. For documentation purposes, however, I have tracked all of the spreadsheets that I have not uploaded and plan to integrate them in the future.
Appendix 3: Agency Discrepancies
Some FOIA requests are transferred from agency to agency. In these cases, I did not include documents issued by agencies other than ICE. However, I did collect documents relating to the transfer of a request from one agency to another, regardless of whether the document originated inside or outside of ICE. By doing so, I hoped to make it easier to track where requests originated or ended up.
In the future, for requests that were transferred, I hope to add a classification of which agency each was transferred to or from. This should be an easy task, as I have already created a document classification for agency transfers. The only case where this might not be possible is if the transfer is documented not in the released documents but in email correspondence between the agencies and the requester.
The problem of information being communicated over email is a larger theme than that of transferred requests. Much of the information relating to the processing of a request is stored in the emails between ICE and the requester rather than in the documents that I collected. These emails are also publicly available on MuckRock, and a great deal of data could be extracted from them; surveying the emails would be most relevant if there is a desire to analyze the agency’s FOIA process itself. It is possible to save these emails; however, I did not, as they fall outside the scope of collection I set, which was limited to the documents sent and received in FOIA requests.
Sometimes ICE sent the wrong documents in response, which was especially common when the requester had sent multiple FOIA requests to the agency. Most likely a result of confusion, these documents had nothing to do with the FOIA case at hand, so I did not include them.
Over the course of a FOIA request, especially during appeals and long processes, the request’s agency-issued ID sometimes changed. The consistency of the FOIA ID is important for identifying which files belong to which request. Therefore, I assigned all documents in the same request to the last FOIA ID that the request was assigned.
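This re-labeling step amounts to a simple mapping, assuming each request’s IDs are known in the order they were issued (the IDs and filenames below are made up for illustration):

```python
# Hypothetical history of agency-issued IDs per request, in the order
# they were assigned; the final entry is the ID I standardize on.
id_history = {
    "request-A": ["2014-ICFO-00123", "2015-ICAP-00045"],
    "request-B": ["2016-ICFO-00987"],
}

# Map every document to the last FOIA ID its request was assigned.
documents = [("request-A", "contract.pdf"), ("request-A", "cover_letter.pdf"),
             ("request-B", "report.pdf")]
labeled = [(id_history[req][-1], doc) for req, doc in documents]
print(labeled)
```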
Appendix 4: Format of files sent
Some documents are not text-searchable; I converted these to a text-searchable form with Adobe Acrobat. Additionally, some documents appear to have been non-searchable originally but were converted to a searchable form before being sent to the requester. These documents, along with the ones I converted, have much more room for error than files that can be natively read by text-processing software. Given the massive amount of data I have, it is not time-effective to go through all of the converted documents to correct misspellings or misreadings.
Some files were sent such that each page of a document was in its own PDF file; in these cases, I combined the pages into one PDF. In addition, there were cases where multiple copies of the same file were sent, in which case I imported only one copy.
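Before merging, the per-page files have to be grouped back into their parent documents in page order. A sketch of that grouping, assuming a hypothetical filename pattern like `contract_page3.pdf` (the actual merge can then be done in Acrobat or any PDF tool):

```python
import re
from collections import defaultdict

def group_pages(filenames: list[str]) -> dict[str, list[str]]:
    """Group per-page PDF filenames by document, sorted by page number.

    Assumes the hypothetical pattern '<document>_page<N>.pdf'.
    """
    groups: dict[str, list[tuple[int, str]]] = defaultdict(list)
    for name in filenames:
        match = re.match(r"(.+)_page(\d+)\.pdf$", name)
        if match:
            groups[match.group(1)].append((int(match.group(2)), name))
    # Drop the numeric sort keys, keeping filenames in page order.
    return {doc: [n for _, n in sorted(pages)] for doc, pages in groups.items()}
```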
As mentioned previously, some documents are publicly available on the internet outside of MuckRock and any FOIA context. In these cases, the best practice seemed to be for the agency to link to them in its response letter. However, there were cases where the agency instead sent the file itself. I did not include these files, due to the rather arbitrary nature of their being sent.
3 See Appendix 1 for more detail on the MuckRock tagging system and Appendix 3 for how I handle FOIA requests that are transferred between agencies.
5 The count of FOIA requests to ICE on MuckRock includes requests that were processed but are not necessarily tagged as completed; it covers all requests that were not tagged as processing, awaiting acknowledgement, awaiting review, or awaiting appeal.
6 Kwoka, Margaret B. “Foia, Inc.” Duke LJ 65 (2015): 1361.
7 DeLuca, Lisa. “Where do FOIA responses live? Electronic Reading Rooms and web sources.” College & Research Libraries News 80.1 (2019): 42.
8 See Appendix 2 for complications on using NVivo with MuckRock data and Appendix 4 for how I handle non-text documents.
15 Page 125 of the responsive document found at https://www.muckrock.com/foi/united-states-of-america-10/ice-contractsmous-re-northwest-detention-center-15949/
16 Page 146 of the responsive document found at https://www.muckrock.com/foi/united-states-of-america-10/ice-contracts-with-central-arizona-detention-center-15032/
18 For example: https://www.muckrock.com/foi/united-states-of-america-10/proposed-records-schedules-ice-11842/
Zahra Thabet is a sophomore at Wellesley College, where she majors in Sociology and Economics. She joined the Global Media Technology and Culture Lab in September 2019 as an undergraduate researcher studying Immigration and Customs Enforcement’s surveillance practices.