Nicole M. Bennett's Techdirt Profile


Posted on Techdirt - 9 December 2025 @ 03:52pm

How ICE’s Plan To Monitor Social Media Threatens Not Just Privacy, But Civic Participation

When most people think about immigration enforcement, they picture border crossings and airport checkpoints. But the new front line may be your social media feed.

U.S. Immigration and Customs Enforcement has published a request for information for private-sector contractors to launch a round-the-clock social media monitoring program. The request states that private contractors will be paid to comb through “Facebook, Google+, LinkedIn, Pinterest, Tumblr, Instagram, VK, Flickr, Myspace, X (formerly Twitter), TikTok, Reddit, WhatsApp, YouTube, etc.,” turning public posts into enforcement leads that feed directly into ICE’s databases.

The request for information reads like something out of a cyber thriller: dozens of analysts working in shifts, strict deadlines measured in minutes, a tiered system of prioritizing high-risk individuals, and the latest software keeping constant watch.

I am a researcher who studies the intersection of data governance, digital technologies and the U.S. federal government. I believe that the ICE request for information also signals a concerning if logical next step in a longer trend, one that moves the U.S. border from the physical world into the digital.

A new structure of surveillance

ICE already searches social media using a service called SocialNet that monitors most major online platforms. The agency has also contracted with Zignal Labs for its AI-powered social media monitoring system.

U.S. Customs and Border Protection also searches social media posts on the devices of some travelers at ports of entry, and the U.S. State Department reviews social media posts when foreigners seek visas to enter the United States.

What would change isn’t only the scale of monitoring but its structure. Instead of government agents gathering evidence case by case, ICE is building a public-private surveillance loop that transforms everyday online activity into potential evidence.

Private contractors would be tasked with scraping publicly available data – messages, posts and other media. The contractors would be able to correlate those findings with commercial datasets from brokers such as LexisNexis Accurint and Thomson Reuters CLEAR, along with government-owned databases. Analysts would be required to produce dossiers for ICE field offices within tight deadlines – sometimes just 30 minutes for a high-priority case.

Those files don’t exist in isolation. They feed directly into Palantir Technologies’ Investigative Case Management system, the digital backbone of modern immigration enforcement. There, this social media data would join a growing web of license plate scans, utility records, property data and biometrics, creating what is effectively a searchable portrait of a person’s life.
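As a deliberately simplified illustration of the data fusion described above – every name, field and matching rule here is hypothetical, not drawn from any ICE or contractor system – the core "correlation" step amounts to joining scraped posts and broker records on a shared identifier:

```python
from dataclasses import dataclass, field

# Hypothetical toy records; real scraped and broker data are far messier
# and the matching is rarely a clean exact-name join.
@dataclass
class Dossier:
    subject: str
    social_posts: list = field(default_factory=list)
    broker_records: list = field(default_factory=list)

def correlate(posts, broker_rows):
    """Naively join scraped posts to broker rows on a shared name key.

    This is the fusion step the request for information describes:
    turning public posts plus commercial records into one profile.
    """
    dossiers = {}
    for p in posts:
        d = dossiers.setdefault(p["name"], Dossier(subject=p["name"]))
        d.social_posts.append(p["text"])
    for r in broker_rows:
        if r["name"] in dossiers:
            dossiers[r["name"]].broker_records.append(r)
    return dossiers

posts = [{"name": "A. Example", "text": "attending the rally saturday"}]
broker = [{"name": "A. Example", "address": "123 Hypothetical St"}]
profiles = correlate(posts, broker)
```

Even this toy version shows the asymmetry the article describes: each record on its own is mundane, but the join produces a profile that none of the source systems held.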

Who gets caught in the net?

Officially, ICE says its data collection would focus on people who are already linked to ongoing cases or potential threats. In practice, the net is far wider.

The danger here is that when one person is flagged, their friends, relatives, fellow organizers or any of their acquaintances can also become subjects of scrutiny. Previous contracts for facial recognition tools and location tracking have shown how easily these systems expand beyond their original scope. What starts as enforcement can turn into surveillance of entire communities.

What ICE says and what history shows

ICE frames the project as modernization: a way to identify a target’s location by identifying aliases and detecting patterns that traditional methods might miss. Planning documents say contractors cannot create fake profiles and must store all analysis on ICE servers.

But history suggests these kinds of guardrails often fail. Investigations have revealed how informal data-sharing between local police and federal agents allowed ICE to access systems it wasn’t authorized to use. The agency has repeatedly purchased massive datasets from brokers to sidestep warrant requirements. And despite a White House freeze on spyware procurement, ICE quietly revived a contract with Paragon’s Graphite tool, software reportedly capable of infiltrating encrypted apps such as WhatsApp and Signal.

Meanwhile, ICE’s vendor ecosystem keeps expanding: Clearview AI for face matching, ShadowDragon’s SocialNet for mapping networks, Babel Street’s location history service Locate X, and LexisNexis for looking up people. ICE is also purchasing tools from surveillance firm PenLink that combine location data with social media data. Together, these platforms make continuous, automated monitoring not only possible but routine.

Lessons from abroad

The United States isn’t alone in government monitoring of social media. In the United Kingdom, a new police unit tasked with scanning online discussions about immigration and civil unrest has drawn criticism for blurring the line between public safety and political policing.

Across the globe, spyware scandals have shown how lawful access tools that were initially justified for counterterrorism were later used against journalists and activists. Once these systems exist, mission creep, also known as function creep, becomes the rule rather than the exception.

The social cost of being watched

Around-the-clock surveillance doesn’t just gather information – it also changes behavior.

Research found that visits to Wikipedia articles on terrorism dropped sharply immediately after revelations about the National Security Agency’s global surveillance in June 2013.

For immigrants and activists, the stakes are higher. A post about a protest or a joke can be reinterpreted as “intelligence.” Knowing that federal contractors may be watching in real time encourages self-censorship and discourages civic participation. In this environment, the digital self, an identity composed of biometric markers, algorithmic classifications, risk scores and digital traces, becomes a risk that follows you across platforms and databases.

What’s new and why it matters now

What is genuinely new is the privatization of interpretation. ICE isn’t just collecting more data; it is outsourcing judgment to private contractors. Private analysts, aided by artificial intelligence, are likely to decide what online behavior signals danger and what doesn’t. That decision-making happens rapidly and across large numbers of people, for the most part beyond public oversight.

At the same time, the consolidation of data means social media content can now sit beside location and biometric information inside Palantir’s hub. Enforcement increasingly happens through data correlations, raising questions about due process.

ICE’s request for information is likely to evolve into a full procurement contract within months, and recent litigation from the League of Women Voters and the Electronic Privacy Information Center against the Department of Homeland Security suggests that oversight is likely to lag far behind the technology. ICE’s plan to maintain permanent watch floors (open indoor spaces equipped with video and computer monitors, staffed 24 hours a day, 365 days a year) signals that this isn’t a temporary experiment but a new operational norm.

What accountability looks like

Transparency starts with public disclosure of the algorithms and scoring systems ICE uses. Advocacy groups such as the American Civil Liberties Union argue that law enforcement agencies should meet the same warrant standards online that they do in physical spaces. The Brennan Center for Justice and the ACLU argue that there should be independent oversight of surveillance systems for accuracy and bias. And several U.S. senators have introduced legislation to limit bulk purchases from data brokers.

Without checks like these, I believe that the boundary between border control and everyday life is likely to keep dissolving. As the digital border expands, it risks ensnaring anyone whose online presence becomes legible to the system.

Nicole M. Bennett is a Ph.D. Candidate in Geography and Assistant Director at the Center for Refugee Studies at Indiana University. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted on Techdirt - 11 September 2025 @ 01:49pm

How Palantir Is Mapping Everyone’s Data For The Government

When the U.S. government signs contracts with private technology companies, the fine print rarely reaches the public. Palantir Technologies, however, has attracted more and more attention over the past decade because of the size and scope of its contracts with the government.

Palantir’s two main platforms are Foundry and Gotham. Each does different things. Foundry is used by corporations in the private sector to help with global operations. Gotham is marketed as an “operating system for global decision making” and is primarily used by governments.

I am a researcher who studies the intersection of data governance, digital technologies and the U.S. federal government. I’m observing how the government is increasingly pulling together data from various sources, and the political and social consequences of combining those data sources. Palantir’s work with the federal government using the Gotham platform is amplifying this process.

Gotham is an investigative platform built for police, national security agencies, public health departments and other state clients. Its purpose is deceptively simple: take whatever data an agency already has, break it down into its smallest components and then connect the dots. Gotham is not simply a database. It takes fragmented data, scattered across various agencies and stored in different formats, and transforms it into a unified, searchable web.

The stakes are high with Palantir’s Gotham platform. The software enables law enforcement and government analysts to connect vast, disparate datasets, build intelligence profiles and search for individuals based on characteristics as granular as a tattoo or an immigration status. It transforms historically static records – think department of motor vehicles files, police reports and subpoenaed social media data like location history and private messages – into a fluid web of intelligence and surveillance.

These departments and agencies use Palantir’s platform to assemble detailed profiles of individuals, mapping their social networks, tracking their movements, identifying their physical characteristics and reviewing their criminal history. This can involve mapping a suspected gang member’s network using arrest logs and license plate reader data, or flagging individuals in a specific region with a particular immigration status.
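The "connect the dots" mechanic behind this kind of network mapping can be sketched in a few lines. The records and people below are invented for illustration and bear no relation to any real Gotham data or algorithm:

```python
from collections import defaultdict

# Hypothetical fragments from different source systems. In a fused
# platform, co-occurrence in any record becomes a link between people.
records = [
    {"type": "arrest_log", "people": ["P1", "P2"]},
    {"type": "plate_reader", "people": ["P2", "P3"]},
    {"type": "social_media", "people": ["P1", "P3"]},
]

def build_graph(rows):
    """Link every pair of people who co-occur in any record."""
    graph = defaultdict(set)
    for row in rows:
        for a in row["people"]:
            for b in row["people"]:
                if a != b:
                    graph[a].add(b)
    return graph

g = build_graph(records)
# P2 ends up connected to both P1 and P3, even though no single
# source record ever mentioned all three together.
```

The point of the sketch is the emergent web: once fragmented records are unified, relationships appear that no individual agency's database contained.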

The efficiency the platform enables is undeniable. For investigators, what once required weeks of cross-checking siloed systems can now be done in hours or less. But by scaling up the government’s investigative capacity, Gotham also alters the relationship between the state and the people it governs.

Shifting the balance of power

The political ramifications of Palantir’s rise come into focus when you consider its influence and reach across the government. U.S. Immigration and Customs Enforcement alone has spent more than US$200 million on Palantir contracts, relying on the software to run its Investigative Case Management system and to integrate travel histories, visa records, biometric data and social media data.

The Department of Defense has awarded Palantir billion-dollar contracts to support battlefield intelligence and AI-driven analysis. Even domestic agencies like the Centers for Disease Control and Prevention and the Internal Revenue Service, and local police departments like the New York Police Department, have contracted with Palantir for data integration projects.

These integrations mean that Palantir is not just a vendor of software; it is becoming a partner in how the federal government organizes and acts on information. That creates a kind of dependency. The same private company helps define how investigations are conducted, how targets are prioritized, how algorithms work and how decisions are justified.

Because Gotham is proprietary, the public, and even elected officials, cannot see how its algorithms weigh certain data points or why they highlight certain connections. Yet the conclusions it generates can have life-altering consequences: inclusion on a deportation list or identification as a security risk. The opacity makes democratic oversight difficult, and the system’s broad scope and wide deployment mean that mistakes or biases can scale up rapidly to affect many people.

Beyond law enforcement

Supporters of Palantir’s work argue that it modernizes outdated government IT systems, bringing them closer to the kind of integrated analytics that are routine in the private sector. However, the political and social stakes are different in public governance. Centralized, attribute-based searching, whether by location, immigration status, tattoos or affiliations, creates the capacity for mass profiling.

In the wrong hands, or even in well-intentioned hands under shifting political conditions, this kind of system could normalize surveillance of entire communities. And the criteria that trigger scrutiny today could be expanded tomorrow.

U.S. history provides warning examples: The mass surveillance of Muslim communities after 9/11, the targeting of civil rights activists in the 1960s and the monitoring of anti-war protesters during the Vietnam era are just a few.

Gotham’s capabilities may enable government agencies to carry out similar operations on a much larger scale and at a faster pace. And once some form of data integration infrastructure exists, its uses tend to expand, often into areas far from its original mandate.

A broader shift in governance

The deeper story here isn’t just that the government is collecting more data. It’s that the structure of governance is changing into a model where decision-making is increasingly influenced by what integrated data platforms reveal. In a pre-Gotham era, putting someone under suspicion of wrongdoing might have required specific evidence linked to an event or witness account. In a Gotham-enabled system, suspicion can stem from patterns in the data – patterns whose importance is defined by proprietary algorithms.

This level of data integration means that government officials can use potential future risks to justify present action. The predictive turn in governance aligns with a broader shift toward what some scholars call “preemptive security.” It is a logic that can erode traditional legal safeguards that require proof before punishment.

The stakes for democracy

The partnership between Palantir and the federal government raises fundamental questions about accountability in a data-driven state. Who decides how these tools are used? Who can challenge a decision that was made by software, especially if that software is proprietary?

Without clear rules and independent oversight, there is a risk that Palantir’s technology becomes normalized as a default mode of governance. These tools could be used not only to track suspected criminals or terrorists but also to manage migration flows, monitor and suppress protests, and enforce public health measures. The concern is not that these data integration capabilities exist, but that government agencies could use them in ways that undermine civil liberties without public consent.

Once put into use, such systems are hard to dismantle. They create new expectations for speed and efficiency in law enforcement, making it politically costly to revert to slower, more manual processes. That inertia can lock in not only the technology but also the expanded scope of surveillance it enables.

Choosing the future

As Palantir deepens its government partnerships, the issues its technology raises go beyond questions of cost or efficiency. There are civil liberties implications and the potential for abuse. Will strong legal safeguards and transparent oversight constrain these tools for integrated data analysis? The answer is likely to depend on political will as much as technical design.

Ultimately, Palantir’s Gotham is more than just software. It represents how modern governance might function: through data, connections, continuous monitoring and control. The decisions made about its use today are likely to shape the balance between security and freedom for decades to come.

Nicole M. Bennett is a Ph.D. Candidate in Geography and Assistant Director at the Center for Refugee Studies, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted on Techdirt - 2 May 2025 @ 01:36pm

How The Government Is Quietly Repurposing Everyone’s Data For Surveillance

A whistleblower at the National Labor Relations Board reported an unusual spike in potentially sensitive data flowing out of the agency’s network in early March 2025 when staffers from the Department of Government Efficiency, which goes by DOGE, were granted access to the agency’s databases. On April 7, the Department of Homeland Security gained access to Internal Revenue Service tax data.

These seemingly unrelated events are examples of recent developments in the transformation of the structure and purpose of federal government data repositories. I am a researcher who studies the intersection of migration, data governance and digital technologies. I’m tracking how data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement.

Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades.

These data-sharing arrangements often take place outside public scrutiny, driven by national security justifications, fraud prevention initiatives and digital modernization efforts. The result is that the structure of government is quietly transforming into an integrated surveillance apparatus, capable of monitoring, predicting and flagging behavior at an unprecedented scale.

Executive orders signed by President Donald Trump aim to remove remaining institutional and legal barriers to completing this massive surveillance system.

DOGE and the private sector

Central to this transformation is DOGE, which is tasked via an executive order to “promote inter-operability between agency networks and systems, ensure data integrity, and facilitate responsible data collection and synchronization.” An additional executive order calls for the federal government to eliminate its information silos.

By building interoperable systems, DOGE can enable real-time, cross-agency access to sensitive information and create a centralized database on people within the U.S. These developments are framed as administrative streamlining but lay the groundwork for mass surveillance.

Key to this data repurposing are public-private partnerships. The DHS and other agencies have turned to third-party contractors and data brokers to bypass direct restrictions. These intermediaries also consolidate data from social media, utility companies, supermarkets and many other sources, enabling enforcement agencies to construct detailed digital profiles of people without explicit consent or judicial oversight.

Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies such as Immigration and Customs Enforcement, the Department of Defense, the Centers for Disease Control and Prevention and the Internal Revenue Service. These platforms aggregate data from various sources – driver’s license photos, social services records, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. These tools extend government reach in ways that challenge existing norms of privacy and consent.

The role of AI

Artificial intelligence has further accelerated this shift.

Predictive algorithms now scan vast amounts of data to generate risk scores, detect anomalies and flag potential threats.

These systems ingest data from school enrollment records, housing applications, utility usage and even social media, all made available through contracts with data brokers and tech companies. Because these systems rely on machine learning, their inner workings are often proprietary, unexplainable and beyond meaningful public accountability.

Sometimes the results are inaccurate, generated by AI hallucinations – responses AI systems produce that sound convincing but are incorrect, made up or irrelevant. Minor data discrepancies can lead to major consequences: job loss, denial of benefits and wrongful targeting in law enforcement operations. Once flagged, individuals rarely have a clear pathway to contest the system’s conclusions.
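How a minor discrepancy cascades into a flag can be shown with a deliberately crude sketch of threshold-based scoring. The features, weights and cutoff below are all invented for illustration; no real system's logic is being reproduced:

```python
# Toy risk-scoring illustration. Every weight and the threshold are
# arbitrary assumptions made for this sketch, not any agency's values.
WEIGHTS = {
    "address_mismatch": 0.4,   # e.g. a typo between two databases
    "recent_relocation": 0.3,  # an ordinary life event
    "flagged_contact": 0.5,
}
THRESHOLD = 0.6

def risk_score(features):
    """Sum the weights of whichever features are present."""
    return sum(WEIGHTS[f] for f in features if f in WEIGHTS)

def is_flagged(features):
    """Flag anyone whose score crosses the arbitrary cutoff."""
    return risk_score(features) >= THRESHOLD

# A single clerical error plus one ordinary life event is enough
# to cross the threshold, while neither alone would be.
flagged = is_flagged({"address_mismatch", "recent_relocation"})
```

The sketch makes the article's point concrete: the "signal" that triggers scrutiny can be nothing more than a data-entry error interacting with an opaque cutoff, and the flagged person has no visibility into either.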

Digital profiling

Participation in civic life – applying for a loan, seeking disaster relief, requesting student aid – now contributes to a person’s digital footprint. Government entities could later interpret that data in ways that allow them to deny access to assistance. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.

Artificial intelligence, facial recognition systems and predictive profiling systems lack oversight. They also disproportionately affect low-income individuals, immigrants and people of color, who are more frequently flagged as risks.

Initially built for benefits verification or crisis response, these data systems now feed into broader surveillance networks. The implications are profound. What began as a system targeting noncitizens and fraud suspects could easily be generalized to everyone in the country.

Eyes on everyone

This is not merely a question of data privacy. It is a broader transformation in the logic of governance. Systems once designed for administration have become tools for tracking and predicting people’s behavior. In this new paradigm, oversight is sparse and accountability is minimal.

AI allows for the interpretation of behavioral patterns at scale without direct interrogation or verification. Inferences replace facts. Correlations replace testimony.

The risk extends to everyone. While these technologies are often first deployed at the margins of society – against migrants, welfare recipients or those deemed “high risk” – there’s little to limit their scope. As the infrastructure expands, so does its reach into the lives of all citizens.

With every form submitted, interaction logged and device used, a digital profile deepens, often out of sight. The infrastructure for pervasive surveillance is in place. What remains uncertain is how far it will be allowed to go.


Nicole M. Bennett is a Ph.D. Candidate in Geography and Assistant Director at the Center for Refugee Studies, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.