Category: Technology

  • One of the greatest tricks capitalism ever played on the global intelligentsia was convincing some of them that it no longer exists, that it is dead and gone, having vanished in plain sight from the face of the earth. And in the last few years, a branch of political economy has arisen arguing exactly this: that capitalism has unraveled and devolved into techno-feudalism. That is, that capitalism has exited the stage of world history, or has started to do so, only to be replaced by techno-feudalism, i.e., a socio-economic formation where markets have been usurped and/or abolished in favor of highly-organized and highly-controlled internet platforms, which are owned and operated by one person, or a select few, who control every aspect of their digital fiefdoms. Techno-feudalism is the idea that capitalism has receded, along with profits and the profit-imperative, and that central-bank money, i.e., fiat money, has taken the place of all profit-making. For these theoreticians of techno-feudalism, capitalism is dead. And like an old battle-weary baby boomer, gently easing himself or herself into a warm tub filled with Epsom salt, capitalism, too, has gently eased itself into the hot tub of techno-feudalism and dissolved itself into a new post-capitalist socio-economic regime, without kicking up a fuss.

    Ultimately, this is false. It is false in the sense that: 1) the world economy is, on most counts, functioning and operating according to the logic of capitalism, namely, that 99.9999% of the world economy continues to be all about the maximization of profit, and this includes these internet fiefdoms. The maximization of profit by any means necessary, for shareholders, continues to be the driving force of the system. Granted, in certain instances, profits may be derived from the central banks through quantitative easing practices, i.e., the soaking up of easy money by giant corporations, who buy back their own shares rather than sell commodities to consumers. Notwithstanding, from the shareholders’ perspective, this is still profit, regardless of where the capital surplus comes from. Consequently, nowadays, profits can come from anywhere and in all sorts of forms, whether from quantitative easing, accumulation by dispossession, rent, war, and/or interest payments, etc. Wherever it comes from, it is profit through and through, a surplus, which is then classified under the category of profit or revenue, regardless of its source. Hence, if it is a surplus, it is a profit, a revenue of some kind. In the age of techno-capitalist-feudalism, whatever an entity or entities can get away with in the marketplace and/or the sphere of production is ultimately valid, legitimate, and normal.

    Therefore, despite what some academics argue, profit is still the central-operating-code of all these so-called internet fiefdoms, whether they are conscious of this fact or not. True, these digital platforms are about the cultivation and harvesting of personal information, but all this information-gathering is fundamentally about amassing profit, namely, super-profits, in an anonymous and indirect manner. These digital platforms convert personal data into profit, whether by selling these data-sets to advertisers, or by improving their own technology to better stimulate individual customer purchases and services on their own specific platforms. As always, the point is capitalist revenue, i.e., profit-making by any means necessary, at the lowest financial cost, as soon as possible.

    And finally, 2) whatever its make-up, the newly-risen, neo-feudal tech-aristocracy continues to subscribe heart and soul to the logic of capitalism. Its origins lie in the logic of capitalism, or more specifically, the logic of capitalism super-charged to its utmost neoliberal extremes. This new aristocracy is capitalist to the core. It seeks to capitalize on resources and people, by any means, in order to amass capital for itself at the expense of the workforce/population, as it always has throughout its history. And rent is a method of amassing capital. Consequently, this new aristocracy is not a technological aristocracy per se; it is a capitalist aristocracy, first and foremost. It is an aristocracy that uses the tools of machine-technology as a means to amass power, profit, and capital for itself. That is, this aristocracy uses the tools of machine-technology to better align itself with the dictates of the logic of capitalism. For this aristocracy, technology is not an end-in-itself but a means to amass more power, profit, and capital, nothing more and nothing less. Thus, capital accumulation is still the end-game of all the financial maneuvers, all the algorithmic innovations, all the rent-extractions, and all the power-plays that these internet fiefdoms engage in. The highest possible return on capital investment continues to be the bottom-line, regardless of where this return comes from, or what the theoreticians of techno-feudalism claim. Lest we forget, these internet fiefdoms invest massively in research and development, more so than at any other time in history.

    Subsequently, the software of the system, encoded upon everything and everyone, is still the logic of capitalism, while the hardware of the system is continually changing and mutating into all sorts of monstrous forms. And, naturally, the theoreticians of techno-feudalism want you to focus solely on the constantly mutating hardware of the capitalist-system, ignoring the hidden immutable software of the system, namely, the software driving the whole system. It is only in this manner that the techno-feudal argument holds any water. Forget the software, concentrate only on the hardware, and ye shall believe, believe as an enchanted zealot in the opulence of the techno-feudal sci-fi fantasy and hypothesis.

    Specifically, the economic theoreticians of techno-feudalism are those individuals who would look upon the first-generation terminator model, i.e., the T-800, from the first Terminator film as the only authentic cyborg worthy of being called a terminator. The second-generation terminator model, i.e., the T-1000, from the second installment of the Terminator films, would not be a terminator in their eyes, but something totally different, something that has completely transcended the definition of what a terminator is and what it constitutes. The T-1000 is not a terminator, if you follow the logic of techno-feudalism, because it does not behave, or look, as the original terminator does. The T-1000 is made of liquid metal, while the first-generation T-800 is made of living tissue covering its metal endoskeleton. Thereby, they are two totally different, incompatible entities. Indeed, the T-1000 is not a terminator, but an entity that is totally new and different, unrecognizable as a terminator in relation to the T-800.

    And, indeed, the whole set of arguments about the validity of techno-feudalism revolves around such theoretical tricks and sleights of hand. That capitalism is no longer capitalism because it does not function and operate as capitalism once did in its distant past. Capitalism has evolved into something completely different, as the initial terminator, i.e., the T-800, evolved into something completely different, the T-1000. What the theoreticians of techno-feudalism do not notice, or simply fail to mention, is that all the multi-varied versions of capitalism, as well as the multi-varied versions of terminators, share the same immutable software. They share the same central-operating-code, which is their defining immutable characteristic: the T-800 and the T-1000 were algorithmically programmed to achieve the same end, to terminate their designated target, first Sarah Connor and then John Connor, by any means necessary. This is what defined these cyborg-machines as terminators, not their make-up, their radically different hardware. Just as the old form of capitalism and its newer model, i.e., techno-capitalist-feudalism, share the same end, the same code, i.e., the maximization of profit, by any means necessary, at the lowest financial cost, as soon as possible. Despite the fact that both function and operate in radically different manners and have radically different hardware.

    Ultimately, the achievement and maximization of capital is the algorithmic thread that unites all prior forms of capitalism with all its newest model versions, since surplus value from the central banks, or surplus value from the exploitation of laborers, or surplus value from rent, or wherever else, amounts to the same thing: profiteering at the expense of another, regardless of how that profiteering is made, or where that profiteering comes from. As a matter of fact, the tired and antiquated feudal mechanism of rent extraction was appropriated by emerging capitalism, where it was upgraded, supercharged, and welded tight to industrial capital, as its own.

    In short, capitalist rent is a type of profiteering; capitalist profit is a type of profiteering. Rent is stolen unpaid work, i.e., surplus value; profit is stolen unpaid work, i.e., surplus value. And both rent and profit are fundamental types of capitalist revenue, namely, they are two sides of the same capital coin. And both are the embodiment of magnitudes of force and influence, capable of bending existence to the will of a capitalist entity. Finally, rent and profit are methods by which to accumulate capital, in the sense that profit and rent are both capital, as well as specific modes of capital accumulation. And both have been present in and across the system since the dawn of capitalism and even before that.

    Notwithstanding, the theoreticians of techno-feudalism do not acknowledge this fact. For them, profit is profit only when 1) it is private, i.e., it is strictly made by private enterprises; and 2) it comes strictly from the sphere of commodity-production, i.e., the exploitation of workers in the production sphere. Rent, meanwhile, is rent when it is a fee commanded and paid for the use of a piece of private property, whatever that property is. Specifically, for these theoreticians, rent and profit do not intermingle and do not embody the same substance, i.e., force and influence. For them, rent and profit are different because their modes of capital accumulation are different. As a result, the theoreticians of techno-feudalism skip over many fundamental economic facts, namely, that rent and profit are capital, and that rent and profit comprise their own individual methods of capital accumulation. That is, that the end-game of both profit and rent is the same, i.e., to accumulate as much capital as force and influence will allow. Like the T-800 and the T-1000, rent and profit have the same fundamental objective, the same central-operating-code, namely, to accrue the maximum amount of capital by any means necessary, as soon as possible! Both methods of capital accumulation have been subsumed and integrated into the accumulation processes of totalitarian-capitalism. Whereby, today, profit-making and rent-extraction function and operate in tandem at the behest of the logic of capitalism and the 1 percent.

    And more importantly, the whole techno-feudalist theoretical framework and argument hinges on profit and rent being radically dissimilar and distinctly separated; when, in reality, they are in principle the same. They are both surplus value, forms of power, and profiteering modes of capital accumulation, modes by which to absorb and extract value from another, gratis. Rent is paid out of newly created surplus value; and profit is paid out of newly-created surplus value. All the same, the theoreticians of techno-feudalism pass over these fundamental economic facts in silence, concealing their inherent similarities and their primary importance as capitalist gain.

    Above all, the theoreticians of techno-feudalism do not give credence to Mark Fisher’s notion that capitalism is “a monstrous, infinitely plastic entity, capable of metabolizing and absorbing anything with which it comes into contact”, akin to the T-1000 terminator.1 The theoreticians of techno-feudalism do not acknowledge that capitalism can change its hardware into an infinity of forms, gothic, gruesome, and sadistic, whatever it needs to do; all the while still adhering to its immutable, uncompromising software demanding endless capital accumulation. These techno-feudal theorists never mention the immutable software of capitalism, its central-operating-code, which has remained the same for the last 250 years. And just like the T-1000 terminator, capitalism will end and disappear only when its software, its central-operating-code, ends and disappears in the hellish crucible of molten metal, that is, the anarchist revolution.

    For example, under the old form of capitalism we had yachts, but now, under the new form of capitalism, i.e., totalitarian-capitalism, or more importantly, techno-capitalist-feudalism, we have super-yachts. Similarly, under the old form of capitalism we had privately-owned backyard air-fields, but now, under the new form of capitalism, we have privately-owned backyard space launch centers. Of course, the theoreticians of techno-feudalism would have you believe that super-yachts, or privately-owned backyard space launch centers, are the product of a wholly different system that has abandoned the logic of capitalism. However, one can clearly see and comprehend that super-yachts and backyard space launch centers are just a logical progression of the logic of capitalism, supercharged to the Nth degree. One develops out of the other, thanks to a new fanatical form of neoliberalism, neoliberalism on methamphetamine, manically driving backwards towards feudalism redux, that is, FEUDALISM 2.0.

    Indeed, even the billionaire caste is a grotesque abomination of the logic of capitalism, out of control and out of whack. In the sense that thousands must be rendered destitute and homeless in order to manufacture a single billionaire. And being Frankenstein monsters, these grotesque billionaire mutants of totalitarian-capitalism unhinged will eventually have to be hunted down with pitchforks and blowtorches, if the multitude of peasant-workers are to overcome and abolish the horrors of techno-capitalist serfdom, once and for all.

    As a result, capitalism is not dead but has only amplified itself to its utmost logical extreme. It has become totalitarian and super-exploitative. The logic of capitalism still powers all the newly-risen fiefdoms of the era of techno-capitalist-feudalism. And just like the old form of capitalism, the new form of capitalism has a ruling caste, which, in most instances, is ironically still the same ruling caste that was in power during the reign of the old form of capitalism. And just like in the old form of capitalism, the members of this so-called new ruling caste continue to be the sole owners of the means and forces of production, while the workforce/population continues to be dominated by a capitalist wage-system, as in the old days, when the old antiquated form of powdered-wig capitalism ruled supreme.

    Today, just like in the past era of run-of-the-mill traditional-capitalism, peasant-workers only have their labor-power, or creative-power, to sell to the owner or owners of the means and forces of production. The only difference between the old form of capitalism and the new form of capitalism, i.e., the age of monopoly-capitalism and the age of super-monopoly, or techno-capitalist-feudalism, is that today peasant-workers can be paid below subsistence levels, whereas before, they were not. In fact, workers now have to work multiple jobs and more hours to make ends meet, since they have no benefits and are paid well below subsistence levels. Therefore, the logic or software of capitalism has not disappeared. And to say that it has is a gross exaggeration. In other words, the age of monopoly has simply given way to the age of super-monopoly, namely, the dark age of techno-capitalist-feudalism. Wherein, the logic of capitalism continues to thrive and multiply, ad nauseam. Because capitalism has always been dead, dead and congealed, namely, a congealed set of power-relations, which, vampire-like, live the more, the more creativity they suck from all their unsuspecting, living peasant-workers.2

    In sum, the specter of capitalism haunts techno-feudalism as software, as its hidden code lodged deep within its radically incompatible, ever-mutating hardware. Thereby, the specter of capitalism haunts all the theoretical machinations and the minutiae of techno-feudalism, since techno-feudalism, or more accurately, techno-capitalist-feudalism, is the result of the capital/labor relationship at its most lopsided, oppressive, and technologically dominating. The capital/labor relationship continues to hold; it continues to hold at the center of techno-feudalism, or more accurately, techno-capitalist-feudalism. The logic of capitalism pervades, envelops, infects, and poisons all aspects of society and techno-feudalism, since the logic of capitalism continues to be the foundation and the fundamental undergirding of society and techno-feudalism, a foundation that techno-feudalism refuses to acknowledge or even adequately address. (Let us not forget that, like its predecessors, techno-feudalism continues the long history of the critique of capitalism by talking once again about the central concept of capitalism, i.e., CAPITAL, whether this is monopoly-capital, rentier-capital, digital-capital, communicative-capital, surveillance-capital, and/or the new all-terrifying poltergeist of cloud-capital). In short, techno-feudalism distorts the aberrant monstrosity of techno-capitalist-feudalism, the horror-show that is the despotic age of totalitarian-capitalism, whereby super-monopoly and super-profits are multi-varied, ruthless, and fundamentally undisciplined.

    As it happens, in the dark age of techno-capitalist-feudalism, super-exploitation is 24/7. There is no escape or chance of relief, as the newly-minted post-industrial serfs, i.e., the 99 percent, are forever bound in all sorts of insidious Malthusian traps, financial Catch-22s, crippling debt, etc., which have them all going around in circles in and across a hopeless set of bureaucratic labyrinths, all designed to keep them stationary and subservient upon the lower strata of the system. As a result, the ruminations of techno-feudalism are scientifically disingenuous. They skew the facts and the true reality of the billions of peasant-workers toiling under the jackboot of capitalist exploitation, capitalist debt, capitalist rent, and a capitalist wage-system of piecemeal slavery. Subsequently, techno-feudalism is a disservice to workers. To drop the term “capitalist” from techno-capitalist-feudalism only muddies the clear blue waters of the terminal stage of capitalist development, namely, the new dawning epoch of totalitarian-capitalism, that is, the new dystopian age of techno-capitalist-feudalism, run amok.

    Just because the old capitalist bourgeoisie has embraced digital algorithms and invasive surveillance technologies as its own, and has abstracted itself to a higher level of socio-economic existence, away from the workforce/population, whereby it now appears invisible and increasingly distant from the everyday lives of workers, does not mean the old capitalist bourgeoisie has melted away into thin air, or has been usurped by a new, strictly technological aristocracy. What has happened is that the old capitalist bourgeoisie has become a techno-capitalist-feudal-aristocracy, since the logic of capitalism, capitalist profit, capitalist rent, and capitalist technological innovations continue to inform and motivate this authoritarian feudo-capitalist aristocracy.

    In the end, economic supremacy resides with the capitalists, since they control the repressive-state-apparatuses, while the tech-lords do not. Therefore, these feudo-capitalist lords only exist by virtue of, and by the good grace of, traditional capitalists, who control the repressive-state-apparatuses. The tech-lords do not have their own repressive-state-apparatus; thereby, they will always remain secondary, merely a small part of the overall feudo-capitalist aristocracy, forever at the mercy of those who control the military, namely, all those bloodthirsty repressive-state-apparatuses of the state-finance-corporate-aristocracy, the 1%.

    In view of these damning facts, techno-feudalism is a bust, a wrong turn, a wrong-headed play on words, leading to a theoretical dead-end that only empowers capitalist supremacy at the expense of workers’ liberation and self-management. It must be jettisoned. The fact of the matter is that the logic of capitalism continues to rule, because the peasant-workers, i.e., the anarcho-proletarians, the punks, have yet to overthrow the capitalist mode of production, consumption, and distribution, and remove it from the active theater of world history. Thus, within the so-called evolutionary whimper of techno-feudalism, the logic of capitalism is thriving, laughing all the way to the bank. It will never go quietly and orderly into that good night. Capitalism will only go out with a bang, a loud resounding cataclysmic bang. As capitalism came into this world soaked in blood from head to toe, it will only leave this world gushing blood, all over the globe.3 Because, absent the epic blast of rampant anarchist revolution, capitalism invariably marches on as, and in the form of, techno-capitalist-feudalism. Ergo, in the dark age of TCF, i.e., techno-capitalist-feudalism:

    Resistance is feudal and the guillotine is forever!

    ENDNOTES:

    1 Mark Fisher, Capitalist Realism (United Kingdom: Zero Books, 2009), p. 6.

    2 Karl Marx, Capital (Volume One), Trans. Ben Fowkes (London: Penguin, 1990), p. 342.

    3 Karl Marx, Capital (Volume One), p. 926.

    The post The Pragmatic-Demolition of Techno-Feudalism first appeared on Dissident Voice.

  • Call it what it is: a panopticon presidency.

    President Trump’s plan to fuse government power with private surveillance tech to build a centralized, national citizen database is the final step in transforming America from a constitutional republic into a digital dictatorship armed with algorithms and powered by unaccountable, all-seeing artificial intelligence.

    This isn’t about national security. It’s about control.

    According to news reports, the Trump administration is quietly collaborating with Palantir Technologies—the data-mining behemoth co-founded by billionaire Peter Thiel—to construct a centralized, government-wide surveillance system that would consolidate biometric, behavioral, and geolocation data into a single, weaponized database of Americans’ private information.

    This isn’t about protecting freedom. It’s about rendering freedom obsolete.

    What we’re witnessing is the transformation of America into a digital prison—one where we, the inmates, are told we’re free while every move, every word, every thought is monitored, recorded, and used to assign a “threat score” that determines our place in the new hierarchy of obedience.

    The tools enabling this all-seeing surveillance regime are not new, but under Trump’s direction, they are being fused together in unprecedented ways, with Palantir at the center of this digital dragnet.

    Palantir, long criticized for its role in powering ICE (Immigration and Customs Enforcement) raids and predictive policing, is now poised to become the brain of Trump’s surveillance regime.

    Under the guise of “data integration” and “public safety,” this public-private partnership would deploy AI-enhanced systems to comb through everything from facial recognition feeds and license plate readers to social media posts and cellphone metadata, cross-referencing it all to assess a person’s risk to the state.

    This isn’t speculative. It’s already happening.

    Palantir’s Gotham platform, used by law enforcement and military agencies, has long been the backbone of real-time tracking and predictive analysis. Now, with Trump’s backing, it threatens to become the central nervous system of a digitally enforced authoritarianism.

    As Palantir itself admits, its mission is to “augment human decision-making.” In practice, that means replacing probable cause with probability scores, courtrooms with code, and due process with data pipelines.

    In this new regime, your innocence will be irrelevant. The algorithm will decide who you are.

    To understand the full danger of this moment, we must trace the long arc of government surveillance—from secret intelligence programs like COINTELPRO and the USA PATRIOT Act to today’s AI-driven digital dragnet embodied by data fusion centers.

    Building on this foundation of historical abuse, the government has evolved its tactics, replacing human informants with algorithms and wiretaps with metadata, ushering in an age where pre-crime prediction is treated as prosecution.

    Every smartphone ping, GPS coordinate, facial scan, online purchase, and social media like becomes part of your “digital exhaust”—a breadcrumb trail of metadata that the government now uses to build behavioral profiles. The FBI calls it “open-source intelligence.” But make no mistake: this is dragnet surveillance, and it is fundamentally unconstitutional.

    Already, government agencies are mining this data to generate “pattern of life” analyses, flag “radicalized” individuals, and preemptively investigate those who merely share anti-government views.

    This is not law enforcement. This is thought-policing by machine, the logical outcome of a system that criminalizes dissent and deputizes algorithms to do the targeting.

    Nor is this entirely new.

    For decades, the federal government has reportedly maintained a highly classified database known as Main Core, designed to collect and store information on Americans deemed potential threats to national security.

    As Tim Shorrock reported for Salon, “One former intelligence official described Main Core as ‘an emergency internal security database system’ designed for use by the military in the event of a national catastrophe, a suspension of the Constitution or the imposition of martial law.”

    Trump’s embrace of Palantir, and its unparalleled ability to fuse surveillance feeds, social media metadata, public records, and AI-driven predictions, marks a dangerous evolution: a modern-day resurrection of Main Core, digitized, centralized, and fully automated.

    What was once covert contingency planning is now becoming active policy.

    What has emerged is a surveillance model more vast than anything dreamed up by past regimes—a digital panopticon in which every citizen is watched constantly, and every move is logged in a government database—not by humans, but by machines without conscience, without compassion, and without constitutional limits.

    This is not science fiction. This is America—now.

    As this technological tyranny expands, the foundational safeguards of the Constitution—those supposed bulwarks against arbitrary power—are quietly being nullified and its protections rendered meaningless.

    What does the Fourth Amendment mean in a world where your entire life can be searched, sorted, and scored without a warrant? What does the First Amendment mean when expressing dissent gets you flagged as an extremist? What does the presumption of innocence mean when algorithms determine guilt?

    The Constitution was written for humans, not for machine rule. It cannot compete with predictive analytics trained to bypass rights, sidestep accountability, and automate tyranny.

    And that is the endgame: the automation of authoritarianism. An unblinking, AI-powered surveillance regime that renders due process obsolete and dissent fatal.

    Still, it is not too late to resist—but doing so requires awareness, courage, and a willingness to confront the machinery of our own captivity.

    Make no mistake: the government is not your friend in this. Neither are the corporations building this digital prison. They thrive on your data, your fear, and your silence.

    To resist, we must first understand the weaponized AI tools being used against us.

    We must demand transparency, enforce limits on data collection, ban predictive profiling, and dismantle the fusion centers feeding this machine.

    We must treat AI surveillance with the same suspicion we once reserved for secret police. Because that is what AI-powered governance has become—secret police, only smarter, faster, and less accountable.

    We don’t have much time.

    Trump’s alliance with Palantir is a warning sign—not just of where we are, but of where we’re headed. A place where freedom is conditional, rights are revocable, and justice is decided by code.

    The question is no longer whether we’re being watched—that is now a given—but whether we will meekly accept it. Will we dismantle this electronic concentration camp, or will we continue building the infrastructure of our own enslavement?

    As I point out in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, if we trade liberty for convenience and privacy for security, we will find ourselves locked in a prison we helped build, and the bars won’t be made of steel. They will be made of data.

    The post Trump’s Palantir-Powered Surveillance Is Turning America Into a Digital Prison first appeared on Dissident Voice.

  • OpenAI has always said it’s a different kind of Big Tech titan, founded not just to rack up a stratospheric valuation of $400 billion (and counting), but also to “ensure that artificial general intelligence benefits all of humanity.” 

    The meteoric machine-learning firm announced itself to the world in a December 2015 press release that lays out a vision of technology meant to benefit all people as people, not as citizens of any particular nation. There are neither good guys nor adversaries. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the announcement stated with confidence. “Since our research is free from financial obligations, we can better focus on a positive human impact.”

    Early rhetoric from the company and its CEO, Sam Altman, described advanced artificial intelligence as a harbinger of a globalist utopia, a technology that wouldn’t be walled off by national or corporate boundaries but enjoyed together by the species that birthed it. In an early interview with Altman and fellow OpenAI co-founder Elon Musk, Altman described a vision of artificial intelligence “freely owned by the world” in common. When Vanity Fair asked in a 2015 interview why the company hadn’t set out as a for-profit venture, Altman replied: “I think that the misaligned incentives there would be suboptimal to the world as a whole.”

    Times have changed. And OpenAI wants the White House to think it has too.

    In a March 13 white paper submitted directly to the Trump administration, OpenAI’s global affairs chief Chris Lehane pitched a near future of AI built for the explicit purpose of maintaining American hegemony and thwarting the interests of its geopolitical competitors — specifically China. The policy paper’s mentions of freedom abound, but the proposal’s true byword is national security.

    OpenAI never attempts to reconcile its full-throated support of American security with its claims to work for the whole planet, not a single country. After opening with a quotation from Trump’s own executive order on AI, the action plan proposes that the government create a direct line for the AI industry to reach the entire national security community, work with OpenAI “to develop custom models for national security,” and increase intelligence sharing between industry and spy agencies “to mitigate national security risks,” namely from China.

    In the place of techno-globalism, OpenAI outlines a Cold Warrior exhortation to divide the world into camps. OpenAI will ally with those “countries who prefer to build AI on democratic rails,” and get them to commit to “deploy AI in line with democratic principles set out by the US government.”

    The rhetoric seems pulled directly from the keyboard of an “America First” foreign policy hawk like Marco Rubio or Rep. Mike Gallagher, not a company whose website still endorses the goal of lifting up the whole world. The word “humanity,” in fact, never appears in the action plan.

    Rather, the plan asks Trump, to whom Altman donated $1 million for his inauguration ceremony, to “ensure that American-led AI prevails over CCP-led AI” — the Chinese Communist Party — “securing both American leadership on AI and a brighter future for all Americans.”

    It’s an inherently nationalist pitch: The concepts of “democratic values” and “democratic infrastructure” are both left largely undefined beyond their American-ness. What is democratic AI? American AI. What is American AI? The AI of freedom. And regulation of any kind, of course, “may hinder our economic competitiveness and undermine our national security,” Lehane writes, suggesting a total merging of corporate and national interests.

    In an emailed statement, OpenAI spokesperson Liz Bourgeois declined to explain the company’s nationalist pivot but defended its national security work.

    “We believe working closely with the U.S. government is critical to advancing our mission of ensuring AGI benefits all of humanity,” Bourgeois wrote. “The U.S. is uniquely positioned to help shape global norms around safe, secure, and broadly beneficial AI development—rooted in democratic values and international collaboration.”

    The Intercept is currently suing OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT.

    OpenAI’s newfound patriotism is loud. But is it real? 

    In his 2015 interview with Musk, Altman spoke of artificial intelligence as a technology so special and so powerful that it ought to transcend national considerations. Pressed on OpenAI’s goal to share artificial intelligence technology globally rather than keeping it under domestic control, Altman provided an answer far more ambivalent than the company’s current day mega-patriotism: “If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who?” 

    He also said, in the early days of OpenAI, that there may be limits to what his company might do for his country.

    “I unabashedly love this country, which is the greatest country in the world,” Altman told the New Yorker in 2016. “But some things we will never do with the Department of Defense.” In the profile, he expressed ambivalence about overtures to OpenAI from then-Secretary of Defense Ashton Carter, who envisioned using the company’s tools for targeting purposes. At the time, this would have run afoul of the company’s own ethical guidelines, which for years stated explicitly that customers could not use its services for “military and warfare” purposes, writing off any Pentagon contracting entirely. 

    In January 2024, The Intercept reported that OpenAI had deleted this military contracting ban from its policies without explanation or announcement. Asked about how the policy reversal might affect business with other countries in an interview with Bloomberg, OpenAI executive Anna Makanju said the company is “focused on United States national security agencies.” But insiders who spoke with The Intercept on conditions of anonymity suggested that the company’s turn to jingoism may come more from opportunism than patriotism. Though Altman has long been on the record as endorsing corporate support of the United States, under an administration where the personal favor of the president means far more than the will of lawmakers, parroting muscular foreign policy rhetoric is good for business.

    One OpenAI source who spoke with The Intercept recalled concerned discussions about the possibility that the U.S. government would nationalize the company. They said that at times, this was discussed with the company’s head of national security partnerships, Katrina Mulligan. Mulligan joined the company in February 2024 after a career in the U.S. intelligence and military establishment, including leading the media and public policy response to Edward Snowden’s leaks while on the Obama National Security Council staff, working for the director of national intelligence, serving as a senior civilian overseeing Special Operations forces in the Pentagon, and working as chief of staff to the secretary of the Army.

    This source speculated that fostering closeness with the government was one method of fending off the potential risk of nationalization.

    As an independent research organization with ostensibly noble, global goals, OpenAI may have been less equipped to beat back regulatory intervention, a second former OpenAI employee suggested. What we see now, they said, is the company “transitioning from presenting themselves as a nonprofit with very altruistic, pro-humanity aims, to presenting themselves as an economic and military powerhouse that the government needs to support, shelter, and cut red tape on behalf of.”

    The second source said they believed the national security rhetoric was indicative of OpenAI “sucking up to the administration,” not a genuinely held commitment by executives.

    “In terms of how decisions were actually made, what seemed to be the deciding factor was basically how can OpenAI win the race rather than anything to do with either humanity or national security,” they added. “In today’s political environment, it’s a winning move with the administration to talk about America winning and national security and stuff like that. But you should not confuse that for the actual thing that’s driving decision-making internally.”

    The person said that talk of preventing Chinese dominance over artificial intelligence likely reflects business, not political, anxieties. “I think that’s not their goal,” they said. “I think their goal is to maintain their own control over the most powerful stuff.” 

    But even if its motivations are cynical, company sources told The Intercept that national security considerations still pervaded OpenAI. The first source recalled a member of OpenAI’s corporate security team regularly engaging with the U.S. intelligence community to safeguard the company’s ultra-valuable machine-learning models. The second recalled concern about the extent of the government’s relationship with, and potential control over, OpenAI’s technology. A common fear among AI safety researchers is a future scenario in which artificial intelligence models begin autonomously designing newer versions, ad infinitum, leading human engineers to lose control.

    “One reason why the military AI angle could be bad for safety is that you end up getting the same sort of thing with AIs designing successors designing successors, except that it’s happening in a military black project instead of in a somewhat more transparent corporation,” the second source said. 

    “Occasionally there’d be talk of, like, eventually the government will wake up, and there’ll be a nuclear power plant next to a data center next to a bunker, and we’ll all be moved into the bunker so that we can, like, beat China by managing an intelligence explosion,” they added. At a company that recruits top engineering talent internationally, the prospect of American dominance of a technology they believe could be cataclysmic was at times disquieting. “I remember I also talked to some people who work at OpenAI who weren’t from the U.S. who were feeling kind of sad about that and being like, ‘What’s going to happen to my country after the U.S. gets all the super intelligences?’”

    Sincerity aside, OpenAI has spent the past year training its corporate algorithm on flag-waving, defense lobbying, and a strident anticommunism that smacks more of the John Birch Society than the Whole Earth Catalog.

    In his white paper, Lehane, a former press secretary for Vice President Al Gore and special counsel to President Bill Clinton, advocates not for a globalist techno-utopia in which artificial intelligence jointly benefits the world, but a benevolent jingoism in which freedom and prosperity are underwritten by the guarantee of American dominance. While the document notes fleetingly, in its very last line, the idea of “work toward AI that benefits everyone,” the pitch is not one of true global benefit, but of American prosperity that trickles down to its allies.

    The company proposes strict rules walling off parts of the world, namely China, from AI’s benefits, on the grounds that they are simply too dangerous to be trusted. OpenAI explicitly advocates for conceiving of the AI market not as an international one, but “the entire world less the PRC” — the People’s Republic of China — “and its few allies,” a line that quietly excludes over 1 billion people from the humanity the company says it wishes to benefit, even as it counts in millions who live under U.S.-allied authoritarian rule.

    In pursuit of “democratic values,” OpenAI proposes dividing the entire planet into three tiers. At the top: “Countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens could be considered Tier I countries.” Given the earlier mention of building “AI in line with democratic principles set out by the US government,” this group’s membership is clear: the United States, and its friends.

    Beneath them are Tier 2 countries, a geopolitical purgatory defined only as those that have failed to sufficiently enforce American export control policies and protect American intellectual property from Tier 3: Communist China. “CCP-led China, along with a small cohort of countries aligned with the CCP, would represent its own category that is prohibited from accessing democratic AI systems,” the paper explains. To keep these barriers intact — while allowing for the chance that Tier 2 countries might someday graduate to the top — OpenAI suggests coordinating “global bans on CCP-aligned AI” and “prohibiting relationships” between other countries and China’s military or intelligence services.

    One of the former OpenAI employees said concern about China at times circulated throughout the company. “Definitely concerns about espionage came up,” this source said, “including ‘Are particular people who work at the company spies or agents?’” At one point, they said, a colleague worried about a specific co-worker they’d learned was the child of a Chinese government official. The source recalled “some people being very upset about the implication” that the company had been infiltrated by foreigners, while others wanted an actual answer: “‘Is anyone who works at the company a spy or foreign agent?’”

    The company’s public adoration of Western democracy is not without wrinkles. In early May, OpenAI announced an initiative to build data centers and customized ChatGPT bots with foreign governments, as part of its $500 billion “Project Stargate” AI infrastructure construction blitz.

    “This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power,” the announcement read.

    Unmentioned in that celebration of AI democracy is the fact that Project Stargate’s financial backers include the government of Abu Dhabi, an absolute monarchy. On May 23, Altman tweeted that it was “great to work with the UAE” on Stargate, describing co-investor and Emirati national security adviser Tahnoun bin Zayed Al Nahyan as a “great supporter of openai, a true believer in AGI, and a dear personal friend.” In 2019, Reuters revealed how a team of mercenary hackers working for Emirati intelligence under Tahnoun had illegally broken into the devices of targets around the world, including American citizens.

    Asked how a close partnership with an authoritarian Emirati autocracy fit into its broader mission of spreading democratic values, OpenAI pointed to a recent op-ed in The Hill in which Lehane discusses the partnership.

    “We’re working closely with American officials to ensure our international partnerships meet the highest standards of security and compliance,” Lehane writes, adding, “Authoritarian regimes would be excluded.”

    OpenAI’s new direction has been reflected in its hiring.

    Since hiring Mulligan, the company has continued to expand its D.C. operation. Mulligan works on national security policy with a team of former Department of Defense, NSA, CIA, and Special Operations personnel. Gabrielle Tarini joined the company after almost two years at the Defense Department, where she worked on “Indo-Pacific security affairs” and “China policy,” according to LinkedIn. Sasha Baker, who runs national security policy, joined after years at the National Security Council and Pentagon.

    The list goes on: Other policy team hires at OpenAI include veterans of the NSA, a former Pentagon special operations and South China Sea expert, and a graduate of the CIA’s Sherman Kent School for Intelligence Analysis. OpenAI’s military and intelligence revolving door continues to turn: At the end of April, the company recruited Alexis Bonnell, the former chief information officer of the Air Force Research Laboratory. Recent job openings have included a “Relationship Manager” focusing on “strategic relationships with U.S. government customers.”

    Mulligan, the head of national security policy and partnerships, is both deeply connected to the defense and intelligence apparatus, and adept at the kind of ethically ambivalent thinking common to the tech sector.

    “Not everything that has happened at Guantanamo Bay is to be praised, that’s for sure, but [Khalid Sheikh Mohammed] admitting to his crimes, even all these years later, is a big moment for many (including me),” she posted last year. In a March podcast appearance, Mulligan noted she worked on “Gitmo rendition, detention, and interrogation” during her time in government.

    Mulligan’s public rhetoric matches the ideological drift of a company that today seems more concerned with “competition” and “adversaries” than kumbaya globalism. 

    On LinkedIn, she seems to embody the contradiction between a global mission and full-throated alignment with American policy values. “I’m excited to be joining OpenAI to help them ensure that AI is safe and beneficial to all of humanity,” she wrote upon her hiring from the Pentagon.

    Since then, she has regularly represented OpenAI’s interests and American interests as one and the same, sharing national security truisms such as “In a competition with China, the pace of AI adoption matters,” or “The United States’ continued lead on AI is essential to our national security and economic competitiveness,” or “Congress needs to make some decisive investments to ensure the U.S. national security community has the resources to harness the advantage the U.S. has on this technology.” 

    This is to some extent conventional wisdom of the country’s past 100 years: A strong, powerful America is good for the whole world. But OpenAI has shifted from an organization that believed its tech would lift up the whole world, unbounded by national borders, to one that talks like Lockheed Martin. Part of OpenAI’s national security realignment has come in the form of occasional “disruption” reports detailing how the company detected and neutralized “malicious use” of its tools by foreign governments, coincidentally almost all of them considered adversaries of the United States. 

    As the provider of services like ChatGPT, OpenAI has near-total visibility into how the tools are used or misused by individuals, what the company describes in one report as its “unique vantage point.” The reports detail not only how these governments attempted to use ChatGPT, but also the steps OpenAI took to thwart them, described by the company as an “effort to support broader efforts by U.S. and allied governments.” Each report has focused almost entirely on malign AI uses by “state affiliated” actors from Iran, China, North Korea, and Russia. A May 2024 report outed an Israeli propaganda effort using ChatGPT but stopped short of connecting it to that country’s government.

    Earlier this month, representatives of intelligence agencies and the contractors who serve them gathered at the America’s Center Convention Complex in St. Louis for the GEOINT Symposium, dedicated to geospatial intelligence, the form of tradecraft that analyzes satellite and other imagery of the planet to achieve military and intelligence objectives.

    On May 20, Mulligan took to the stage to demonstrate how OpenAI’s services could help U.S. spy agencies and the Pentagon better exploit the Earth’s surface. Though the government’s practice of GEOINT frequently ends in the act of killing, Mulligan used a gentler example, demonstrating the ability of ChatGPT to pinpoint the location where a photograph of a rabbit was taken. It was nothing if not a sales pitch, one predicated on the fear that some other country might leap at the opportunity before the United States.

    “Government often feels like using AI is too risky and that it’s better and safer to keep doing things the way that we’ve always done them, and I think this is the most dangerous mix of all,” Mulligan told her audience. “If we keep doing things the way that we always have, and our adversaries adapt to this technology before we do, they will have all of the advantages that I show you today, and we will not be safer.” 

    The post Whose National Security? OpenAI’s Vision for American Techno-Dominance appeared first on The Intercept.

    This post was originally published on The Intercept.

  • OpenAI has always said it’s a different kind of Big Tech titan, founded not just to rack up a stratospheric valuation of $400 billion (and counting), but also to “ensure that artificial general intelligence benefits all of humanity.” 

    The meteoric machine-learning firm announced itself to the world in a December 2015 press release that lays out a vision of technology to benefit all people as people, not citizens. There are neither good guys nor adversaries. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the announcement stated with confidence. “Since our research is free from financial obligations, we can better focus on a positive human impact.”

    Early rhetoric from the company and its CEO, Sam Altman, described advanced artificial intelligence as a harbinger of a globalist utopia, a technology that wouldn’t be walled off by national or corporate boundaries but enjoyed together by the species that birthed it. In an early interview with Altman and fellow OpenAI co-founder Elon Musk, Altman described a vision of artificial intelligence “freely owned by the world” in common. When Vanity Fair asked in a 2015 interview why the company hadn’t set out as a for-profit venture, Altman replied: “I think that the misaligned incentives there would be suboptimal to the world as a whole.”

    Times have changed. And OpenAI wants the White House to think it has too.

    In a March 13 white paper submitted directly to the Trump administration, OpenAI’s global affairs chief Chris Lehane pitched a near future of AI built for the explicit purpose of maintaining American hegemony and thwarting the interests of its geopolitical competitors — specifically China. The policy paper’s mentions of freedom abound, but the proposal’s true byword is national security.

    OpenAI never attempts to reconcile its full-throated support of American security with its claims to work for the whole planet, not a single country. After opening with a quotation from Trump’s own executive order on AI, the action plan proposes that the government create a direct line for the AI industry to reach the entire national security community, work with OpenAI “to develop custom models for national security,” and increase intelligence sharing between industry and spy agencies “to mitigate national security risks,” namely from China.

    In the place of techno-globalism, OpenAI outlines a Cold Warrior exhortation to divide the world into camps. OpenAI will ally with those “countries who prefer to build AI on democratic rails,” and get them to commit to “deploy AI in line with democratic principles set out by the US government.”

    The rhetoric seems pulled directly from the keyboard of an “America First” foreign policy hawk like Marco Rubio or Rep. Mike Gallagher, not a company whose website still endorses the goal of lifting up the whole world. The word “humanity,” in fact, never appears in the action plan.

    Rather, the plan asks Trump, to whom Altman donated $1 million for his inauguration ceremony, to “ensure that American-led AI prevails over CCP-led AI” — the Chinese Communist Party — “securing both American leadership on AI and a brighter future for all Americans.”

    It’s an inherently nationalist pitch: The concepts of “democratic values” and “democratic infrastructure” are both left largely undefined beyond their American-ness. What is democratic AI? American AI. What is American AI? The AI of freedom. And regulation of any kind, of course, “may hinder our economic competitiveness and undermine our national security,” Lehane writes, suggesting a total merging of corporate and national interests.

    Related

    Trump’s Big, Beautiful Handout to the AI Industry

    In an emailed statement, OpenAI spokesperson Liz Bourgeois declined to explain the company’s nationalist pivot but defended its national security work.

    “We believe working closely with the U.S. government is critical to advancing our mission of ensuring AGI benefits all of humanity,” Bourgeois wrote. “The U.S. is uniquely positioned to help shape global norms around safe, secure, and broadly beneficial AI development—rooted in democratic values and international collaboration.”

    The Intercept is currently suing OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT.

    OpenAI’s newfound patriotism is loud. But is it real? 

    In his 2015 interview with Musk, Altman spoke of artificial intelligence as a technology so special and so powerful that it ought to transcend national considerations. Pressed on OpenAI’s goal to share artificial intelligence technology globally rather than keeping it under domestic control, Altman provided an answer far more ambivalent than the company’s current day mega-patriotism: “If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who?” 

    He also said, in the early days of OpenAI, that there may be limits to what his company might do for his country.

    “I unabashedly love this country, which is the greatest country in the world,” Altman told the New Yorker in 2016. “But some things we will never do with the Department of Defense.” In the profile, he expressed ambivalence about overtures to OpenAI from then-Secretary of Defense Ashton Carter, who envisioned using the company’s tools for targeting purposes. At the time, this would have run afoul of the company’s own ethical guidelines, which for years stated explicitly that customers could not use its services for “military and warfare” purposes, writing off any Pentagon contracting entirely. 

    Related

    OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”

    In January 2024, The Intercept reported that OpenAI had deleted this military contracting ban from its policies without explanation or announcement. Asked about how the policy reversal might affect business with other countries in an interview with Bloomberg, OpenAI executive Anna Makanju said the company is “focused on United States national security agencies.” But insiders who spoke with The Intercept on conditions of anonymity suggested that the company’s turn to jingoism may come more from opportunism than patriotism. Though Altman has long been on the record as endorsing corporate support of the United States, under an administration where the personal favor of the president means far more than the will of lawmakers, parroting muscular foreign policy rhetoric is good for business.

    One OpenAI source who spoke with The Intercept recalled concerned discussions about the possibility that the U.S. government would nationalize the company. They said that at times, this was discussed with the company’s head of national security partnerships, Katrina Mulligan. Mulligan joined the company in February 2024 after a career in the U.S. intelligence and military establishment, including leading the media and public policy response to Edward Snowden’s leaks while on the Obama National Security Council staff, working for the director of national intelligence, serving as a senior civilian overseeing Special Operations forces in the Pentagon, and working as chief of staff to the secretary of the Army.

    This source speculated that fostering closeness with the government was one method of fending off the potential risk of nationalization.

    As an independent research organization with ostensibly noble, global goals, OpenAI may have been less equipped to beat back regulatory intervention, a second former OpenAI employee suggested. What we see now, they said, is the company “transitioning from presenting themselves as a nonprofit with very altruistic, pro-humanity aims, to presenting themselves as an economic and military powerhouse that the government needs to support, shelter, and cut red tape on behalf of.”

    The second source said they believed the national security rhetoric was indicative of OpenAI “sucking up to the administration,” not a genuinely held commitment by executives.

    “In terms of how decisions were actually made, what seemed to be the deciding factor was basically how can OpenAI win the race rather than anything to do with either humanity or national security,” they added. “In today’s political environment, it’s a winning move with the administration to talk about America winning and national security and stuff like that. But you should not confuse that for the actual thing that’s driving decision-making internally.”

    The person said that talk of preventing Chinese dominance over artificial intelligence likely reflects business, not political, anxieties. “I think that’s not their goal,” they said. “I think their goal is to maintain their own control over the most powerful stuff.” 

    “I also talked to some people who work at OpenAI who weren’t from the U.S. who were feeling like … ‘What’s going to happen to my country?’”

    But even if its motivations are cynical, company sources told The Intercept that national security considerations still pervaded OpenAI. The first source recalled a member of OpenAI’s corporate security team regularly engaging with the U.S. intelligence community to safeguard the company’s ultra-valuable machine-learning models. The second recalled concern about the extent of the government’s relationship — and potential control over — OpenAI’s technology. A common fear among AI safety researchers is a future scenario in which artificial intelligence models begin autonomously designing newer versions, ad infinitum, leading human engineers to lose control.

    “One reason why the military AI angle could be bad for safety is that you end up getting the same sort of thing with AIs designing successors designing successors, except that it’s happening in a military black project instead of in a somewhat more transparent corporation,” the second source said. 

    “Occasionally there’d be talk of, like, eventually the government will wake up, and there’ll be a nuclear power plant next to a data center next to a bunker, and we’ll all be moved into the bunker so that we can, like, beat China by managing an intelligence explosion,” they added. At a company that recruits top engineering talent internationally, the prospect of American dominance of a technology they believe could be cataclysmic was at times disquieting. “I remember I also talked to some people who work at OpenAI who weren’t from the U.S. who were feeling kind of sad about that and being like, ‘What’s going to happen to my country after the U.S. gets all the super intelligences?’”

    Sincerity aside, OpenAI has spent the past year training its corporate algorithm on flag-waving, defense lobbying, and a strident anticommunism that smacks more of the John Birch Society than the Whole Earth Catalog.

    In his white paper, Lehane, a former press secretary for Vice President Al Gore and special counsel to President Bill Clinton, advocates not for a globalist techno-utopia in which artificial intelligence jointly benefits the world, but a benevolent jingoism in which freedom and prosperity is underwritten by the guarantee of American dominance. While the document notes fleetingly, in its very last line, the idea of “work toward AI that benefits everyone,” the pitch is not one of true global benefit, but of American prosperity that trickles down to its allies.

    Related

    Why an “AI Race” Between the U.S. and China Is a Terrible, Terrible Idea

    The company proposes strict rules walling off parts of the world, namely China, from AI’s benefits, on the grounds that they are simply too dangerous to be trusted. OpenAI explicitly advocates for conceiving of the AI market not as an international one, but “the entire world less the PRC” — the People’s Republic of China — “and its few allies,” a line that quietly excludes over 1 billion people from the humanity the company says it wishes to benefit and millions who live under U.S.-allied authoritarian rule. 

    In pursuit of “democratic values,” OpenAI proposes dividing the entire planet into three tiers. At the top: “Countries that commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens could be considered Tier I countries.” Given the earlier mention of building “AI in line with democratic principles set out by the US government,” this group’s membership is clear: the United States, and its friends.

    In pursuit of “democratic values,” OpenAI proposes dividing the entire planet into three tiers.

    Beneath them are Tier 2 countries, a geopolitical purgatory defined only as those that have failed to sufficiently enforce American export control policies and protect American intellectual property from Tier 3: Communist China. “CCP-led China, along with a small cohort of countries aligned with the CCP, would represent its own category that is prohibited from accessing democratic AI systems,” the paper explains. To keep these barriers intact — while allowing for the chance that Tier 2 countries might someday graduate to the top — OpenAI suggests coordinating “global bans on CCP-aligned AI” and “prohibiting relationships” between other countries and China’s military or intelligence services.

One of the former OpenAI employees said concern about China at times circulated throughout the company. “Definitely concerns about espionage came up,” this source said, “including ‘Are particular people who work at the company spies or agents?’” At one point, they said, a colleague worried about a specific co-worker they’d learned was the child of a Chinese government official. The source recalled “some people being very upset about the implication” that the company had been infiltrated by foreigners, while others wanted an actual answer: “‘Is anyone who works at the company a spy or foreign agent?’”

    The company’s public adoration of Western democracy is not without wrinkles. In early May, OpenAI announced an initiative to build data centers and customized ChatGPT bots with foreign governments, as part of its $500 billion “Project Stargate” AI infrastructure construction blitz.

    “This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power,” the announcement read.

    Unmentioned in that celebration of AI democracy is the fact that Project Stargate’s financial backers include the government of Abu Dhabi, an absolute monarchy. On May 23, Altman tweeted that it was “great to work with the UAE” on Stargate, describing co-investor and Emirati national security adviser Tahnoun bin Zayed Al Nahyan as a “great supporter of openai, a true believer in AGI, and a dear personal friend.” In 2019, Reuters revealed how a team of mercenary hackers working for Emirati intelligence under Tahnoun had illegally broken into the devices of targets around the world, including American citizens.

Asked how a close partnership with the Emirati autocracy fit into its broader mission of spreading democratic values, OpenAI pointed to a recent op-ed in The Hill in which Lehane discusses the partnership.

    “We’re working closely with American officials to ensure our international partnerships meet the highest standards of security and compliance,” Lehane writes, adding, “Authoritarian regimes would be excluded.”

    OpenAI’s new direction has been reflected in its hiring.

    Since hiring Mulligan, the company has continued to expand its D.C. operation. Mulligan works on national security policy with a team of former Department of Defense, NSA, CIA, and Special Operations personnel. Gabrielle Tarini joined the company after almost two years at the Defense Department, where she worked on “Indo-Pacific security affairs” and “China policy,” according to LinkedIn. Sasha Baker, who runs national security policy, joined after years at the National Security Council and Pentagon.


The list goes on: Other policy team hires at OpenAI include veterans of the NSA, a former Pentagon special operations and South China Sea expert, and a graduate of the CIA’s Sherman Kent School for Intelligence Analysis. OpenAI’s military and intelligence revolving door continues to turn: At the end of April, the company recruited Alexis Bonnell, the former chief information officer of the Air Force Research Laboratory. Recent job openings have included a “Relationship Manager” focusing on “strategic relationships with U.S. government customers.”

    Mulligan, the head of national security policy and partnerships, is both deeply connected to the defense and intelligence apparatus, and adept at the kind of ethically ambivalent thinking common to the tech sector.

    “Not everything that has happened at Guantanamo Bay is to be praised, that’s for sure, but [Khalid Sheikh Mohammed] admitting to his crimes, even all these years later, is a big moment for many (including me),” she posted last year. In a March podcast appearance, Mulligan noted she worked on “Gitmo rendition, detention, and interrogation” during her time in government.

    Mulligan’s public rhetoric matches the ideological drift of a company that today seems more concerned with “competition” and “adversaries” than kumbaya globalism. 

    On LinkedIn, she seems to embody the contradiction between a global mission and full-throated alignment with American policy values. “I’m excited to be joining OpenAI to help them ensure that AI is safe and beneficial to all of humanity,” she wrote upon her hiring from the Pentagon.

    Since then, she has regularly represented OpenAI’s interests and American interests as one and the same, sharing national security truisms such as “In a competition with China, the pace of AI adoption matters,” or “The United States’ continued lead on AI is essential to our national security and economic competitiveness,” or “Congress needs to make some decisive investments to ensure the U.S. national security community has the resources to harness the advantage the U.S. has on this technology.” 


    This is to some extent conventional wisdom of the country’s past 100 years: A strong, powerful America is good for the whole world. But OpenAI has shifted from an organization that believed its tech would lift up the whole world, unbounded by national borders, to one that talks like Lockheed Martin. Part of OpenAI’s national security realignment has come in the form of occasional “disruption” reports detailing how the company detected and neutralized “malicious use” of its tools by foreign governments, coincidentally almost all of them considered adversaries of the United States. 

    As the provider of services like ChatGPT, OpenAI has near-total visibility into how the tools are used or misused by individuals, what the company describes in one report as its “unique vantage point.” The reports detail not only how these governments attempted to use ChatGPT, but also the steps OpenAI took to thwart them, described by the company as an “effort to support broader efforts by U.S. and allied governments.” Each report has focused almost entirely on malign AI uses by “state affiliated” actors from Iran, China, North Korea, and Russia. A May 2024 report outed an Israeli propaganda effort using ChatGPT but stopped short of connecting it to that country’s government.

Earlier this month, representatives of the intelligence agencies and the contractors who serve them gathered at the America’s Center Convention Complex in St. Louis for the GEOINT Symposium, dedicated to geospatial intelligence, the tradecraft of analyzing satellite and other imagery of the planet to achieve military and intelligence objectives.

    On May 20, Mulligan took to the stage to demonstrate how OpenAI’s services could help U.S. spy agencies and the Pentagon better exploit the Earth’s surface. Though the government’s practice of GEOINT frequently ends in the act of killing, Mulligan used a gentler example, demonstrating the ability of ChatGPT to pinpoint the location where a photograph of a rabbit was taken. It was nothing if not a sales pitch, one predicated on the fear that some other country might leap at the opportunity before the United States.

    “Government often feels like using AI is too risky and that it’s better and safer to keep doing things the way that we’ve always done them, and I think this is the most dangerous mix of all,” Mulligan told her audience. “If we keep doing things the way that we always have, and our adversaries adapt to this technology before we do, they will have all of the advantages that I show you today, and we will not be safer.” 

    The post OpenAI’s Pitch to Trump: Rank the World on U.S. Tech Interests appeared first on The Intercept.


  • Is artificial intelligence coming for everyone’s jobs? Not if this lot have anything to do with it

    The novelist Ewan Morrison was alarmed, though amused, to discover he had written a book called Nine Inches Pleases a Lady. Intrigued by the limits of generative artificial intelligence (AI), he had asked ChatGPT to give him the names of the 12 novels he had written. “I’ve only written nine,” he says. “Always eager to please, it decided to invent three.” The “nine inches” from the fake title it hallucinated was stolen from a filthy Robert Burns poem. “I just distrust these systems when it comes to truth,” says Morrison. He is yet to write Nine Inches – “or its sequel, Eighteen Inches”, he laughs. His actual latest book, For Emma, imagining AI brain-implant chips, is about the human costs of technology.

    Morrison keeps an eye on the machines, such as OpenAI’s ChatGPT, and their capabilities, but he refuses to use them in his own life and work. He is one of a growing number of people who are actively resisting: people who are terrified of the power of generative AI and its potential for harm and don’t want to feed the beast; those who have just decided that it’s a bit rubbish, and more trouble than it’s worth; and those who simply prefer humans to robots.

    Continue reading…

    This post was originally published on Human rights | The Guardian.


• Dig down about a mile or two in parts of the United States and you’ll start to see the remains of an ancient ocean. The shells of long-dead sea creatures are compressed into white limestone, surrounding brine aquifers with a higher salt content than the Atlantic Ocean.

Last summer, ExxonMobil sponsored week-long camps to teach grade school students from Texas, Louisiana, and Mississippi about the virtues of these aquifers, specifically their ability to serve as carbon capture and sequestration wells, where oil, gas, and heavy industry can bury harmful emissions deep underground. In one exercise, students were given 20 minutes to build a model reservoir out of vegetable oil, Play-Doh, pasta, and uncooked beans. Whoever could keep the most vegetable oil (meant to represent liquefied carbon dioxide) in their aquifer won.

    This kind of down-home carbon capture boosterism is a relatively new development for the oil and gas giant. Over recent years, ExxonMobil and other fossil fuel companies have spent millions lobbying for government support of what they see as industry-friendly green technology, most prominently carbon capture and storage, which many scientists and environmental activists have argued is ineffective and distracts from eliminating fossil fuel operations in the first place. According to Exxon’s website, it’s evidence that they are leading “the biggest energy transition in history.” 

    Now that Congress has turned its attention to rolling back government spending on renewable energy, it appears that most of the climate “solutions” being left off the chopping block are the ones favored by carbon-intensive companies like Exxon. Corporate tax breaks for carbon capture and storage, for instance, were one of the few things left untouched when House Republicans passed a budget bill on May 22 that effectively gutted the Inflation Reduction Act, or IRA, the Biden administration’s signature climate legislation. What remained of the IRA’s clean energy tax credits were incentives for nuclear, so-called clean fuels like ethanol, and carbon capture. When the IRA was passed in 2022, there was immediate backlash against the provisions for carbon capture. 

    “Essentially, we, the taxpayers, are subsidizing a private sewer system for oil and gas,” said Sandra Steingraber, a senior scientist at the nonprofit Science and Environmental Health Network.

    The tax credits for nuclear power plants, which produce energy without emitting greenhouse gases, are meant to spur what President Donald Trump hopes will be an “energy renaissance,” bolstered by a flurry of pro-nuclear executive orders he issued a day after the budget bill cleared the House. Projects will be able to use the tax credits if they begin construction by 2031; wind and solar companies, however, will lose access to tax credits unless they begin construction within 60 days of Trump signing the bill, and are fully up and running by 2028.

    That the carbon capture tax credit was never in danger of being revoked is a testament to its importance to the oil and gas industry, said Jim Walsh, the policy director at the nonprofit Food and Water Watch. “The major beneficiaries of these tax credits are oil and gas companies and big agricultural interests.” 

The carbon capture tax credit was first established in 2008, but the subsidies were more than doubled when the credit was tacked onto the IRA to secure the vote of former Senator Joe Manchin of West Virginia. Companies now receive $60 for every ton of CO2 captured and used to drive oil out of the ground (a process known as “enhanced oil recovery”) and up to $85 for a ton of CO2 that is permanently stored. As roughly 60 percent of captured CO2 in the United States is used for enhanced oil recovery, detractors see the tax credit as something of a devil’s bargain, a provision that props up an industry at taxpayer expense.
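As a back-of-the-envelope illustration of those per-ton figures, the sketch below shows how the credit scales with the split between enhanced oil recovery and permanent storage. The $60 and $85 rates and the roughly 60 percent EOR share are the numbers cited above; the example tonnage and function name are invented for illustration.

```python
# Rates per ton of captured CO2, as described in the article.
EOR_RATE = 60      # dollars per ton used for enhanced oil recovery
STORAGE_RATE = 85  # dollars per ton permanently stored

def credit_value(tons_captured: float, eor_share: float = 0.6) -> float:
    """Estimated credit for a facility, splitting captured CO2
    between enhanced oil recovery and permanent storage."""
    eor_tons = tons_captured * eor_share
    stored_tons = tons_captured - eor_tons
    return eor_tons * EOR_RATE + stored_tons * STORAGE_RATE

# A hypothetical plant capturing 1 million tons a year:
print(f"${credit_value(1_000_000):,.0f}")  # $70,000,000
```

Because the national EOR share is around 60 percent, most of the subsidy in this sketch flows to the lower-rate, oil-producing use of the captured gas.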

An oil refinery in Los Angeles. Mario Tama/Getty Images

How much carbon is actually captured by these projects is also a matter of debate. The tax credit requires companies that claim it to report to the Internal Revenue Service how much CO2 they inject. The Environmental Protection Agency, meanwhile, is in charge of tracking leaks. There are tax penalties if captured carbon ends up leaking, but those penalties apply only if the leaks occur in the first three years after injection. Holding companies accountable is made more complicated by the fact that tax returns are confidential, and Walsh cautions that there is very little communication between the EPA and the IRS. Oversight is “very, very minimal,” added Anika Juhn, an energy data analyst at the Institute for Energy Economics and Financial Analysis, a research firm.

    “You can keep some really played out oil fields going for a long time, and you can get the public to pay for it,” said Carolyn Raffensberger, the executive director of the Science and Environmental Health Network, explaining the potential impact of the budget bill. “So the argument is, ‘This is a win for the climate, it’s a win for energy dominance.’ [But] it’s really a budget buster with no guardrails at all.” 

Existing carbon-capture facilities have been plagued by technical and financial issues. The country’s first commercial carbon capture plant in Decatur, Illinois, sprang two leaks last year directly under Lake Decatur, the town’s main source of drinking water. When concentrated CO2 hits water, it turns into carbonic acid, which then leaches heavy metals from rocks within the aquifer and poisons the water. Although some level of public health risk comes with many emerging technologies, critics point out that all of this risk is being taken for a technology that has not been proven to work at scale, and may actually increase emissions by incentivizing more oil and gas production. It could also strain the existing electrical grid: outfitting a natural gas or coal plant with carbon capture equipment can suck up about 15 to 25 percent of the plant’s power.

The tax credits exist “to pollute and confuse people,” said Mark Jacobson, a professor of civil and environmental engineering at Stanford University, who has argued that there is essentially no reasonable use for carbon capture. They “increase people’s [energy] costs and do nothing for the climate.”

But the technology does have its defenders among scientists. The 2022 report from the Intergovernmental Panel on Climate Change called an increase in carbon capture technology “unavoidable” if countries want to reach net-zero emissions. Jessie Stolark, the executive director of the Carbon Capture Coalition, an umbrella organization of fossil fuel companies, unions, and environmental groups, contends that arguments like Jacobson’s unnecessarily pit the technology against renewables. “We need all the solutions in the toolkit,” she said. “We’re not saying don’t deploy these other technologies. We see this very much as a complementary and supportive piece in the broader decarbonization toolkit.”

    Stolark said that carbon capture didn’t make it out of the budget process entirely unscathed, as the bill specified that companies could no longer sell carbon capture tax credits. So-called “transferability” — the ability to sell these tax credits on the open market — has been invaluable to small energy startups that have struggled to secure financing in their early stages, according to Stolark. The Carbon Capture Coalition is urging lawmakers to restore transferability now that the bill has moved from the House to the Senate.

Still, the kinds of companies likely to claim carbon capture tax credits — often major players in oil and gas, ammonia, steel, and other heavy industries — are less likely to rely on transferability than more modest companies (often providers of renewable energy), whose smaller tax bills make it harder for them to realize the value of their respective tax credits.

    “A lot of the factories, the power plants, the industrial facilities deploying within the next ten years or so, are expected to be these really big [facilities] with the big tax burdens,” said Dan O’Brien, a senior modeling analyst at Energy Innovations, a clean energy think tank based in San Francisco. “They’re not the type of smaller producers — like small solar companies — that are reliant on transferability in order to monetize the tax credit.” 

    To some observers, keeping the carbon capture credit looks like a flagrant giveaway to the oil and gas industry. Juhn estimated that the credit could end up costing taxpayers more than $800 billion by 2040. Given the House bill’s aggressive cuts to social programs like Medicaid and the Supplemental Nutrition Assistance Program, Juhn finds the carbon capture credit offensive. “When we look at these other programs, where we’re nickel and diming benefits to folks that could really use them, what does that mean? It’s gross.” 

    This story was originally published by Grist with the headline What’s likely to survive from Biden’s climate law? The controversial stuff on May 30, 2025.

    This post was originally published on Grist.

  • If one company or small group of people manages to develop godlike digital superintelligence, they could take over the world. At least when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.

    —Elon Musk

    The Deep State is not going away. It’s just being replaced.

    Replaced not by a charismatic autocrat or even a shadowy bureaucracy, but by artificial intelligence (AI)—unfeeling, unaccountable, and immortal.

    As we stand on the brink of a new technological order, the machinery of power is quietly shifting into the hands of algorithms.

    Under Donald Trump’s watch, that shift is being locked in for at least a generation.

    Trump’s latest legislative initiative—a 10-year ban on AI regulation buried within the “One Big Beautiful Bill”—strips state and local governments of the ability to impose any guardrails on artificial intelligence until 2035.

    Despite bipartisan warnings from 40 state attorneys general, the bill passed the House and awaits Senate approval. It is nothing less than a federal green light for AI to operate without oversight in every sphere of life, from law enforcement and employment to healthcare, education, and digital surveillance.

    This is not innovation.

    This is institutionalized automation of tyranny.

    This is how, within a state of algorithmic governance, code quickly replaces constitutional law as the mechanism for control.

    We are rapidly moving from a society ruled by laws and due process to one ruled by software.

    Algorithmic governance refers to the use of machine learning and automated decision-making systems to carry out functions once reserved for human beings: policing, welfare eligibility, immigration vetting, job recruitment, credit scoring, and judicial risk assessments.

    In this regime, the law is no longer interpreted. It is executed. Automatically. Mechanically. Without room for appeal, discretion, or human mercy.

    These AI systems rely on historical data—data riddled with systemic bias and human error—to make predictions and trigger decisions. Predictive policing algorithms tell officers where to patrol and whom to stop. Facial recognition technology flags “suspects” based on photos scraped from social media. Risk assessment software assigns threat scores to citizens with no explanation, no oversight, and no redress.

    These algorithms operate in black boxes, shielded by trade secrets and protected by national security exemptions. The public cannot inspect them. Courts cannot challenge them. Citizens cannot escape them.

    The result? A population sorted, scored, and surveilled by machinery.

    This is the practical result of the Trump administration’s deregulation agenda: AI systems given carte blanche to surveil, categorize, and criminalize the public without transparency or recourse.

    And these aren’t theoretical dangers—they’re already happening.

    Examples of unchecked AI and predictive policing show that precrime is already here.

    Once you are scored and flagged by a machine, the outcome can be life-altering—as it was for Michael Williams, a 65-year-old man who spent nearly a year in jail for a crime he didn’t commit. Williams was behind the wheel when a passing car fired at his vehicle, killing his 25-year-old passenger, who had hitched a ride.

    Despite no motive, no weapon, and no eyewitnesses, police charged Williams based on an AI-powered gunshot detection program called ShotSpotter. The system picked up a loud bang near the area and triangulated it to Williams’ vehicle. The charge was ultimately dropped for lack of evidence.

    This is precrime in action. A prediction, not proof. An algorithm, not an eyewitness.

    Programs like ShotSpotter are notorious for misclassifying noises like fireworks and construction as gunfire. Employees have even manually altered data to fit police narratives. And yet these systems are being combined with predictive policing software to generate risk maps, target individuals, and justify surveillance—all without transparency or accountability.

    It doesn’t stop there.

    AI is now flagging families for potential child neglect based on predictive models that pull data from Medicaid, mental health, jail, and housing records. These models disproportionately target poor and minority families. The algorithm assigns risk scores from 1 to 20. Families and their attorneys are never told what the scores are, or that they were used.

    Imagine losing your child to the foster system because a secret algorithm said you might be a risk.

    This is how AI redefines guilt.

    The Trump administration’s approach to AI regulation reveals a deeper plan to deregulate democracy itself.

    Rather than curbing these abuses, the Trump administration is accelerating them.

    An executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Trump in early 2025, revoked prior AI safeguards, eliminated bias audits, and instructed agencies to prioritize “innovation” over ethics. The order encourages every federal agency to adopt AI quickly, especially in areas like policing and surveillance.

    Under the guise of “efficiency,” constitutional protections are being erased.

    Trump’s 10-year moratorium on AI regulation is the logical next step. It dismantles the last line of defense—state-level resistance—and ensures a uniform national policy of algorithmic dominance.

    The result is a system in which government no longer governs. It processes.

    The federal government’s AI expansion is building a surveillance state that no human authority can restrain.

    Welcome to Surveillance State 2.0, the Immortal Machine.

Over 1,700 uses of AI have already been reported across federal agencies, with hundreds directly impacting safety and rights. Many agencies, including the Departments of Homeland Security, Veterans Affairs, and Health and Human Services, are deploying AI for decision-making without public input or oversight.

    This is what the technocrats call an “algocracy”—rule by algorithm.

    In an algocracy, unelected developers and corporate contractors hold more power over your life than elected officials.

    Your health, freedom, mobility, and privacy are subject to automated scoring systems you can’t see and can’t appeal.

    And unlike even the most entrenched human dictators, these systems do not die. They do not forget. They are not swayed by mercy or reason. They do not stand for re-election.

    They persist.

    When AI governs by prediction, due process disappears in a haze of machine logic.

    The most chilling effect of this digital regime is the death of due process.

    What court can you appeal to when an algorithm has labeled you a danger? What lawyer can cross-examine a predictive model? What jury can weigh the reasoning of a neural net trained on flawed data?

    You are guilty because the machine says so. And the machine is never wrong.

    When due process dissolves into data processing, the burden of proof flips. The presumption of innocence evaporates. Citizens are forced to prove they are not threats, not risks, not enemies.

    And most of the time, they don’t even know they’ve been flagged.

    This erosion of due process is not just a legal failure—it is a philosophical one, reducing individuals to data points in systems that no longer recognize their humanity.

    Writer and visionary Rod Serling warned of this very outcome more than half a century ago: a world where technology, masquerading as progress under the guise of order and logic, becomes the instrument of tyranny.

    That future is no longer fiction. What Serling imagined is now reality.

    The time to resist is now, before freedom becomes obsolete.

    To those who call the shots in the halls of government, “we the people” are merely the means to an end.

    “We the people”—who think, who reason, who take a stand, who resist, who demand to be treated with dignity and care, who believe in freedom and justice for all—have become obsolete, undervalued citizens of a totalitarian state that, in the words of Serling, “has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advances, and a more sophisticated approach to the destruction of human freedom.”

    In this sense, we are all Romney Wordsworth, the condemned man in Serling’s Twilight Zone episode “The Obsolete Man.”

“The Obsolete Man,” a story about the erasure of individual worth by a mechanized state, underscores the danger of rendering humans irrelevant in a system of cold automation and of a government that views people as expendable once they have outgrown their usefulness to the State. Yet, and here’s the kicker, this is where the government, through its monstrous inhumanity, also becomes obsolete.

As Serling noted in his original script for “The Obsolete Man”: “Any state, any entity, any ideology which fails to recognize the worth, the dignity, the rights of Man…that state is obsolete.”

    Like Serling’s totalitarian state, our future will be defined by whether we conform to a dehumanizing machine order—or fight back before the immortal dictator becomes absolute.

    We now face a fork in the road: resist the rise of the immortal dictator or submit to the reign of the machine.

    This is not a battle against technology, but a battle against the unchecked, unregulated, and undemocratic use of technology to control people.

    We must demand algorithmic transparency, data ownership rights, and legal recourse against automated decisions. We need a Digital Bill of Rights that guarantees:

    • The right to know how algorithms affect us.
    • The right to challenge and appeal automated decisions.
    • The right to privacy and data security.
    • The right to be free from automated surveillance and predictive policing.
    • The right to be forgotten.

    Otherwise, AI becomes the ultimate enforcer of a surveillance state from which there is no escape.

As Eric Schmidt, former CEO of Google, warned: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about. Your digital identity will live forever… because there’s no delete button.”

    An immortal dictator, indeed.

    Let us be clear: the threat is not just to our privacy, but to democracy itself.

    As I point out in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, the time to fight back is now—before the code becomes law, and freedom becomes a memory.

    The post How AI and the Deep State Are Digitizing Tyranny first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.

  • Not long ago, artificial intelligence executives were asking Congress for more regulation. The House of Representatives budget bill passed last week demonstrates how quickly the industry has changed course.

    Inside the House bill is a moratorium on the sort of state-level AI regulations that have addressed political “deepfakes” and using AI to deny medical claims. At the same time the House bill cuts Medicare, it would funnel more money to tech companies to develop kamikaze drones.

    For proponents of AI regulation, the House bill is the culmination of a shift in the industry’s mindset. Rather than paying lip service to popular concerns about AI, the industry has decided to partner with the Trump administration on its goal of “global AI dominance.”

    “The message is clear. The House Republican proposal is stealing from poor people to give huge handouts to Big Tech to build technology that is going to perpetuate the president’s authoritarian plans and crackdowns against vulnerable people,” said Kevin De Liban, the founder of TechTonic Justice, a nonprofit aimed at preventing tech from harming low-income people.

    Banning Regulation

    The splashiest measure in the House bill may also be one of the least likely provisions to make it into law. States would be banned from drafting their own AI regulations for the next 10 years.

At a recent House hearing, Rep. Jay Obernolte, R-Calif., said the provision was motivated in part by frustration over Congress’s failure to pass nationwide regulation. The growing patchwork of state regulations presents a challenge for small startups, he said.

    “The people who can’t deal with that are two innovators in a garage trying to start the next OpenAI, the next Google. Those are the people we’re trying to protect,” he said. “I would love this to be months, not years. But I think it’s important to send the message that everyone needs to be motivated to come to the table here.”

    The patchwork is hardly as daunting as Obernolte claims it is, argued Amba Kak, co-executive director of the AI Now Institute, a think tank that opposes commercial surveillance. The most sweeping state legislation that has actually passed — in California and Colorado — mostly addresses transparency about when AI is being used, she said. Laws in other states are designed to go after the worst-of-the-worst actors in the developing field, Kak said. Those laws target political “deepfakes,” AI “revenge porn,” and the use of AI by health insurance companies.

    “This is not a sprawling morass of industry-burdening regulation. It truly is the bare minimum,” Kak said.

    The regulation moratorium passed by the House has sparked pushback from a bipartisan group of 40 state attorneys general, who called it “irresponsible,” and from consumer protection groups including Consumer Reports.

    The biggest companies in the AI industry have largely sidestepped commenting directly on the moratorium, but OpenAI CEO Sam Altman gave his broad thoughts on regulation at a Senate hearing earlier this month.


In 2023, Altman made waves by calling for regulation of the industry. This month, however, he gave Senate testimony saying that it would be “disastrous” if the U.S. adopted regulations along the lines of Europe’s, which are set to require registration for “high-risk” AI systems and impose other requirements on their developers. More broadly, he called for only limited regulation of the industry.

    “I think some policy is good. I think it is easy for it to go too far and as I’ve learned more about how the world works, I’m more afraid that it could go too far and have really bad consequences,” he said.

    Observers are skeptical that the House moratorium will make it through the Senate. Republicans are using a process known as reconciliation to push the bill through the Senate with 50 votes instead of 60, the barrier to overcome a filibuster. A reconciliation guideline known as the Byrd rule, however, requires every provision of a reconciliation bill to center on the budget.

    “I feel pretty damn confident that the moratorium is going to fall out,” said Bobby Kogan, senior director of federal budget policy at the Center for American Progress, who previously worked for the Biden White House and as a Democratic staffer on the Senate Budget Committee. “Telling a state what laws it is going to make has nothing to do with getting federal dollars in or out the door.”

    Sen. Josh Hawley, R-Mo., has promised to fight against the moratorium, and Sen. John Cornyn, R-Texas, a budget committee member, has expressed his skepticism that the measure abides by Senate rules.

    Whatever becomes of the House provision, AI watchdogs say it could herald the beginning of an era where state-level regulation faces more hurdles.

    “We are probably going to see the ramping up of industry lobbying,” said Kak. “If this is where we are headed, I think it really ups the stakes for any state legislative effort.”

    Finding “Fraud”

    House Republicans appear to be on firmer footing with a provision to spend $25 million on AI contracts to detect and recoup Medicare fraud.

    That provision worries AI skeptics, who point to the history of such programs thus far. In Arkansas, an algorithm that was used to determine Medicaid eligibility cut off people from in-home health assistance.

    “It resulted in horrific cuts to my clients’ care,” said De Liban, who worked as a Legal Aid lawyer in Arkansas at the time. “People who formerly had been getting eight hours of care, which isn’t enough but is the state’s maximum, were cut to four or five hours of care. … They were lying in waste for hours on end.”

In Texas, hundreds of Medicaid patients were also allegedly booted out of the program or faced hurdles because of errors in a state algorithm. Given the track record of artificial intelligence and algorithms so far, doctors and patients should be worried about the House fraud provision, De Liban said.

    “We are going to use faulty, unreliable technology to probably keep doctors from being able to provide care to patients,” he said.

    Weapons of War

    The money for fraud prevention pales in comparison to other spending on AI. The bill includes billions of dollars for the Pentagon and border security — a promising market for a wave of Silicon Valley-funded defense startups.

    This proposed wave of spending on AI defense and border projects lines up with a shift in attitudes toward guardrails on the technology, Kak said.

    “If anything, this regulatory sentiment that we are seeing in the moratorium ports over even more aggressively to the national security space, where we are hearing that this is a moment to roll back on protections,” she said.

The provisions in the House bill include:

    • $500 million for “attritable autonomous military capabilities” — think low-cost drones such as the ones Ukraine has used to defend against Russia
    • $450 million for AI and autonomous capabilities in naval shipbuilding
    • $298 million for AI and autonomous projects at the Pentagon’s Test Resource Management Center
    • $250 million for artificial intelligence at Cyber Command
    • $188 million for “maritime robotic autonomous systems”
    • $120 million for aerial surveillance drones
    • $111 million for kamikaze attack drones
    • $20 million for using artificial intelligence in the Defense Department’s auditing process


The alphabet soup of agencies involved in Trump’s border security and deportation campaign also benefit from the bill. U.S. Customs and Border Protection receives $2.7 billion for border surveillance technologies and $1 billion for anti-narcotics projects, including artificial intelligence. The Coast Guard would also receive $75 million for “autonomous maritime systems.”

    Separately, the Commerce Department would receive $500 million for federal government IT modernization.

    Though Kak is skeptical that the AI defense contractors’ products are as transformational as they promise, these technologies have won favor in the White House. Trump, who once called AI “dangerous,” has tapped industry boosters such as JD Vance and David Sacks for key roles in his administration. One of his first executive orders called on agencies to remove barriers to AI adoption imposed by the Biden administration.

    “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” Trump said.

    The post Trump’s Big, Beautiful Handout to the AI Industry appeared first on The Intercept.


  • …Those who control the food, control the (fill in the blank, but hint hint, ‘world’)

    …Those who control the water, control the (fill in the blank, but hint hint, ‘world’)

    …Those who control the money, control the (fill in the blank, but hint hint, ‘world’)

    …Those who control the media, control the (fill in the blank, but hint hint, ‘world’)

    …Those who control outer space, control the (fill in the blank, but hint hint, ‘world’)

    …Those who control the data, control the (fill in the blank, but hint hint, ‘world’)

    Throwing in that “Big data” fishnet just a decade ago was the big new addition to the scientific method: Try to find even the most vague or minimally-recognized studies, and then let the computer develop the patterns.

Ahh, was it David Quammen or some of the other thousands of sources I have utilized over a 45-year period that exposed me to that rare but highly interesting factoid that a woman’s daily diary entries while living in Costa Rica allowed science to research why the birthrate had fallen and the survival rate had plummeted for the golden toad.

She catalogued the changing cloud and mist and fog cover in her homeland, and over a thirty-year period, she meticulously recorded any number of tidbits of observation, including that reduction in the clouds hitting the treeline.

    Do the Google AI search, and nothing pops up.

It’s pure lies, these AI-generated responses. Quoting below:

    While the story of a woman’s diary specifically detailing the golden toad’s decline and its link to reduced mist in Costa Rica might be fictional or anecdotal, the core issue of the golden toad’s extinction and its relation to changing cloud forest conditions is well-documented.

    Here’s a breakdown of how changes in mist and fog likely contributed to the golden toad’s disappearance:

    1. Golden Toads and Their Cloud Forest Habitat:

    • The golden toad was endemic to a small, high-altitude region within the Monteverde Cloud Forest in Costa Rica, an environment known for its constant mist, high humidity, and a rainy season.
    • These specific conditions were crucial for the golden toad’s breeding cycle and survival.

    2. Climate Change and Reduced Mist:

    • Climate change has altered weather patterns globally, and the Monteverde Cloud Forest was not spared.
    • Researchers have observed a decrease in mist and moisture levels in the cloud forest, potentially due to rising temperatures and shifting cloud patterns.
    • This reduction in mist could have made the environment drier, impacting the golden toad’s delicate skin and potentially leading to dehydration.

    3. Impact on Breeding:

    • Golden toads relied on rainwater pools for breeding.
    • Reduced mist and rainfall likely led to the drying up of these crucial breeding pools, hindering the toad’s ability to reproduce successfully.

    4. The Chytrid Fungus:

    • Changes in climate, including drier conditions, could have created a more favorable environment for the chytrid fungus, a deadly pathogen that affects amphibians.
    • This fungus attacks the skin of frogs and toads, disrupting their ability to absorb water and electrolytes, ultimately leading to death.

    In Summary:

    While a diary account remains unsubstantiated, the scientific consensus points towards climate change, leading to a reduction in mist and rainfall, as a significant factor in the golden toad’s extinction. These altered environmental conditions likely impacted the toad’s ability to breed and may have increased its vulnerability to the deadly chytrid fungus.

    *****

    So, my own Substacks (I have three under pseudonyms) have varying levels of so-called rant and railing and deep deep disregard for most authorities. A few hundred subscribers, and very few are paid ones.

    Because I do these “news” headline “essays,” my own news feeds have been corrupted with many times the opposite sort of sources I would go to for reliable information — Israeli rags, USA mainstream, European mainstream, defense/offensive professional journals, Bloomberg and Fortune, et al.

    What happens, though, is a reverse osmosis sort of play on the hourly news that gets fed to me via Bing, Yahoo, AP, UPI, CNN, and the list goes on.

    It’s not exactly a deep and sophisticated exercise that might end up on Dissident Voice, but I’ll attempt one now:

1. Foregone conclusions: this is the techno-fascist world controlling the narrative, and that narrative is controlled by the oligarchs, by the virus of a shifting baseline disorder, and by rampant disregard for the precautionary principle and for the value of looking at the intended and unintended (rare) negative effects of these titans of capital and their Brave Banal Evil Doers: the scientists, engineers, fabricators, technologists, et al.

    Exhibit A (infinity is really that number):

    Looks and sounds like a geek, such a nice sweet looking banal sort of fellow?

• Google DeepMind CEO Demis Hassabis says in the next 5-10 years, AI will disrupt more jobs
    • He urged teens to become code ninjas to deal with the AI-driven world
    • He also said that the youngest generation, Gen Alpha, must start experimenting with AI as soon as possible

    As the world dashes into an AI-driven future, Google DeepMind CEO Demis Hassabis has a clear message for teens: learn now or be left behind. Hassabis leads Google DeepMind, the advanced research lab behind the company’s most high-end AI developments, including the Gemini chatbot. The lab is also spearheading Google’s efforts toward achieving artificial general intelligence (AGI) — a yet-unrealised form of AI capable of human-level reasoning. At the recent Google I/O developer conference, Hassabis said DeepMind is likely less than a decade away from building AGI. As he works in such an environment, he certainly knows what form AI will take in the near future. (India Today, of all rags I get in my feeds)

    There are many many paywalls now, so anything from the NYT, well, I have to do end-arounds sometimes to read the entire pieces, but headlines and sub-headlines do the trick:

    At Amazon, Some Coders Say Their Jobs Have Begun to Resemble Warehouse Work

    Pushed to use artificial intelligence, software developers at the e-commerce giant say they must work faster and have less time to think. Others welcome the shift.

Go to Reddit on this one headline and you get all sorts of opinions and personal experiences with code.

I’ve written much about Amazon, and I even organized with SEIU against Amazon’s unsafe warehouse conditions and its anti-union stance.

    But the geeks and all those soccer moms and geek dads want their children not to marry cowboys but to marry coders and software engineers, or drone impresarios: A “17-year-old designed a cheaper, more efficient drone. The Department of Defense just awarded him $23,000 for it.”

    Dual Use, man, dual use which is always Capitalist Abuse and Military Murder Hardware: Cooper Taylor, 17, aims to revolutionize the drone industry with a new design.

    Taylor designed a motor-tilting mechanism to lower manufacturing cost and increase efficiency.

    Taylor has spent the last year optimizing a type of drone that’s being used more and more in agriculture, disaster relief, wildlife conservation, search-and-rescue efforts, and medical deliveries.

    All drone technology is for murdering, in the end.

    And what fuels these death machines, these genocide facilitators? Geeks in high school robotics Olympics.

    A breathtaking flyover of nearly every United States Air Force fighter and bomber jet soared during a Florida air show Saturday, stunning footage of the historic aerial display showed.

    Seven of the top military aircraft, called the “Freedom Flyover,” united as “one unstoppable force” for thousands of people to take in over Memorial Day weekend at the Hyundai Air and Sea Show in Miami Beach.

    *****

    Somehow it ALWAYS comes down for me from these feeds back to the Jewish State of Murdering Raping Starving Polluting Poisoning Occupied Palestine:

    As thousands of Israeli nationalists and religious Jews on Monday marked Jerusalem Day, which celebrates Israel’s 1967 capture of east Jerusalem, some chanted “Death to Arabs” while marching through Muslim neighborhoods. Protesters, including an Israeli member of parliament, also reportedly stormed a compound belonging to the UN agency for Palestinian refugees.

[Oh, you can call these Jews in Israel Right-wing Nationalists, or pro-settler extremists, but they are in the Jewish State of Israel, and they are Jewish. Calling them Jewish is not anti-semitic, and leaving out the term “zionist” is not an error of omission.]

    Last year’s procession, which came during the first year of the war in Gaza, saw ultranationalist Israelis attack a Palestinian journalist in the Old City and call for violence against Palestinians. Four years ago, the march helped set off an 11-day war in Gaza.

    Tour buses carrying young ultranationalist Jews lined up near entrances to the Old City, bringing hundreds from outside Jerusalem, including settlements in the Israeli-occupied West Bank.

    Police said they had detained a number of individuals, without specifying, and “acted swiftly to prevent violence, confrontations, and provocations.”

    *****

And, the IOF, the Israeli Occupation Forces, well, they do train many US police departments in “crowd suppression,” pressure point holds, subduing peaceful protestors, and strong-arming old ladies and teens.

    And, as always, NBC, or whichever mainstream and corporate media outfit, will always suppress the reality — Some fear excessive use of force will rise as the DOJ drops oversight of police departments.

    George Floyd, Michael Brown?


    “It is important to not overstate what consent decrees do,” said Jin Hee Lee with the Legal Defense Fund, referring to the power of federal courts to enforce orders. “They are very important and oftentimes necessary to force police departments to change their policies, to change their practices,” she added. “But consent decrees were never the end all, be all.”

    *****

    Then you get this Jewish Fervor, and who the hell wants to defend the Poison Ivy School, but you have to under the Rapist in Chief Trump and his henchman, Stephen Miller:

    At the Harvard Kennedy School, the Trump administration’s attempt to revoke Harvard’s eligibility to enroll international students — temporarily blocked in court — could eliminate nearly 60 percent of the student body.

    *****

    Until the last half-decade, the majestic lesser flamingo had four African breeding sites: two salt pans in Botswana and Namibia, a soda lake in Tanzania, and an artificial dam outside South Africa’s historic diamond-mining town of Kimberley.

    Now it only has three.

And then, I get tons of wildlife and climate news: “Lesser flamingos lose one of their only four African breeding sites to sewage.” Emblematic of how messed up the polluting Homo consumopethicus is. Sewage. Shit.

    This sort of stuff, all these headlines, all these stories, seemingly disconnected, unrelated in theme, well, the common theme is clear — Capitalism is a Cancer Supported by Economic Wars and Genocide and a Mafia that is Global in Its Reach. Is that a good enough connecting headline?

    *****

    Thank goodness that I have writers for Substack who put it all into perspective, how this world is up shit creek illustrated by the War on People in Palestine.

    The Poem “Nine,” from Palestine Will Be Free Substack:

    I tended my garden with great care —
    Olives, thyme, dates, sage, and rosemary there.
    Well before fajr and long after isha’s call,
    No effort was spared in looking after them all.

    From tiny seeds to flowers in full bloom,
    I watched them glow in sun’s majestic light.
    And as the first buds of the olives came to life,
    Every glance, every day was an endless delight.

    Wearied days would vanish at their sight
    Each bloom I touched made my mornings bright
    I would count my blessings: one, two, three, four…
    Up to ten — then countless more.

    Then came the fire that scorched it all —
    Thyme, dates, sage, and rosemary gone.
    One gnarly olive barely hangs by a thread;
    My waking moments are soaked in tears, eyes red.

    Now with every breath, a prayer escapes:
    Protect my olive — please keep it safe.
    It’s the last remnant of a heart so full, a life well lived,
    In service of my garden, my people, and God the Esteemed.

    You blessed Yaqub with a garden vast,
    Only to separate him from Yusuf, the rose of his heart.
    Yaqub complained, yearned, and wept till blindness veiled his eyes
    You, the Merciful, answered his prayers and restored old ties.

    So bless me, as You did Yaqub in the end —
    Restore the coolness of my eyes, O Ibrahim’s Friend.
    “For I too have the gift of song which gives me courage to complain,
    But ah! ‘tis none but God Himself whom I, in sorrow, must arraign!”

    Your infinite wisdom is beyond my grasp,
    So, to Your rope of hope I must clasp.
    “The lessons of patience I teach my heart,
    As though to night’s separation I show a false part.”

    *****

    Thank goodness for International 360, with a whole lot of stories aggregated-curated: Only Total Collapse Will Rouse Humanity from Its Suicidal Sleepwalk

    From BettMedia:

    Only Total Collapse Will Rouse Humanity from Its Suicidal Sleepwalk

    Editorial Comment:

    I have been issuing warnings since the genocide began and every nation and institution failed to stop it. Not one invoked the appropriate legal mechanisms designed for such a crisis that would have ended abuse of veto, ousted Israel from the United Nations and imposed sanctions on the genocidal entity. I am pleased to see more activists understanding the extent of manipulation and deception we have all been subjected to.

    Previously I said,

    • There is no hope for the world to be found in any government, institution or movement that can normalize ties with or fail to stop a genocidal oppressor.
    • There can be no faith in leaders that place interests above moral principles.
    • There is no salvation to be found standing with those too cowardly to act in the face of murderous criminality.
    • The hope of humanity rests solely on the shoulders of each awakening individual and on movements in the grassroots bases who have never lost touch with reality and are willing to defend life at all costs.

    Karim has brilliantly and succinctly presented the many facets of our present dilemma in the article below.

    Once we abandon fantasy and begin with these truths, realistic solutions and avenues of dissent, resistance and revolution can constellate and finally manifest.

    A.V.


    BettBeat Media

    As Gaza’s children burn while the world watches, it becomes clear: only climate catastrophe, nuclear armageddon, or World War III may force humanity to abandon Western capitalism’s suicidal path.

    The brutal death march of global capitalism does not pause for our lamentations. It grinds forward with mechanistic certainty, reducing human bodies to raw material and human aspirations to market commodities. We stand now at the precipice of a darkness so profound that our collective imagination fails to grasp its dimensions.

    Gaza Exposed the Multipolar Fantasy

    Let us dispense with comforting illusions. The mythologies we have constructed about saviors – whether BRICS nations, ‘multipolar world orders’, institutions of international law, or benevolent statesmen – have disintegrated before our eyes.

    As Gaza burns and its children scream under collapsing concrete, we witness Russia making backroom deals with the architects of genocide. As Palestinian bodies pile in makeshift morgues, China issues empty declarations at the United Nations while its trade with the genocidal regime continues uninterrupted.

    These are not the actions of counterweights to empire. They are the maneuvers of players within the same global system, differing perhaps in position but not in fundamental nature. They have shown themselves to be integral components of the very machinery we hoped they would dismantle.

    We have watched, with desperate hope, the Palestinians stand against overwhelming military force, the Lebanese resisting occupation, Ibrahim Traore challenging neocolonial structures, Syrians and Yemenis enduring apocalyptic bombardment.

    We projected onto them our desperate yearning for liberation from the imperialist hellscape spreading like wildfire across our planet. But they cannot do it alone, and our delegation of hope to others is itself a form of moral abdication.

    The Frightening Truth is: It Really Comes Down to Us

    The terrible truth we must confront is this: the responsibility is ours. The revolution required is not national but global, because the capitalist system has metastasized globally.

    It has burrowed deep into the institutional structures of every society, captured the regulatory mechanisms that might constrain it, corrupted the informational systems that might expose it, and weaponized the technological systems that might liberate us.

    Consider the grotesque spectacle of our current moment: we watch genocide in real-time on social media platforms owned by billionaires who fund that same genocide, we march in permitted protests that change nothing, we sign petitions that disappear into administrative voids. Meanwhile, the machinery of death continues unabated, and the architects of suffering retire to coastal mansions and mountain retreats after receiving fifty standing ovations for speeches that are nothing more than celebrations of mass murder.

    The ruling classes have constructed a system of control so comprehensive, so technologically sophisticated, and so psychologically insidious that most cannot even perceive the depth of their enslavement. The surveillance apparatus tracks our movements and predicts our thoughts. The military-industrial complex develops weapons of terrifying precision to eliminate those who resist too effectively. The propaganda system manufactures consent with algorithmic efficiency.

    All Wish to Sit at the Blood-Soaked Table of Imperialism

As Vanessa Beeley and Fiorella Isabel so meticulously lay out for us, Putin, for all his anti-Western rhetoric, demonstrates through his actions that he seeks merely better terms within the imperial arrangement, not its dissolution. His government works in tacit coordination with Israel while claiming to stand against Western hegemony. This is not resistance; it is negotiation for a better position at the blood-soaked table of imperialism.

And what of China? Does it dream of global equality? Let’s rephrase: do ruling elites anywhere envision a future where they stand shoulder-to-shoulder with the peasants in the villages they dominate? History speaks to us with terrible clarity. The powerful do not relinquish power voluntarily. Systems of exploitation do not reform themselves out of existence. The capitalist machine — whether neoliberal or state-capitalist — will not decommission itself out of ethical awakening.

    We have already weathered global catastrophes that should have taught us these lessons. World War I reduced a generation of young men to shredded flesh in muddy trenches, yet we learned nothing. World War II revealed the industrial-scale horror humans could inflict upon one another, yet we learned nothing. The grinding machinery reassembled itself, adapted, and continued its relentless accumulation.

    “Humanity appears incapable of changing direction without first experiencing the catastrophic consequences of its current trajectory. We seem determined to learn only through suffering, to change only when continuation becomes impossible”

    World War III

    The terrifying conclusion becomes unavoidable: only the total breakdown of global society will create the conditions for fundamental transformation. This is not a wish but a recognition of historical pattern. The entrenchment is too deep, the control too complete, the psychological captivity too thorough for anything less than systemic collapse to break the spell.

    What form will this breakdown take? Perhaps nuclear winter that eliminates most of humanity. Perhaps the collapse of ecological systems that sustain human life. Perhaps the return of fascistic brutality in World War III as soldiers march through our streets, rounding up our women and children to violate their dignity and take away their innocence while the men disappear into torture camps. The specific manifestation matters less than the certainty of its arrival if our course remains unchanged.

    This is the darkness we must stare into without flinching. Humanity appears incapable of changing direction without first experiencing the catastrophic consequences of its current trajectory. Even high-definition genocide—burning children and prisoner rapes streamed directly to our iPhones—fails to move us to effective action. We seem determined to learn only through even more extreme suffering, to change only when our current path becomes literally impossible to continue.

    Resistance

    Yet within this terrible recognition lies a seed of possibility. If we understand the machinery of our destruction with unflinching clarity, if we abandon the comfortable myths that absolve us of responsibility, if we recognize that no external force will save us from ourselves – perhaps then we might begin the work of genuine resistance.

    Not the performative — flag-waving, song-singing, tweet-sharing — resistance that leaves power structures intact, but the fundamental reimagining of human society. Not the delegation of hope to distant leaders, but the reclamation of our collective agency. Not the comfortable protest that returns home for dinner, but the sustained commitment to dismantling systems of death.

    The machinery of global capitalism does not pause for our lamentations, but neither is it invulnerable to our determined opposition. The question remains whether we will summon the courage to oppose it before the breakdown comes, or whether we will continue sleepwalking until we awaken amid the ruins.

    – Karim

    The post Follow the Money? The Algorithm Follows Us to Make the Money first appeared on Dissident Voice.


  • By Don Wiseman, RNZ Pacific senior journalist

    Papua New Guinea’s state broadcaster NBC wants shortwave radio reintroduced to achieve the government’s goal of 100 percent broadcast coverage by 2030.

    Last week, the broadcaster hosted a workshop on the reintroduction of shortwave radio transmission, bringing together key government agencies and other stakeholders.

NBC previously had a shortwave signal, but due to poor maintenance and other factors, the system failed.

    The NBC’s 50-year logo to coincide with Papua New Guinea’s half century independence anniversary celebrations. Image: NBC

    Its managing director Kora Nou spoke with RNZ Pacific about the merits of a return to shortwave.

    Kora Nou: We had shortwave at NBC about 20 or so years ago, and it reached almost the length and breadth of the country.

So fast forward 20 years, and we are going to celebrate our 50th anniversary. Our network has a lot more room for improvement at the moment, that’s why there’s the thinking to revisit shortwave again after all this time.

    Don Wiseman: It’s a pretty cheap medium, as we here at RNZ Pacific know, but not too many people are involved with shortwave anymore. In terms of the anniversary in September, you’re not going to have things up and running by then, are you?

    KN: It’s still early days. We haven’t fully committed, but we are actively pursuing it to see the viability of it.

We’ve visited one or two manufacturers that are still doing it. We’ve seen some transmitters that are still on air, still being manufactured, and also looked at issues surrounding receivers. So there’s still hard thinking behind it.

    We still have to do our homework as well. So still early days and we’ve got the minister who’s asked us to explore this and then give him the pros and cons of it.

    DW: Who would you get backing from? You’d need backing from international donors, wouldn’t you?

    KN: We will put a business case into it, and then see where we go from there, including where the funding comes from — from government or we talk to our development partners.

    There’s a lot of thinking and work still involved before we get there, but we’ve been asked to fast track the advice that we can give to government.

    DW: How important do you think it is for everyone in the country to be able to hear the national broadcaster?

    KN: It’s important, not only being the national broadcaster, but [with] the service it provides to our people.

    We’ve got FM, which is good with good quality sound. But the question is, how many does it reach? It’s pretty critical in terms of broadcasting services to our people, and 50 years on, where are we? It’s that kind of consideration.

I think the bigger contention is to reintroduce shortwave transmission. But how does it compare, or how can we enhance it through the improved technology that we have nowadays as well? That’s where we are right now.

    This article is republished under a community partnership agreement with RNZ.

    This post was originally published on Asia Pacific Report.

  • Following multiple employee-led protests against the company’s contracts with the Israeli military, Microsoft workers discovered that any emails they send containing the word “Palestine” inexplicably disappear.

    According to internal communications reviewed by The Intercept, employees on Wednesday began noticing that email messages sent from their company accounts containing a handful of keywords related to Palestine and Israel’s ongoing war in Gaza were not transmitted as expected. In some cases, employees say, the emails arrived after a delay of upwards of 45 minutes. Others never made it to the intended recipient’s inbox at all.

    Keywords subject to the disruption, according to employee test messages shared with The Intercept, include “Palestine,” “Gaza,” “apartheid,” and “genocide.” The word “Palestinian” did not appear affected, nor did emails containing deliberate misspellings of the word “Palestine.” Emails mentioning Israel appear to have gone through immediately.

    The outage was first reported by The Verge.

    In an email to The Intercept, Microsoft spokesperson Frank Shaw confirmed and defended the blockage. “Emailing large numbers of employees about any topic not related to work is not appropriate. We have an established forum for employees who have opted in to political issues. Over the past couple of days, a number of politically focused emails have been sent to tens of thousands of employees across the company and we have taken measures to try and reduce those emails to those that have not opted in.”

    The heavy-handed approach, however, is not just deterring messages sent to large numbers of recipients, but also blocking all emails mentioning Palestine.

    Following an April 7 protest at an event celebrating Microsoft’s 50th anniversary, two employees “sent separate emails to thousands of coworkers, calling on Microsoft to cut its contracts with the Israeli government,” The Verge reported.

    Related

    The Microsoft Police State: Mass Surveillance, Facial Recognition, and the Azure Cloud

    The email disruption comes after multiple demonstrations at the four-day Microsoft Build developer conference this week. The protests were organized by current and former Microsoft employees with No Azure for Apartheid, an advocacy group demanding the suspension of the company’s work with the Israeli government.

    In February, The Associated Press reported usage of Microsoft’s Azure cloud computing services by the Israeli military “skyrocketed” at the start of its ongoing bombardment of Gaza, which has now killed over 53,000 Palestinians. Earlier this month, the company absolved itself of wrongdoing in Gaza following an unspecified internal and external review. While Microsoft claimed “we have found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people,” the company also noted, “It is important to acknowledge that Microsoft does not have visibility into how customers use our software on their own servers or other devices.”

    The post Microsoft Says It’s Censoring Employee Emails Containing the Word “Palestine” appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The ever-growing market for personal data has been a boon for American spy agencies. The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There’s simply too much data on sale from too many corporations and brokers.

    So the government has a plan for a one-stop shop.

    The Office of the Director of National Intelligence is working on a system to centralize and “streamline” the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be “misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons.” The documents state spy agencies will use the web portal not just to search through reams of private data, but also run them through artificial intelligence tools for further analysis.

    Rather than each agency purchasing CAI individually, as has been the case until now, the “Intelligence Community Data Consortium” will provide a single convenient web-based storefront for searching and accessing this data, along with a “data marketplace” for purchasing “the best data at the best price,” faster than ever before, according to the documents. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security’s Office of Intelligence and Analysis — though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.

    “In practice, the Data Consortium would provide a one-stop shop for agencies to cheaply purchase access to vast amounts of Americans’ sensitive information from commercial entities, sidestepping constitutional and statutory privacy protections,” said Emile Ayoub, a lawyer with the Brennan Center’s liberty and national security program.

    “ODNI is working to streamline a number of inefficient processes, including duplicative contracts to access existing data, and ensuring Americans’ civil liberties and Fourth Amendment rights are upheld,” ODNI spokesperson Olivia Coleman said in a statement to The Intercept. Coleman did not answer when asked if the new platform would sell access to data on U.S. citizens, or how it would make use of artificial intelligence.

    Related

    IRS, Department of Homeland Security Contracted Firm That Sells Location Data Harvested From Dating Apps

    Spy agencies and military intelligence offices have for years freely purchased sensitive personal information rather than obtain it by dint of a judge’s sign-off. Thanks largely to unscrupulous advertisers and app-makers working in a regulatory vacuum, it’s trivial to procure extremely sensitive information about virtually anyone with an online presence. Smartphones in particular leave behind immense plumes of data, including detailed records of your movement that can be bought and sold by anyone with an interest. The ODNI has previously defined “sensitive” CAI as information “not widely known about an individual that could be used to cause harm to the person’s reputation, emotional well-being, or physical safety.” Procurement documents reviewed by The Intercept make clear the project is designed to provide access to this highest “sensitive” tier of CAI.

    The documents provide a glimpse at some of the many types of CAI available, including “information addressing economic security, supply chain, critical infrastructure protection, great power competition, agricultural data, industrial data, sentiment analysis, and video analytic services.”

    While the proliferation of data that can reveal intimate details about virtually anyone has alarmed civil libertarians, privacy advocates, and certain members of Congress, the intelligence community sees another problem: There’s too much data to keep organized, and the disorganized process of buying it is wasting money. To address this overabundance, the ODNI is seeking private sector vendors to build and manage a new “commercial data consortium that unifies commercial data acquisition then enables IC users to access and interact with this commercial data in one place,” according to one procurement document obtained by The Intercept.

    The ODNI says the platform, the “Intelligence Community (IC) Data Consortium (ICDC),” will help correct the currently “fragmented and decentralized” purchase of commercial data like smartphone location pings, real estate records, biometric data, and social media content. The document laments how often various spy agencies are buying the same data without realizing it. The ODNI says this new platform, which will live at www.icdata.gov, will “help streamline access to CAI for the entire IC and make it available to mission users in a more cohesive, efficient, and cost-effective manner by avoiding duplicative purchases, preventing sunk costs from unused licenses, and reducing overall data storage and compute costs,” while also incorporating “civil liberties and privacy best practices.”

    “The IC is still adhering to the ‘just grab all of it, we’ll find something to do with it’ mentality.”

    While the project’s nod to civil liberties might come as some relief to privacy advocates, the project also represents the extent to which the use of this inherently controversial form of surveillance is here to stay. “Clearly the IC is still adhering to the ‘just grab all of it, we’ll find something to do with it’ mentality rather than being remotely thoughtful about only collecting data it needs or has a specific envisioned use for,” said Calli Schroeder, senior counsel at the Electronic Privacy Information Center.

    Once the website is up and running, the procurement materials say the portal will eventually allow users to analyze the data using large language models, AI-based text tools prone to major factual errors and fabrications. The portal will also facilitate “sentiment analysis,” an often pseudoscientific endeavor purporting to discern one’s opinion about a given topic using implicit signals in their behavior, movement, or speech.

    Such analysis is a “huge cause for concern” according to Schroeder. “It means the intelligence community is still, to at least some degree, buying into the false promise of a constantly and continuously debunked practice,” she said. “Let me be clear: Sentiment analysis not only does not work, it cannot work. Its only consistent success has been in perpetuating harmful discrimination (of gender, culture, race, and neurodivergence, among others).”

    Whether for sentiment analysis or some other goal, using CAI data sets to query an AI crystal ball poses serious risks, said Ayoub. If such analysis worked as billed, “AI tools make it easier to extract, re-identify, and infer sensitive information about people’s identities, locations, ideologies, and habits — amplifying risks to Americans’ privacy and freedoms of speech and association,” he said. On top of that, “These tools are a black box with little insight into training data, metrics, or reliability of outcomes. The IC’s use of these tools typically comes with high risk, questionable track records, and little accountability, especially now that AI policy safeguards were rescinded early in this administration.”

    In 2023, the ODNI declassified a 37-page report detailing the vastly expanding use of such CAI data by the U.S. intelligence community, and the threat this poses to the millions of Americans whose lives are cataloged, packaged, and sold by a galaxy of unregulated data brokers. The report, drafted for then-director of national intelligence Avril Haines, included a dire warning to the public: “Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, CAI includes information on nearly everyone that is of a type and level of sensitivity that historically could have been obtained, if at all, only through targeted (and predicated) collection, and that could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety.”

    Related

    American Phone-Tracking Firm Demo’d Surveillance Powers by Spying on CIA and NSA

    The extent to which CAI has commodified spy powers previously attainable only by well-resourced governments cannot be overstated: In 2021, for instance, The Intercept reported the existence of Anomaly Six, a startup that buys geolocational data leaked from smartphones apps. During an Anomaly Six presentation, the company demonstrated its ability to track not only the Chinese navy through the phones of its sailors, but also follow CIA and NSA employees as they commuted to and from work.

    The ICDC project reflects a fundamental dissonance within the intelligence community, which acknowledges that CAI is a major threat to the public while refusing to cease buying it. “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits,” the ODNI wrote in its 2022 report. While conceding “unfettered access to CAI increases its power in ways that may exceed our constitutional traditions or other societal expectations,” the report says, “the IC cannot willingly blind itself to this information.”

    In 2024, following the declassified report and the alarm it generated, the ODNI put forth a set of CAI usage rules purporting to establish guardrails against privacy violations and other abuses. The framework earned praise from some corners for requiring the intelligence community to assess the origin and sensitivity of CAI before using it, and for placing more rigorous requirements on agencies that wish to use the most intimate forms of private data. But critics were quick to point out that the ODNI’s rules, which enshrined the intelligence community’s “flexibility to experiment” with CAI, amounted to more self-regulation from a part of the government with a poor track record of self-regulating.

    While sensitive CAI comes with more rules — like keeping records of its use, protecting its storage, and some disclosure requirements — these guidelines offer a great deal of latitude to the intelligence community. The rule about creating a paper trail pertaining to sensitive CAI use, for example, is mandated only “to the extent practicable and consistent with the need to protect intelligence sources and methods,” and can be ignored entirely in “exigent circumstances.” In other words, it’s not really a requirement at all.

    Ayoub told The Intercept he worries the ICDC plan will only entrench this self-policing approach. The documents note that the vendors themselves, rather than a third party, would be tasked to some extent with determining whether the data they sell is indeed sensitive and therefore subject to stricter privacy safeguards. “Relying on private vendors to determine whether CAI is considered sensitive may increase the risk that the IC purchases known categories of sensitive information without sufficient safeguards for privacy and civil liberties or the warrant, court order, or subpoena they would otherwise need to obtain,” he said.

    The portal idea appears to have started under the Biden administration, when it was known as the “Data Co-Op.” It now looks like it will go live during a Trump administration. Elon Musk’s so-called Department of Government Efficiency is already working on building and streamlining access to other large repositories of perilously sensitive information. In March, the Washington Post reported that DOGE workers intent on breaking down “information silos” across the federal government were trying to unify systems into one central hub, an effort that “aims to advance multiple Trump administration priorities, including finding and deporting undocumented immigrants.” The documents note that the portal will also be accessible to so-called “non-Title 50” agencies outside of the national defense and intelligence apparatus.

    Ayoub argued the intelligence community can’t provide access to its upcoming CAI portal without “raising the risk that agencies like DHS’s Homeland Security Investigations (HSI) would access the CAI database to identify and target noncitizens such as student protestors based on their search or browsing histories and location information.”

    While the ODNI has acknowledged the importance of transparency, usernames for the portal will not include the name of the analyst’s agency, “thus obscuring any specific participation from individual participants,” according to the project documents.

    “The irony is not lost on me that they are making efforts to protect individuals within the IC from being identified regarding their participation in this project but have no qualms about vacuuming up the personal data of Americans against their wishes and knowledge,” said Schroeder.

    Sen. Ron Wyden, D-Ore., a longtime critic of the Fourth Amendment end run posed by CAI, expressed concern to The Intercept over how the portal will ultimately be used. “Policies are one thing, but I’m concerned about what the government is actually doing with data about Americans that it buys from data brokers,” he said in a statement. “All indications from news reports and Trump administration officials are that Americans should be extremely worried about how this administration may be using commercial data.”

    The post U.S. Spy Agencies Are Getting a One-Stop Shop to Buy Your Most Sensitive Personal Data appeared first on The Intercept.

    This post was originally published on The Intercept.

  • On April 8, a bipartisan commission chartered by Congress warned that China is rapidly advancing a terrifying new military threat: genetically engineered “super soldiers.”

    The report by the National Security Commission on Emerging Biotechnology (NSCEB) urges the U.S. to respond with a sweeping effort to militarize biotechnology, yet it offers little concrete evidence that such Chinese programs even exist.

    In the name of national security, Washington is now pushing for deregulation, massive government investment, and human experimentation. Experts say this effort echoes Cold War-era paranoia and threatens to erode ethical boundaries in science and warfare.

    The post Pentagon Using Fabricated Chinese Threat To Build GE Soldiers appeared first on PopularResistance.Org.

    This post was originally published on PopularResistance.Org.

  • Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals.

    The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations and wartime atrocities. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations.

    A third-party consultant Google hired to vet the deal recommended that the company withhold machine learning and artificial intelligence tools from Israel because of these risk factors.

    Three international law experts who spoke with The Intercept said that Google’s awareness of the risks and foreknowledge that it could not conduct standard due diligence may pose legal liability for the company. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses.

    “They’re aware of the risk that their products might be used for rights violations,” said León Castellanos-Jankiewicz, a lawyer with the Asser Institute for International and European Law in The Hague, who reviewed portions of the report. “At the same time, they will have limited ability to identify and ultimately mitigate these risks.”

    Google declined to answer any of a list of detailed questions sent by The Intercept about the company’s visibility into Israel’s use of its services or what control it has over Project Nimbus.

    Company spokesperson Denise Duffy-Parkes instead responded with a verbatim copy of a statement that Google provided for a different article last year. “We’ve been very clear about the Nimbus contract, what it’s directed to, and the Terms of Service and Acceptable Use Policy that govern it. Nothing has changed.”

    Portions of the internal document were first reported by the New York Times, but Google’s acknowledged inability to oversee Israel’s usage of its tools has not previously been disclosed.

    In January 2021, just three months before Google won the Nimbus contract alongside Amazon, the company’s cloud computing executives faced a dilemma.

    The Project Nimbus contract — then code-named “Selenite” at Google — was a clear moneymaker. According to the report, which provides an assessment of the risks and rewards of this venture, Google estimated a bespoke cloud data center for Israel, subject to Israeli sovereignty and law, could reap $3.3 billion between 2023 and 2027, not only by selling to Israel’s military but also its financial sector and corporations like pharmaceutical giant Teva.

    But given decades of transgressions against international law by Israeli military and intelligence forces it was now supplying, the company acknowledged that the deal was not without peril. “Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank,” resulting in “reputation harm,” the company warned.

    In the report, Google acknowledged the urgency of mitigating these risks, both to the human rights of Palestinians and Google’s public image, through due diligence and enforcement of the company’s terms of service, which forbid certain acts of destruction and criminality.

    But the report makes clear a profound obstacle to any attempt at oversight: The Project Nimbus contract is written in such a way that Google would be largely kept in the dark about what exactly its customer was up to, and should any abuses ever come to light, obstructed from doing anything about them.

    The document lays out the limitations in stark terms.

    Google would only be given “very limited visibility” into how its software would be used. The company was “not permitted to restrict the types of services and information that the Government (including the Ministry of Defense and Israeli Security Agency) chooses to migrate” to the cloud.

    Attempts to prevent Israeli military or spy agencies from using Google Cloud in ways damaging to Google “may be constrained by the terms of the tender, as Customers are entitled to use services for any reason except violation of applicable law to the Customer,” the document says. A later section of the report notes Project Nimbus would be under the exclusive legal jurisdiction of Israel, which, like the United States, is not a party to the Rome Statute and does not recognize the International Criminal Court.

    “Google must not respond to law enforcement disclosure requests without consultation and in some cases approval from the Israeli authorities, which could cause us to breach international legal orders / law.”

    Should Project Nimbus fall under legal scrutiny outside of Israel, Google is required to notify the Israeli government as early as possible, and must “Reject, Appeal, and Resist Foreign Government Access Requests.”

    Google noted this could put the company at odds with foreign governments should they attempt to investigate Project Nimbus. The contract requires Google to “implement bespoke and strict processes to protect sensitive Government data,” according to a subsequent internal report, also viewed by The Intercept, that was drafted after the company won its bid. This obligation would stand even if it means violating the law: “Google must not respond to law enforcement disclosure requests without consultation and in some cases approval from the Israeli authorities, which could cause us to breach international legal orders / law.”

    The second report notes another onerous condition of the Nimbus deal: Israel “can extend the contract up to 23 years, with limited ability for Google to walk away.”

    The initial report notes that Google Cloud chief Thomas Kurian would personally approve the contract with full understanding and acceptance of these risks before the company submitted its contract proposal. Google did not make Kurian available for comment.

    Business for Social Responsibility, a human rights consultancy tapped by Google to vet the deal, recommended the company withhold machine learning and AI technologies specifically from the Israeli military in order to reduce potential harms, the document notes. It’s unclear how the company could have heeded this advice considering the limitations in the contract. The Intercept in 2022 reported that Google Cloud’s full suite of AI tools was made available to Israeli state customers, including the Ministry of Defense.

    BSR did not respond to a request for comment.

    The first internal Google report makes clear that the company worried how Israel might use its technology. “If Google Cloud moves forward with the tender, we recommend the business secure additional assurances to avoid Google Cloud services being used for, or linked to, the facilitation of human rights violations.”

    It’s unclear if such assurances were ever offered.

    Related

    U.S. Companies Honed Their Surveillance Tech in Israel. Now It’s Coming Home.

    Google has long defended Project Nimbus by stating that the contract “is not directed at highly sensitive, classified or military workloads relevant to weapons or intelligence services.” The internal materials note that Project Nimbus will entail nonclassified workloads from both the Ministry of Defense and Shin Bet, the country’s rough equivalent of the FBI. Classified workloads, one report states, will be handled by a second, separate contract code-named “Natrolite.” Google did not respond when asked about its involvement in the classified Natrolite project.

    Both documents spell out that Project Nimbus entails a deep collaboration between Google and the Israeli security state through the creation of a Classified Team within Google. This team is made up of Israeli nationals within the company with security clearances, designed to “receive information by [Israel] that cannot be shared with [Google].” Google’s Classified Team “will participate in specialized training with government security agencies,” the first report states, as well as “joint drills and scenarios tailored to specific threats.”

    The level of cooperation between Google and the Israeli security state appears to have been unprecedented at the time of the report. “The sensitivity of the information shared, and general working model for providing it to a government agency, is not currently provided to any country by GCP,” the first document says.

    Whether Google could ever pull the plug on Nimbus for violating the company rules or the law is unclear. The company has claimed to The Intercept and other outlets that Project Nimbus is subject to its standard terms of use, like any other Google Cloud customer. But Israeli government documents contradict this, showing the use of Project Nimbus services is constrained not by Google’s normal terms, but a secret amended policy.

    A spokesperson for the Israeli Ministry of Finance confirmed to The Intercept that the amended Project Nimbus terms of use are confidential. Shortly after Google won the Nimbus contract, an attorney from the Israeli Ministry of Finance, which oversaw the deal, was asked by reporters if the company could ever terminate service to the government. “According to the tender requirements, the answer is no,” he replied.

    In its statement, Google points to a separate set of rules, its Acceptable Use Policy, that it says Israel must abide by. These rules prohibit actions that “violate or encourage the violation of the legal rights of others.” But the follow-up internal report suggests this Acceptable Use Policy is geared toward blocking illegal content like sexual imagery or computer viruses, not thwarting human rights abuses. Before the government agreed to abide by the AUP, Google wrote there was a “relatively low risk” of Israel violating the policy “as the Israel government should not be posting harmful content itself.” The second internal report also says that “if there is a conflict between Google’s terms” and the government’s requirements, “which are extensive and often ambiguous,” then “they will be interpreted in the way which is the most advantageous to the customer.”

    International law is murky when it comes to the liability Google could face for supplying software to a government widely accused of committing a genocide and responsible for the occupation of the West Bank that is near-universally considered illegal.

    Related

    Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List”

    Legal culpability grows more ambiguous the farther you get from the actual act of killing. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations. But if Project Nimbus were to be tied directly to the facilitation of a war crime or other crime against humanity, Google executives could hypothetically face criminal liability under customary international law or through a body like the ICC, which has jurisdiction in both the West Bank and Gaza.

    Civil lawsuits are another option: Castellanos-Jankiewicz imagined a scenario in which a hypothetical plaintiff with access to the U.S. court system could sue Google over Project Nimbus for monetary damages, for example.

    Along with its work for the Israeli military, Google through Project Nimbus sells cloud services to Israel Aerospace Industries, the state-owned weapons maker whose munitions have helped devastate Gaza. Another confirmed Project Nimbus customer is the Israel Land Authority, a state agency that among other responsibilities distributes parcels of land in the illegally annexed and occupied West Bank.

    An October 2024 judicial opinion issued by the International Court of Justice, which arbitrates disputes between United Nations member states, urged countries to “take all reasonable measures” to prevent corporations from doing anything that might aid the illegal occupation of the West Bank. While nonbinding, “The advisory opinions of the International Court of Justice are generally perceived to be quite authoritative,” Ioannis Kalpouzos, a visiting professor at Harvard Law and expert on human rights law and laws of war, told The Intercept.

    “Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations.”

    Establishing Google’s legal culpability in connection with the occupation of the West Bank or ongoing killing in Gaza entails a complex legal calculus, experts explained, hinging on the extent of its knowledge about how its products would be used (or abused), the foreseeability of crimes facilitated by those products, and how directly they contributed to the perpetration of the crimes. “Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations,” Kalpouzos said.

    While there have been a few instances of corporate executives facing local criminal charges in connection with human rights atrocities, liability stemming from a civil lawsuit is more likely, said Castellanos-Jankiewicz. A hypothetical plaintiff might have a case if they could demonstrate that “Google knew or should have known that there was a risk that this software was going to be used or is being used,” he explained, “in the commission of serious human rights violations, war crimes, crimes against humanity, or genocide.”

    Getting their day in court before an American judge, however, would be another matter. The 1789 Alien Tort Statute allows federal courts in the United States to take on lawsuits by foreign nationals over alleged violations of international law, but it has been narrowed considerably over the years, and whether U.S. corporations can even be sued under the statute in the first place remains undecided.

    History offers only a few examples of corporate accountability in connection with crimes against humanity. In 2004, IBM Germany donated $4 million to a Holocaust reparations fund in connection with its wartime role supplying computing services to the Third Reich. In the early 2000s, plaintiffs in the U.S. sued dozens of multinational corporations for their work with apartheid South Africa, including the sale of “essential tools and services,” Castellanos-Jankiewicz told The Intercept, though these suits were thrown out following a 2016 Supreme Court decision. Most recently, Lafarge, a French cement company, pleaded guilty in both the U.S. and France following criminal investigations into its business in ISIS-controlled Syria.

    There is essentially no legal precedent as to whether the provision of software to a military committing atrocities makes the software company complicit in those acts. For any court potentially reviewing this, an important legal standard, Castellanos-Jankiewicz said, is whether “Google knew or should have known that its equipment, that its software, was being either used to commit the atrocities or enabling the commission of the atrocities.”

    The Nimbus deal was inked before Hamas attacked Israel on October 7, 2023, igniting a war that has killed tens of thousands of civilians and reduced Gaza to rubble. But that doesn’t mean the company wouldn’t face scrutiny for continuing to provide service. “If the risk of misuse of a technology grows over time, the company needs to react accordingly,” said Andreas Schüller, co-director of the international crimes and accountability program at the European Center for Constitutional and Human Rights. “Ignorance and an omission of any form of reaction to an increasing risk in connection with the use of the product leads to a higher liability risk for the company.”

    Though corporations are generally exempt from human rights obligations under international frameworks, Google says it adheres to the United Nations Guiding Principles on Business and Human Rights. The document, while voluntary and not legally binding, lays out an array of practices multinational corporations should follow to avoid culpability in human rights violations.

    Among these corporate responsibilities is “assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed.”

    The board of directors at Alphabet, Google’s parent entity, recently recommended voting against a shareholder proposal to conduct an independent third-party audit of the processes the company uses “to determine whether customers’ use of products and services for surveillance, censorship, and/or military purposes contributes to human rights harms in conflict-affected and high-risk areas.” The proposal cites, among other risk areas, the Project Nimbus contract. In rejecting the proposal, the board touted its existing human rights oversight processes, citing the U.N. Guiding Principles and Google’s “AI Principles” as reasons no further oversight is necessary. In February, Google amended this latter document to remove prohibitions against weapons and surveillance.

    “The UN guiding principles, plain and simple, require companies to conduct due diligence,” said Castellanos-Jankiewicz. “Google acknowledging that they will not be able to conduct these screenings periodically flies against the whole idea of due diligence. It sounds like Google is giving the Israeli military a blank check to basically use their technology for whatever they want.”

    The post Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Cryptocurrency legislation once seemed to be the rare issue that could draw bipartisan support in Donald Trump’s Washington, thanks to the industry’s prolific donations on both sides of the aisle.

    Then Trump and his family attempted to monetize the presidency through a meme coin and a $2 billion crypto deal involving an Abu Dhabi-backed venture firm.

    Democrats were, suddenly, outraged. Some centrist party members who had treated cryptocurrency with deference even began to walk away.

    Nine Senate Democrats pulled their support for so-called “stablecoin” legislation over the weekend, imperiling the industry’s most likely legislative win this year. Meanwhile, Rep. Maxine Waters, D-Calif., blocked a House hearing on a broader, more ambitious crypto “framework” on Tuesday, leading several Democrats in a walkout.

    The industry is still pushing for a vote on the legislation in the Senate, where Democrats continue to work on a potential compromise. Yet for skeptics who have had their warnings about crypto’s threat to the economy ignored for years, Democrats’ sudden conversion was heartening. They just wished it hadn’t taken Trump to wake the party up.

    “Crypto has been able to buy so many Democrats because there was no organized opposition and thus little downside to politicians selling their vote,” said Jeff Hauser, a longtime critic of the industry and executive director of the Revolving Door Project. “Trump may just cause enough polarization to make crypto skepticism mainstream within the Democratic Party.”

    Sporadic Opposition

    The industry’s bipartisan alliances were on display last May, as the House debated its favorite legislation: the Financial Innovation and Technology for the 21st Century Act.

    Out of 213 Democrats, 71 joined with Republicans to give overwhelming support to FIT21, as it is known, though the bill did not proceed to a vote in the Senate. The legislation is aimed at creating a framework that would largely shield the industry from oversight by the Securities and Exchange Commission, which is viewed as having the sharpest regulatory bite.

    The industry seemed even better positioned this year, thanks to Trump’s election and a record-breaking $197 million spending spree on the 2024 campaigns. All the cash helped knock hostile Democrats out of primaries and propel industry-friendly candidates in the general election.

    Analysts predicted that after the new Congress was sworn in, Democrats skeptical of the industry would hold their tongues for fear of facing well-funded primary challengers. Trump and his family’s rapid move into the industry, though, seems to have changed the calculation for some Democrats.

    The White House has said that the Trump family’s crypto deals raise no ethical concerns because Trump’s business interests are held in a trust that his sons run.

    Trump’s Schemes

    In September, Trump’s sons helped launch a crypto marketplace called World Liberty Financial.

    Hours before his inauguration, the Trump Organization launched a Trump meme coin that has now generated more than $320 million in transaction fees, according to a recent analysis.

    Then, last week, World Liberty Financial announced the massive deal with the Emirati firm, which planned to use the company’s tokens to make a transaction with the crypto exchange Binance, according to a report in the New York Times.

    By that point, the bipartisan mood on Capitol Hill was already beginning to sour.

    Waters expressed openness to stablecoin legislation last year. In March, however, the Trumps announced that they would be issuing a stablecoin of their own. On April 2, Waters tried to amend a stablecoin bill in the House Financial Services Committee, where she serves as ranking member, to prohibit the Trump family from issuing one that benefits the president.

    Republicans rejected her bid, and the bill passed out of committee with support from several Democrats, including some who have drawn hundreds of thousands of dollars in donations from the industry.

    Then, as news of World Liberty Financial’s Abu Dhabi deal circulated, Sen. Ruben Gallego, D-Ariz., led eight other Democrats in announcing Saturday that they were backing off their support for a similar stablecoin bill in their chamber, imperiling its chances of overcoming a filibuster.

    Though Gallego and several of his colleagues had just voted for the bill in committee, they now said it “has numerous issues that must be addressed, including adding stronger provisions on anti-money laundering, foreign issuers, national security, preserving the safety and soundness of our financial system, and accountability for those who don’t meet the act’s requirements.”

    Gallego’s statement may have had a special sting for the industry, which spent $10 million in super PAC funds helping him win his Senate race last year.

    In a joint statement Monday, three leading crypto trade organizations said they still hoped the Senate would advance the legislation.

    “A comprehensive regulatory framework will enable widespread and increased stablecoin adoption,” the groups said, “which is essential to cementing U.S. dollar dominance in the digital economy.”

    According to Axios, Senate Majority Leader John Thune, R-S.D., is still planning to hold a vote on the stablecoin bill on Thursday, and the measure’s sponsors are hoping to strike a deal to revive the legislation.

    Waters Walks Out

    On Tuesday, Waters ratcheted up pressure on the industry by objecting to a joint House Financial Services and Agriculture committee meeting on the newest iteration of the FIT21 bill.

    “I object to this joint hearing, because of the corruption of the president of the United States and his ownership of crypto and his oversight of all the agencies. I object,” Waters said.

    Rep. French Hill, R-Ark., the chair of the Financial Services Committee, said the hearing had been negotiated with Democrats for weeks.

    “Through her actions today, the ranking member has thrown partisanship into what has historically been a strong, good bipartisan relationship,” Hill said.

    Republicans and some of the committees’ Democrats continued holding a more informal roundtable, as Waters marched over to a different building for a hearing of her own.

    At Waters’s breakaway hearing, one witness said Congress shouldn’t just take a hard line on the Trumps, since some of World Liberty Financial’s most problematic practices are mirrored by other leading companies.

    World Liberty Financial markets itself as the future of decentralized finance. On its website, the company says that its governance system, based on a special token that can be bought but not traded, “ensures that every $WLFI owner has an equal voice. From submitting proposals to casting votes, your participation is key to shaping our decentralized platform.”

    Yet it is controlled by a small set of insiders who stand to profit at the expense of retail customers, according to Mark Hays, associate director for cryptocurrency and financial technology at Americans for Financial Reform and Demand Progress.

    “While it is entirely right for members of Congress to raise concerns about how actions of the Trump presidency distort good policymaking and threaten the public interest, none of us here should lose sight of the fact that, in many ways, the Trump family is simply copying common crypto business practices,” Hays said. “In other words, many of the potential issues we see with the Trump family’s crypto practices are a feature — not a bug — of the crypto industry.”

    Neither the Trump Organization nor World Liberty Financial immediately responded to a request for comment on the company’s governance structure.

    Waters’s effort to disrupt the House hearing pointed to a continuing divide among Democrats. While six other Democrats joined her, several remained at the main hearing featuring industry witnesses, including Rep. Angie Craig of Minnesota, the House Agriculture Committee ranking member.

    Democrats who stayed drew supportive statements from one of the industry’s biggest players, Coinbase.

    Craig and the other Democrats who stayed did, however, criticize the Trump family’s deals.

    “It is corrupt, it is wrong, and it makes this process of coming together to regulate crypto more partisan than it needs to be,” Craig said.

    In the Senate, Gallego and his colleagues’ statement focused on the substance of the stablecoin bill rather than on the Trumps’ attempts to enrich themselves.

    In an interview with The Intercept, Waters predicted that Democrats’ focus will soon shift to Trump. “It’s coming,” she said. “I think that, yes, they had some real issues, but I think that with more publicity around the corruption, they’re going to pay more attention.”

    The post Democrats Woke Up to Trump’s Crypto Grift. Will They Stop Other Scammers? appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Sen. Elizabeth Warren is calling for President Trump’s pick for Under Secretary of the Army to sell his stock in a defense contractor that experts say would pose a clear conflict of interest.

    In a federal ethics agreement first reported by The Intercept, Michael Obadal, Trump’s pick for the second most powerful post at the Army, acknowledged holding equity in Anduril Industries, where he has worked for two years as an executive. Obadal said that, contrary to ethics norms, he will not divest his stock, which he valued at between $250,000 and $500,000.

    In a letter shared with The Intercept in advance of Obadal’s confirmation hearing Thursday, Warren, D-Mass., says Obadal must divest from Anduril, calling the current arrangement a “textbook conflict of interest.”

    Warren, who sits on the Senate Armed Services Committee, writes that Obadal’s stock holdings “will compromise your ability to serve with integrity, raising a cloud of suspicion over your contracting and operational decision.”

    “By attempting to serve in this role with conflicts of interest, you risk spending taxpayer dollars on wasteful DoD contracts that enrich wealthy contractors but fail to enhance Americans’ national security,” Warren writes.

    A more detailed financial disclosure form obtained by The Intercept shows Anduril is not the full extent of Obadal’s military investments. According to this document, a retirement investment account belonging to Obadal holds stock in both General Dynamics, which does billions of dollars of business with the Army, and Howmet Aerospace, a smaller firm. While nominees are not required to list the precise value of such investments, Obadal says his holdings in General Dynamics and Howmet are worth between $2,000 and $30,000.

    Don Fox, former acting director of the U.S. Office of Government Ethics, told The Intercept that neither stock should be exempt from conflict of interest considerations under federal law. “The fact that they are within either a traditional or Roth IRA doesn’t impact the conflict of interest analysis,” he said. “Not sure why he would be allowed to keep those.”

    “A DoD contractor is a DoD contractor,” said Fox. “The degree of their business with DoD or what they do isn’t material. A lot of people were surprised for example that Disney was/is a DoD contractor. For a Senate confirmation position they would have had to divest.”

    In addition to these defense contractors, Obadal holds stock in other corporations that do business with the Pentagon, including Microsoft, Amazon, Thermo Fisher Scientific, and Cummins, which manufactures diesel engines for the Army’s Bradley Fighting Vehicle. None of these companies are mentioned in Obadal’s ethics letter detailing which assets he will and won’t dispose of if confirmed. In his more detailed disclosure document, known as a Form 278, Obadal explicitly notes he will be able to exercise his shares in Anduril “if there is an equity event such as the sale of the company, or the company becoming a publicly-traded entity,” potentially netting him a large payout. Anduril was most recently reported to be valued by private investors at over $28 billion.

    In addition to divesting from Anduril, Warren’s letter asks Obadal to get rid of the stocks in these other firms, commit to recusing himself entirely from any Anduril-related matters at the Army, and pledge to avoid working for or lobbying on behalf of the defense sector for a period of four years after leaving the Department of Defense. “By making these commitments, you would increase Americans’ trust in your ability to serve the public interest during your time at the Army,” Warren writes, “rather than the special interests of large DoD contractors.”

    The post Trump Army Appointee Should Sell His Anduril Stock, Sen. Warren Demands appeared first on The Intercept.

    This post was originally published on The Intercept.

  • We all know that excessive carbon dioxide isn’t good for the planet. In fact, it’s one of the biggest drivers of climate change. As a greenhouse gas, carbon dioxide traps heat in the Earth’s atmosphere. In small amounts, that’s normal—even necessary. But today, there’s far too much of it, and that excess is heating the planet at an unsustainable rate. The result? More extreme weather events, including storms, droughts, and wildfires—threatening both wildlife and human life.

    From emissions to innovation

    So, how do we keep carbon dioxide from reaching the atmosphere in the first place? The obvious solution is to cut emissions. But another exciting option? Recycling carbon into something new. Right now, a handful of pioneering companies are doing just that. 

    In 2021, for example, New York-based Air Company launched a vodka made with carbon dioxide. Yes, you read that right. The brand uses captured carbon—sourced from the air or industrial sites—along with hydrogen created via electrolysis to produce ethanol. Combine that with water, and you get vodka.

    It’s not alone. Finnish startup Solar Foods captures carbon dioxide and combines it with hydrogen—generated using renewable energy like solar power—to produce a novel protein called Solein. This nutrient-dense powder can be used in everything from meat alternatives to egg-free foods.

    And then there’s Savor, a California-based company transforming captured carbon into versatile, sustainable fats. Most recently, it unveiled its first-ever butter made using this technology. Already, Michelin-starred chefs are taking notice, and you may soon see it on menus at acclaimed restaurants like SingleThread and One65. Beyond butter, Savor’s carbon-based fats could one day replace animal fats, palm oil, and more.

    But how does this science actually work? What does carbon-based butter taste like? And does it function like traditional fats in our bodies? We sat down with Savor CEO Kathleen Alexander to find out.

    Turning carbon into butter with Savor CEO Kathleen Alexander

    VegNews: Let’s start with the basics—how exactly do you turn carbon into butter?

    Kathleen Alexander: Savor has developed a pioneering process that creates real fats without relying on traditional animal or plant agriculture. Our technology begins with the most fundamental building blocks of life—carbon gases like carbon dioxide and methane.

    VN: And what do you do with those gases?

    KA: Through a carefully controlled process involving heat and pressure, we transform these simple carbon gases directly into carbon chains. These chains are then converted into fatty acids—the essential building blocks of fats and oils—and ultimately into complete fat molecules.

    VN: That skips a lot of the usual steps in how we get fat today, right?

    KA: Exactly. This direct approach bypasses the lengthy traditional agricultural cycle where plants capture carbon, animals consume those plants, and humans then harvest, process, and refine those resources into usable fats. By eliminating this conventional pathway, our process dramatically reduces land use, water consumption, and greenhouse gas emissions.

    VN: And the final product—does it work the same way in our bodies?

    KA: The fats we produce are chemically identical to those you consume daily, meaning they provide the same nutritional fuel for your body.

    VN: Are there any nutritional advantages?

    KA: What makes our fats distinctive is their composition—we produce higher concentrations of both medium-chain and odd-chain fatty acids compared to most agricultural fats. These particular fatty acid profiles have been associated with positive health outcomes, and we are currently conducting nutritional studies to better understand their potential benefits.

    VN: That sounds incredibly versatile. Can you adjust the fat depending on what it’s needed for?

    KA: Our technological platform offers unparalleled versatility. We can match the performance characteristics of virtually any type of fat—from animal fats and dairy fats to vegetable oils, tropical fats, and even specialty oils used in cosmetics—all using the same core technology. This flexibility, combined with our position in the broader energy ecosystem and our adaptability regarding feedstock, makes Savor uniquely positioned to meet diverse industry needs with sustainable fat solutions.

    VN: Let’s talk about scale. A lot of alternative fat producers struggle to expand—how does Savor plan to grow sustainably?

    KA: A peer-reviewed study published in Nature Sustainability—authored by myself, my co-founder Ian McKay, and others before we established the company—describes how we can achieve emissions intensities that are dramatically lower than traditional agriculture—much lower than 0.8 gCO2e/kcal at commercial scale.

    VN: Are you verifying that independently?

    KA: We are currently working with a third party to complete a lifecycle assessment (LCA) for our first commercial facility, which is in the design phase. As we continue to scale production capacity at our pilot plant, we may perform a formal LCA there as well, but our current efforts are focused on lifecycle assessment for commercial production.

    VN: Beyond sustainability, what else sets Savor apart in terms of industry potential?

    KA: Scalability and flexibility make Savor’s fat solutions uniquely positioned to meet industry needs. Our ability to match the performance of animal fats, dairy fats, vegetable oils, tropical fats, as well as specialty oils used in the cosmetics industry—all with the same technological platform—sets us apart.

    VN: Are food companies already taking an interest?

    KA: Our proprietary technology has already attracted multinational consumer packaged goods companies, whose R&D teams are working on ingredient innovation projects that can leverage Savor’s unique ability to create customizable fats and oils. The company is actively negotiating joint development agreements with some of these partners, who have been particularly impressed by the versatility and tunability of fatty acid profiles that Savor’s platform can produce—capabilities that extend well beyond the company’s initial dairy-fat mimicking formulation.

    VN: All that science is impressive—but let’s get to the real test: taste. How does your butter compare?

    KA: Our products are made to be direct substitutes in some of the most common applications and recipes. This is true whether our products replace existing fats, or are customized to meet a specific purpose, or if they are integrated into more complex products like butter.

    VN: So can it pass the croissant test?

    KA: Our initial butter formulation has properties that are amazingly close to dairy butter. Impressively, it can “croissant” and can be a 1:1 replacement in most baking applications, plus many other popular culinary uses for butter.

    VN: And finally, what’s the feedback been like?

    KA: Even the most discerning guests at our launch dinners couldn’t tell the difference between our butter and conventional butter.

    This post was originally published on VegNews.com.

  • By Losirene Lacanivalu, of the Cook Islands News

    A leading Cook Islands environmental lobby group is hoping that the Cook Islands government will speak out against the recent executive order from US President Donald Trump aimed at fast-tracking seabed mining.

    Te Ipukarea Society (TIS) says the arrogance of US president Trump to think that he could break international law by authorising deep seabed mining in international waters was “astounding”, and an action of a “bully”.

    Trump signed the America’s Offshore Critical Minerals and Resources order late last month, directing the National Oceanic and Atmospheric Administration (NOAA) to allow deep sea mining permits.

    The order states: “It is the policy of the US to advance United States leadership in seabed mineral development.”

    NOAA has been directed to, within 60 days, “expedite the process for reviewing and issuing seabed mineral exploration licenses and commercial recovery permits in areas beyond national jurisdiction under the Deep Seabed Hard Mineral Resources Act.”

    It directs the US science and environmental agency to expedite permits for companies to mine the ocean floor in the US and international waters.

    In addition, a Canadian mining company — The Metals Company — has indicated that they have applied for a permit from Trump’s administration to start commercially mining in international waters.

    The mining company had been unsuccessful in gaining a commercial mining licence through the International Seabed Authority (ISA).

    ‘Arrogance of Trump’
    Te Ipukarea Society’s technical director Kelvin Passfield told Cook Islands News: “The arrogance of Donald Trump to think that he can break international law by authorising deep seabed mining in international waters is astounding.

    “The United States cannot pick and choose which aspects of the United Nations Law of the Sea it will follow, and which ones it will ignore. This is the action of a bully,” he said.

    “It is reckless and completely dismissive of the international rule of law. At the moment we have 169 countries, plus the European Union, all recognising international law under the International Seabed Authority.

    “For one country to start making new international rules for themselves is a dangerous notion, especially if it leads to other States thinking they too can also breach international law with no consequences,” he said.

    TIS president June Hosking said the fact that a part of the Pacific, the Clarion-Clipperton Zone (CCZ), was carved up and shared between nations all over the world was yet another example of “blatantly disregarding or overriding indigenous rights”.

    “I can understand why something had to be done to protect the high seas from rogues having a ‘free for all’, but it should have been Pacific indigenous and first nations groups, within and bordering the Pacific, who decided what happened to the high seas.

    “That’s the first nations groups, not for example, the USA as it is today.”

    South American countries worried
    Hosking highlighted that at the March International Seabed Authority (ISA) assembly she attended it was obvious that South American countries were worried.

    “Many have called for a moratorium. Portugal rightly pointed out that we were all there, at great cost, just for a commercial activity. The delegate said, ‘We must ask ourselves how does this really benefit all of humankind?’”

    Looking at The Metals Company’s interests to commercially mine in international waters, Hosking said, “I couldn’t help being annoyed that all this talk assumes mining will happen.

    “ISA was formed at a time when things were assumed about the deep sea e.g. it’s just a desert down there, nothing was known for sure, we didn’t speak of climate crisis, waste crisis and other crises now evident.

    “The ISA mandate is ‘to ensure the effective protection of the marine environment from the harmful effects that may arise from deep seabed related activities’.

    “We know much more (but still not enough) to consider that effective protection of the marine environment may require it to be declared a ‘no go zone’, to be left untouched for the good of humankind,” she added.

    Meanwhile, technical director Passfield added: “The audacity of The Metals Company (TMC) to think they can flaunt international law in order to get an illegal mining licence from the United States to start seabed mining in international waters is a sad reflection of the morality of Gerard Barron and others in charge of TMC.

    ‘What stops other countries?’
    “If the USA is allowed to authorise mining in international waters under a domestic US law, what is stopping any other country in the world from enacting legislation and doing the same?”

    He said that while the Metals Company may be frustrated at the amount of time that the International Seabed Authority is taking to finalise mining rules for deep seabed mining, “we are sure they fully understand that this is for good reason. The potentially disastrous impacts of mining our deep ocean seabed need to be better understood, and this takes time.”

    He said that the technology and infrastructure to mine are not yet in place.

    “We need to take as much time as we need to ensure that if mining proceeds, it does not cause serious damage to our ocean. Their attempts to rush the process are selfish, greedy, and driven purely by a desire to profit at any cost to the environment.

    “We hope that the Cook Islands Government speaks out against this abuse of international law by the United States.” Cook Islands News has reached out to the Office of the Prime Minister and Seabed Minerals Authority (SBMA) for comment.

    Republished from the Cook Islands News with permission.

    This post was originally published on Asia Pacific Report.

  • Our future rests on our capacity to make digital technology more boring.

    This post was originally published on Dissent Magazine.

  • Technology will soon be able to do everything we do – only better. How should we respond?

    Right now, most big AI labs have a team figuring out ways that rogue AIs might escape supervision, or secretly collude with each other against humans. But there’s a more mundane way we could lose control of civilisation: we might simply become obsolete. This wouldn’t require any hidden plots – if AI and robotics keep improving, it’s what happens by default.

    How so? Well, AI developers are firmly on track to build better replacements for humans in almost every role we play: not just economically as workers and decision-makers, but culturally as artists and creators, and even socially as friends and romantic companions. What place will humans have when AI can do everything we do, only better?

    Continue reading…

    This post was originally published on Human rights | The Guardian.

    Donald Trump’s latest adjustments to automobile tariffs were billed as relief for the Big Three carmakers, but a leading analyst said Wednesday that Elon Musk’s Tesla will benefit most while others will be stuck “in quicksand” — potentially creating a slight advantage for a company whose CEO donated nearly $300 million to Trump and other Republican causes during last year’s election.

    Trump’s first round of massive tariffs fueled widespread attention to the fact that, of the major carmakers, Tesla seemed to be the best protected from the direct impact of tariffs.

    Trump issued a new executive order Tuesday scaling back some auto tariffs, a move sought for weeks by domestic automakers who had been whacked with massive duties on imported car parts. While Trump’s latest tariff seesaw could provide temporary relief to some of the Big Three, however, analyst Dan Ives of Wedbush Securities said Tesla still has an edge.

    “Helps Tesla a lot, Detroit Big 3 still in quicksand,” Wedbush said in an email.

    Wedbush was responding to the complicated impact of two executive orders Trump issued Tuesday, one meant to eliminate “cumulative” tariffs on imported cars and parts, and another providing temporary relief on imported auto parts for cars primarily made in the U.S.

    Wedbush said that overall, the latest changes should help Tesla more than the other carmakers, who have greeted the latest Trump announcement with cautious praise.

    “While not completely unscathed, Tesla is in the best position to weather this storm vs. the Big 3 and other foreign automakers as it localized 85% to 90% of its supply chain in the US and will be exempt from many of these tariffs,” Wedbush said.

    Neither the White House nor Tesla immediately responded to requests for comment.

    Tesla models dominate the top of the chart for the percentage of domestic content in their vehicles, with other ostensibly American auto manufacturers falling far behind. According to the “2024 Made in America Auto Index,” produced by American University’s Kogod School of Business, Tesla occupied the top five spots by percentage of their car parts manufactured in the U.S. or Canada.

    “Their Model 3 Performance Model took the no. 1 spot, followed by the Model Y at no. 2 and the new Cybertruck at no. 3. Tesla’s Models S and X tied for the no. 4 spot,” said a webpage on the index. The Ford Mustang is tied for the fourth slot with two Tesla models.

    The new Trump orders offer relief from earlier tariffs by effectively removing penalties for cars that are made up of 85 percent or more domestic content.

    According to the Kogod chart, only two cars have 85 percent or greater U.S. and Canadian content: the Tesla Model 3 and Tesla Model Y. The Tesla Cybertruck is not far behind at 82.5 percent.

    Despite having a relative edge over other automakers, analysts have still warned that Trump’s initial round of massive tariffs posed a threat to Musk because they could lead to lower economic growth overall. Musk has been one of the loudest voices calling on Trump to scale back his tariffs on cars and other items. In a Tesla earnings call last week, he said that he would continue to advocate for lower tariffs.

    Trump’s tariff tweaks were not enough to dissuade Wedbush from sounding a pessimistic note about the future of the auto industry. Ives said the sticker price of an average car could go up $5,000 to $10,000.

    “This continues to be a Twilight Zone situation for the entire automaker industry which continues to be paralyzed [by] further cost increases and uncertainties that will change the paradigm for the US auto industry for years to come [if] this stays in effect,” Wedbush said. “We believe the auto tariffs in their current form add up to $100 billion of costs annually to the auto industry and will essentially get passed directly onto the consumer and clearly erode demand on Day 1 of tariffs.”

    Musk has long been a beneficiary of government largess — a position he has looked to solidify with Trump’s ascendancy.

    When an incipient protest movement caused Tesla to take a major financial hit following Musk’s heavy involvement in Trump’s far-right government, the two men staged what amounted to an unprecedented live ad on the White House lawn for the electric cars. Trump declared at the event that vandalism tied to the protest movement against Tesla was domestic terrorism.

    The protest movement against Musk and Tesla has continued to grow.

    The post Trump’s Auto Tariff Relief “Helps Tesla a Lot” — Leaving Other Carmakers Behind appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Rita Murad, a 21-year-old Palestinian citizen of Israel and student at the Technion Israel Institute of Technology, was arrested by Israeli authorities in November 2023 after sharing three Instagram stories on the morning of October 7. The images included a picture of a bulldozer breaking through the border fence in Gaza and a quote: “Do you support decolonization as an abstract academic theory? Or as a tangible event?” She was suspended from university and faced up to five years in prison.

    In recent years, Israeli security officials have boasted of a “ChatGPT-like” arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas’s bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians within Israel and east Jerusalem for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp or sharing images from Gaza on their Instagram stories.

    When the New York Times covered Murad’s saga last year, the journalist Jesse Barron wrote that, in the U.S., “There is certainly no way to charge people with a crime for their reaction to a terrorist attack. In Israel, the situation is completely different.”

    Soon, that may no longer be the case.

    Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military’s policies in Gaza. There is Mahmoud Khalil, a green card holder married to a U.S. citizen, taken from his Columbia University residence and sent to a detention center in Louisiana. There is Rümeysa Öztürk, a Turkish doctoral student at Tufts disappeared from the streets of Somerville, Massachusetts, by plainclothes officers allegedly for co-authoring an op-ed calling on university administrators to heed student protesters’ demands. And there is Mohsen Mahdawi, a Columbia philosophy student arrested by ICE agents outside the U.S. Citizenship and Immigration Services office where he was scheduled for his naturalization interview.

    In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech.

    In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.” The arsenal was built in concert with American tech companies over the past two decades and already deployed, in part, within the U.S. immigration system.

    Rubio’s “Catch and Revoke” initiative emerges from long-standing collaborations between tech companies and increasingly right-wing governments eager for their wares. The AI industry’s business model hinges on unfettered access to troves of data, which makes less-than-democratic contexts, where state surveillance is unconstrained by judicial, legislative, or public oversight, particularly lucrative proving grounds for new products. The effects of these technologies have been most punitive on the borders of the U.S. or the European Union, like migrant detention centers in Texas or Greece. But now the inevitable is happening: They are becoming popular domestic policing tools.

    Israel was one early test site. As Israeli authorities expanded their surveillance powers to clamp down on rising rates of Palestinian terrorism in the early 2010s, U.S. technology firms flocked to the region. In exchange for first digital and then automated surveillance systems, Israel’s security apparatus offered CEOs troves of the information economy’s most prized commodity: data. IBM and Microsoft provided software used to monitor West Bank border crossings. Palantir offered predictive policing algorithms to Israeli security forces. Amazon and Google would sign over cloud computing infrastructure and AI systems. The result was a surveillance and policing dragnet that could entangle innocent people alongside those who posed credible security threats. Increasingly, right-wing ruling coalitions allowed it to operate with less and less restraint.

    With time and in partnership with many of the same companies, the U.S. security state built its own surveillance capacities to scale.

    Not long ago, Silicon Valley preached a mantra of globalization and integration. It was antithetical to the far-right’s nationalistic agenda, but it was good for business in an economy that hinged on the skilled and unskilled labor of foreigners. So when Trump signed an executive order banning immigration from seven Muslim-majority countries and subjecting those approved for visas to extra screening in January 2017, tech executives and their employees dissented.

    Google co-founder Sergey Brin, an immigrant from the Soviet Union, joined demonstrations at the San Francisco airport to protest Trump’s travel ban. Mark Zuckerberg cited his grandparents, Jewish refugees from Poland, as grounds for his opposition to the policy. Sam Altman also called on industry leaders to take a stand. “The precedent of invalidating already-issued visas and green cards should be extremely troubling for immigrants of any country,” he wrote on his personal blog. “We must object, or our inaction will send a message that the administration can continue to take away our rights.”

    Many tech workers spent the first Trump presidency protesting these more sinister entailments of a data-driven economy. Over the following year, Microsoft, Google, and Amazon employees would stage walkouts and circulate petitions demanding an end to contracts with the national security state. The pressure yielded image restoration campaigns. Google dropped a bid for a $10 billion Defense Department contract. Microsoft promised their software and services would not be used to separate families at the border.

    But the so-called tech resistance belied an inconvenient truth. Silicon Valley firms supplied the software and computing infrastructure that enabled Trump’s policies. Companies like Babel and Palantir entered into contracts with ICE in 2015, becoming the bread and butter of ICE’s surveillance capacities by mining personal data from thousands of sources for government authorities, converting it into searchable databases, and mapping connections between individuals and organizations. By 2017, conglomerates like Amazon, Microsoft, and Google were becoming essential too, signing over the cloud services to host mounds of citizens’ and residents’ personal information.

    Even as some firms pledged to steer clear of contracts with the U.S. security state, they continued working abroad, and especially in Israel and Palestine. Investigative reporting over the last year has brought more recent exchanges to light. Deals between U.S. companies and the Israeli military ramped up after October 7, according to leaked documents from Google and Microsoft. Intelligence agencies relied on Microsoft Azure and Amazon Web Services to host surveillance data and used Google’s Gemini and OpenAI’s ChatGPT to cull through and operationalize much of it, often playing direct roles in operations — from arrest raids to airstrikes — across the region.

    These contracts gave U.S. technology conglomerates the chance to refine military and homeland security systems abroad until Trump’s reelection signaled they could do so with little pushback at home. OpenAI changed its terms of use last year to allow militaries and security forces to deploy their systems for “national security purposes.” Google did the same this February, removing language saying it wouldn’t use its AI for weapons and surveillance from its “public ethos policy.” Meta also announced U.S. contractors could use its AI models for “national security” purposes.

    Technology firms are committed to churning out high-risk products at a rapid pace. Which is why privacy experts say their products can turbocharge the U.S. surveillance state at a time when constitutional protections are eroding.

    “It’s going to give the government the impression that certain forms of surveillance are now worth deploying when before they would have been too resource intensive,” Ben Wizner, director of the ACLU’s Speech, Privacy, and Technology Project, offered over the phone last week. “Now that you have large language models, you know, the government may say why not store thousands of hours of conversations just to run an AI tool through them and decide who you don’t want in your country.” 

    The parts are all in place. According to recent reports, Palantir is building ICE an “immigrationOS” that can generate reports on immigrants and visa holders — including what they look like, where they live, and where they travel — and monitor their location in real time. ICE will use the database combined with a trove of other AI tools to surveil immigrants’ social media accounts, and to track down and detain “antisemites” and “terrorists,” according to a recent announcement by the State Department. “We need to get better at treating this like a business,” Acting ICE Director Todd Lyons said in a speech at the 2025 Border Security Expo in Phoenix earlier this month, “like [Amazon] Prime, but with human beings.”

     

    It is important to remember that many of the proprietary technologies private companies are offering the U.S. surveillance state are flawed. Content moderation algorithms deployed by Meta often flag innocuous content as incendiary, especially Arabic language posts. OpenAI’s large language models are notorious for generating hallucinatory statements and mistranslating phrases from foreign languages into English. Stories of error abound in recent raids and arrests, from ICE officials mistaking Mahmoud Khalil for a student visa holder to citizens, lawful residents, and tourists with no criminal record being rounded up and deported.

    But where AI falters technically, it delivers ideologically. We see this in Israel and Palestine, as well as other contexts marked by relatively unchecked government surveillance. The algorithms embraced by Israel’s security forces remain rudimentary. But officials have used them to justify increasingly draconian policies. The Haifa-based human rights organization Adalah says there are hundreds of Palestinians with no criminal record or affiliation with militant groups held behind bars because right-wing activists and politicians instructed police forces to search their phones and social media pages and label what they said, shared, or liked online as “incitement to terrorism” or “support of terrorism.”

    Now we hear similar stories in American cities, where First Amendment protections and due process are disintegrating. The effects were nicely distilled by Ranjani Srinivasan, an Indian Ph.D. student at Columbia who self-deported after ICE officials showed up at her door and cancelled her legal status. From refuge in Canada, she told the New York Times she was fearful of the U.S.’s expanded algorithmic arsenal. “I’m fearful that even the most low-level political speech or just doing what we all do — like shout into the abyss that is social media — can turn into this dystopian nightmare,” Srinivasan said, “where somebody is calling you a terrorist sympathizer and making you, literally, fear for your life and your safety.”

    It is frightening to think that all this happened in Trump’s first 100 days in office. But corporate CEOs have been working with militaries and security agencies to sediment this status quo for years now. The visible human cost of these exchanges may spawn the opposition needed to head off more repression. But for now, the groundwork is laid for the U.S. surveillance state to finally operate at scale.

    The post U.S. Companies Honed Their Surveillance Tech in Israel. Now It’s Coming Home. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Part Three of a three-part Solidarity series

    COMMENTARY: By Eugene Doyle

     

    This post was originally published on Asia Pacific Report.

  • Finland, Poland, Estonia, Latvia and Lithuania cite rising threats from Russia to justify once again using one of the world’s most indiscriminate weapons

    Rights groups have expressed alarm and warned of a “slippery slope” of again embracing one of the world’s most treacherous weapons, after five European countries said they intend to withdraw from the international treaty banning antipersonnel landmines.

    In announcing their plans earlier this year, Finland, Poland, Estonia, Latvia and Lithuania all pointed to the escalating military threat from Russia. In mid-April, Latvia’s parliament became the first to formally back the idea, after lawmakers voted to pull out of the 1997 Mine Ban Treaty, which bans the use, production and stockpiling of landmines designed for use against humans.

    Continue reading…

    This post was originally published on Human rights | The Guardian.

  • An ocean conservation non-profit has condemned the United States President’s latest executive order aimed at boosting the deep sea mining industry.

    President Donald Trump issued the “Unleashing America’s offshore critical minerals and resources” order on Thursday, directing the National Oceanic and Atmospheric Administration (NOAA) to allow deep sea mining.

    The order states: “It is the policy of the US to advance United States leadership in seabed mineral development.”

    NOAA has been directed to, within 60 days, “expedite the process for reviewing and issuing seabed mineral exploration licenses and commercial recovery permits in areas beyond national jurisdiction under the Deep Seabed Hard Mineral Resources Act.”

    Ocean Conservancy said the executive order is a result of deep sea mining frontrunner, The Metals Company, requesting US approval for mining in international waters, bypassing the authority of the International Seabed Authority (ISA).

    US not ISA member
    The ISA is the United Nations agency responsible for coming up with a set of regulations for deep sea mining across the world. The US is not a member of the ISA because it has not ratified the UN Convention on the Law of the Sea (UNCLOS).

    “This executive order flies in the face of NOAA’s mission,” Ocean Conservancy’s vice-president for external affairs Jeff Watters said.

    “NOAA is charged with protecting, not imperiling, the ocean and its economic benefits, including fishing and tourism; and scientists agree that deep-sea mining is a deeply dangerous endeavor for our ocean and all of us who depend on it,” he said.

    He said areas of the US seafloor where test mining took place more than 50 years ago still had not fully recovered.

    “The harm caused by deep sea mining isn’t restricted to the ocean floor: it will impact the entire water column, top to bottom, and everyone and everything relying on it.”

    This article is republished under a community partnership agreement with RNZ.

    This post was originally published on Asia Pacific Report.

  • An obscure nonprofit group that gave $100,000 to Donald Trump’s inaugural committee was bankrolled by an artificial intelligence company whose CEO was an unindicted co-conspirator in Trump’s election interference case in Georgia, the company’s president confirmed to The Intercept.

    Unlike more established megadonors such as Boeing or the Heritage Foundation, however, the Institute for Criminal Justice Fairness was created only months ago and has little public profile beyond a barebones website.

    The institute was funded by the startup Tranquility AI, according to company co-founder David Harvilicz, who has pitched Trump administration officials on using its software to speed up deportations of “illegals.”

    The purpose of the institute’s donation to the inaugural fund, Harvilicz said, was “to meet people that were there who might be policymakers who would want to eventually attend some of our events. It was mostly to meet people.”

    The company’s other co-founder is CEO James Penrose, a former National Security Agency leader who has drawn scrutiny — and a grand jury subpoena — for his role in Trump’s attempt to overturn the 2020 presidential election results in Georgia.

    The donation from the Institute for Criminal Justice Fairness was among a slew of gifts to the Trump inaugural committee disclosed over the weekend. The inaugural committee pulled in a record $239 million haul.

    The contribution highlights the loose rules that allowed nonprofits and corporations to make unlimited donations to the Trump inaugural committee, a situation that critics say creates the perception that donations can be used to curry favor with the administration.

    “Because inaugural funds are very loosely regulated, they present an ideal, problematic opportunity for wealthy special interests to ingratiate themselves with an incoming presidential administration,” said Saurav Ghosh, director of federal campaign finance reform at the nonprofit Campaign Legal Center. “This is particularly true for Trump, who has made clear that he views his office and government in general as largely transactional; donations and support will be rewarded.”

    Following the Money

    There are no signs on the Institute for Criminal Justice Fairness’s sparse website of its relationship to Tranquility AI, a startup company backed by a trio of venture capital funds.

    The institute was created at the end of September, according to incorporation records in Virginia, and says that it is “dedicated to educating the public and advocating to policymakers on the benefits of utilizing artificial intelligence solutions in law enforcement, the military, and government.”

    One clue linking the Institute for Criminal Justice Fairness and Tranquility AI, however, came in the Trump inaugural committee’s Sunday filing with the Federal Election Commission. The address given for the group’s December 18 donation was the same as Harvilicz’s California home, which burned down weeks later in the Palisades Fire.

    Harvilicz confirmed in a Monday phone call that the company funded the Institute for Criminal Justice Fairness.

    “The nonprofit was designed to help people understand how AI can be used in a positive way to help bring about more fair and equitable criminal justice outcomes,” Harvilicz said.

    Harvilicz said he was unaware that his home address had been used in the FEC filing.

    There appears to be no federal statute banning companies from using so-called straw donors to contribute to inaugural committees, although at least one member of the House of Representatives has introduced legislation seeking to ban the practice.

    Ghosh, the campaign finance watchdog, urged Congress to force “meaningful” transparency.

    “Inaugural funds are required to report their donors, but that disclosure is meaningless if the true, original donors aren’t disclosed,” Ghosh said. “Congress and the FEC should ensure meaningful transparency around these inaugural fund donations, to ensure that special interests aren’t able to secretly curry favor with an incoming president, further marginalizing the voices of everyday Americans in our democracy.”

    “War Zones to Courtrooms”

    Although a relatively new company, Tranquility AI has big ambitions in the world of government contracting, both at the state and federal levels.

    The company markets its signature software product as a time-saving device for local law enforcement agencies, and has expressed interest in national security and immigration work. On its website, Tranquility AI says that it wants to aid decision-makers working from “war zones to courtrooms.”

    Harvilicz, in a series of X posts in early December, pitched Trump’s soon-to-be border czar Tom Homan and Attorney General Pam Bondi on using the company’s software to accelerate deportations.

    “In combination with CBP One App and other OSINT, @TranquilityAi’s TimePilot™ platform will facilitate location, apprehension, and adjudication of millions of illegals in months instead of years,” Harvilicz said, referring to the since-discontinued app used by U.S. Customs and Border Protection to track immigrants.

    Harvilicz said the company was founded by alumni of the first Trump administration. He served as the acting assistant secretary for cybersecurity, energy security, and emergency response in the Department of Energy, according to his biography.

    Penrose, meanwhile, had a 17-year career at the NSA that included several high-level cybersecurity postings. After moving to the private sector, he held a role at the successful startup Darktrace, which was staffed with former intelligence officials from the U.K. and U.S.

    After the 2020 election, however, he found himself under a microscope for his role in Trump and his allies’ attempts to overturn the election results.

    Penrose worked with Trump attorney Sidney Powell when she led an effort to breach voting machines in Georgia, according to multiple media reports. He was one of the unnamed unindicted co-conspirators in the Fulton County case that eventually led to Powell’s guilty plea, according to the Washington Post. Penrose was also a “suspect” in a Michigan probe of a voting tabulator breach, according to the outlet Votebeat.

    Penrose was not charged with any crime in either state. His supporting role in Trump’s attempts to overturn the 2020 election has drawn scrutiny in places like New Orleans, however, where Tranquility AI worked with the city’s Democratic district attorney.

    The company did not respond to a request for comment on Penrose’s role in the donation to Trump’s inaugural committee.

    The post AI Firm Behind Mysterious Trump Donation Is Run by Alleged Election Overthrow Plotter appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Facebook and Instagram owner criticised for leaving up posts inciting violence during UK riots

    Mark Zuckerberg’s Meta announced sweeping content moderation changes “hastily” and with no indication it had considered the human rights impact, according to the social media company’s oversight board.

    The assessment of the changes came as the Facebook and Instagram owner was criticised for leaving up three posts containing anti-Muslim and anti-migrant content during riots in the UK last summer.

    This post was originally published on Human rights | The Guardian.

  • Several years ago, Louis Blessing’s wife asked for his help replacing the battery in her laptop. An electrical engineer by training, Blessing figured it would be a quick fix. But after swapping out the old battery for a new one and plugging the laptop in, he discovered it wouldn’t charge.

    It quickly dawned on Blessing that the laptop recognized he had installed a battery made by a third party, and rejected it. It’s a classic example of a practice known as parts pairing, where manufacturers use software to control how — and with whose parts — their devices are fixed.

    “To me, that is a garbage business practice,” Blessing told Grist. “Yes, it’s legal for them to do it, but that is truly trash.” After the failed battery swap, Blessing’s wife wound up getting a new computer.

    The business practice that led her to do so may not be legal for much longer. Blessing is a Republican state senator representing Ohio’s 8th Senate district, which includes much of the area surrounding Cincinnati. In April, Blessing introduced a “right-to-repair” bill that grants consumers legal access to the parts, tools, and documents they need to fix a wide range of devices while banning restrictive practices like parts pairing. If Blessing’s bill succeeds, the Buckeye State will become the latest to enshrine the right to repair into law, after similar legislative victories in Colorado, Oregon, California, Minnesota, and New York.

    That would mark an important political inflection point for the right-to-repair movement. While most of the states that have passed repair laws so far are Democratic strongholds, bills have been introduced in all 50 states as of February. The adoption of a right-to-repair law in deep red Ohio — where Republicans control the state House, Senate, and the governor’s office, and Donald Trump won the last presidential election by more than 10 percentage points — would further underscore the broad, bipartisan popularity of being allowed to fix the stuff you own.

    “If something breaks that you can’t fix, that’s just as big of a pain if you live in New York as it is in Nebraska,” Nathan Proctor, who heads the right-to-repair campaign at the U.S. Public Interest Research Group, told Grist.

    Expanded access to repair has the potential to reduce carbon emissions and pollution. A significant fraction of the emissions and air and water pollutants associated with electronic devices occur during manufacturing. Extending the lifespan of those gadgets can have major environmental benefits: The U.S. Public Interest Research Group has calculated that if Americans’ computers lasted just one year longer on average, it would have the same climate benefit as taking over a quarter million cars off the roads for a year. By reducing the pressure to buy replacement devices, repair also helps alleviate demand for the world’s finite stores of critical minerals, which are used not only in consumer electronics but also in clean energy technologies.

    Expanded access to repair has the potential to reduce carbon emissions and pollution. Christian Charisius / picture alliance via Getty Images

    Blessing gladly acknowledges the environmental benefits of expanded repair access, but it isn’t the main reason the issue matters to him. He describes himself as “a very free-market guy” who doesn’t like the idea of big businesses being allowed to monopolize markets. He’s concerned that’s exactly what has happened in the electronics repair space, where it is common for manufacturers to restrict access to spare parts and repair manuals, steering consumers back to them to get their gadgets fixed — or, if the manufacturer doesn’t offer a particular repair, replaced.

    “It’s good for a business to be able to monopolize repair,” Blessing said. “But it is most certainly not pro-free market. It’s not pro-competition.”

    Blessing is now sponsoring a right-to-repair bill, called the Digital Fair Repair Act, for the third legislative session in a row. While earlier iterations of the bill never made it out of committee, he feels optimistic about the legislation’s prospects this year, in light of growing support for the right to repair across civil society and the business community. In the past, manufacturers like Apple and Microsoft have vehemently lobbied against right-to-repair bills, but these and other corporations are changing their tune as the movement gains steam.

    “I think there’s an appetite to get something done,” Blessing told Grist, adding that more and more device manufacturers “want to see something that puts this to rest.”


    Repair monopolies don’t just restrict market competition. They also limit a person’s freedom to do what they want with their property. That’s the reason Brian Seitz, a Republican state representative for Taney County in southwestern Missouri, is sponsoring a motorcycle right-to-repair bill for the third time this year.

    Seitz first grew interested in the right to repair about four years ago, when a group of motorcyclists in his district told him they weren’t able to fix their bikes because they were unable to access necessary diagnostic codes. A spokesperson for the American Motorcyclist Association confirmed to Grist that lack of access to repair-relevant data is “a concern for our membership.” Some manufacturers are moving away from on-board diagnostic ports where owners can plug in and access the information they need to make fixes, the spokesperson said.

    Missouri state Representative Brian Seitz, a Republican, speaks at the state Capitol in Jefferson City, Missouri. AP Photo / David A. Lieb

    “The person who drives a motorcycle is a certain type of individual,” Seitz said. “They’re free spirits. They love the open road. And they brought to my attention that they weren’t allowed to repair their vehicles. And I couldn’t believe it.”

    It’s still early days for Seitz’s bill, which has been referred to the Missouri House Economic Development Committee but does not have a hearing scheduled yet. But a version of the bill passed the House during the last legislative session, and Seitz expects it will pass again.

    “Whether or not there’s time to get it done in the Senate, that’s yet to be determined,” he said. The bill died in the Missouri Senate during the last legislative session.

    A spokesperson for Missouri Governor Mike Kehoe declined to comment on Seitz’s bill. But if it were to pass both chambers and receive Kehoe’s signature this year, it would be the first motorcycle-specific right-to-repair law in the country. (A 2014 agreement establishing a nationwide right to repair in the auto industry explicitly excluded motorcycles.) Seitz believes many of his fellow conservatives would be “very much in favor” of that outcome.

    “This is a freedom and liberty issue,” Seitz added. 


    Personal liberty is also at the heart of a recent white paper on the right to repair by the Texas Public Policy Foundation, or TPPF, an influential conservative think tank. The paper lays out the legal case for Texas to adopt a comprehensive right-to-repair law “to restore control, agency, and property rights for Texans.” Since publishing the paper, TPPF staffers have advocated for the right to repair in op-eds and closed-door meetings with state policymakers. 

    “Our interest in the right to repair is rooted in a concrete fundamental belief in the absolute nature of property rights and how property rights are somewhat skirted by corporations who restrict the right to repair,” Greyson Gee, a technology policy analyst with the TPPF who co-authored the white paper, told Grist.

    In February, Giovanni Capriglione, a Republican member of the Texas House of Representatives and the chairman of the state legislature’s Innovation and Technology Caucus, introduced an electronics right-to-repair bill that the TPPF provided input on. In March, Senator Bob Hall introduced a companion bill in the Senate. 

    A bill introduced in Missouri would be the first motorcycle-specific right-to-repair law in the country. Jonas Walzberg / picture alliance via Getty Images

    Early drafts of these bills include some carve-outs that repair advocates have criticized elsewhere, including an exemption for electronics used exclusively by businesses or the government, and a stipulation that manufacturers do not need to release circuit boards on the theory that they could be used to counterfeit devices. The Texas bills also contain an “alternative relief” provision that allows manufacturers to reimburse consumers, or offer them a replacement device, instead of providing repair materials. (Ohio’s bill, by contrast, mandates that manufacturers provide board-level components necessary to effect repairs, and it does not allow them to offer refunds instead of complying.)

    Gee says the TPPF has been working with repair advocacy organizations and the bill sponsor to strengthen the bill’s language and is “encouraged by the real possibility of establishing a statutory right to repair in Texas.” 

    “Chairman Capriglione is one of the strongest pro-consumer advocates in the Texas House, and we will continue to work with his office as this bill advances [to] ensure there is a codified right to repair in the state,” Gee added. Capriglione, who represents part of the Fort Worth area, didn’t respond to Grist’s request for comment.


    Elsewhere around the country, lawmakers across the political spectrum are advancing other right-to-repair bills this year. In Washington state, a bill covering consumer electronics and household appliances passed the state House in March by a near-unanimous vote of 94 to 1, underscoring the breadth of bipartisan support for independent repair. In April, the Senate passed its version of the bill 48-1. The House must now vote to concur with changes that were made in the Senate, after which the bill heads to the governor’s desk. 

    “This legislation has always been bipartisan,” Democratic state representative Mia Gregerson, who sponsored the bill, told Grist. “The ability to fix our devices that have already been paid for is something we can all get behind.” In her five years working on right-to-repair bills in the state, Gregerson said, she has negotiated with Microsoft, Google, and environmental groups to attempt to address consumer and business needs while reducing electronic waste.

    Conservative politicians and pundits also acknowledge the environmental benefits of the right to repair, despite focusing on personal liberty and the economy in their messaging. In its white paper arguing for a right-to-repair law in Texas, the TPPF highlights the potential for such legislation to eliminate e-waste, citing United Nations research that ties the rapid growth of this trash stream to limited repair and recycling options.

    “Ultimately, the bill itself has to be constitutional. It has to be up to snuff legally,” Gee said. “But it’s certainly an advantage, the environmental impact that this bill would have.” 

    Blessing, from Ohio, agreed. Right to repair will “absolutely mean less electronics in our landfills, among other things,” he told Grist. “I don’t want to diminish that at all.”

    This story was originally published by Grist with the headline The environmental policy backed by free-market Republicans on Apr 18, 2025.

    This post was originally published on Grist.

  • Purge is the word symbol of Athena – Athens

    So, I slip me a work out in on a local trailway and then decide to drop off a deposit at my bank. I spot a branch of my bank just down the way and think, cool, I’ll just pop in the drive-through.

    But there is no drive-through.

    I park and walk inside. There are a few twenty-somethings sitting around in offices, but no tellers. Just an ATM, which, one of the twenty-somethings tells me, can take my deposit. But a maintenance worker has the ATM door swung open, working on it. So, I don’t get to make a deposit.

    I resolve to make the deposit the next day, instead.

    Then, my roomie rings me and tells me to pick up a few things at the grocery. I’m no fan of Wally World, but it’s the most convenient stop. I park, run in, and grab a few groceries. I go to the checkout, and it’s a lot like the bank I stopped at. It’s not tellerless—it’s checkerless. It’s all automated.

    This doesn’t amuse me.

    The more I think about it, the worse it gets. And, worse still, I do some research.

    Talk about a bill of goods.

    A decade or two back, “outsourcing” was all the rage. Our jobs were being sent overseas and we were livid. Now, blaming immigrants is in vogue.

    But the numbers are funny and don’t really add up. And you don’t have to look real hard to figure it out. According to the internet machine, 4.5% of American jobs are outsourced each year. Also, according to the internet machine, immigrants make up 19% of the American workforce (one in five jobs).

    Neither percentage is anything to dismiss—they just miss the point.

    Our politicians and political pundits use figures like these to obscure the real issue … it’s all sleight of hand nonsense. And it’s a bummer, really, for so many of us, because we’re Pavlovian about terms like “outsourcing” and “immigrants”—as if we live for ill-informed finger-pointing. These economic bogeymen have been drummed into us for decades. Half of you are probably slobbering, now. But, please, dab your taco hole with your shirtsleeve and bear with me.

    Outsourcing and immigrants really only infringe on an already diminished share of the scraps. According to the internet machine, automation has replaced 70% of Middle-Class jobs in the United States since 1980—and a related economic corollary is worse. Also, according to the internet machine, automation has driven down Middle-Class wages 70% since 1980. AND THESE AREN’T OBSCURE FACTS. They’re proffered front and center by a search engine’s AI shortcut!?

    Put that in your mouse and scroll it.

    It’s not just mouth-breathers that need to unite. It’s all of us. It’s anyone that may need a breather. It’s anyone that needs to breathe at all. Because what’s replacing most of us doesn’t.

    President Dildo J. Trump’s claims about immigrants and bringing manufacturing jobs back to America are bald-faced lies, because most of those jobs were lost to robotics, computer processing, etc., and they’re never coming back. Immigrants and outsourcing are obviously easier targets than automation or AI, but still. This should scare you, reader. This should terrify you.

    Immigrants and outsourcing are perfect red herrings, for sure, but neither—as proto-punk, rock-and-rolling band The Trashmen once sublimely put it—“bird is the word.”

    “Purge” is the word.

    Obsolescence is the word.

    Human obsolescence.

    And it’s coming to a universal wage station near you.

    This is what technology hath wrought.

    Vocationally speaking, human jobs have been getting tossed in the trash for decades. It probably started innocently enough with something like gas station attendants. But don’t kid yourselves.

    We are no longer surfing the web—the web is surfing us.

    And the wave is about to break.

    The post Purge, Purge, Purge Is the Word first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.