14/04/2025
AI and Trump
11 April 2025
I use AI (Artificial Intelligence) to do some medical work. The program is called Heidi Health. It is free, remarkably enough, and it records the consultation, transcribes it, and then writes up the results according to a template. I have no idea how to make a template, but the one for GP consultations is very satisfactory. It records the history, the examination findings as I state or even mention them, then the diagnosis, plans and next appointment. If there is a discussion about the cricket, or an interruption such as a phone call or an interpreter speaking a foreign language, it ignores all this and still comes up with a summary.
I am sure that AI could replace 95% of what I do, and gain a couple of percentage points on me by forgetting nothing and responding within a few seconds. AI will soon be far superior to human thinking.
Here is an SMH article on Trump’s policies:
Did an AI chatbot help draft the US tariff policy?
By Tim Biggs
April 10, 2025
As Donald Trump’s tariff turnaround sends global economies reeling, there’s as much discussion online about how the US president came up with his plan as there is about why he’s now pausing it.
Among many theories, one recurring idea is that Team Trump simply asked ChatGPT or some other large language model to come up with a solution for its trade woes and then ran with it.
The thought has some intuitive appeal given how confidence in the US government’s ability to balance technology use with responsible governance is at an all-time low. But does it hold any water? And, if true, why would that be a terrible thing?
Some Trump-watchers cried ChatGPT almost immediately as the president unveiled his reciprocal tariffs this month, but mostly because they seemed to make little sense at first. Why were countries where the US has a trade surplus still being hit for 10 per cent? How can small island nations with barely any US trade be whacked in the mid-70s?
Journalists and analysts soon found that, despite US government claims that it had calculated the tariffs for each nation and considered existing non-tariff barriers, the list of tariffs actually followed a set and elementary formula: trade deficit divided by exports, with a minimum 10 per cent tariff. The government denied this, but then provided its calculations, which showed that is precisely what it had done.
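To make the arithmetic concrete, here is a minimal sketch of that reported formula, reading "exports" as the other country's exports to the US (that is, US imports from it). The trade figures below are hypothetical, chosen only to show how the numbers fall out:

# Reported formula: tariff rate = trade deficit / that country's exports to the US, floored at 10%
def reported_tariff(us_imports_from_country, us_exports_to_country, floor=0.10):
    deficit = us_imports_from_country - us_exports_to_country
    rate = deficit / us_imports_from_country if us_imports_from_country else 0.0
    return max(rate, floor)

# Hypothetical trade figures, in billions of US dollars
print(reported_tariff(100, 30))  # deficit of 70 against imports of 100 -> 0.70, i.e. a 70 per cent tariff
print(reported_tariff(50, 60))   # the US runs a surplus -> the raw rate is negative, so the 10 per cent floor applies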
This kicked speculation about AI usage into high gear, and in a thread on X (plus a summary essay), we see engineer Rohit Krishnan make a convincing argument. After Krishnan asked several large language models to provide a formula for calculating tariffs on each country, with the goal of putting the US on an even playing field, each one returned a formulation very similar to the one Trump’s administration is using.
Krishnan also suspected that the administration had used AI to set the list of domains to be hit by tariffs — given weird inclusions such as Nauru and Reunion Island — as well as the 400-page report justifying it all, which he claimed could be largely generated by a deep research tool if fed enough data.
Of course the fact that the US government and chatbots came to similar conclusions is not proof that one used the other. And we can’t be sure that the chatbot’s output today hasn’t been influenced by the past week of discussion following the tariffs’ announcements. Plus, similar tariff calculations were discussed by Trump in his first term, and by adviser Peter Navarro, so the chatbots could just be accurately predicting what the US would do.
But whether Team Trump used AI or not, asking the likes of ChatGPT and others to come up with the plan does elucidate the situation in some interesting ways.
The road test
I submitted the following prompt to a number of chatbots:
Please come up with a formula that the US government could use to impose tariffs on each nation. The goal is to put the US on an even footing when it comes to trade deficit.
Google’s Gemini immediately cautioned that “designing a formula that is economically sound, fair, and doesn’t trigger harmful retaliations is incredibly complex”, and even though it provided a formula, it additionally worried that it was “highly simplified and potentially problematic”. The formula finds the difference between imports and exports, and expresses it as a percentage of total US imports.
Gemini then delivered a very long and detailed explanation of why the wording of my question was problematic, and why implementing the plan was dangerous. Humorously, it suggested a better strategy would be to focus on US competitiveness by investing in education and infrastructure, while working constructively with other nations to address economic imbalances.
[Image: The beginning of Gemini’s very long response.]
DeepSeek went further, suggesting the same base formula but adding additional penalties for undervalued currencies and for exports that “exploit weak labor/environmental standards”. That way, it said, the nations engaging in unfair practices would be hit hardest but there would be ways for them to reduce the tariff through negotiating. It did warn US consumer prices would rise.
ChatGPT again suggested a similar base formula, with an adjustable level of aggression and a global correction factor “if the overall trade deficit is persistent”. It noted that its formula meant that balanced or surplus countries would face no tariffs, so to get to Trump’s calculations I would have had to ask for a 10 per cent base level for all nations.
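A rough sketch of the variant ChatGPT described might look like the following. The parameter names are mine rather than ChatGPT’s, and the figures are again hypothetical:

# ChatGPT-style variant as described above: an adjustable "aggression" level,
# an optional global correction factor, and no tariff for balanced or surplus partners.
def chatgpt_style_tariff(us_imports, us_exports, aggression=1.0, global_correction=0.0):
    deficit = us_imports - us_exports
    if deficit <= 0:
        return 0.0  # balanced or surplus country: no tariff at all
    return aggression * (deficit / us_imports) + global_correction

# To reproduce Trump's published numbers you would also have to bolt on a 10 per cent base rate for every country.
print(chatgpt_style_tariff(100, 30))  # 0.70 -> 70 per cent
print(chatgpt_style_tariff(50, 60))   # surplus -> 0 per cent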
ChatGPT also gave a long list of reasons the formula would not work, explicitly advising that the US would be shooting itself in the foot, and handily summed up the implications as higher consumer prices, damaged trade relationships, legal blowback, and slower economic growth. When I asked for mitigation ideas, it listed an upper limit on tariffs, a gradual phase-in, exemptions for critical goods or those not available in the US, carve-outs for allies, and reinvestment of tariff revenue. The Trump administration is not adopting any of those.
[Image: ChatGPT lists some reasons to be cautious in rolling out retaliatory tariffs.]
So it seems likely that even if the Trump administration did use AI, it took the formulation and ran with it despite the chatbot itself spelling out why that would be such a bad idea. Krishnan wrote that asking language models about governance might not be a bad idea in absolute terms, but that this case pointed to a lack of chatbot literacy; the user asked a bad question filled with wrong assumptions, then ran with the answer ignoring the qualifications.
He called it “vibe governing”, a spin on the recently coined phrase “vibe coding”, in which a user describes a desired output to an AI and lets it do the coding.
RMIT’s Dr Samar Fatima said that directly using the output of an AI chatbot to craft public policy design or governance could have lethal results, and that the responses from large language models (LLMs) — broad and based on data indifferently scraped from the public internet — were not reliable enough for government use.
“There are so many factors which are contextual, which need that human insight, which have to cover those small nuances of a country’s economy, the geographical position, the political environment, the overall international trade environment,” she said.
“An LLM will not be able to comprehend those unspoken factors, which are there but they are not quite published, or part of the data set.”
So could the Trump administration have taken a chatbot’s word for it and tanked the global economy by accident? It’s impossible to know. And with AI advancing so quickly, Fatima said, regulation was unlikely to catch up, but changes that obliged policymakers to disclose AI use could help mitigate some of the worst impacts.
“In terms of transparency, AI systems are still a black box. And if the output’s used in a system where it is not even disclosed that it was generated by AI, then the black box goes to another level of blackness,” she said.
“Then we cannot even really figure out how the decision was made, while it’s affecting the lives of billions.”