The U.S. government is going “all in” on AI. There are big risks

In July, for instance, the U.S. Department of Defense awarded contracts of up to $200 million to Anthropic, Google, OpenAI and xAI. Elon Musk’s xAI announced “Grok for Government,” through which federal agencies can purchase AI products via the General Services Administration. And all that comes after months of reports that the advisory group called the Department of Government Efficiency has gained access to personal data, health information, tax information and other protected data from various government departments, including the Treasury Department and Veterans Affairs, with the goal of aggregating it all into a central database.
First is data leakage. When you use sensitive data to train or fine-tune a model, the model can memorize that information. Say patient data were used in training, and you query the model asking how many people have a particular disease: the model may answer exactly, or it may leak that [a specific] person has that disease. Several researchers have shown that models can even leak credit card numbers, email addresses, residential addresses and other sensitive personal information.
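The memorization risk can be illustrated with a toy sketch (this is not any production system, and the “patient record” and card number below are invented): a tiny bigram “language model” that stores its training text verbatim will regurgitate a secret to anyone who guesses a plausible prefix, which is the same basic failure mode researchers have demonstrated in large models.

```python
from collections import defaultdict

# Invented "private" training text; the name and card number are fake.
training_text = "patient record: Jane Doe card 4111-1111-1111-1111 diagnosis flu"

# Build a bigram table: each word maps to the words that followed it in training.
next_word = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    next_word[a].append(b)

def complete(prompt_word, steps=4):
    """Greedily continue from a prompt word using the memorized bigrams."""
    out = [prompt_word]
    for _ in range(steps):
        followers = next_word.get(out[-1])
        if not followers:
            break
        # A real model samples probabilistically; a fully memorized
        # sequence leaks either way.
        out.append(followers[0])
    return " ".join(out)

# An attacker who guesses a plausible prefix recovers the secret verbatim.
print(complete("card", steps=1))  # → card 4111-1111-1111-1111
```

Real language models are vastly more complex, but extraction attacks work on the same principle: rare, unique strings seen during training can be reproduced verbatim when prompted with the right context.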
Second, if private information is used in the model’s training or as reference information for retrieval-augmented generation, then the model could use that information for other inferences [such as tying pieces of personal data together].