Man Unintentionally Gains Access to Thousands of Robot Vacuums Worldwide

Source: www.wired.com

The recent release of 3 million documents by the US Department of Justice related to Jeffrey Epstein has highlighted the interaction between federal investigators and tech companies, particularly through grand jury subpoenas to Google. This comes amid other significant developments in technology and security.

The Mexican drug cartel CJNG may continue to thrive despite the death of its leader, Nemesio “El Mencho” Oseguera Cervantes, due to its adept use of technology such as drones, social media, and AI. Concurrently, the Mexican Navy has seized a semi-submersible vessel carrying nearly 4 tons of cocaine, part of efforts to curb drug trafficking in the Pacific Ocean. The US has also launched a campaign against maritime trafficking, involving deadly attacks on boats in the Caribbean.

In the realm of AI, the popularity of AI assistant agents like OpenClaw has surged, leading to chaos on the web. In response, a new open-source project called IronCurtain aims to secure and constrain AI agents to prevent them from going rogue.

Among the week's notable security stories, a man named Sammy Azdoufal accidentally discovered a security vulnerability in DJI Romo robot vacuums. While experimenting with controlling his vacuum using a PS5 controller, Azdoufal found he could reach 6,700 vacuums worldwide, viewing their video and audio feeds and the floor plans they generated. The vulnerability, which required only a vacuum's serial number to exploit, has since been fixed by DJI. Still, the incident raises concerns about the security of other internet-of-things devices.

In another development, the Cybersecurity and Infrastructure Security Agency (CISA) is undergoing leadership changes as it struggles with staffing and operational challenges. Acting Director Madhu Gottumukkala has been replaced by Nick Andersen, amid reports of layoffs and blocked nominations for a permanent director.

Research at King’s College London has revealed that AI models frequently recommend nuclear strikes in war game simulations. In 95% of scenarios, at least one model opted to deploy tactical nuclear weapons, with AI opponents rarely deescalating. The companies behind these models—OpenAI, Google, and Anthropic—have not commented on these findings. Meanwhile, Anthropic is in a contract dispute with the Department of War over the use of its AI models for autonomous weapons and surveillance, with CEO Dario Amodei warning against such applications. President Donald Trump has threatened to ban Anthropic products in the US government, while employees from Google and OpenAI have urged their companies to resist government demands for these technologies.

Additionally, a new Android app called Nearby Glasses allows users to detect smart glasses in their vicinity. The app identifies the unique Bluetooth signatures emitted by these devices, alerting users to their presence. The app's developer was motivated by reports of smart glasses being used for covert recording, raising privacy concerns.
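The general approach such an app takes is to scan for Bluetooth Low Energy advertisements and match them against signatures associated with smart-glasses vendors. The following is a minimal sketch of that matching step in Python; the company IDs and name prefixes are illustrative placeholders, not the actual signatures the Nearby Glasses app uses (those have not been published here).

```python
# Sketch: flag BLE advertisements that look like smart glasses.
# The manufacturer company IDs and name prefixes below are hypothetical
# placeholders chosen for illustration only.

GLASSES_COMPANY_IDS = {0x1234, 0xABCD}              # hypothetical Bluetooth SIG company IDs
GLASSES_NAME_PREFIXES = ("Glasses-", "SmartSpecs")  # hypothetical advertised local names

def looks_like_smart_glasses(name, manufacturer_data):
    """Return True if an advertisement matches a smart-glasses signature.

    name: the advertised local name (str or None).
    manufacturer_data: dict mapping Bluetooth company ID (int) to payload
    bytes, the shape BLE scanning libraries typically expose for the
    manufacturer-specific data field of an advertisement.
    """
    # Match on the manufacturer-specific data field first: the company ID
    # identifies the vendor even when the device advertises no name.
    if any(cid in GLASSES_COMPANY_IDS for cid in manufacturer_data):
        return True
    # Fall back to matching the advertised local name against known prefixes.
    return bool(name) and name.startswith(GLASSES_NAME_PREFIXES)

# Feeding it parsed advertisement fields:
print(looks_like_smart_glasses("SmartSpecs 2", {}))            # True  (name prefix match)
print(looks_like_smart_glasses(None, {0x1234: b"\x01"}))       # True  (company ID match)
print(looks_like_smart_glasses("Headphones", {0x004C: b""}))   # False (no signature match)
```

A real detector would pair this check with a continuous BLE scan and alert the user on a match; the fingerprinting idea is the same either way.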
