The US Air Force has successfully tested an autonomous AI system on one of its aging Lockheed U-2 (aka “Dragon Lady”) reconnaissance aircraft.
The aircraft lifted off on Tuesday from Beale Air Force Base in California with the AI, dubbed ARTUµ because who doesn’t like a Star Wars reference, solely controlling the U-2’s radar sensors and tactical navigation systems. The Air Force noted that ARTUµ had no access to weapons or flight control systems.
“For the most part, I was still very much the pilot in command,” the U-2 pilot who carried out the test told The Washington Post. “[The AI’s] role was very narrow … but, for the tasks the AI was presented with, it performed well.”
The AI was trained in computer simulations to scan for incoming missiles and the launchers that fire them. The idea is that the software could take over the job of a human sensor operator and keep the pilot up to date, though the Air Force says vital decisions, such as flying the aircraft and engaging targets, should be left to humans.
“This is the first time this has ever happened,” said Assistant Air Force Secretary Will Roper. “This is really meant to shock the Air Force and the [Defense] Department as a whole into how seriously we need to treat AI teaming.”
Now that’s drone racing
Staying with the airborne theme, boffins have built an AI system that can fly swarms of drones in formation through obstacle-heavy environments like forests.
In a video released on Thursday, three quadcopters can be seen flying through trees while maintaining formation, communicating with one another and relying only on their own onboard computational resources. The team says that in simulation it has managed to get ten machines working in harmony through similarly obstructed courses.
“We incorporate a lightweight topological trajectory generation method,” the researchers said in a paper on the subject.
“Then agents generate safe, smooth, and dynamically feasible trajectories in only several milliseconds using an unreliable trajectory sharing network. Relative localization drift among agents is corrected by using agent detection in depth images. Our method is demonstrated in both simulation and real-world experiments.”
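The core idea the researchers describe — each agent planning its own trajectory, sharing it over a lossy network, and deflecting around whatever neighbour trajectories it actually received — can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's method: the function names, the straight-line planner, the drop rate, and the sideways-shift avoidance are all illustrative assumptions.

```python
import random

SAFETY_RADIUS = 1.0  # illustrative minimum separation between agents
STEPS = 10           # waypoints sampled per trajectory

def plan(start, goal, steps=STEPS):
    """Sample a straight-line trajectory from start to goal (toy planner)."""
    (x0, y0), (x1, y1) = start, goal
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]

def broadcast(traj, drop_rate=0.3, rng=random):
    """Simulate an unreliable sharing network: each waypoint may be dropped."""
    return [p for p in traj if rng.random() > drop_rate]

def conflicts(traj, others):
    """True if any waypoint comes within SAFETY_RADIUS of a shared one."""
    return any(
        (ax - bx) ** 2 + (ay - by) ** 2 < SAFETY_RADIUS ** 2
        for (ax, ay) in traj
        for other in others
        for (bx, by) in other
    )

def replan(traj, others, offset=SAFETY_RADIUS):
    """Naive avoidance: shift the whole trajectory sideways until clear."""
    while conflicts(traj, others):
        traj = [(x, y + offset) for (x, y) in traj]
    return traj

# Two agents flying head-on along y=0: agent B replans around whatever
# fraction of A's trajectory survived the lossy broadcast.
traj_a = plan((0, 0), (10, 0))
shared = broadcast(traj_a, drop_rate=0.3)
traj_b = replan(plan((10, 0), (0, 0)), [shared])
```

The point of the sketch is the failure tolerance: agent B only avoids the waypoints it received, so the scheme degrades gracefully when messages are dropped rather than requiring reliable communication — the property the researchers highlight in their "unreliable trajectory sharing network".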
The system could be used to coordinate search and rescue attempts, the team suggests, or for mapping functions, but we can bet the military would be interested too. You can check out the full code here.
Sony going gastronomic with future AI
Sony AI has launched its Gastronomy Flagship Project, which aims to train AIs to devise innovative recipes for humans and to build machine-learning-driven robotic kitchen assistants.
The research biz, spun out from Sony itself, wants to use AI in a recipe creation app that will analyze taste, aroma, flavor, molecular structure, and nutritional content of foodstuffs to come up with new cooking ideas that are tasty, nutritious and sustainable.
It also wants to build a chef-assisting cooking robot that is trained using machine learning to take over many of the food preparation and plating processes that currently take up many a harassed kitchen assistant’s time. Sony envisions future chefs using multiple robots to cook for people simultaneously in remote locations.
“Through the power of AI and robotics, we want to reaffirm the principle of our gastronomy flagship project, which is to enable creative gastronomy that is at the same time healthy and sustainable,” said Hiroaki Kitano, CEO of Sony AI. “Together with creators in the gastronomy community, we wish to contribute to creative, healthy, and sustainable gastronomy.”
DeepMind still a money pit
DeepMind may be a cornerstone of Google’s AI strategy but it’s proving a bit of a financial black hole at the moment.
The UK-based research unit reported [PDF] losses of £477m ($644m) in 2019, a small increase on similar losses the previous year. In addition, Google Ireland said it had written off a £1.1bn ($1.5bn) loan to DeepMind to bolster the AI biz’s financial position.
Overall the organization is still ramping things up, with spending on staff, infrastructure and other expenses rising 26 per cent to £717m ($969m). That’s a fair chunk of change, and DeepMind has the full support of Google, as CEO Sundar Pichai told investors at the search and ad giant’s investor relations call in July.
Google has proven expert at playing the long game with investments, nurturing groups like YouTube and Gmail through early losses and then making that investment back and much more in the following years. DeepMind staffers shouldn’t start looking for other jobs.
IBM: Work with AI or lose your job
Rob Thomas, senior vice president of IBM’s cloud and data platform, issued a stark warning this week that people can either get behind AI or lose their jobs to people who will.
“AI is not going to replace managers but managers that use AI will replace those that do not,” he told CNBC.
“It’s about changing the roles that humans play in organizations, but this is additive, this is about giving humans super powers, giving you a better way to automate the task that people don’t really want to do in the first place.”
He cited the example of Cora, an AI-powered chatbot used by British bank NatWest to handle customer support calls. The system had seen a sharp increase in use during the COVID-19 pandemic, he said, and NatWest has said it wants Cora to be the “leading point of contact for all customers in all channels by 2025.” ®