In brief Google has finished its probe into the controversial ousting of Timnit Gebru, co-leader of its Ethical AI unit. The ad giant promised to implement new procedures around “potentially sensitive employee exits,” though it did not make its findings public.

Gebru said she was fired for warning coworkers in an internal memo that, due to management apathy, it was a waste of energy trying to foster diversity, equality, and inclusion within the Silicon Valley goliath. Google claimed she effectively resigned.

Bosses had told her to remove her name from a research paper on the negative environmental and social effects of massive, power-hungry, biased language-processing neural networks like the ones used by Google. In the aftermath of that clash, Gebru said she told executives she would be comfortable remaining at the internet super-corp if certain conditions were met. That was treated as a resignation note by managers who turned down her requests. Many researchers within Google and in the wider AI community have rallied behind her in support.

“I heard and acknowledge what Dr Gebru’s exit signified to female technologists, to those in the Black community and other underrepresented groups who are pursuing careers in tech, and to many who care deeply about Google’s responsible use of AI,” Google AI supremo Jeff Dean said in an internal memo that was quickly leaked. “It led some to question their place here, which I regret.”

Meanwhile, Margaret Mitchell, who co-led the Ethical AI unit alongside Gebru, said on Friday she had been fired. Mitchell had been locked out of her corporate account for weeks.

Mitchell said she was denied access to her work email and other corporate services after trying to gather evidence while investigating her colleague’s ousting. Google has its own take. “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees,” a spokesperson told El Reg in a statement.

All of this happened just a day after Google tapped up Marian Croak, a veteran engineer, to lead Responsible AI, a new unit that oversees Ethical AI as well as nine other teams.

IBM said to be mulling sale of its AI healthcare wing

Big Blue may be getting rid of its IBM Watson Health unit after its machine-learning services marketed at hospitals failed to turn a profit. It is looking at selling the business to another company or a private-equity firm, according to the Wall Street Journal.

The division has suffered numerous problems over the years. Analysts have complained the software used to analyze medical data is difficult to use, and that IBM has consistently over-promised and under-delivered – or simply not been clear at all about what exactly Watson is and how it relates to healthcare. Big Blue has also laid off staff working there in the last year.

AI healthcare is a difficult business to break into. It’s fraught with tough regulations, and America’s drug watchdog hasn’t approved that many newfangled AI algorithms for medical care. It also seems doctors just aren’t keen on Watson.

Waymo rolls out fully autonomous taxis in Cali using staff as guinea pigs

Waymo has begun testing its autonomous robo-taxi service in San Francisco, the Alphabet subsidiary announced this month.

“We’re now accelerating the development and testing of our technology in cities, so we can one day bring the benefits of fully autonomous driving to more people,” the Google stablemate said. Previously only residents taking rides in downtown Phoenix, Arizona, were able to try out the service. Now, it has been expanded to include San Francisco, California.

However, don’t get your hopes up for a robot ride right now. Waymo is only transporting employees who volunteer at the moment, although – let’s face it – if you work at the biz and decline, it’s not a good look.

Waymo said it has improved its software’s capabilities to adapt to busy cities like SF, including detecting jaywalkers, and spotting objects like traffic cones that signal road work ahead so it can reroute its cars.

US pours $15m into theoretical deep learning boffinry

America’s National Science Foundation has launched a program dedicated to funding research into the theoretical foundations of neural networks.

The Stimulating Collaborative Advances Leveraging Expertise in the Mathematical and Scientific Foundations of Deep Learning (SCALE MoDL) program is looking to dole out 15 to 20 grants worth up to $1.2m each. It expects to invest a total of $15m in the program.

“Machine learning has unfolded as a key motor driving major advances across all areas of science and technology,” the NSF said. “Deep learning, in particular, has met with impressive empirical success that has fueled fundamental scientific discoveries and transformed numerous application domains of artificial intelligence.

“Our incomplete theoretical understanding of the field, however, impedes accessibility to deep learning technology by a wider range of participants, for want of significant resources presently required in the reliable training of such algorithms, both in terms of computational infrastructure and very large sets of labeled data.”

Researchers can submit research proposals to the program until May 12. ®