- The machine learning industry is fast-moving.
- AI image-generators are being trained on explicit photos of children.
- LAION, a nonprofit, has taken down its training data and pledged to remove the offending materials before republishing it.
- Ethics is hard in AI development.
- EU’s AI regulations threaten fines for noncompliance with certain AI guardrails.
Keeping up with an industry as fast-moving as machine learning is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. This week in AI, the news cycle finally (finally!) quieted down a bit ahead of the holiday season. But that’s not to suggest there was a dearth of things to write about, a blessing and a curse for this sleep-deprived reporter.
A Watchdog’s Concerns
A particular headline from the AP caught my eye this morning: “AI image-generators are being trained on explicit photos of children.” The gist of the story: LAION, a data set used to train many popular open source and commercial AI image generators, contains thousands of images of suspected child sexual abuse. A watchdog group based at Stanford, the Stanford Internet Observatory, worked with anti-abuse charities to identify the illegal material and report the links to law enforcement.
The Good, The Bad, The Ugly
Now, LAION, a nonprofit, has taken down its training data and pledged to remove the offending materials before republishing it. But the incident serves to underline just how little thought is being put into generative AI products as the competitive pressures ramp up. Thanks to the proliferation of no-code AI model creation tools, it’s becoming frightfully easy to train generative AI on any data set imaginable. That’s a boon for startups and tech giants alike looking to get such models out the door. With the lower barrier to entry, however, comes the temptation to cast aside ethics in favor of an accelerated path to market.
Ethics is hard, there’s no denying that. Combing through the thousands of problematic images in LAION, to take this week’s example, won’t happen overnight. And ideally, developing AI ethically involves working with all relevant stakeholders, including organizations that represent groups often marginalized and adversely impacted by AI systems. The industry is full of examples of AI release decisions made with shareholders, not ethicists, in mind.
Suffice it to say harms are being done in the pursuit of AI superiority, or at least Wall Street’s notion of AI superiority. Perhaps with the passage of the EU’s AI regulations, which threaten fines for noncompliance with certain AI guardrails, there’s some hope on the horizon. But the road ahead is long indeed. Here are some other AI stories of note from the past few days…
More machine learnings…