Are we ready to trust AI with our bodies?

But as AI enters ever more sensitive areas, we need to keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don’t have a grasp on the wider context and meaning of what they are generating. Neural networks are competent pattern seekers and can help us make new connections between things, but they are also easy to trick and break and prone to biases. 

The biases of AI systems in settings such as health care are well documented. But as AI enters new arenas, I am on the lookout for the inevitable weird failures that will crop up. Will the foods that AI systems recommend skew American? How healthy will the recipes be? And will the workout plans take into account physiological differences between male and female bodies, or will they default to male-oriented patterns? 

Most important of all, remember that these systems have no knowledge of what exercise feels like, what food tastes like, or what we mean by “high quality.” AI workout programs might come up with dull, robotic exercises. AI recipe makers tend to suggest combinations that taste horrible, or are even poisonous. AI-generated mushroom foraging books are likely riddled with incorrect information about which varieties are toxic and which are not, which could have catastrophic consequences. 

Humans also have a tendency to place too much trust in computers. It’s only a matter of time before “death by GPS” is replaced by “death by AI-generated mushroom foraging book.” Including labels on AI-generated content is a good place to start. In this new age of AI-powered products, it will be more important than ever for the wider population to understand how these powerful systems work and don’t work. And to take what they say with a pinch of salt. 

Deeper Learning

How generative AI is boosting the spread of disinformation and propaganda

Governments and political actors around the world are using AI to create propaganda and censor online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”

Downward spiral: The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. Read more from Tate Ryan-Mosley in her weekly newsletter on tech policy, The Technocrat.

Bits and Bytes

Predictive policing software is terrible at predicting crimes
A New Jersey police department used predictive policing software called Geolitica that was right less than 1% of the time, according to a new investigation. We’ve known for years how deeply flawed and racially biased these systems are. It’s incredibly frustrating that public money is still being wasted on them. (The Markup and Wired)
