Amir Homayun

The language of depression isn’t universal

Scroll through any social media feed and you’ll find people sharing their highs and lows, sometimes openly, sometimes between the lines. For years, researchers have hoped that artificial intelligence (AI) could learn to recognize early signs of depression from our words. The idea is simple and powerful: if language carries emotional fingerprints, algorithms might one day detect distress before it becomes a crisis.  
But what if these systems end up hearing some experiences more clearly than others?

Can an AI be “unbiased”?

New research shows that even so-called “aligned” AIs — trained to follow human values and avoid harmful outputs — still reflect stereotypes. Even GPT-4, one of the most advanced models, repeats the very biases it was meant to suppress.

Is your red my red?

Remember the dress that nearly ended friendships? The one where half the world was convinced it was blue and black, while the other half saw white and gold? Then you may already have asked yourself: do we really see the same colors? Color blindness suggests the answer is: not quite. Our experience of color does not exist out in the world; it is constructed by the brain.