Last week’s viral moment at a Coldplay concert took an unexpected and uncomfortable turn when a light-hearted “Kiss Cam” feature catapulted two unsuspecting concertgoers into the global spotlight. Within seconds of appearing on screen, facial recognition technology and internet sleuthing turned a brief interaction into a cautionary tale: the pair’s identities, including their names, marital status, workplaces, job titles, even information about their children, were uncovered and circulated at alarming speed. Social media platforms, including X (formerly Twitter), Reddit, TikTok, and Instagram, erupted with commentary.
In the blink of an eye, memes exploded. Then came AI-generated content: one especially viral video featured a hyper-realistic deepfake of Chris Martin seemingly apologising for the event, further blurring the line between real and synthetic media.
There’s been no shortage of conversation recently about AI’s growing presence in our lives, but this incident highlights an often-overlooked concern: the ethical boundaries around AI-enabled identification and public shaming. With the increasing sophistication of facial recognition and deepfake generation, we face a perfect storm in which technology, public curiosity, and a lack of regulation collide.
As someone who has spent years exploring the intersections of moral psychology, design ethics, and human-centred technology, I find myself asking: how have we arrived at a point where the wellbeing and digital dignity of individuals can be compromised in seconds, with no meaningful legal protections in place?
While the advantages of AI in regulated sectors like finance and security are undeniable, its application in open digital ecosystems raises significant concerns. Where are the safeguards for privacy? Where is the accountability for harm caused?
This moment reminded me of a lecture I gave some years ago on blockchain. I proposed a theory: that personal data would one day become a valuable commodity—one that individuals would own and have the exclusive right to monetise. A kind of “personal data currency.” At the time, it felt like a stretch.
But perhaps not anymore.
Denmark recently made headlines for proposing legislation to combat deepfakes. If passed, this law would allow individuals to copyright their own face and voice—granting them legal ownership and control over how their identity is used. These progressive steps raise a timely question: was my prediction about the commodification—and rightful ownership—of personal data really that far-fetched?
As we edge further into an AI-infused reality, we need to reconsider the frameworks that govern digital ethics, data protection, and moral design. Because a moment on a Kiss Cam shouldn’t spiral into a global invasion of privacy.
Written by Denise Mulvaney, Astara – UX & Design Thinking Consulting Agency
This article was created in collaboration with AI, combining human insight with digital assistance.
Photo by Pragyan Bezbaruah: https://shorturl.at/iglRQ
#HumanCenteredDesign #DigitalDignity #EthicalAI #AIandEthics
