As artificial intelligence continues to permeate different facets of our industry, new questions arise around its ethical use. Last month’s Cannes Lions International Festival of Creativity saw a concentrated effort to explore how AI can be leveraged to advance equity and inclusion, sparking thought-provoking discussions both on and off stage. From redefining recruitment processes to shaping marketing strategies, AI’s far-reaching and transformative impact is evident. However, the underlying ethical imperative remains constant: It is crucial to ensure that AI aligns with the diverse tapestry of our society.
A panel discussion titled “AI Can’t Replace Diverse Teams,” featuring leaders from iHeartMedia, The Weather Company, Sonos and Bridge, as well as myself, sought to tackle the complexities of creating unbiased AI-driven recruitment processes and ensuring fair representation within AI. Tony Coles, president of multicultural business and development at iHeartMedia, moderated the session and began by raising a common concern: People who use AI outputs are often not involved in their development. However, the panelists suggested that there are ways to align AI outputs with equitable practices. Here are key insights drawn from our session.
Train your data to promote fairness
Early adopters of AI in marketing grapple with a common concern: AI systems often reproduce the existing biases embedded in the historical data they are trained on. This is particularly evident in hiring processes. However, there is cause for optimism: With proper training, AI has the potential not only to enhance diversity and equity but also to eliminate long-standing biases.
As for the first step toward getting there, Amit Seth, global head of data and AI at The Weather Company, put it simply: “It starts with getting your data right, period.”
To truly imbue AI with our best qualities, we must proactively shape how it reflects our values. The only way to do that is through meticulous data training and robust human oversight, especially in the early stages.
Inclusion should be a standard practice
On top of getting their data in order, today’s leading brands are taking proactive measures to combat these biases in innovative ways. At Monks, I’ve witnessed this firsthand with Dove and Sephora. For the 20th anniversary of its Real Beauty campaign, Dove used AI to highlight how narrow commonly accepted beauty standards are and how poorly they reflect true beauty. Similarly, Sephora challenged AI-generated scripts on sensitive topics, uncovering a disturbing trend of victim-blaming narratives. These actions underscore how brands can leverage AI not just as a tool but as a means to champion their core values of inclusivity and representation.
Sheryl Daija, founder and CEO of Bridge, a marketing industry trade group focused on operationalizing inclusion as a business growth driver, added further insight by outlining the group’s Imax framework. The tool measures an organization’s inclusion maturity across more than 80 business practices spanning five pillars: organizational, marketing management, commercial, communications and advocacy. Moreover, Bridge, together with its partner XR Extreme Reach, is using AI to measure representation in creative outputs, a capability that lets marketers evaluate how various demographics, such as skin tone, body type and gender, are represented, and how that representation measures up against the audiences they serve.
This intentional focus on inclusion prompts different, more equitable questions and solutions from AI systems. For instance, Sephora’s focus on inclusion has led the brand to measure its performance rigorously, setting a benchmark for other brands. When inclusion becomes a foundational practice, it inherently influences data curation and AI training, culminating in fairer and more representative outcomes.