In a world of Photoshop, it’s hard to tell what’s been ‘shopped’ and what hasn’t. Adobe’s Content Authenticity Initiative challenges deepfakes and deceptively manipulated content, giving creatives and consumers transparency and trust.

27 October 2020

Photoshop is generally used to enhance photos or push creative ideas to new lengths of imagery. But in the wrong hands, it can be used to manipulate truth and falsify reality, even changing how people see their own bodies.

While memes and fabricated travel photos may seem harmless, the issue becomes much more serious in other areas it affects: politics and policy, for example.

Take, for example, the allegedly Photoshopped photos of Sen. Villar’s COVID-19 equipment donation, which made it seem like the lawmaker was physically present at the turnover of equipment for Las Piñas’ #COVID19 laboratory. The photos were posted on her official Facebook page but were later taken down.

Photos like these could lead people to believe the senator was indeed there in person. Manipulating photos of politicians in press releases is critical because it gives the public a “falsified” and “manipulated” sense of reality, Jimmy Domingo, a senior lecturer of photojournalism at the University of the Philippines, said in an interview.

Adobe says it shares those concerns, and it hopes the Content Authenticity Initiative will change this. There is a need to challenge deepfakes and deceptively manipulated content. Not only does the initiative let stories be told as they are, it also gives creatives and consumers transparency and trust.

The prototype feature will let creatives attach their name, location, and edit history to an image, adding a secure layer and providing a “tamper-evident” paper trail that makes it easier to recognize whether a photo has been altered. This helps viewers fully understand the photos they’re looking at, reinforcing the value of authenticity.
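The core idea of a tamper-evident trail can be sketched in a few lines of code. The sketch below is purely illustrative and is not Adobe’s actual implementation (the real system uses cryptographic signatures and a standardized metadata format); the function names and record fields here are hypothetical. It shows how binding a hash of the image content to the attribution record lets a verifier detect any later alteration:

```python
import hashlib
import json

def make_attribution_record(image_bytes, author, edit_history):
    # Hypothetical sketch, not the real CAI format: bundle attribution
    # metadata with a SHA-256 hash of the image content.
    record = {
        "author": author,
        "edit_history": edit_history,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    return json.dumps(record)

def verify(image_bytes, record_json):
    # The image is authentic (relative to the record) only if its
    # current hash still matches the hash captured at export time.
    record = json.loads(record_json)
    return hashlib.sha256(image_bytes).hexdigest() == record["content_hash"]

original = b"...raw image bytes..."
record = make_attribution_record(original, "Jane Doe", ["crop", "exposure +0.3"])
print(verify(original, record))               # True: image unchanged
print(verify(original + b"tamper", record))   # False: altered after export
```

A bare hash only proves the image changed; in practice the record itself must also be signed, otherwise a bad actor could simply rewrite the hash along with the image.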



The CAI feature launched after extensive technical development and information sharing with collaborators including The New York Times Company, Twitter, Inc., Microsoft, BBC, Qualcomm Technologies, Inc., Truepic, WITNESS, and CBC. “Collaborating with leaders across a variety of industries will help to create an industry-wide attribution framework, making it easier on consumers and creatives alike to use the tool,” the company said.

Adobe hopes the attribution tool helps creators build exposure for their portfolios, so that even if their images go viral with heavy modifications, they still get credit as the original creators.

“We believe attribution will create a virtuous cycle. The more creators distribute content with proper attribution, the more consumers will expect and use that information to make judgement calls, thus minimizing the influence of bad actors and deceptive content. Ultimately, a holistic solution that includes attribution, detection and education to provide a common and shared understanding of objective facts is essential to help us make more thoughtful decisions when consuming media. Today is a huge leap forward for the CAI, but this is just the beginning.”

The feature will be available to select Photoshop and Behance customers in the coming weeks. Perhaps by then we’ll be able to quickly spot fake news and report misinformation, especially in a world where content like images can so easily be plagiarized.