Facing the truth about FaceApp

A clear and concise privacy policy outlining the details related to processing activity in simple language can go a long way in gaining consumer trust.

By Mira Swaminathan and Shweta Reddy (Virtual Insanity)


Published: Sat 20 Jul 2019, 10:00 PM

Last updated: Sun 21 Jul 2019, 12:02 AM

If you, much like a large number of celebrities, have spammed your followers with images of 'how you may look in your old age', you have successfully been part of the FaceApp fad that has gone viral this week.
The problem with the FaceApp trend isn't that it has penetrated most social circles, but that it has gone viral with minimal scrutiny of its vaguely worded privacy policy. We click 'I agree' without understanding that our so-called 'explicit consent' gives the app permission to use our likeness, name and username, for any purpose, without our knowledge and consent, even after we delete the app. FaceApp is currently the most downloaded free app on Apple's App Store, thanks to the large number of people downloading it to 'turn their old selfies grey'.
There are several things the app could do. It could process images on your device, rather than send submitted photos to an outside server. Instead, it uploads your photos to the cloud without making it clear to you that the processing is not taking place locally on your device.
Further, if you use an Apple device, the iOS app appears to override your settings even if you have denied it access to your camera roll. Users have reported that they could still select and upload a photo despite the app not having permission to access their photos.
This 'allowed behaviour' is, in fact, by design: since iOS 11, an app can receive a specific photo that the user selects through the system picker without holding blanket photo-library permission. That such access is possible is quite concerning, especially when we have apps with loosely worded terms and conditions.
FaceApp responded to these privacy concerns by issuing a statement with a list of defences.
The statement clarified that FaceApp performs most of the photo processing in the cloud, that it only uploads a photo selected by the user for editing, and that it never transfers any other images from the phone to the cloud. However, even in this clarifying statement, FaceApp said it 'might' store an uploaded photo in the cloud, explaining that the main reason for doing so is "performance and traffic". It also stated that 'most' images are deleted from its servers within 48 hours of upload.
Further, the statement ends by saying that "all pictures from the gallery are uploaded to our servers after a user grants access to the photos". This is highly problematic. The entire point of 'transparency' in a privacy policy is for the user to understand the extent of processing undertaken by the organisation and then have the choice to provide consent.
Such vague phrasing gives no clear indication of the extent to which an individual's personal data will be processed.
The obligation to implement reasonable security measures that prevent unauthorised access to and misuse of personal data rests with the organisations processing that data.
FaceApp's privacy policy assures users that reasonable security measures, in line with commercially accepted standards, have been implemented. Despite this assurance, the policy simultaneously waives liability, stating that FaceApp cannot ensure the security of information against its being accessed, disclosed, altered or destroyed, an admission that undercuts the assurance itself.
The privacy concerns and the issue of transparency (or the lack thereof) in FaceApp are not isolated. After all, as a BuzzFeed analysis of the app noted, while there appeared to be no data going back to Russia, this could change at any time because of the app's overly broad privacy policy.
The business model of most mobile applications being developed today relies heavily on collecting the user's personal data. Whether users are aware of what information an app can access through the permissions they grant it is questionable.
In May 2018, Symantec tested the top 100 free Android and iOS apps, with the primary aim of identifying apps that request 'excessive' access to user information relative to the functions they perform. The study found that 89 per cent of Android apps and 39 per cent of iOS apps request what can be classified as 'risky' permissions, which the study defines as permissions through which an app requests data or resources involving the user's private information, or that could potentially affect the user's locally stored data or the operation of other apps.
Requesting risky permissions is not objectionable in itself, provided individuals receive clear and transparent information, in the form of a concise privacy notice, about the processing that takes place once permission is granted.
The study concluded that 4 per cent of the Android apps and 3 per cent of the iOS apps seeking risky permissions didn't even have a privacy policy.
The New York Times, as part of its Privacy Project, analysed the length and readability of the privacy policies of around 150 popular websites and apps. It concluded that the vast majority of the policies analysed exceeded the college reading level.
The use of vague language like "adequate performance" and "legitimate interest", and the wide interpretation such phrases permit, allows organisations to use data in extensive ways while offering individuals limited clarity about the processing activity.
The Data Protection Authorities operating under the General Data Protection Regulation are paying close attention to the openness and transparency of organisations' processing activities. The French Data Protection Authority fined Google €50 million for violating its obligations of transparency and information. The UK's Information Commissioner's Office issued an enforcement notice to a Canadian data analytics firm for failing to provide information to data subjects in a transparent manner.
Thus, in the age of digital transformation, the unwelcome panic caused by FaceApp should be channelled towards a broader discussion on the information paradox currently existing between individuals and organisations. Organisations need to stop viewing ambiguous and opaque privacy policies as a get-out-of-jail-free card. On the contrary, a clear and concise privacy policy outlining the details related to processing activity in simple language can go a long way in gaining consumer trust.
The next time an "AI-based Selfie App" goes viral, let's take a step back and analyse how it makes use of user-provided data, both over and under the hood. If data is the new gold, then we are in the midst of a gold rush.
-thewire.in
Mira Swaminathan and Shweta Reddy are programme officers at the Centre for Internet and Society, Bangalore

