Lensa AI Selfie App Raises Questions on Theft and Ethics


This article is for general information purposes only and isn’t intended to be financial product advice. You should always obtain your own independent advice before making any financial decisions. The Chainsaw and its contributors aren’t liable for any decisions based on this content.



“We’ll be talking about AI a lot in the next few years,” someone said to me about two months ago. AI is very much here now, and all it took was mainstream use cases with the potential for virality to make it a thing. 

Over the weekend, OpenAI launched its ChatGPT bot — an artificial intelligence chatbot that feels like talking to Telstra’s on-site chat function because you cbf waiting on hold for an IRL person from head office. But it also feels like you’re speaking to your really smart mate who reads up on everything and has the brain power of an enormous sponge. 

Lensa AI

This week, the Lensa AI app is the next new thing. It’s an app that lets you — for the reasonable price of AU$4.99 — generate between 50 and 100 avatars straight out of your wildest childhood dreams, where you’re the superhero of your own show. 

There wasn’t much space left on Instagram feeds that wasn’t taken up by people exploring the new app. Vanity and curiosity led me to download it myself, and I was dumbfounded by the epicness of what it could do. 

But just a few moments later, plenty of people across my own social feeds were blasting the app for its method of scraping millions of traditional artists’ works to train its AI algorithm. 

The Lensa AI app is built on Stable Diffusion, a free-to-use AI model trained on 2.3 billion captioned images from across the internet, including Flickr, DeviantArt and ArtStation, as well as stock images from Getty and Shutterstock. 


OpenAI — the company behind the ChatGPT bot and the DALL-E text-to-image generator — has used massive amounts of data to build AI that converts sentences into images. The Lensa AI app takes the images you upload, combines them with data that includes original artworks scraped from across the internet, and then charges you for the result. 

Lensa’s terms and conditions state that uploaded images can be used by Prisma Labs — the company behind Lensa — to further train the AI’s neural networks. The app analyses position, orientation and face topology using Apple’s TrueDepth API, the same technology iPhone owners use to unlock their phones with their faces. 

The app has been scrutinised for storing images after upload, although it does notify users that images will not be held long term. Whether it creeps you out that your imagery is used to train the AI is for you to decide. 


Many artists have called out the Lensa app for ripping off their work, fearing the loss of their income as their art is fed into the neural networks without compensation. Others urged people to buy art directly from their friends to support them instead. 


“It uses Stable Diffusion, an AI art model to sample artwork from artists that never consented to their work being used. First and foremost, if you used the app already you did nothing wrong if you didn’t know this is what the app does. As awful as the SD model is, it’s a great learning moment for the public to know what AI models using art without consent can harm artists,” one artist said. 

“We work tirelessly to develop our skills and styles just for one ‘nonprofit’ project to use said skills and styles without consent and compensation. I do not think AI art is inherently bad, but when artists are cut out of the conversation we are being taken advantage of. Please spread the message to others so they can understand the harm these apps and AI art models can do to artists.”


LAION, the non-profit behind the dataset used to train the Stable Diffusion model, states that it is simply indexing the internet’s URLs and alt-text. Its website says the dataset was built purely for research purposes, to enable a testing model, and is “not meant for any real-world production or application”. 

Prisma Labs, the company behind Lensa, said that AI-generated images will “not replace digital artists” because AI “does not have the same level of attention and appreciation for art as a human being”. Doesn’t sound very convincing for the future of creatives.