Of Computational Photography, AI-Powered Scene Optimization, Deepfakes, Ethics, and More
All of it brought on by the Samsung Moongate controversy
The ingress of artificial intelligence (AI) into the craft of writing promises to change the written word forever. Generative AI has the ability to do the same for photography, or what’s called “imaging”.
The raison d’etre for this post is “Moongate”, the controversy sparked off by a Reddit user claiming that the moon shots delivered by the Samsung Galaxy S23 Ultra were “fake”. The accusation was that the smartphone had applied non-existent details to photos of the moon.
That prompted Samsung to respond, explaining to Tom’s Guide how its phones have been using AI-based scene optimization for years.
Samsung said its phone had used a neural network, trained on hundreds of moon images, to “enhance” the texture of the moon in the shot, adding that this may have led some to mistakenly credit the result to the camera hardware alone.
Samsung Is Right
I hold no brief for Samsung, but the company is right. Calling the moon shot “fake” is wrong. It is not fake in the way deepfakes are. It is what is called “AI-based scene optimization”, which can be explained to the lay reader as photo enhancement using artificial intelligence. (Remember Photoshop?) Samsung has never hidden this. In fact, as far back as 2020–21, during the launch of the Galaxy S21, Samsung talked about this new tech. See press release.
And here is Samsung’s rejoinder to Moongate; it’s self-explanatory.
But let’s move on from Moongate as that’s not the purpose of this post. What it did though, was draw my attention to an entire range of digital/computational/AI tech that’s been slowly added to the world of imaging over the years, and the possibilities they have opened up for content creators, a.k.a. photographers and video developers.
What Is AI-based Scene Optimization?
AI-based scene optimization has the potential to revolutionize the creative economy by automating many of the tasks traditionally performed by image and video editors. This could lead to significant cost savings for companies in the creative industry, as well as increased efficiency and productivity.
For example, AI-based scene optimization could be used to automatically remove unwanted objects or people from images or videos, adjust the lighting and color balance, and even generate entirely new pieces of content based on existing images or videos. This could significantly reduce the amount of time and effort required to produce high-quality visual content.
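To make the “adjust the lighting” part concrete, here is a toy, hand-rolled sketch of automatic contrast stretching, the crudest cousin of what these optimizers do. The function name and the tiny 2×2 “image” are purely illustrative; real products use learned models, not this fixed rule:

```python
# Toy "scene optimization": stretch a grayscale image's histogram so it
# spans the full 0-255 range, a crude stand-in for automatic exposure fixes.
# Hypothetical, illustrative code -- not any vendor's actual pipeline.

def autocontrast(pixels):
    """pixels: list of rows of 0-255 grayscale values."""
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in pixels]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in pixels]

dim_photo = [[60, 70], [80, 90]]      # underexposed 2x2 "image"
print(autocontrast(dim_photo))        # -> [[0, 85], [170, 255]]
```

The point of the sketch: the original pixels are all there, only remapped. AI-based optimizers go further and can add plausible detail, which is where Moongate-style debates begin.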
But to understand this tech, you need to first grasp what computational photography is all about.
Ok, Give It To Me. What is Computational Photography?
The smartphone camera was the game-changer, the disruptive force here. Computational photography is a growing discipline that combines computation, digital sensors, optical systems, intelligent lighting, hardware design, and software expertise to enhance conventional camera imaging.
Which means what? Well, this technology and technique go beyond the boundaries of conventional image-making. They use machine learning, algorithms, image stacking, and more to produce images that would otherwise be impossible because of hardware limitations. What’s more, they also enable entirely new images that borrow from a computer’s understanding of a “perfect” photo, say of the moon, as in the Reddit/Samsung case.
In even simpler terms, smartphone manufacturers hit upon using computing, digital sensors, and optical arrays to improve the photographs phone cameras took. Because phones and their cameras are so small, what was possible on a professional, standalone camera was not always possible on a phone, so phone-makers turned to computers to digitally enhance the captured images and videos. The joke is: today, DSLRs, too, have embraced computational photography.
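Image stacking, mentioned above, is one of the simplest computational-photography tricks: average several pixel-aligned frames of the same scene, and random sensor noise cancels out. A minimal sketch, with made-up frame data standing in for a real burst capture:

```python
import random

def stack_frames(frames):
    """Average pixel-aligned frames (lists of rows) to suppress noise."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[round(sum(f[r][c] for f in frames) / n) for c in range(cols)]
            for r in range(rows)]

random.seed(0)
true_scene = [[100, 150], [200, 50]]   # the "real" 2x2 scene
# simulate a burst of 32 frames, each corrupted by random sensor noise
noisy = [[[p + random.randint(-20, 20) for p in row] for row in true_scene]
         for _ in range(32)]
stacked = stack_frames(noisy)
# the stacked result should land far closer to true_scene than any one frame
```

This averaging idea, dressed up with alignment and learned weighting, is the backbone of night modes and HDR on phones.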
Samsung leads this race, leaning heavily on computational photography; hence, Moongate.
But if you want to make more sense out of computational photography, and AI-based scene optimization, then you need to know a little about “image super-resolution reconstruction based on deep learning”.
Image Super-Resolution Reconstruction Based On Deep Learning
Are you still with me? Good. Image super-resolution (SR) is the tech used to make high-resolution (HR) images from low-resolution (LR) ones. That’s possible? You bet. Super-resolution reconstruction is an advanced technique in computer vision and image processing, often used in medical and satellite imaging, in footage from surveillance cameras, by space telescopes, and so on.
But make no mistake, this technique is also used in “average Joe” photography. A simple explanation: it’s a process that increases the size of small images while keeping the drop in quality to a minimum. It can even mean reconstructing an HR image from the details of an LR one. Deep learning makes that possible, acting as a mapping bridge between the LR and HR images.
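To see what that LR-to-HR “mapping” replaces, here is the classical, non-learned baseline: plain interpolation, which only spreads existing pixels out and invents nothing. Deep-learning SR swaps this fixed rule for a trained network that fills in plausible detail. A hypothetical, stdlib-only sketch:

```python
def upscale_nearest(pixels, factor):
    """Naive nearest-neighbor upscaling: each LR pixel becomes a
    factor x factor block. No new detail is created -- which is exactly
    the limitation learned super-resolution aims to overcome."""
    out = []
    for row in pixels:
        big_row = [p for p in row for _ in range(factor)]  # widen the row
        out.extend([big_row[:] for _ in range(factor)])    # repeat it down
    return out

lr = [[10, 20], [30, 40]]              # a 2x2 low-resolution "image"
print(upscale_nearest(lr, 2))
# -> [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```

A learned SR model would take the same 2×2 input but output edges and textures it has seen in training data, which is why the result can look sharper than the optics ever delivered.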
The Person Behind The Camera Is Gone?
Kind of. The use of computers, algorithms, and AI is said to be “democratizing” the business of imaging. It is transforming the creative economy by making it easier for people to take high-quality photos and videos without much technical knowledge. There are concerns about the ethical implications of such tech, but more on that later in this post.
So Is Moongate An Example Of Deepfake?
No. In my book, and even going by the testimony of some experts, the embellished moon shots can neither be called “fake” nor cited as an example of a deepfake. In layman’s terms, it’s a case of using computers, AI, and other digital ingredients to improve a poor image. But there was an image to start with, however blurry.
A deepfake is a “false” image, to begin with. The Scientific American explains it thus: “Falsified videos created by AI — in particular, by deep neural networks (DNNs) — are a recent twist to the disconcerting problem of online disinformation. Although the fabrication and manipulation of digital images and videos are not new, the rapid development of AI technology in recent years has made the process to create convincing fake videos much easier and faster.”
Call it a coincidence or what, but it was a Reddit user named “Deepfakes” who, in 2017, posted pornographic videos generated with a DNN-based face-swapping algorithm. Only after that did the term “deepfake” come to be used more widely for AI-generated impersonating photos and videos. Deepfakes are also weaponized for several reasons, one of them being to favor a particular candidate during an election.
One of the primary ethical concerns associated with AI-based scene optimization is the potential for creating deepfakes and manipulated content. Deepfakes are videos that use AI algorithms to replace one person’s face with another’s, creating a realistic but false representation of events. This technology could be used for malicious purposes, such as spreading false information or defaming someone’s character.
This Person Does Not Exist
During my research for this newsletter, I came across a website called This Person Does Not Exist. See the photo here. The woman is not real; she does not exist.
Now there are two ways one can use such photos: mischievously, or more positively, by creating models for your product or service. Saves you time, money, and effort, right?
What About Job Loss? Is It The End Of The Photographer, Videographer, Video Editor?
Like when ChatGPT was introduced and questions were raised about whether writers would lose their jobs to AI, a similar question is now being asked about AI-based scene optimization and related tech. The answer, as with ChatGPT, is a clear no.
As AI technology improves, it has the potential to automate many of the tasks traditionally performed by image and video editors. This could lead to redefined jobs and new designations, and may force people in these fields to upskill.
But first, here are some examples of AI-based scene optimization in different industries:
AI-based scene optimization is being used in digital photography to improve the visual quality of images. For example, the Google Photos app uses AI algorithms to automatically adjust the brightness, contrast, and color saturation of photos to improve their visual quality.
AI-based scene optimization is also being used in video production to improve the visual quality of videos. For example, Adobe Premiere Pro, a popular video editing software, has a feature called Lumetri Color that uses AI algorithms to automatically adjust the color balance of videos.
AI-based scene optimization is also being used in virtual reality (VR) to improve the visual quality of VR experiences. For example, the NVIDIA VRWorks SDK includes a feature called VR SLI, which uses AI algorithms to optimize the rendering of VR scenes for multi-GPU configurations, resulting in improved performance and visual quality.
To address concerns about job loss, it is essential to ensure that the benefits of AI-based scene optimization are shared fairly and equitably across the creative industry. This could include investing in training and education programs to help individuals develop the skills required to work with AI-based technologies, as well as providing support and resources to help companies transition to new, more automated ways of working.
In the last 10 years, computational photography has undergone major advancements as technology companies have looked for ways to make smartphone cameras perform on par with or better than conventional DSLR cameras without using bulky lenses or physically larger sensors.
Or, as expert Robert Bishop says in this video, it may impact the stock-image industry, which, he says, will change drastically because AI-generated images will eventually replace traditional stock images as we know them today.
Will photographers become redundant? No. They can use AI tools as an ally, as inspiration to take even better images than they otherwise could.
AI-based scene optimization is a rapidly evolving technology, and its future is likely to be shaped by several factors, including:
Advances in AI Technology
As AI technology continues to advance, AI-based scene optimization is likely to become more powerful and sophisticated. This could lead to new applications and use cases for the technology.
New Applications in Emerging Industries
As new industries emerge, there will be new opportunities for the use of AI-based scene optimization. For example, AI-based scene optimization could be used in the development of autonomous vehicles to improve their ability to perceive their surroundings.
Hey, You Never Spoke Of The Ethics Of Using Such Tech
Again, as with ChatGPT, there are huge concerns about the ethical implications of deploying AI-based scene optimization technology. Many feel it is up to each individual and company to decide whether using such technology is ethical. It also helps if companies, as in Samsung’s case, are transparent and upfront about the use of such technology in their devices.
Frankly, there’s no clear answer to this (and never will be, if you ask me). As with other things in our society, every development comes with its bundle of good and bad.
Privacy and surveillance, bias or discrimination, as well as the role of human judgment, are some of the ethical challenges that society is facing as a result of AI. Some people contend that it is unethical to utilize AI-based scene optimization technology because it can change the original image and might not accurately reflect the scene’s hues. Others may contend that using AI-based scene optimization technology is morally acceptable because it can improve the image’s quality and aesthetic appeal.
Yes, the tech or the technique does alter reality. At the same time, it offers literally limitless ways to enhance your work of art or your labor of love. And as long as we do not misuse this tech to create blatantly false images with the sole purpose of cheating and fraud, we should not be facing a big moral dilemma.
The good news is there are now hundreds of tools, both hardware and software, available to detect deepfakes. Big corporations like Google, Microsoft, and Meta offer ways to spot them. With the big tech boys gunning for them, and certain laws in place to discourage them, deepfakes may dwindle, though whether they will disappear entirely is a different question altogether.
Like what you just read? This is part of my rather irregular newsletter on Substack — All About Content. Do subscribe.