On-Device AI: Gambling With User Privacy?

Sorab Ghaswalla
9 min read · Jul 25, 2024


It’s been over three months since I first wrote about privacy concerns around the hardwiring of artificial intelligence (AI) in our computing devices. The question I had raised then (and must say I was perhaps one of the first in the world to do so) was: with on-device AI, are we (once again) about to hand over the keys to our personal and professional lives to tech companies in exchange for….convenience?

This time, it will not be just our photographs, videos, and memories but our bank statements, tax details, and even medical files, that we’ll be handing over. Not to mention a host of business-related information.

The recently concluded Mobile World Congress (MWC) 2024 in Barcelona was a whirlwind of innovation, and at its heart was the resounding theme of on-device (or in-device) AI.

Today, even as some of the Big Tech cos have started rolling out AI-infused smartphones and PCs, that question is still pertinent. To it, I want to add one more: what’s the tearing hurry to introduce AI on our personal computing devices? People are still coming to grips with the new tech, but companies, wanting to ride on its popularity and rake in the big bucks, are introducing AI-equipped devices without a thought for the end-user, ethics, or the law. Let’s not forget that gen-AI tools and apps can prove to be a handful in the hands of end-users.

What makes me write this follow-up piece is two nearly identical reactions I got a few days ago from colleagues when I asked them about the coming of on-device AI. Their response: Hey, why do you keep raising the privacy bogey? The AI is in the device, and everything will happen only on it! Your data’s safe.

I was shocked by their naivety. If these people, already working in the tech field, were willing to take Big Tech at its word where the protection of their personal data was concerned, then, well, what can I say about the ordinary user?

Let me emphasize here that I am not being an alarmist. I am a true lover of all things tech and welcome AI with open arms. At the same time, having watched the evolution of the Internet and the World Wide Web from their inception, and having been involved with many things digital, I’ve seen how the dice have rolled (in favor of Big Tech) all these years, especially where user data is concerned. With the advent of AI, the same seems to be happening all over again. Data, yours and mine, will once again be fished (or phished) for commercial gain.

That’s why in my first article, my opening lines were: First, they came for our photographs, videos, and our social data. Now, Big Tech is coming for our business information and extremely personal data.

In their rush for gains, the tech giants have started rolling out a new wave of smartphones and computers boasting on-device AI capabilities. Apple, with its latest iPhones, and Microsoft, with its Copilot+ PCs, promise features like automated photo editing and real-time scam detection. This represents a significant shift in our digital lives, one that comes with a probable hidden cost: our privacy.

This on-device and user privacy debate can be broken down into four broad topics:

  1. Is on-device AI truly that? And will it ever be, given the resources required by LLMs?
  2. What about regulation? Laws are still evolving to deal with all the problems specific to AI, including individual privacy.
  3. What are the safeguards in place (for now) against the unauthorized (illegal) or unethical use of customer personal data by tech companies or their staff?
  4. What measures have manufacturers taken to prevent bad actors, state-sponsored or otherwise, from misusing in-device AI?

Why In-Device AI Raises Privacy Concerns

To begin with, at least as of today, July 24, 2024, there’s no device that can truly claim to offer ONLY on-device AI. Yes, companies can stick that label on their offerings and call it that. But most are, at best, hybrid: there’s in-device AI, which handles very basic AI tasks, coupled with the Cloud. (Also, I am not including what’s being called “Private AI” in this.)

In layman’s terms: when the AI in your phone finds you making demands it CANNOT fulfill on the device, it sends the data off to the Cloud (a fancy word for a string of Internet servers), and the processed result returns to the device. So, yes, there’s a to-and-fro between your computing device and a Cloud server over the Internet. Such complex tasks still require the processing power of the Cloud, which means personal data ventures outside the trusted confines of our devices.
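The hybrid flow described above can be sketched in a few lines of Python. This is purely illustrative: the function names and the "can this run locally?" heuristic are my own inventions, not any vendor’s actual routing logic.

```python
# Illustrative sketch of hybrid on-device/Cloud AI routing.
# All names and the heuristic below are hypothetical.

def run_local_model(prompt: str) -> str:
    """Stand-in for a small on-device model handling simple tasks."""
    return f"[on-device] handled: {prompt[:40]}"

def run_cloud_model(prompt: str) -> str:
    """Stand-in for a large Cloud model. In reality, this is the step
    where the prompt (and any attached data) leaves the device."""
    return f"[cloud] handled: {prompt[:40]}"

def can_handle_locally(prompt: str) -> bool:
    """Crude capability check: short, simple requests stay local."""
    return len(prompt.split()) < 20 and "analyze" not in prompt.lower()

def hybrid_assistant(prompt: str) -> str:
    if can_handle_locally(prompt):
        return run_local_model(prompt)
    # Fallback: data travels to the Cloud. This is the privacy-relevant hop.
    return run_cloud_model(prompt)

print(hybrid_assistant("set a timer for ten minutes"))
print(hybrid_assistant("analyze my bank statements and flag unusual payments"))
```

The point of the sketch is simply that the routing decision is the vendor’s, not the user’s: the same assistant call may or may not leave the device, depending on a heuristic the user never sees.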

Further, in-device AI excels in offering faster response times and so-called lower reliance on Internet connectivity. But to function fully (at least for now), it requires a more comprehensive view of our data. This means companies are gathering a wider range of data, often including information previously siloed within apps.

The concern lies in not only where but how this sensitive data will be handled. While tech companies assure robust security measures (they did so even in earlier times), the reality is more complex.

Just one example is enough to underline what I am saying here: Microsoft Recall, an AI feature that the IT major wanted to incorporate into its Copilot+ PCs.

The Recall feature tracks everything from web browsing to voice chats, assisting users in reconstructing past activities by taking regular screenshots and storing them locally. Users can then search this database for anything they’ve seen on their PC.
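A toy sketch of why "stored locally" is not automatically safe: if the snapshot text sits in an unencrypted local database (as security researchers showed was the case with early Recall builds), anything with access to that file can search it. The schema and data below are invented for illustration only:

```python
import sqlite3

# Toy analogue of a local activity log built from screenshots.
# Table and column names are hypothetical; the point is that an
# unencrypted local database is readable by anything with file access.
db = sqlite3.connect(":memory:")  # a real feature would use a file on disk
db.execute("CREATE TABLE snapshots (ts TEXT, window TEXT, text TEXT)")
db.executemany(
    "INSERT INTO snapshots VALUES (?, ?, ?)",
    [
        ("2024-07-24 09:00", "Browser", "net banking balance: $4,210"),
        ("2024-07-24 09:05", "Mail", "password reset code is 88341"),
    ],
)

# Any process that can open the database can run this "search":
hits = db.execute(
    "SELECT ts, text FROM snapshots WHERE text LIKE ?", ("%password%",)
).fetchall()
print(hits)
```

Nothing in this sketch requires elevated privileges or network access, which is exactly why a compromised machine turns such a convenience feature into a ready-made surveillance archive.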

However, Recall has faced heavy criticism from security researchers and privacy advocates since its announcement last month. Eventually, after researchers demonstrated how easily Recall snapshots could be extracted and searched on a compromised system, Microsoft had to recall the Recall: citing privacy risks, it delayed the launch and said it would first preview the feature with a smaller group.

Hence, my question: are Big Tech cos releasing AI add-ons and features without first adequately addressing personal data and privacy concerns? What would have happened if there had been no security uproar, and thousands of consumers, if not more, had used Recall before someone yelled “data leak”?

The author of this recent article in The New York Times, for example, writes that this (on-device) change has significant implications for our privacy. To offer new bespoke services, companies and their devices require more persistent and intimate access to our data than ever before. Previously, our use of apps and the way we accessed files and photos on phones and computers were relatively siloed. However, AI needs a comprehensive view to connect the dots between our activities across apps, websites, and communications, according to security experts.

“Do I feel safe giving this information to this company?” asks the writer.

Is 100% On-Device AI A Myth?

It would not be wrong to say that, as of July 24, 2024, no company offers 100% in-device AI. On-device AI is designed to process data directly on the device, eliminating the need to send sensitive information to external servers and thereby shielding it from various security threats. Sounds great in theory.

While companies like Apple, Samsung and Microsoft are making strides with on-device AI for their smartphones and PCs, it’s not entirely possible to rely solely on the device itself for complex AI tasks.

Here’s why:

  • Limited Processing Power: On-device AI chips are powerful, but they still can’t compete with the raw processing power of the Cloud for very demanding AI operations.
  • The Cloud’s Role: The Cloud acts as a partner in the current in-device AI model. It supplements the device’s processing power when needed.

Just pause here to understand the kind of hardware and software required to run AI on personal computing devices. This form of AI relies on the device’s hardware (CPUs, GPUs, and then some, plus specialized chips like neural processing units, or NPUs) to run AI algorithms at the edge, that is, on the device itself.

I would highly recommend that readers go through this explainer by Sahin Ahmed, a data scientist, to get a basic understanding of how on-device AI actually works.

The moment data is forced to travel outside the device, we all are acutely aware of what can happen to it. Sending data to the Cloud introduces vulnerabilities. Data breaches, malicious actors, and even government surveillance become potential threats. This is particularly worrisome for sensitive information like photos, messages, and emails, which were previously considered “for our eyes only.”

Apple recently introduced “Apple Intelligence”, claiming it was taking all possible steps to mitigate the risks of data misuse in in-device AI. Apple says it prioritizes user privacy by emphasizing on-device processing: your data is analyzed on your device rather than being uploaded to Apple’s servers, whenever possible.

Apple Intelligence is designed to keep your data on your device whenever possible. This not only allows for faster response times but also mitigates potential privacy concerns associated with Cloud storage.

In the above sentences, do not miss the words “whenever possible”.

So, clearly, for now at least, even with in-device AI, some of your data will likely travel to the Cloud for processing, introducing a potential privacy risk.

What About Private AI?

Private AI puts you in control of your data. It prioritizes keeping your information on your device and uses special techniques to analyze it without revealing everything. This means more privacy for you, but the AI features might be a little less powerful. And since it does not come built in, you need to do everything yourself: choosing the LLM, deploying it on your device, deciding its functions, and so on.
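To make "analyze it without revealing everything" concrete, here is a deliberately naive sketch of one such technique: scrubbing obvious identifiers from text before any of it is allowed to leave the device. The patterns and names are my own illustration; real private-AI setups rely on far stronger methods (fully local inference, federated learning, differential privacy).

```python
import re

# Hypothetical pre-upload filter: redact obvious identifiers before a
# request leaves the device. Patterns are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d{10,12}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

msg = "Mail my statement to jane.doe@example.com, card 4111 1111 1111 1111"
print(redact(msg))
```

Even this toy filter shows the trade-off the article describes: the redacted request is safer to send, but the AI on the other end now has less to work with, which is exactly why private AI "might be a little less powerful."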

On-device AI comes pre-built. You don’t need to build anything. On-device AI and private AI are closely related concepts, but there are some key distinctions:

Focus:

  • On-device AI: Focuses primarily on where the processing happens — directly on the user’s device. This offers benefits like faster response times and potentially less reliance on Internet connectivity.
  • Private AI: Models like Meta’s Llama 2 or 3 focus on the overall approach to AI development and deployment, prioritizing user privacy and data control throughout the process. This might involve on-device processing, but not necessarily.

Benefits:

  • On-device AI: Offers benefits like speed, reliability (without needing constant Internet), and potentially lower power consumption.
  • Private AI: Offers benefits like increased user trust, transparency, and enhanced data security since much of it happens on a user’s computer.

Inherently, developing and maintaining a private AI infrastructure can prove expensive.

Here’s an analogy:

Imagine a bakery. On-device AI is like having an oven at home to bake cookies. It’s convenient and fast. Private AI is like having your own private bakery with a focus on your secret family recipe. You control the ingredients and process, but it might require much more effort. And don’t miss the cost of setting up the bakery versus buying an oven.

The Legal Landscape: Playing Catch-up

The legal framework surrounding in-device AI privacy is nascent. While regulations are being drafted, they haven’t caught up to the rapid pace of technological advancements. In the six months between January and July 2024 alone, significant progress has been made in on-device AI capabilities, but lawmakers are still grappling with how to best protect user privacy in this evolving landscape.

The Road Ahead: Balancing Convenience and Security

On-device AI offers undeniable benefits, but it’s crucial to strike a balance between convenience and privacy. Here’s what needs to happen:

  • Transparency: Tech companies must be upfront about the data they collect and how it’s used. Users deserve clear and concise explanations.
  • Stronger Regulations: Lawmakers need to develop robust regulations that govern data collection, storage, and security in the context of in-device AI.
  • User Control: Users should have granular control over what data is collected and how it’s used. This empowers individuals to make informed choices about their privacy.

The future of AI may be undeniably on-device, but it must be built on a foundation of trust and transparency. Only then can we truly enjoy the benefits of intelligent devices without compromising our fundamental right to privacy.

Have you signed up for one of the fastest-growing online communities around artificial intelligence, “AI For Real”?


Written by Sorab Ghaswalla

An AI Communicator, tech buff, futurist & marketing bro. Certified in artificial intelligence from the Univs of Oxford & Edinburgh. Ex old-world journalist.
