A.I. Privacy and Security Issues That Affect Everyone Today

A.I. is changing things rapidly, and in some cases, dramatically. With these changes come important privacy and security risks. This article explores some of issues for organizations, law firms, and everyday consumers.

We’ve previously written about issues with ChatGPT and how A.I. can be used with OSINT, along with the risks that come with it.

What we haven’t done is dive into some of the broader A.I. issues that affect us all. We’ll do that here by addressing each of the three audiences we serve (businesses, law firms, and everyday people) while also posing a few questions for you to consider.

Image of a circuit board with "AI" on a chip. This represents AI privacy and security issues.

A.I. Privacy and Security Issues for Organizations

If you’re a business owner, decision-maker, or responsible for protecting your organization, there are many factors to weigh when adopting A.I.

Data Handling and Compliance

With LLMs and A.I. agents, you might be using them to generate code, automate systems, or process and analyze data.

While this technology is powerful, entering protected data into an LLM, or using an agent to automate tasks around sensitive data, can create problems such as:

  • Is the A.I. platform using this data to train its models?
  • How is the data stored?
  • Are employees using unapproved A.I. applications to process or analyze protected data?

If your organization is subject to laws like HIPAA, GDPR, CCPA, or PCI, how are you anonymizing PII/PHI? How are you minimizing the amount of sensitive data exposed to A.I. systems?

Violations of data privacy laws carry steep fines. The safest approach is to know what data you have, how it’s classified, and who or what has access to it.

Policies and employee training are also critical to establishing acceptable use standards.

Supply Chain, Vendors, and Vulnerabilities

Whether you’re a small business simplifying daily operations or a large enterprise outsourcing client data processing, your third-party vendors can create privacy and security risks like:

  • How does a vendor handle your data?
  • If the vendor uses A.I., are they compliant with applicable regulations?
  • Are subcontractors safeguarding sensitive information, or are they casually dropping your data into A.I. tools to save time?

Even tools from repositories like Huggingface or GitHub raise questions like how secure is the code, and how does the app handle your data?

Vulnerabilities in A.I. applications can lead to unintended behaviors, including deletion of data, prompt injections, or remote code execution. These risks put both your stakeholders’ data and your reputation at risk.

This is why you should thoroughly vet vendors and test applications before use, and apply security patches as soon as they’re released.

A.I. Privacy and Security Issues for Consumers

Woman using a laptop at home, illustrating consumer AI privacy and security risks.

If you use chatbots like ChatGPT, Gemini, Meta AI, or Claude, imagine a data breach involving your platform of choice.

This wouldn’t be an ordinary breach. The exposed information could include deeply personal conversations for things you might never share with friends or family. Such a breach could be devastating.

That’s why it’s critical to:

  • Enable every available security and privacy control the platform provides.
  • Limit sharing personal and sensitive information with the chatbots.
  • Limit the platform from using your inputs/outputs to train its models.

Legal cases are also raising new privacy concerns. For example, lawsuits like The New York Times vs. OpenAI  required OpenAI to preserve output log data, which includes user chats. Survivors of abuse who used chatbots, even with the “Temporary Chat” function enabled, may now see their conversations preserved as evidence. This is a huge privacy concern for them because they thought their chats were private.

If you are in an abusive situation and planning to leave, do not use a chatbot for help. If an abuser has access to your devices, the consequences can be dangerous.

Deepfakes and Disinformation

A.I.-powered scams are growing—and the results can be devastating.

One woman in California lost her home and life savings after being tricked into a fake relationship with an actor. The entire scheme relied on deepfake videos.

We’ve also covered how “A.I. slop” videos push harmful disinformation.

To protect yourself and your family, it’s important to:

  • Learn how to spot A.I.-generated content.
  • Use detection tools where possible.
  • Learn to recognize and stop the spread of disinformation.

A.I. Privacy and Security Issues for Law Firms

To be clear, Bsquared Intel is not a law firm. But we can highlight emerging risks.

Recent reporting from Reuters and Forbes shows that client communications with generative A.I. do not qualify for attorney-client privilege. That means chatbot conversations may be discoverable in court.

This could create real complications if clients continue using chatbots during ongoing cases.

Law firms also face the same issues outlined for businesses a few sections above. Firms may use A.I. tools for the following business needs:

  • Billing, evidence collection, analysis, or transcripts.
  • Adopting A.I. tools to manage internal operations.

Each of these comes with data privacy and security risks.

Up to this point, we’ve presented somewhat unique privacy and security issues related to A.I. with the clients that we help.

Universal A.I. Threats Everyone Faces

Masked threat actor in the dark with computer terminal reflected in glasses, representing universal AI security threats.

Some A.I. issues affect all of the audiences we serve equally.

Deepfakes and Disinformation

We touched on this earlier in the section related to consumers. This is an everyone privacy and security issue.

Perhaps one of the largest deepfake scams that occurred was in 2024 when multi-national London-based engineering firm Arup was the victim of a $25M dollar scam. It played out in the organization’s Hong Kong office when a finance worker received a suspicious email and was then convinced it was legit after a video conference call with the company’s CFO and other staff. This was all fake. The audio and likeness of the attendees were digital clones.

Everyone (businesses, law firms, and consumers) needs education and tools to spot deepfakes. Social media platforms also need to do more to detect harmful generative A.I. content.

Indirect Prompt Injection Attacks

In one of our newsletters, we covered research by Brave about indirect prompt injection attacks.

An indirect prompt injection happens when content from an external source (like a website or file) triggers a large language model (e.g. chatbot) to perform unintended actions. This can lead to:

  • Disclosure of sensitive data
  • Revealing system information
  • Executing commands on a connected device

Information Sharing Risks

Another universal concern is how others handle your communications. Are your emails, texts, or shared files being fed into LLMs without your consent?

Windows Recall, for example, stores snapshots of activity that could expose sensitive information. This affects communications in business, legal, and personal settings.

Solutions and Best Practices

Here are some solutions and best practices for you:

  • Protect your A.I. accounts: Use a strong password, or if offered, a passkey, and ensure MFA is enabled.
  • Vet the tools/services that you want to use. This means reading the tool’s ToS and privacy policy, making sure the tool meets your needs and security/privacy requirements, and make sure to take it out for a spin to test things out before fully implementing.
  • For businesses and law firms, have polices in place around what’s allowed when it comes to employees using A.I. This includes how to deal with client data or propriety data belonging to the company. This also means finding frameworks to add to the organization to reduce risk related to A.I.
  • For scams where A.I. is used, have protocols in place to verify if the person making a request is legitimate. Are you really talking to the CEO, a vendor, your friends, family, or loved ones? Also ensure that when money is involved that you have a multi-step multi-person process to approve financial transactions.

How We Help

At Bsquared Intel, we provide:

Reach out through the contact form below to schedule a free strategy call.

Contact Us | Bsquared Intel

Please fill out the form below, or call 203.828.0012, to learn how Bsquared Intel can assist you.

Name(Required)
Secret Link