How Ethical is Salesforce Generative AI — Analysing Top 5 Security Concerns

Generative AI by Salesforce: Security Risks and Ethical Concerns
With ethical and privacy concerns hampering trust in Generative AI models, we look at how Salesforce focuses on developing ethical AI with their Trusted AI Principles.

Earlier this year, Google came under fire from enraged users around the world after its AI model, Gemini, produced racially inaccurate re-writings of history. The scandal brought many long-held concerns about AI to the forefront. Companies began debating whether AI can really be trusted to keep their data private, and whether it can remain genuinely ‘unbiased’.

As a front-runner in the AI race, Salesforce has fortified its CRM with Salesforce Einstein AI and Einstein Copilot—sophisticated AI tools that have revolutionised how businesses manage their data and engage with customers. Einstein is seamlessly integrated with Salesforce’s Sales, Service, and Marketing Clouds to help organisations boost sales efficiency, personalise customer interactions, enhance team collaboration, uncover insights from their CRM data, support decision-making, improve sales forecasting accuracy, and automate day-to-day tasks.

But how ethical is Salesforce AI? Is it safe to entrust your company data with Salesforce Einstein? Should you hold back confidential information while giving prompts to Einstein Copilot? We answer these questions and more in this blog.

The Trust Gap with Salesforce Generative AI

There is no doubt that organisations need to adopt AI to thrive in today’s fast-paced business world.

In fact, Salesforce’s Generative AI in IT Survey reveals that 67% of senior IT leaders prioritise Generative AI for their businesses, with more than 30% naming it as their key consideration.

That said, Salesforce has noticed a ‘trust gap’—a survey of more than 14,000 customers and business buyers across 25 countries found widespread scepticism about the accuracy and trustworthiness of AI-generated content. Many professionals also believe they lack the necessary skills to use AI safely.

In order to address the trust gap and ensure that their Generative AI model is grounded in data security and ethical considerations, Salesforce has incorporated the Einstein Trust Layer—a set of agreements, security technology, and data and privacy controls—into the Salesforce platform. In 2023, they also published guidelines for the ethical development of Generative AI in Salesforce that are based on their Trusted AI Principles and AI Acceptable Use Policy.

Salesforce also performs rigorous AI testing in simulated environments to mitigate harmful bias and preserve customer data privacy in Salesforce Einstein AI.

But What Are the Security Risks and Ethical Concerns Associated with Using Salesforce Generative AI?

Before we delve into how Salesforce is focusing on building a robust security layer for AI, let’s explore some of the security concerns surrounding Salesforce AI models and Generative AI in general and discuss how businesses can navigate these challenges responsibly.

1. Compromised Data Security

Concern: Salesforce Generative AI relies heavily on large datasets to function effectively. These datasets often contain sensitive company information, such as customers’ personal data, purchasing behaviour, or communication history. Many data security teams worry about whether the AI tool can safeguard such confidential data from unauthorised access or misuse.

Analysis: Salesforce has robust data security measures in place, including encryption, multi-factor authentication, and regular security audits. However, the sheer volume of data processed by Generative AI amplifies the risks. It is possible that unauthorised access to AI-generated insights could lead to significant privacy violations. Furthermore, there’s a risk that AI algorithms might inadvertently expose sensitive information when generating outputs, particularly in scenarios involving unstructured data.

Mitigation: To address such data security concerns, businesses should implement strict access controls, ensuring that only authorised personnel have access to sensitive data. Since Salesforce provides tools for data masking and anonymisation, companies can take advantage of them to protect personal information before it is processed by AI. Conducting regular security assessments and staying current with regulatory requirements are also essential.
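The masking-before-processing pattern can be sketched in plain Python. This is a hypothetical illustration, not Salesforce’s actual masking tooling: it redacts known sensitive fields from a record and scrubs obvious PII patterns from free text before the data reaches an AI prompt.

```python
import re

# Fields treated as sensitive in this hypothetical schema.
SENSITIVE_FIELDS = {"full_name", "email", "phone", "address"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_record(record: dict) -> dict:
    """Replace sensitive fields with placeholders before AI processing."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

def scrub_free_text(text: str) -> str:
    """Redact email addresses and phone numbers left in free text."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

record = {
    "full_name": "Jane Citizen",
    "email": "jane@example.com",
    "notes": "Call jane@example.com about the renewal.",
}
safe = mask_record(record)
safe["notes"] = scrub_free_text(safe["notes"])
print(safe)
```

Real anonymisation needs broader pattern coverage and field-level policy, but the principle is the same: the model only ever sees placeholders.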

2. Biased AI Models

Concern: Since the Gemini scandal, many organisations have become apprehensive about biases in AI algorithms. AI models are only as good as the data they are trained on, so business executives worry that biased training data could cause AI models to perpetuate or even exacerbate those biases, likely leading to unfair treatment of certain customer segments. Biased AI models could also result in discriminatory practices in areas like credit scoring, customer service, or targeted marketing.

Analysis: Bias and exclusion in AI models can originate from various sources—such as historical data that reflects societal inequalities or from skewed data collection practices. Even more worrisome is the fact that engineers could train the AI to echo existing prejudices. Salesforce Generative AI, like any other, runs the risk of making biased decisions if not properly monitored and refined. The ethical implications of biased AI decisions are significant and could lead to reputational damage, legal liabilities, or a loss of customer trust.

Mitigation: Salesforce has taken steps to mitigate bias by introducing Einstein Discovery—a tool for AI model evaluation and bias detection. Einstein Discovery lets you flag data that could potentially be associated with unfair treatment—such as race, gender, religion, national origin, sexual orientation, disability, or age—as sensitive variables.

However, it is up to businesses to actively use such tools to regularly audit their AI models for fairness. Incorporating diverse data sets during the training phase and ensuring that AI outputs are continuously monitored and adjusted can help in minimising bias. Ethical AI practices also involve transparency in how AI decisions are made—your customers should be provided with explanations and avenues for recourse if they believe that they have been unfairly treated.
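The regular fairness audits recommended above can start small. The sketch below is illustrative only (it is not Einstein Discovery): it computes the selection rate per group for a sensitive variable and applies the widely used “four-fifths” rule of thumb, flagging the model when the lowest group’s rate falls below 80% of the highest.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def passes_four_fifths(decisions, threshold=0.8):
    """Disparate-impact check: min rate / max rate must be >= threshold."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()) >= threshold

# Hypothetical decisions for two groups on a sensitive variable.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6
print(selection_rates(decisions))     # {'A': 0.8, 'B': 0.4}
print(passes_four_fifths(decisions))  # False: 0.4 / 0.8 = 0.5 < 0.8
```

A failed check is a signal to investigate the training data and features, not a verdict on its own; production audits would use richer fairness metrics than this single ratio.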

3. The Black Box Problem

Concern: AI’s mysterious black box problem is another major ethical challenge for business leaders. It arises when an AI’s decision-making process is not transparent, that is, when it is unclear how a deep learning system arrives at its conclusions. This lack of explainability can lead to mistrust among users and customers, especially when the AI makes decisions that significantly impact individuals or businesses.

Analysis: Generative AI, while powerful, can sometimes produce outputs without clear reasoning, making it difficult for users to trust the results. In sectors where decisions can have profound consequences (like finance, healthcare, and law), the inability to explain how an AI tool arrived at a particular conclusion can be problematic. This opacity can hinder the adoption of AI and may also raise ethical concerns regarding accountability.

Mitigation: As a best-in-breed CRM solution, Salesforce is investing in technologies that enhance the trustworthiness of AI models. Salesforce’s Einstein Trust Layer is designed to ground AI prompts and responses in each customer’s own data, brand, preferences, policies, and business priorities. Business leaders should leverage these features to keep their AI-driven decisions transparent.

Additionally, fostering a culture of ethical AI usage, where decision-makers are encouraged to question and understand AI outputs, is crucial. In cases where AI decisions are critical, a humans-at-the-helm approach can be employed to validate AI-generated outcomes.
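One simple way to realise the humans-at-the-helm idea is a review gate that routes high-impact AI outputs to a person before they take effect. The class, threshold, and action names below are hypothetical, purely to illustrate the pattern:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    """Hold AI-generated actions above an impact threshold for human sign-off."""
    impact_threshold: float = 0.7
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, action: str, impact: float) -> str:
        if impact >= self.impact_threshold:
            self.pending.append(action)   # a human must approve this one
            return "pending review"
        self.executed.append(action)      # low impact: safe to auto-execute
        return "executed"

    def approve(self, action: str) -> None:
        """Human reviewer signs off; the action may now run."""
        self.pending.remove(action)
        self.executed.append(action)

gate = ReviewGate()
print(gate.submit("send renewal reminder email", impact=0.2))  # executed
print(gate.submit("close customer account", impact=0.9))       # pending review
gate.approve("close customer account")
```

The key design choice is that the default for anything above the threshold is to wait: the AI can propose, but only a person can dispose.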

4. Questionable Data Ownership

Concern: Salesforce Generative AI relies on large datasets, some of which may include proprietary or third-party information. There is a growing concern about the ownership and use of data and AI-generated content. Questions arise about who owns the outputs generated by AI, especially when they are derived from third-party data. Business owners also worry about how to protect intellectual property rights in this context.

Analysis: The integration of third-party data into Salesforce Generative AI complicates the issue of data ownership. If AI-generated content inadvertently includes proprietary information from another source, it could lead to legal disputes and ethical dilemmas. Furthermore, the question of who is the owner versus who is the custodian of AI-generated insights (is it Salesforce, the business using the AI, or the end customer?) remains a grey area that needs to be clarified.

Mitigation: Businesses should clearly define data ownership and usage rights in their contracts with Salesforce and any third-party data providers. It’s also important to establish clear guidelines for the use of AI-generated content, particularly when it comes to proprietary information and intellectual property rights. Salesforce offers options for data management, including data governance and data security, which businesses can use to enforce strict control over how data is used and shared within AI systems.

5. Workforce Displacement

Concern: For the longest time, concerns about AI have centred on one question for most people: will AI make their jobs redundant? The fear is that Generative AI, while automating tasks and reducing costs, could replace roles involving both routine and knowledge-based work. As AI tools take over tasks hitherto performed manually by humans, there is a risk that jobs, particularly in administration, data entry, product assembly, inventory management, content, design, and analytics, could be significantly reduced or altered.

Analysis: While AI promises to enhance productivity, it also poses ethical challenges for the workforce. AI is a tangible threat to certain types of jobs, especially those that can easily be automated, which could create resistance within companies towards adopting AI to its full capacity. Furthermore, as AI advances across many fields, the skills required for jobs may shift towards more technical roles, potentially excluding those without the necessary training or background.

Mitigation: To address this concern, businesses should focus on reskilling and upskilling their workforce. For instance, Salesforce offers Trailhead—a library of courses on every topic encompassing the Salesforce ecosystem—that could help your employees understand AI fundamentals and adapt to new roles created by AI. Organisations should also focus on helping their teams transition to roles where creativity, empathy, and complex problem-solving skills are required and where their human expertise remains irreplaceable.

What Steps is Salesforce Taking to Build Ethical AI?

Earlier this year, Salesforce’s President & Chief Legal Officer, Sabastian Niles, spoke about Salesforce’s commitment to trusted data and how Salesforce is serving its customers ethically.

He remarked, “In order to have the AI future we want, we must prioritise trust today. This means establishing a trust-first culture and the right regulations and public policies that balance innovation and safety.”

Since trusting AI starts with trusting the data source, Salesforce prioritises the quality, privacy, and protection of customer data, and focuses on ensuring data accuracy to eliminate bias and break down data silos.

Salesforce’s AI Acceptable Use Policy

In their ‘AI Acceptable Use Policy’, Salesforce has listed the following key points to ensure customers receive a truly ethical AI experience from product development to deployment.

  • Prohibition of using Salesforce AI for automated decision-making processes with legal effects unless a human makes the final decision.
  • Restriction on generating individualised advice typically provided by licensed professionals, especially in financial, legal, and medical contexts.
  • Explicit prohibition on predicting protected characteristics of individuals to prevent discrimination based on factors like race, gender, sexual orientation, and health status.
  • Guidelines against deceptive or harmful activities, such as creating manipulated digital media, plagiarism, and child sexual exploitation material, with an emphasis on integrity and ethical conduct.
  • Cautionary notice that urges Salesforce AI users to consider safety implications before deploying AI.

Salesforce’s Ethical AI Practice Maturity Model

As part of their mission to ensure Generative AI remains safe and inclusive for all, Salesforce developed the Ethical AI Practice Maturity Model. This model analyses how ethical AI practices begin and mature over time, leveraging insights from across the technology industry. It also leans heavily on Salesforce’s own experience as a leader in ethical AI practice.

The Ethical AI Practice Maturity Model consists of four stages: Ad Hoc, Organised & Repeatable, Managed & Sustainable, and Optimised & Innovative.

In the Ad Hoc stage, the focus is on identifying unintended consequences and starting the conversation around ethical AI practices.

Individuals within the organisation informally advocate for considering bias, fairness, accountability, and transparency in AI. Awareness is raised among teams to answer the question “Should we do this?” Ad hoc reviews and risk assessments take place among engaged teams.

The Organised & Repeatable stage is where executive buy-in is established, and a culture promoting responsible AI practices is developed.

A diverse team of experts is formed that includes professionals from various backgrounds (such as human rights, ethics, data science, policy, and more). Company-wide education is conducted to ensure every employee understands their role in responsible AI practices. Formal processes like ethics reviews are integrated into the product development lifecycle.

In the Managed & Sustainable stage, ethical considerations are integrated into the beginning of product development, and reviews occur throughout the product lifecycle.

Bias assessment and mitigation tools are built or acquired to address potential biases in AI systems. Metrics are identified to track progress and impact post-market for regular audits, and employee training is emphasised.

Formal processes like consequence scanning workshops, ethics canvas, and model cards are implemented to document ethical considerations, and ethics reviews become a standard part of the development process to prevent the accumulation of “ethical debt.”
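Of the artefacts mentioned above, a model card is simply a structured record of a model’s purpose, training data, evaluation results, and known limitations. The fields and values below are entirely hypothetical, just to show the shape such a record might take:

```python
import json

# A minimal, hypothetical model card capturing the fields ethics reviews
# typically document: purpose, data, evaluation, and known limitations.
model_card = {
    "model": "lead-scoring-v2",
    "intended_use": "Rank inbound sales leads for follow-up priority.",
    "out_of_scope": ["credit decisions", "employment screening"],
    "training_data": "Anonymised CRM opportunity records, 2019-2023.",
    "evaluation": {"auc": 0.87, "four_fifths_ratio": 0.91},
    "limitations": ["Under-represents leads from new market segments."],
    "last_ethics_review": "2024-05-01",
}
print(json.dumps(model_card, indent=2))
```

Keeping the card in version control alongside the model makes ethics reviews auditable and helps prevent the “ethical debt” the text describes from accumulating silently.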

The Optimised & Innovative stage aims for continuous improvement in ethical AI practices.

It consists of end-to-end inclusive design practices that combine ethical AI product and engineering development with privacy, legal, user research, and accessibility considerations. New features are introduced to help customers use AI responsibly, and ethical debt is addressed in product roadmaps.

Salesforce’s Trusted AI Principles

Last year, Salesforce also published guidelines for responsible Generative AI development in order to mitigate risks and foster inclusiveness, ethics, and safety in the use of Salesforce Einstein AI. Below, we list the 5 key guidelines for ethical Generative AI development, based on Salesforce’s Trusted AI Principles:

1. Accuracy: Salesforce will ensure that their AI models deliver accurate and reliable results by allowing customers to train models on their own data and share their concerns regarding the veracity of the AI’s response.

This can be done by citing sources for the AI’s responses (e.g., via chain-of-thought prompts), highlighting points that need to be verified (e.g., statistics, recommendations, dates), and creating guardrails that prevent high-stakes tasks from being fully automated (e.g., launching code into a production environment without human review).
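The “highlight points that need to be verified” idea can be sketched with a simple scanner that marks statistics, dates, and figures in generated text for a human to check. The patterns below are illustrative only, not Salesforce’s implementation:

```python
import re

# Patterns for claims a reviewer should verify: percentages, years, figures.
VERIFY_PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:,\d{3})*(?:\.\d+)?%"), "statistic"),
    (re.compile(r"\b(?:19|20)\d{2}\b"), "date"),
    (re.compile(r"\$\d[\d,]*(?:\.\d+)?"), "figure"),
]

def flag_for_verification(text: str):
    """Return (span, kind) pairs a human reviewer should double-check."""
    flags = []
    for pattern, kind in VERIFY_PATTERNS:
        for match in pattern.finditer(text):
            flags.append((match.group(), kind))
    return flags

reply = "Revenue grew 32% in 2023, reaching $4,500,000."
print(flag_for_verification(reply))
```

A production system would attach these flags to the rendered response so reviewers see exactly which claims to confirm against source data.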

2. Safety: Salesforce will focus on mitigating bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, and red teaming. PII (personally identifiable information) will be protected, and guardrails will be created to prevent additional harm (e.g., forcing code to be published to a sandbox rather than automatically pushed to production).

3. Honesty: When collecting data to train and evaluate their AI models, Salesforce will ensure that they have consent to use data (e.g., open-source, user-provided). They will focus on ensuring transparency when an AI autonomously delivers content (e.g., chatbot response to a consumer, use of watermarks, etc.).

4. Empowerment: In cases where AI is only expected to play a supporting role to your workforce, or where human judgment is required, Salesforce will focus on identifying the appropriate balance between human and AI involvement, and on making their AI solutions accessible to all (e.g., generating ALT text to accompany images).

5. Sustainability: Another major focus for Salesforce is developing accurate, right-sized models where possible. This is not only about reducing carbon footprint: smaller, better-trained AI models have been shown to outperform larger, more sparsely trained ones.

Key Takeaway

In spite of the pressing issues around data privacy and bias within AI algorithms, Salesforce is taking every measure possible to ensure that Generative AI is utilised safely, ethically, and accurately. Salesforce’s Einstein Trust Layer is built around the 5 core principles of responsibility, transparency, accountability, inclusivity, and empowerment. This level of commitment towards the ethical use of AI sets Salesforce Einstein AI apart from the other AI products in the market today.

Leverage the Full Power of Salesforce Generative AI with Corptec Technology Partners

As demonstrated in this article, Salesforce is committed to the integration of ethical AI practices that help enhance their customers’ myriad business functions while maintaining trust and fairness. With responsible and trustworthy AI, your organisation can reap the benefits of this world-class CRM platform.

Moreover, with the additional advantage of the Einstein Trust Layer and the Ethical AI Maturity Model, you can confidently trust Salesforce AI with your company secrets and your customer data while reaping the benefits of Generative AI.

If you would like to commence your journey with Generative AI powered by Salesforce, Corptec Technology Partners can show you the way.

Corptec can empower you to strategically adopt AI across your business functions by helping you integrate Salesforce Einstein into your CRM. Our AI Coach can help you:

  • Assess your AI readiness
  • Explore the capabilities and value of AI for your specific business model
  • Build an actionable roadmap to get you started with Salesforce Generative AI

Adopt Generative AI without worry with Salesforce Einstein’s Generative AI and Copilot services. Book your discovery session today!

Improve Sales Performance Using Generative AI by Salesforce

As a trusted Salesforce consulting partner since 2018, Corptec Technology Partners has designed customer journeys for organisations across Australia by helping them maximise the potential of this leading CRM platform. We offer full-cycle implementation of Sales, Service, Marketing, and Financial Services Clouds, and support custom Salesforce integrations with any ERP or third-party applications.

Interested in learning how Corptec can help you optimise Salesforce Sales Cloud and ace your sales goals? Book a meeting with our Salesforce expert today!


About Corptec

We collaborate with businesses to use technology to manage and transform their operations. Our focus is to provide customized technology solutions that combine the latest advances in digital transformation with a deep understanding of your business goals.
