Salesforce has responded to a White House Executive Order by setting out seven actions companies can take to build trust in AI.
On October 30, 2023, President Biden issued an executive order establishing new standards for AI safety and security to protect the privacy of American citizens, consumers, and workers.
In response, Salesforce has issued AI guidance covering seven areas: protecting privacy, informing users when they are interacting with AI, using smaller models, updating policies, encouraging inclusivity, taking preventative measures, and innovating with safety.
Paula Goldman, Chief Ethical and Human Use Officer at Salesforce, and Eric Loeb, EVP of Government Affairs at Salesforce, welcomed the news in their response to the AI Executive Order: “It’s energizing to see governments take definitive and coordinated action toward building trust in AI.
“From the EU’s AI Act in 2021 to this week’s U.S. Executive Order, governments recognize that they have an essential role to play at the intersection of technology and society.
Creating risk-based frameworks, pushing for commitments to ethical AI design and development, and convening multi-stakeholder groups are just a few key areas where policymakers must help lead the way.”
According to Goldman and Loeb, the advancements Salesforce has been making to ensure AI safety are aligned with the White House’s proposal across privacy, safety, equity, global cooperation, and government adoption.
For example, Salesforce has been calling for data privacy legislation and offering guidance and expertise to governing bodies and governments around the world, all of which are key components of the Executive Order.
Salesforce has already been investing in ethical AI for over a decade, including through its $500 million venture fund for AI start-ups.
In September this year, Salesforce also agreed to acquire Airkit.ai, a low-code, AI-powered self-service application builder.
Seven Ways to Build AI Trust
Here are Salesforce’s seven AI recommendations, designed to help companies foster trust in AI:
- Privacy Protections: Salesforce believes companies should only use datasets that respect privacy and consent. The AI revolution requires comprehensive privacy legislation to keep people’s data safe, along with further refinements to AI legislation in the future.
- Keeping Users Informed: Companies should let users know when they are interacting with AI, Salesforce says. This includes making it clear when recommendations derive from AI, particularly for decisions with significant consequences.
- Using Smaller Models: Bigger models are not always the best option. Smaller models can offer high-quality responses, particularly when used for domain-specific purposes, and they can also carry a smaller carbon footprint.
- Updating Policies: Although much attention is being given to models, data and apps should also be examined in order to address high-risk use cases.
- Encouraging Inclusivity: As well as protecting their citizens, governments should also be encouraging inclusive innovation, which involves creating and providing access to privacy-preserving datasets that are tailored to their countries.
- Preventative Measures: Advanced AI should be considered alongside ordinary AI so that safety is ensured as these tools shift from optional extras to technologies we depend upon.
- Innovating with Safety: To ensure AI safety and security while increasing the rate of innovation, it is important to prioritize data privacy and create transparency standards for AI systems.
Elsewhere, Salesforce announced the general availability of its Anypoint Code Builder last month.
The solution – which is part of MuleSoft – offers developers an environment where they can build APIs and integrations with “modern tooling”.