How to Find Security Vulnerabilities in Your Salesforce Apex Code
Salesforce Apex security vulnerabilities: four patterns that appear most often in production orgs, why manual review misses them, and how to prioritize.
Salesforce admins spend a lot of time on configuration security: MFA settings, permission sets, sharing rules, IP restrictions. That attention is warranted. But it leaves a whole category of risk uncovered: the code running inside your org.
Apex classes, triggers, Lightning Web Components, and Aura bundles can contain vulnerabilities that land just as hard as a misconfigured permission. A hardcoded API key sits in your org's metadata, visible to every developer and every export tool that touches it. A class running without sharing bypasses the permission model you spent months designing. A SOQL query built from user input is a data exfiltration path that does not show up anywhere in your audit log.
This post covers why Apex code is a security surface most admins overlook, the four vulnerability patterns that show up most often in production orgs, why manual code review does not reliably catch them, and how to decide which classes to look at first.
Why Apex Code Is a Security Risk Most Admins Overlook
Salesforce admins are trained to think about security at the configuration layer: who has what permissions, what sharing rules are in place, what session settings are active. The Setup Audit Trail captures all of that. Configuration changes are visible, logged, and reviewable.
Apex code works differently. It runs server-side with elevated trust. It executes under the running user's permissions by default, but it can be written to bypass that model. It can make callouts to external systems. It can query and modify data across object boundaries. And code changes do not appear in audit-friendly formats that make their risk obvious.
The specific problem: many admins either do not write Apex themselves, or they review it primarily for functional correctness. Does it do what the developer intended? Security review requires a different question: what can this code do that it should not be able to do?
A few things make this harder in practice.
A developer under deadline pressure adds without sharing to get past a governor limit error. The fix works, the ticket closes, and the security implication never comes up. An integration with an external system needs an API key. Putting it directly in the code is the fastest path, so that is what happens. The key is now in version history and visible to anyone with Metadata API access. A Lightning component that displays user input without encoding it works correctly in testing. It just also happens to be exploitable.
Then the org grows. You inherit 200 Apex classes, many written before your team arrived. No one quite knows what all of them do.
These vulnerabilities do not require a skilled attacker to exploit. Many are reachable by any user with basic Salesforce access who knows what to look for.
The Four Vulnerability Patterns That Show Up Most Often
These patterns appear repeatedly in Salesforce org security reviews. They are not theoretical.
1. Hardcoded credentials and API keys
The most common finding. A developer needs to call an external API and takes the path of least resistance:
String apiKey = 'sk-prod-a1b2c3d4e5f6...';
It also shows up as:
- Cleartext tokens in code comments (left in during testing, never removed)
- Integration usernames and passwords hardcoded in HttpRequest setup
- Named Credential workarounds where the credential ends up in a static variable
- AWS access keys, Stripe secret keys, internal service tokens
The risk is not just that someone might find the key. The key is now in your org's source metadata, visible to any developer with Metadata API access, any tool that reads component source, and any org backup or export. Rotating it requires a code change, not an admin update.
Where to look: Apex classes that make HTTP callouts, any class with variable names like apiKey, secretKey, token, password, or credential. Also check for Base64-encoded strings, which are sometimes used to obscure hardcoded values.
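A pass like this is easy to automate with a simple pattern scan. The sketch below is illustrative only — the regexes cover the variable names listed above plus a couple of common key prefixes (Stripe-style `sk_`, AWS `AKIA`), and a real scanner would need a broader rule set tuned to your org:

```python
import re

# Illustrative patterns only; tune the names and prefixes for your own codebase.
SUSPECT_NAME = re.compile(
    r"\b(apiKey|secretKey|token|password|credential)\b\s*=\s*'[^']+'",
    re.IGNORECASE,
)
# Vendor-style key prefixes as a rough heuristic, not an exhaustive list.
SUSPECT_VALUE = re.compile(r"'(sk[_-][A-Za-z0-9_-]{8,}|AKIA[A-Z0-9]{16})'")

def find_hardcoded_credentials(apex_source: str) -> list[str]:
    """Return the lines of an Apex class body that look like hardcoded secrets."""
    hits = []
    for line in apex_source.splitlines():
        if SUSPECT_NAME.search(line) or SUSPECT_VALUE.search(line):
            hits.append(line.strip())
    return hits
```

Run it over exported class bodies (e.g. from a Metadata API retrieve) and review every hit by hand — heuristics like these produce false positives, but they never get bored on class 180 of 200.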
2. Classes running without sharing
Apex classes run in two modes. By default, a class respects the running user's sharing rules. A class declared without sharing has full access to all records regardless of what the running user can see.
public without sharing class DataHelper {
There are legitimate uses for this. A class that creates a record on behalf of a user who lacks create access, for example. But without sharing is frequently added as a performance shortcut or a debugging workaround that never gets cleaned up.
The security problem: a user with limited visibility can invoke a method in a without sharing class and get back records they are not supposed to see. If that class returns data to the UI or passes it to a trigger that sends it externally, the user has bypassed your sharing model.
Where to look: classes that query multiple objects, trigger handler classes, any class called from a Lightning component or Visualforce page. Also check for classes with no sharing declaration at all. Those inherit from their caller, which produces unpredictable behavior when the caller changes.
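Checking sharing declarations is also scriptable. This sketch classifies a class body by its declaration — including the "no declaration at all" case, which inherits from the caller (the regex is a simplification; it ignores inner classes and unusual modifiers):

```python
import re

# Matches the outer class declaration's sharing mode, if any.
DECL = re.compile(
    r"\b(public|global)\s+(without sharing|with sharing|inherited sharing)?\s*class\b"
)

def sharing_mode(apex_source: str) -> str:
    """Classify a class's sharing declaration: 'with', 'without', 'inherited', or 'none'."""
    m = DECL.search(apex_source)
    if m is None or m.group(2) is None:
        return "none"  # no declaration: sharing behavior inherits from the caller
    return m.group(2).split()[0]  # 'with' / 'without' / 'inherited'
```

Flag every `without` for review, and treat `none` as a finding in its own right — those classes change behavior whenever their callers do.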
3. SOQL injection
SOQL injection is the Salesforce equivalent of SQL injection. It happens when user-supplied input gets concatenated directly into a SOQL query string:
String query = 'SELECT Id, Name FROM Account WHERE Name = \'' + userInput + '\'';
List<Account> results = Database.query(query);
An attacker who controls userInput can close the string literal early and append additional query logic. Depending on the class's sharing mode and which objects it queries, this can pull records the user should not see, enumerate user data, or return results that break downstream logic.
Dynamic SOQL is not inherently dangerous. It is necessary in many real scenarios. The issue is a dynamic portion that includes unvalidated user input.
Where to look: any class using Database.query() with string concatenation. Specifically, patterns where user input from a component controller, a REST API parameter, or a trigger context variable feeds directly into a query string.
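A first-cut detector for this pattern just looks for `Database.query()` calls whose argument contains string concatenation. The sketch below is deliberately naive — it only catches concatenation inside the call itself, and a real scanner would also trace query strings built up on earlier lines:

```python
import re

# Flags Database.query(...) / Database.queryWithBinds(...) call sites
# whose argument expression contains a '+' concatenation.
DYNAMIC_QUERY = re.compile(
    r"Database\.query(?:WithBinds)?\s*\(([^;]*)\)", re.IGNORECASE
)

def flag_soql_concatenation(apex_source: str) -> list[str]:
    """Return query call sites that concatenate directly into the query string."""
    findings = []
    for m in DYNAMIC_QUERY.finditer(apex_source):
        if "+" in m.group(1):
            findings.append(m.group(0).strip())
    return findings
```

Every hit needs a human decision: is the concatenated piece a constant, a validated value, or raw user input?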
4. Cross-site scripting in Lightning components
LWC and Aura components render data in the browser. When a component renders user-controlled content without encoding it, that content can include executable script:
// Aura component (unsafe)
component.set('v.messageHTML', event.getParam('userContent'));
<!-- Template (unsafe) -->
<lightning-formatted-rich-text value={rawUserInput}></lightning-formatted-rich-text>
LWC's template engine escapes values bound with {} by default, but that protection disappears when components use innerHTML, lwc:ref with manual DOM manipulation, lightning-formatted-rich-text with unsanitized input, or Aura's v.* bindings feeding into aura:unescapedHtml.
Impact varies by context. In a customer portal or community, an XSS vulnerability lets an attacker run arbitrary JavaScript in other users' authenticated sessions.
Where to look: any component that renders user-supplied content, any component with HTML-rendering attributes, any use of innerHTML or unescaped bindings in component JavaScript controllers.
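A coarse but useful triage step is to grep component bundles for the sinks listed above. This sketch flags any line touching one of them — the sink list is illustrative, not exhaustive, and a hit means "review this line," not "this is exploitable":

```python
# Sinks that bypass or sidestep LWC's default {} escaping.
# Illustrative list only; extend it for your own component patterns.
UNSAFE_SINKS = (
    "innerHTML",
    "aura:unescapedHtml",
    "lightning-formatted-rich-text",
)

def flag_unsafe_rendering(component_source: str) -> list[str]:
    """Return lines in component JS or markup that touch an unescaped rendering sink."""
    return [
        line.strip()
        for line in component_source.splitlines()
        if any(sink in line for sink in UNSAFE_SINKS)
    ]
```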
Why Manual Code Review Misses These Patterns
Manual review works when a reviewer knows what to look for and has time to look carefully. In most orgs, neither holds.
Volume is the obvious issue. A mature Salesforce org can have hundreds of Apex classes. A developer doing a security review is focused on functional bugs, not security patterns. The without sharing declaration on line 1 of a 400-line class is easy to miss when the business logic starts on line 50.
Context is harder. A SOQL injection risk in a utility class is only dangerous if that class is reachable from somewhere that accepts user input. Tracing that call chain across a large codebase, manually, is where reviews break down.
Familiarity is the subtler problem. A hardcoded credential that has been in the same class for two years stops looking dangerous. It just looks like the way things work. Developers reviewing code they or their teammates wrote see what they expect to see.
And review cadence is inconsistent by nature. Code gets reviewed when it ships. It does not get reviewed again when someone modifies a class six months later to add a new callout, or changes a sharing declaration to fix an unrelated bug. The vulnerability introduced in that update slips through.
Pattern matching does not replace good development practices. It catches the things that accumulate between deliberate reviews, which in a real org is a significant amount.
How to Prioritize Which Classes to Scan
You cannot look at everything at once. Start with the classes that carry the most risk if they turn out to be vulnerable.
Sort by LastModifiedDate, descending. A class modified last week is more likely to carry a recently introduced vulnerability than one untouched for three years. Salesforce stores LastModifiedDate on every component. Any systematic review should start at the top of that list.
Prioritize classes that handle external data. Apex classes that receive data from outside the org (REST API endpoints, inbound callouts, webhook handlers) are the most likely targets for injection attacks, and the most common location for hardcoded credentials.
Pay attention to public-facing components. Any LWC or Aura component exposed in a community, portal, or public Salesforce Site is reachable by unauthenticated or low-privilege users. An XSS vulnerability there is materially worse than the same issue in an internal admin component.
Look at trigger handlers. Triggers execute on data changes and frequently run with elevated access to handle cross-object logic. A without sharing declaration in a trigger handler means any data change (including one initiated by a user with minimal permissions) can touch the full record set.
Check recently onboarded developers' work. Developers new to Salesforce are more likely to apply patterns from other platforms that are unsafe in Apex. SOQL injection shows up frequently in code written by developers with SQL backgrounds who are not yet familiar with Apex's parameterized query options.
One practical rhythm: each sprint, scan the classes modified in that sprint. That way security review happens close to when the code was written, not as an annual retrospective.
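The class list itself is one query away (`SELECT Name, LastModifiedDate FROM ApexClass ORDER BY LastModifiedDate DESC`). Once you have those records, the sort-and-window step looks like this sketch — the record shape is an assumption, matching the fields in that query:

```python
from datetime import datetime, timedelta

def triage_order(classes: list[dict], now: datetime, days: int = 90) -> list[str]:
    """Names of classes modified within the window, most recently modified first."""
    cutoff = now - timedelta(days=days)
    recent = [c for c in classes if c["LastModifiedDate"] >= cutoff]
    recent.sort(key=lambda c: c["LastModifiedDate"], reverse=True)
    return [c["Name"] for c in recent]
```

For a sprint-cadence review, shrink `days` to the sprint length and the output is exactly the set of classes worth a fresh look.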
How AuditForce Fits Here
AuditForce's Code Scanner gives you a list of your Apex classes, LWC bundles, and Aura components sorted by last modified date. Click a component and you get a security score plus the specific findings that drove it.
Each component is scored 0 to 100. A Critical finding (hardcoded credential, active SOQL injection) deducts 40 points. A High finding (unsafe without sharing, potential XSS) deducts 20. The score is a triage signal, not a compliance badge. It tells you where to start.
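The arithmetic behind a score like that is simple. As a sketch of the deductions described above (flooring at zero is an assumption, not a documented AuditForce behavior):

```python
def component_score(critical: int, high: int) -> int:
    """0-100 triage score: each Critical deducts 40, each High deducts 20."""
    return max(0, 100 - 40 * critical - 20 * high)
```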
Results are cached by LastModifiedDate. If a class has not changed since the last scan, the previous result comes back instantly, with no scan credit consumed. You only use a credit when the class actually changed. That makes it practical to scan modified classes regularly without burning through your weekly limit.
Component source is never stored. Code is pulled on-demand via the Metadata API, scanned, and discarded. Only the findings and score are retained.
This covers the blind spot in the change monitoring picture described in The 5 Most Dangerous Salesforce Security Changes. The Setup Audit Trail tells you what changed in your configuration. It does not tell you what is wrong in your code. They cover different parts of the same attack surface.
A Practical Starting Point
If you have not reviewed your org's Apex code for security issues, start with three questions:
- Which classes were modified in the last 90 days?
- Do any of those classes make HTTP callouts or handle external input?
- Do any of those classes run without sharing?
That cuts a large org down to a workable list. From there, look for the four patterns: hardcoded credentials, unsafe sharing declarations, dynamic SOQL with unvalidated input, unescaped rendering in Lightning components.
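If you want to walk a retrieved codebase in one pass, the three Apex-side patterns reduce to a small script like this sketch (one deliberately simple regex per pattern — a starting point, not a scanner; XSS checks live in LWC/Aura bundles rather than `.cls` files):

```python
import re
from pathlib import Path

# One illustrative regex per pattern; real scanners need far more context.
PATTERNS = {
    "hardcoded credential": re.compile(
        r"(apiKey|secretKey|token|password)\s*=\s*'[^']+'", re.I
    ),
    "without sharing": re.compile(r"\bwithout sharing\b"),
    "dynamic SOQL concatenation": re.compile(r"Database\.query\s*\([^;)]*\+"),
}

def scan_class(source: str) -> list[str]:
    """Return the names of the patterns found in one Apex class body."""
    return [name for name, rx in PATTERNS.items() if rx.search(source)]

def scan_directory(root: str) -> dict[str, list[str]]:
    """Map each .cls file under root to its findings."""
    return {p.name: scan_class(p.read_text()) for p in Path(root).rglob("*.cls")}
```

Point `scan_directory` at a Metadata API retrieve and you get a first-pass worklist in seconds.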
You will not fix everything in one pass. That is fine. Find the highest-risk classes, fix the clear issues, and set up a process that catches new ones as they get introduced. Apex code security is not a project with a finish line. It is part of running a mature Salesforce org.