Published • JSON Web Tools Editorial Team
JSON is the backbone of modern APIs, but it introduces real security risks when handled carelessly. This guide covers the most common JSON security vulnerabilities — injection, prototype pollution, JWT weaknesses, PII leakage, and deserialization flaws — with concrete code examples and fixes for each.
JSON injection is the most straightforward JSON security vulnerability. It occurs when user-supplied input is concatenated directly into a JSON string instead of being serialized by a proper JSON encoder. The result is that an attacker can break out of the intended string value and inject their own JSON structure.
Consider a server that builds a log entry by concatenating user input:
// VULNERABLE — never do this
const username = req.body.username;
const logEntry = '{"user": "' + username + '", "action": "login"}';
db.insertLog(JSON.parse(logEntry));
// Attacker sends username = alice", "role": "admin", "x": "
// Result:
// {"user": "alice", "role": "admin", "x": "", "action": "login"}
// The attacker injected a "role": "admin" field!
The correct fix is always to build the object first and then serialize it. Never construct JSON with string concatenation when any value comes from user input:
// SAFE — let the serializer handle encoding
const logEntry = JSON.stringify({
  user: req.body.username, // safely escaped
  action: "login"
});
db.insertLog(JSON.parse(logEntry));
OWASP's documentation on injection flaws records real-world cases where this pattern was exploited to bypass authorization checks. JSON injection is particularly dangerous when the injected payload is later evaluated by a NoSQL database such as MongoDB (for example, through query operators) or rendered by a templating engine.
For server-side validation, always use a schema validation library such as Ajv to verify the structure and types of parsed JSON before using any values in business logic. Our JSON Validator can help you catch structural issues during development.
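As a minimal, dependency-free illustration of what a schema validator automates (a sketch only; a real library like Ajv handles this declaratively, and validateLogEntry is a hypothetical helper, not a library API), a structural check for the log-entry shape above might look like:

```javascript
// Hand-rolled structural check: a sketch of what a schema validator does.
// validateLogEntry is a hypothetical helper name, not a library API.
function validateLogEntry(data) {
  if (typeof data !== 'object' || data === null || Array.isArray(data)) {
    return { valid: false, error: 'expected a JSON object' };
  }
  if (typeof data.user !== 'string' || data.user.length > 64) {
    return { valid: false, error: 'user must be a string of at most 64 chars' };
  }
  if (typeof data.action !== 'string') {
    return { valid: false, error: 'action must be a string' };
  }
  // Reject unexpected extra fields; this is what defeats injected keys
  const allowed = new Set(['user', 'action']);
  for (const key of Object.keys(data)) {
    if (!allowed.has(key)) {
      return { valid: false, error: 'unexpected field: ' + key };
    }
  }
  return { valid: true };
}
```

Rejecting unknown fields (rather than silently ignoring them) is what catches the injected "role" key from the earlier example.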
Prototype pollution is a JavaScript-specific vulnerability where an attacker manipulates a JSON payload to include special keys — __proto__, constructor, or prototype — that modify the global Object.prototype when the payload is merged into an object. Because all JavaScript objects inherit from Object.prototype, this can affect every object in the application.
Here is a typical attack scenario using a recursive merge function:
// VULNERABLE recursive merge
function mergeDeep(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object') {
      if (!target[key]) target[key] = {};
      mergeDeep(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
}
// Attacker sends this JSON:
const malicious = JSON.parse('{"__proto__": {"isAdmin": true}}');
mergeDeep({}, malicious);
// Now every plain object inherits isAdmin: true
console.log({}.isAdmin); // true — pollution successful
The most robust defenses against prototype pollution are: (1) use Object.create(null) to create dictionary objects that have no prototype chain; (2) validate incoming JSON keys against a strict whitelist before merging; (3) use libraries like lodash 4.17.21+ or deepmerge which have explicit prototype pollution protections; (4) freeze the Object prototype in environments where that is feasible (Object.freeze(Object.prototype)).
The OWASP Top 10 (A03: Injection) covers prototype pollution as part of the broader injection category. GitHub Security Lab has documented dozens of prototype pollution CVEs in popular npm packages, many triggered through JSON merge operations on untrusted payloads.
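A hardened variant of the vulnerable merge above, combining defenses (1) and (2), skips the dangerous keys outright and creates prototype-less child objects (a sketch; safeMergeDeep is a hypothetical helper name, not a library API):

```javascript
// Hardened deep merge: drops prototype-polluting keys and uses
// Object.create(null) so nested targets have no prototype chain.
// safeMergeDeep is a hypothetical helper, not a library API.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function safeMergeDeep(target, source) {
  for (const key of Object.keys(source)) {
    if (FORBIDDEN_KEYS.has(key)) continue; // silently drop dangerous keys
    const val = source[key];
    if (val !== null && typeof val === 'object' && !Array.isArray(val)) {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = Object.create(null); // no prototype to pollute
      }
      safeMergeDeep(target[key], val);
    } else {
      target[key] = val;
    }
  }
  return target;
}
```

With this version, the malicious payload from the attack scenario merges its harmless keys but leaves Object.prototype untouched.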
JSON Web Tokens (JWTs) are themselves JSON objects — the header and payload are both JSON, base64url-encoded. The security of a JWT depends entirely on signature verification. When that verification is misconfigured, attackers can forge tokens with arbitrary claims.
The most notorious JWT vulnerability is the alg:none attack. The JWT specification allows "alg": "none" in the header to indicate an unsigned token. Vulnerable libraries accept such tokens as valid without checking a signature:
// Attacker crafts this header + payload:
// Header: {"alg":"none","typ":"JWT"}
// Payload: {"sub":"1","role":"admin"}
// Base64url-encoded (no signature segment):
// eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0
// .eyJzdWIiOiIxIiwicm9sZSI6ImFkbWluIn0
// .
// VULNERABLE library accepts this as valid:
jwt.verify(token, secret, function(err, decoded) {
  // decoded.role === 'admin' — attacker is now admin
});
// SAFE — explicitly whitelist allowed algorithms:
jwt.verify(token, secret, { algorithms: ['HS256'] }, callback);
A second common JWT vulnerability is using a weak or guessable secret. HS256-signed JWTs can be brute-forced offline once an attacker obtains a valid token. Any secret shorter than 256 bits or composed of dictionary words is at risk. Use a cryptographically random secret of at least 256 bits and rotate it regularly.
The authoritative reference is RFC 7519 (JWT specification). For algorithm confusion attacks — where an RS256 key is accepted as an HS256 secret — see the OWASP JWT Security Cheat Sheet. Use our JWT Decoder to inspect token headers and claims without verifying the signature.
One of the most common — and most overlooked — JSON security failures is logging sensitive personal information. Application logs routinely capture request bodies, API responses, and database query results. When those payloads contain passwords, tokens, credit card numbers, SSNs, or email addresses, logging them creates serious compliance and breach risks under regulations like GDPR, HIPAA, and PCI-DSS.
The fix is a centralized log sanitizer that strips or masks known sensitive fields before any JSON is written to a log sink:
const SENSITIVE_KEYS = new Set([
  // All entries lowercase, because lookups use key.toLowerCase()
  'password', 'passwd', 'secret', 'token', 'authorization',
  'apikey', 'api_key', 'creditcard', 'cardnumber', 'cvv',
  'ssn', 'socialsecurity', 'dob', 'dateofbirth'
]);
function sanitizeForLog(obj, depth = 0) {
  if (depth > 10 || typeof obj !== 'object' || obj === null) return obj;
  const clean = Array.isArray(obj) ? [] : {};
  for (const [key, val] of Object.entries(obj)) {
    if (SENSITIVE_KEYS.has(key.toLowerCase())) {
      clean[key] = '[REDACTED]';
    } else if (typeof val === 'object') {
      clean[key] = sanitizeForLog(val, depth + 1);
    } else {
      clean[key] = val;
    }
  }
  return clean;
}
// Usage:
logger.info('request', sanitizeForLog(req.body));
Beyond manual sanitization, consider structured logging libraries (Pino, Winston) that support field-level redaction via configuration. The MDN Content Security Policy documentation is useful context for understanding the broader category of information disclosure risks in web applications.
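For instance, Pino supports declarative redaction through its options object (a sketch assuming the pino package is installed; the paths shown are illustrative and follow fast-redact syntax, which Pino uses internally):

```javascript
// Pino logger with field-level redaction (assumes: npm install pino).
const pino = require('pino');

const logger = pino({
  redact: {
    // Illustrative paths; '*' matches any single key level
    paths: ['password', '*.password', 'req.headers.authorization'],
    censor: '[REDACTED]'
  }
});
```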
NIST Privacy Framework guidelines recommend treating log data with the same sensitivity classification as the production data it represents. Apply role-based access controls to log storage, enable log encryption at rest, and set retention policies that automatically purge logs containing PII after the minimum required period.
Accepting JSON without validating the Content-Type header opens the door to cross-site request forgery (CSRF) attacks and content-type sniffing exploits. When a server processes a request body as JSON regardless of the declared content type, an attacker can submit form-encoded data or multipart data that happens to parse as JSON in some libraries.
Always explicitly require Content-Type: application/json for JSON endpoints and reject requests that do not match:
// Express.js middleware — validate Content-Type
app.use('/api', (req, res, next) => {
  if (req.method !== 'GET' && req.method !== 'HEAD') {
    const ct = req.headers['content-type'] || '';
    if (!ct.includes('application/json')) {
      return res.status(415).json({
        error: 'Unsupported Media Type — use application/json'
      });
    }
  }
  next();
});
CORS (Cross-Origin Resource Sharing) is equally important. A permissive CORS policy (Access-Control-Allow-Origin: *) on authenticated JSON endpoints allows any origin to read the response. For authenticated APIs, always restrict the allowed origin to your own domains and use Access-Control-Allow-Credentials: true only when necessary. Avoid wildcards on credentialed endpoints.
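An origin allowlist can be as small as the following sketch (corsHeaders is a hypothetical helper and the domain is a placeholder; adapt the shape to your framework or CORS middleware):

```javascript
// Echo the Origin header back only when it is on an explicit allowlist.
// corsHeaders is a hypothetical helper; the domain is a placeholder.
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

function corsHeaders(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return {}; // unknown origin: no CORS headers
  return {
    'Access-Control-Allow-Origin': origin, // never '*' with credentials
    'Access-Control-Allow-Credentials': 'true',
    'Vary': 'Origin' // caches must not reuse this response across origins
  };
}
```

Echoing the validated origin back (rather than a wildcard) is what allows credentialed requests while still rejecting every unlisted origin.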
Set the X-Content-Type-Options: nosniff response header on all JSON responses to prevent browsers from ignoring the declared content type and attempting to execute JSON as JavaScript. This mitigates content-type sniffing attacks that can occur when JSON is inadvertently served from a script-accessible URL.
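With Node's built-in http module, setting the header alongside the JSON body looks like this (sendJson is a hypothetical helper name, not a standard API):

```javascript
// Write a JSON response with nosniff set; plain Node, no framework.
// sendJson is a hypothetical helper, not a standard API.
function sendJson(res, statusCode, body) {
  const payload = JSON.stringify(body);
  res.writeHead(statusCode, {
    'Content-Type': 'application/json; charset=utf-8',
    'X-Content-Type-Options': 'nosniff' // browser must honor the declared type
  });
  res.end(payload);
}
```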
Calling JSON.parse() on untrusted input without error handling will crash your application when the input is malformed. But there is a subtler danger: some applications apply regular expressions to JSON strings before parsing — for example, to strip comments or detect patterns. Maliciously crafted JSON can trigger catastrophic backtracking in poorly written regex patterns, causing a ReDoS (Regular Expression Denial of Service) that blocks the event loop for seconds or minutes.
Always use a try/catch around JSON.parse() and avoid applying complex regex to large JSON strings from untrusted sources:
// Safe JSON.parse with type checking and size limit
function safeParseJSON(input, maxBytes = 1_000_000) {
  if (typeof input !== 'string') return { ok: false, error: 'Input must be a string' };
  if (Buffer.byteLength(input, 'utf8') > maxBytes) {
    return { ok: false, error: 'Input exceeds maximum allowed size' };
  }
  try {
    const data = JSON.parse(input);
    return { ok: true, data };
  } catch (e) {
    return { ok: false, error: e.message };
  }
}
// Using the reviver for type safety:
JSON.parse(userInput, (key, value) => {
  // Reject unexpected types for specific keys
  if (key === 'age' && typeof value !== 'number') {
    throw new TypeError('age must be a number');
  }
  return value;
});
For very large JSON payloads — log exports, data migrations, API bulk operations — prefer streaming parsers such as stream-json (Node.js) or ijson (Python). Streaming parsers process JSON incrementally without loading the entire document into memory, eliminating both memory exhaustion and the window for ReDoS via regex on large strings.
Enforce input size limits at the HTTP server layer. Express.js's express.json({ limit: '100kb' }) rejects oversized bodies before they reach application code. Nginx and HAProxy support equivalent client_max_body_size directives. Never rely solely on application-level checks — layer your defenses. Use our JSON Security Scanner to detect common vulnerabilities in JSON payloads.
Unlike languages such as Java or Python (with pickle), JavaScript's JSON.parse() does not execute code during deserialization, so it is not vulnerable to the classic "deserialization gadget chain" attacks. However, JSON deserialization vulnerabilities do occur in other languages and through indirect execution paths.
In Python, deserializing what appears to be JSON-like data with pickle or an unsafe yaml.load() can execute arbitrary code. json.loads() itself does not execute code, but developers sometimes pass JSON-like strings to eval(), which does; ast.literal_eval() is safer because it only evaluates literals, though it can still fail or exhaust resources on deeply nested input.
In Java, frameworks that use reflection-based deserialization (Jackson with polymorphic type handling, Gson) can be exploited if the @class or @type field in JSON is trusted without validation. Disable polymorphic deserialization unless it is strictly necessary:
// Java — Jackson: keep default (polymorphic) typing disabled
ObjectMapper mapper = new ObjectMapper();
// DO NOT enable this without careful allowlisting:
// mapper.enableDefaultTyping(); // DANGEROUS (deprecated for this reason)
// SAFE — if polymorphism is required, allowlist acceptable subtypes
// explicitly (Jackson 2.10+):
PolymorphicTypeValidator ptv = BasicPolymorphicTypeValidator.builder()
    .allowIfSubType("com.example.model.") // only your own model package
    .build();
mapper.activateDefaultTyping(ptv, ObjectMapper.DefaultTyping.NON_FINAL);
// Simplest safe approach — deserialize to a specific POJO:
UserProfile profile = mapper.readValue(jsonString, UserProfile.class);
Always deserialize JSON into the most specific type possible rather than into generic Object, Map<String, Object>, or dict. The stricter the target type, the smaller the attack surface for unexpected gadget chains or type confusion. Validate the deserialized object with a schema validator immediately after parsing.
Use this checklist when reviewing any application that produces or consumes JSON:
- Serialize with a real encoder: JSON.stringify(), json.dumps(), or equivalent.
- Require Content-Type: application/json for JSON endpoints.
- Reject JWTs with alg: none; explicitly specify { algorithms: ['HS256'] } or equivalent.
- Merge untrusted JSON defensively; use Object.create(null) for hash maps.
- Never use Access-Control-Allow-Origin: * on authenticated endpoints.
Use our free tools to validate, decode, and inspect JSON payloads for common vulnerabilities.
Also useful: JWT Tutorial | JSON Schema Validator | JSON Repair | What is JSON? | How to Validate JSON