Published • JSON Web Tools Editorial Team
Whether you are preparing for a frontend, backend, or full-stack interview, JSON knowledge is tested at every level. This guide covers 50 questions — from "what is JSON" to advanced topics like JSON Schema, JSONPath, streaming parsers, and JWT security — all with detailed answers and code examples.
JSON stands for JavaScript Object Notation. It was created by Douglas Crockford in the early 2000s and published as RFC 4627 in 2006. The current authoritative specification is RFC 8259 (2017) and the ECMA-404 standard. Despite having "JavaScript" in the name, JSON is completely language-independent.
JSON supports exactly six primitive and structural types:
- String: "hello"
- Number: 42, 3.14, -7, 1e10
- Boolean: true or false
- Null: null
- Array: [1, "a", true]
- Object: {"key": "value"}
JSON has no date type, no undefined, no function, no regex, and no binary type.
The most important rules to memorize for an interview: (1) keys must be strings in double quotes; (2) no trailing commas after the last item; (3) no comments; (4) strings use double quotes only — single quotes are invalid; (5) boolean and null are lowercase; (6) NaN and Infinity are not valid JSON numbers. Use our JSON Validator to check any JSON against these rules.
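A quick way to internalize these rules is to feed JSON.parse() deliberately broken input and watch it throw (a minimal sketch):

```javascript
// Valid JSON: double-quoted keys and strings, lowercase literals:
JSON.parse('{"active": true, "score": null}'); // ok

// Each of these violates one of the rules above and throws a SyntaxError:
const invalid = [
  "{'key': 'value'}",    // single quotes
  '{"a": 1,}',           // trailing comma
  '{"a": NaN}',          // NaN is not a JSON number
  '{"a": 1} // comment', // comments are not allowed
  '{key: 1}',            // unquoted key
];
for (const s of invalid) {
  try {
    JSON.parse(s);
  } catch (e) {
    console.log(`rejected: ${s}`);
  }
}
```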
JavaScript object literals are more permissive than JSON: they allow unquoted keys, single-quoted strings, trailing commas, comments, and values like undefined, functions, and NaN. JSON is a strict serialization format — every key must be double-quoted, no trailing commas are allowed, and only the six JSON types are valid. JSON.parse() enforces these rules strictly. A JavaScript object literal like { name: 'Alice', fn: () => {} } is not valid JSON.
JSON, XML, and YAML are all text-based data serialization formats. JSON is more compact than XML, natively supported by JavaScript, and the dominant choice for REST APIs. XML is verbose but powerful for document-centric data and has robust namespace and schema support (XSD). YAML is the most human-readable and supports comments, making it ideal for configuration files (Docker Compose, Kubernetes, Ansible). YAML 1.2 is technically a superset of JSON. See the JSON vs YAML guide and JSON vs XML guide for detailed comparisons.
JSON.parse() converts a JSON string into a JavaScript value. You use it whenever you receive JSON from an external source — an API response, localStorage, a file, or a WebSocket message — and need to work with it as a JavaScript object. It accepts an optional second argument called a reviver function that transforms values during parsing:
const json = '{"name":"Alice","dob":"1990-05-15"}';
// Basic usage:
const obj = JSON.parse(json);
console.log(obj.name); // "Alice"
// With reviver — convert date strings to Date objects:
const obj2 = JSON.parse(json, (key, value) => {
  if (key === 'dob') return new Date(value);
  return value;
});
console.log(obj2.dob instanceof Date); // true
JSON.stringify(value, replacer, space) converts a JavaScript value into a JSON string. The replacer parameter can be an array of key names to include (whitelist), or a function that transforms each value. The space parameter controls indentation for pretty-printing:
const user = { name: "Alice", password: "secret", age: 30 };
// Whitelist — only include name and age:
JSON.stringify(user, ['name', 'age']);
// '{"name":"Alice","age":30}'
// Replacer function — redact password:
JSON.stringify(user, (key, val) => key === 'password' ? undefined : val);
// '{"name":"Alice","age":30}'
// Pretty print with 2-space indent:
JSON.stringify(user, null, 2);
// '{
//   "name": "Alice",
//   "password": "secret",
//   "age": 30
// }'
JSON.stringify() silently omits or converts several JavaScript values that have no JSON equivalent:
- undefined: omitted from objects, becomes null in arrays
- Functions: omitted from objects, become null in arrays
- NaN and Infinity: become null
- Date objects: converted to strings via the built-in toJSON(), i.e. .toISOString() output
A common interview trick: JSON.parse(JSON.stringify(obj)) creates a deep clone of an object. However, it has several important limitations — it drops undefined, functions, and Symbol properties; it converts Dates to strings; and it throws on circular references. For production code, prefer structuredClone() (available in Node 17+ and modern browsers), which preserves Dates, Maps, Sets, typed arrays, and circular references (note that it still throws on functions):
// Quick but lossy deep clone (interview answer):
const quickClone = JSON.parse(JSON.stringify(original));
// Production-safe deep clone (preserves Dates and circular references;
// note that structuredClone still throws on functions):
const safeClone = structuredClone(original);
// Limitation demonstration:
const obj = { a: 1, fn: () => 'hello', d: new Date() };
const jsonClone = JSON.parse(JSON.stringify(obj));
console.log(jsonClone.fn);       // undefined — function dropped
console.log(typeof jsonClone.d); // "string" — Date became string
Circular references cause JSON.stringify() to throw a TypeError: Converting circular structure to JSON. To handle them, use a replacer function with a WeakSet to track seen objects:
function stringifyCircular(obj) {
  const seen = new WeakSet();
  return JSON.stringify(obj, (key, value) => {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return '[Circular]';
      seen.add(value);
    }
    return value;
  });
}
const a = { name: 'Alice' };
a.self = a; // circular reference
console.log(stringifyCircular(a));
// '{"name":"Alice","self":"[Circular]"}'
JSON Schema is a declarative language for describing the expected structure of a JSON document. It defines required fields, data types, value constraints (minimum, maximum, pattern, enum), and structural rules (nested objects, array items). It is used for API contract validation, automatic documentation generation, code generation, and form validation. The current version is Draft 2020-12. The specification is maintained at json-schema.org. Try our JSON Schema Validator to validate JSON against a schema.
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["id", "name", "email"],
  "properties": {
    "id": { "type": "integer", "minimum": 1 },
    "name": { "type": "string", "minLength": 1, "maxLength": 100 },
    "email": { "type": "string", "format": "email" },
    "age": { "type": "integer", "minimum": 0, "maximum": 150 },
    "role": { "type": "string", "enum": ["admin", "user", "guest"] }
  },
  "additionalProperties": false
}
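To make the keywords above concrete, here is a toy validator covering a small subset of them (type, required, minimum, minLength, enum). It is illustrative only; real projects should use a full library such as Ajv:

```javascript
// Toy JSON Schema check for a handful of keywords. Illustrative sketch only,
// not a compliant validator; use Ajv or similar in production.
function validateSubset(schema, data, path = '') {
  const errors = [];
  if (schema.type === 'object' &&
      (typeof data !== 'object' || data === null || Array.isArray(data))) {
    return [`${path || '/'}: expected object`];
  }
  for (const key of schema.required ?? []) {
    if (!(key in data)) errors.push(`${path}/${key}: required property missing`);
  }
  for (const [key, sub] of Object.entries(schema.properties ?? {})) {
    if (!(key in data)) continue;
    const val = data[key];
    const p = `${path}/${key}`;
    if (sub.type === 'integer' && !Number.isInteger(val)) errors.push(`${p}: expected integer`);
    if (sub.type === 'string' && typeof val !== 'string') errors.push(`${p}: expected string`);
    if (sub.minimum !== undefined && val < sub.minimum) errors.push(`${p}: below minimum ${sub.minimum}`);
    if (sub.minLength !== undefined && typeof val === 'string' && val.length < sub.minLength) {
      errors.push(`${p}: shorter than minLength ${sub.minLength}`);
    }
    if (sub.enum && !sub.enum.includes(val)) errors.push(`${p}: not in enum`);
  }
  return errors;
}

const schema = {
  type: 'object',
  required: ['id', 'name'],
  properties: {
    id: { type: 'integer', minimum: 1 },
    name: { type: 'string', minLength: 1 },
    role: { type: 'string', enum: ['admin', 'user', 'guest'] },
  },
};
console.log(validateSubset(schema, { id: 1, name: 'Alice', role: 'admin' })); // []
console.log(validateSubset(schema, { id: 0, role: 'root' }));
// ['/name: required property missing', '/id: below minimum 1', '/role: not in enum']
```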
JSONPath is a query language for JSON, analogous to XPath for XML. It allows you to extract values from a JSON document using path expressions. The root element is represented by $. Common operators: . for child access, [] for array index or filter, * for wildcard, .. for recursive descent, ? for filter expressions. Use our JSONPath Online tool to test expressions interactively.
// Given JSON: { "store": { "books": [
// {"title":"A","price":9.99,"author":"Alice"},
// {"title":"B","price":14.99,"author":"Bob"}
// ]}}
$.store.books[0].title // "A"
$.store.books[*].author // ["Alice", "Bob"]
$.store.books[?(@.price > 10)] // [{"title":"B","price":14.99,"author":"Bob"}]
$..title // ["A", "B"] (recursive)
A JSON Web Token (JWT) is a compact, URL-safe token format for securely transmitting claims between parties. It consists of three base64url-encoded JSON parts separated by dots: header.payload.signature. The header specifies the algorithm (e.g., HS256). The payload contains claims (sub, iat, exp, custom fields). The signature verifies the token has not been tampered with. JWTs are defined in RFC 7519. Use our JWT Decoder to inspect any token.
// Header (base64url decoded):
{ "alg": "HS256", "typ": "JWT" }
// Payload (base64url decoded):
{
  "sub": "1234567890",
  "name": "Alice",
  "role": "admin",
  "iat": 1711440000,
  "exp": 1711526400
}
// Signature = HMACSHA256(base64url(header) + "." + base64url(payload), secret)
null is a valid JSON value representing the intentional absence of a value. undefined does not exist in JSON — it is a JavaScript concept. When you call JSON.stringify() on an object with undefined values, those properties are silently omitted. When you receive JSON and parse it, you will never get undefined from JSON.parse() — missing keys evaluate to undefined only when you try to access them on the resulting JavaScript object, not because they were in the JSON.
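The difference is easy to demonstrate:

```javascript
// undefined is dropped by stringify; null survives the round trip:
console.log(JSON.stringify({ a: null, b: undefined })); // '{"a":null}'

const parsed = JSON.parse('{"a": null}');
console.log(parsed.a);            // null (the value was present in the JSON)
console.log(parsed.missing);      // undefined (the key is simply absent in JS)
console.log('missing' in parsed); // false
```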
If an object has a toJSON() method, JSON.stringify() calls it and serializes the return value instead of the object. This is useful for controlling serialization — for example, converting a Date or a class instance into a plain representation:
class Money {
  constructor(amount, currency) {
    this.amount = amount;
    this.currency = currency;
  }
  toJSON() {
    return `${this.amount} ${this.currency}`;
  }
}
const price = new Money(9.99, 'USD');
JSON.stringify({ price }); // '{"price":"9.99 USD"}'
// Date.prototype.toJSON() is built in:
JSON.stringify(new Date('2026-01-01'));
// '"2026-01-01T00:00:00.000Z"'
NDJSON (also called JSONL — JSON Lines) is a format where each line is a complete, independent JSON value separated by newline characters. Unlike a single large JSON array, NDJSON is ideal for streaming, log files, and large dataset processing because each line can be parsed independently without loading the entire file. Many data pipeline tools (Kafka, BigQuery, Elasticsearch bulk API) use NDJSON. Try our NDJSON Converter to convert between NDJSON and standard JSON arrays.
In JSON, "well-formed" and "valid" are often used interchangeably to mean "syntactically correct JSON that any compliant parser can read." In a stricter sense borrowed from XML terminology: well-formed means the syntax is correct; valid means it additionally conforms to a schema. When interviewers ask "is this valid JSON?", they usually mean syntactically correct. Always clarify whether they also want schema validation.
Major relational databases have native JSON support: PostgreSQL has JSON (stored as text) and JSONB (binary, indexable, faster queries) column types. MySQL 5.7.8+ has a native JSON type with validation and indexing. SQLite supports JSON via the json() function family. In PostgreSQL, you can query JSON fields with operators like -> (return JSON), ->> (return text), and #> (path access).
BSON (Binary JSON) is the binary-encoded serialization format used by MongoDB. BSON extends JSON with additional data types not present in JSON: Date (native binary date), ObjectId, BinData (binary blobs), Int32, Int64, Decimal128, Regex, and undefined. BSON is more compact than JSON for binary data and supports richer types, but it is not human-readable and requires a BSON codec to read. JSON is human-readable text; BSON is a binary format.
Flattening converts a nested object into a single-level object where nested keys are represented with a separator (typically a dot or underscore). This is useful for search indexing, spreadsheet exports, and some NoSQL operations:
function flatten(obj, prefix = '', sep = '.') {
  return Object.entries(obj).reduce((acc, [key, val]) => {
    const fullKey = prefix ? `${prefix}${sep}${key}` : key;
    if (typeof val === 'object' && val !== null && !Array.isArray(val)) {
      Object.assign(acc, flatten(val, fullKey, sep));
    } else {
      acc[fullKey] = val;
    }
    return acc;
  }, {});
}
flatten({ a: { b: { c: 1 }, d: 2 }, e: 3 });
// { "a.b.c": 1, "a.d": 2, "e": 3 }
Use our JSON Flatten tool to flatten and unflatten JSON online.
Industry-standard conventions include: (1) use camelCase for key names (preferred in JavaScript contexts) or snake_case (preferred in Python/Ruby); (2) always return a consistent envelope structure ({ data: ..., error: ..., meta: ... }); (3) use null for absent optional fields rather than omitting them; (4) represent dates in ISO 8601 format ("2026-03-26T10:30:00Z"); (5) use meaningful HTTP status codes alongside JSON error objects; (6) paginate arrays with cursor or offset/limit metadata.
JSON Merge Patch (defined in RFC 7396) is a simple format for describing changes to a JSON document. A merge patch is applied by recursively merging the patch object into the target: non-null values overwrite, null values delete, and missing keys are left unchanged. It is used in HTTP PATCH operations as an alternative to JSON Patch (RFC 6902). It is simpler than JSON Patch but less expressive — it cannot represent array operations.
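The merge rules from RFC 7396 fit in a few lines. A minimal applier sketch:

```javascript
// Minimal JSON Merge Patch (RFC 7396) applier:
// non-object patches replace wholesale, null deletes, objects recurse.
function mergePatch(target, patch) {
  if (typeof patch !== 'object' || patch === null || Array.isArray(patch)) {
    return patch; // scalars and arrays replace the target entirely
  }
  const result = (typeof target === 'object' && target !== null && !Array.isArray(target))
    ? { ...target }
    : {};
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) {
      delete result[key];                        // null deletes the key
    } else {
      result[key] = mergePatch(result[key], value); // recurse / overwrite
    }
  }
  return result;
}

const docBefore = { title: 'Hello', author: { name: 'Alice', email: 'a@x.io' } };
const patch = { title: 'Hi', author: { email: null } };
console.log(mergePatch(docBefore, patch));
// { title: 'Hi', author: { name: 'Alice' } }
```

Note how the email field is deleted by the null value, while author.name is untouched because it is absent from the patch.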
JSON Patch is a format for describing a sequence of operations to apply to a JSON document. Each operation is a JSON object with an op field (add, remove, replace, move, copy, test). It is more powerful than JSON Merge Patch and supports array operations. Example: [{"op":"replace","path":"/user/name","value":"Bob"},{"op":"remove","path":"/user/age"}]. JSON Patch is idiomatic for PATCH HTTP endpoints where precise, transactional changes are needed.
MongoDB stores data as BSON documents — a binary superset of JSON. The MongoDB shell and most drivers work with JSON-like syntax. Documents are equivalent to JSON objects, collections are equivalent to arrays of objects. MongoDB's query language uses JSON notation: db.users.find({"age": {"$gt": 25}}). The $-prefixed operators are MongoDB extensions. The document model allows embedding nested objects and arrays natively, which is a major advantage over the flat, normalized model of traditional relational databases.
JSON itself imposes no limit on number size — the specification allows any number. However, in JavaScript, JSON.parse() converts numbers to IEEE 754 double-precision floats. The maximum safe integer in this representation is 2^53 - 1 (Number.MAX_SAFE_INTEGER = 9007199254740991). Integers larger than this lose precision. For large integers (e.g., database IDs from distributed systems), the convention is to transmit them as strings and parse them with BigInt on the receiving end.
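The precision loss, and the string-plus-BigInt workaround, look like this:

```javascript
// Precision loss above Number.MAX_SAFE_INTEGER (2^53 - 1):
console.log(JSON.parse('{"id": 9007199254740993}').id);
// 9007199254740992 — off by one; the literal rounded to the nearest double

// Convention: transmit large integers as strings, revive them as BigInt:
const row = JSON.parse('{"id": "9007199254740993"}', (key, value) =>
  key === 'id' ? BigInt(value) : value
);
console.log(row.id); // 9007199254740993n
```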
JSON objects are unordered by specification — parsers are not required to preserve key order. However, sorting keys is useful for diffing, caching, and canonical representation. In JavaScript, JSON.stringify(obj, Object.keys(obj).sort()) sorts the top-level keys, but beware: an array replacer also acts as a whitelist applied at every nesting level, so nested objects are filtered to that same key set. For a recursive solution that sorts all nested objects, use a replacer function or a dedicated library. Use our JSON Sort Keys tool to sort any JSON document online.
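One possible recursive approach is to rebuild each object with its keys in sorted order before stringifying (a sketch):

```javascript
// Recursively rebuild objects with sorted keys; safer than the
// array-replacer trick, which also whitelists keys in nested objects.
function sortKeysDeep(value) {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (typeof value === 'object' && value !== null) {
    return Object.fromEntries(
      Object.keys(value).sort().map((k) => [k, sortKeysDeep(value[k])])
    );
  }
  return value; // primitives pass through unchanged
}

console.log(JSON.stringify(sortKeysDeep({ b: { d: 1, c: 2 }, a: 3 })));
// '{"a":3,"b":{"c":2,"d":1}}'
```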
Standard JSON.parse() loads the entire JSON string into memory before returning. For large datasets (multi-megabyte log files, data exports, large API responses), this causes excessive memory usage. Streaming JSON parsers (like stream-json in Node.js, ijson in Python, jackson-streaming in Java) parse the document token by token, emitting events as each object or array is completed. This reduces peak memory by orders of magnitude and allows processing to begin before the full response is received.
import json
import jsonschema
# Syntax validation:
def is_valid_json(s: str) -> bool:
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError as e:
        print(f"Invalid: {e.msg} at line {e.lineno}, col {e.colno}")
        return False

# Schema validation using jsonschema library:
schema = {
    "type": "object",
    "required": ["id", "name"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"}
    }
}
data = {"id": 1, "name": "Alice"}
try:
    jsonschema.validate(instance=data, schema=schema)
    print("Schema valid")
except jsonschema.ValidationError as e:
    print(f"Schema invalid: {e.message}")
The $ref keyword in JSON Schema allows schemas to reference other schemas, enabling reuse and composition. A $ref value is a URI that points to another schema — either within the same document (using JSON Pointer syntax like "$ref": "#/$defs/Address") or to an external schema URL. This prevents duplication when the same sub-schema (e.g., an Address type) is used in multiple places. In Draft 2020-12, $defs replaces the older definitions keyword for storing reusable sub-schemas.
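For example, a hypothetical Address sub-schema referenced from two fields (the field names are illustrative):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "billingAddress": { "$ref": "#/$defs/Address" },
    "shippingAddress": { "$ref": "#/$defs/Address" }
  },
  "$defs": {
    "Address": {
      "type": "object",
      "required": ["street", "city"],
      "properties": {
        "street": { "type": "string" },
        "city": { "type": "string" }
      }
    }
  }
}
```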
Prototype pollution occurs when a JSON payload contains keys like __proto__, constructor, or prototype, and the application naively merges this payload into a JavaScript object. These special keys modify Object.prototype, affecting every object in the application. This is a serious security vulnerability. Prevention: validate all incoming JSON keys against a whitelist; use Object.create(null) for hash maps; use safe merge libraries; or call JSON.parse() with a reviver that rejects forbidden keys.
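One of those defenses, a reviver that strips the dangerous keys, can be sketched like this (JSON.parse itself does not pollute prototypes — the risk is in how the parsed object is merged later — but stripping early is a cheap safeguard; note it also drops any legitimate key named "constructor"):

```javascript
// Reviver that removes prototype-pollution vectors before any merge logic
// can see them. Returning undefined from a reviver deletes the key.
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

function safeParse(json) {
  return JSON.parse(json, (key, value) =>
    FORBIDDEN.has(key) ? undefined : value
  );
}

const evil = '{"name":"x","__proto__":{"isAdmin":true}}';
const cleaned = safeParse(evil);
console.log(Object.hasOwn(cleaned, '__proto__')); // false (key removed)
console.log({}.isAdmin); // undefined (Object.prototype untouched)
```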
Never use eval() to parse JSON. eval() executes arbitrary JavaScript — a JSON string containing {"fn": (function(){ /* malicious code */ })()} would run that code. JSON.parse() only recognizes the six JSON types and raises a SyntaxError on anything else. It is a data deserializer, not a code executor. Before JSON.parse() was standardized, some developers used eval() as a shortcut — this was always wrong. The only acceptable use of eval() is in controlled, sandboxed environments where you fully control the input.
JSON strings are Unicode sequences. Special characters that must be escaped: \" (double quote), \\ (backslash), \/ (forward slash, optional), \b (backspace), \f (form feed), \n (newline), \r (carriage return), \t (tab). Non-ASCII Unicode characters can be represented as-is in UTF-8 encoded JSON or as \uXXXX escape sequences. Surrogate pairs are needed for characters outside the BMP (Basic Multilingual Plane): characters above U+FFFF require a pair of \uXXXX escapes. The MDN JSON reference has a complete table of escape sequences.
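A few round trips make the escaping rules concrete:

```javascript
// Escape sequences round-trip through parse/stringify:
console.log(JSON.parse('"line1\\nline2"'));  // 'line1\nline2'
console.log(JSON.parse('"\\u00e9"'));        // 'é'

// Characters above U+FFFF need a surrogate pair in \uXXXX form:
console.log(JSON.parse('"\\ud83d\\ude00"')); // '😀' (U+1F600)

// stringify leaves non-ASCII characters unescaped in its output:
console.log(JSON.stringify('café'));         // '"café"'
```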
JSON-LD (JSON for Linking Data) is a method of encoding Linked Data using JSON. It allows you to embed structured data in web pages that search engines (Google, Bing) can understand to generate rich results — product prices, event dates, FAQ snippets, breadcrumbs, and article metadata. JSON-LD is injected in a <script type="application/ld+json"> tag. Google recommends JSON-LD over microdata and RDFa for structured data. Common schemas include Article, FAQPage, BreadcrumbList, Product, and Event from schema.org.
JSON is human-readable text — the most verbose format but universally supported without special tooling. MessagePack is a binary format that serializes JSON-compatible data 1.5–4x smaller than JSON with faster encode/decode. Protocol Buffers (protobuf) uses a strongly-typed schema and produces the most compact output (often 3–10x smaller than JSON), with the fastest parse times, but requires schema compilation and is not self-describing. For high-throughput internal microservice communication, protobuf or MessagePack are preferred. For external APIs that need developer-friendly payloads, JSON remains the standard.
A JSON diff compares two JSON documents and produces a description of the differences. A naive approach recursively walks both documents and reports added, removed, and changed keys. Production implementations like RFC 6902 JSON Patch emit operations that transform the source into the target. Key challenges: handling array element reordering (requires a longest common subsequence algorithm), preserving identity of moved objects, and producing a minimal patch. Use our JSON Diff tool to compare any two JSON documents visually.
GraphQL API responses are JSON. The response envelope is { "data": ..., "errors": [...] }, with errors present only when something failed. GraphQL queries themselves are strings (not JSON), but variables sent with a query are JSON objects. The __typename field in GraphQL responses is analogous to @type in JSON-LD. Tools that auto-generate TypeScript types from GraphQL schemas produce the same result as our JSON to TypeScript converter, but with richer union type support derived from the schema.
A reviver is an optional second argument to JSON.parse(). It is called with each key-value pair during parsing, bottom-up. Whatever the reviver returns replaces the value in the output. Returning undefined deletes the key. Common uses: (1) converting ISO date strings back to Date objects; (2) converting number strings to BigInt; (3) filtering out sensitive fields; (4) transforming string-encoded binary data (hex or base64) into Uint8Array. The reviver is called with an empty string key for the root value at the end.
WebSocket frames are binary or text. When using JSON over WebSockets, you send text frames containing JSON strings and parse them on receipt. A common pattern is to include a type or event field in every message to allow the receiver to dispatch to the correct handler. Libraries like Socket.io wrap this pattern with automatic serialization. For high-frequency real-time applications (trading, gaming), the overhead of JSON serialization per frame can be significant — binary protocols like CBOR or MessagePack are sometimes used instead.
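The type-field dispatch pattern can be sketched as follows (handler names and the WebSocket wiring are illustrative):

```javascript
// Dispatch pattern: every message carries a "type" field that routes it
// to the correct handler.
const received = [];
const handlers = {
  chat: (msg) => received.push(`chat: ${msg.text}`),
  ping: () => received.push('pong'),
};

function onMessage(rawFrame) {
  const msg = JSON.parse(rawFrame); // each text frame is one JSON message
  const handler = handlers[msg.type];
  if (!handler) throw new Error(`unknown message type: ${msg.type}`);
  handler(msg);
}

// In practice this would be wired to a socket, e.g.:
// ws.on('message', (data) => onMessage(data.toString()));
onMessage('{"type":"chat","text":"hello"}');
onMessage('{"type":"ping"}');
console.log(received); // ['chat: hello', 'pong']
```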
JSON Canonicalization Scheme (JCS, RFC 8785) defines a deterministic serialization of JSON for use in cryptographic signatures. Standard JSON serialization is non-deterministic — key order may vary between implementations. JCS specifies: keys sorted lexicographically, no insignificant whitespace, Unicode escape sequences normalized, and numbers in a specific format. This ensures that two parties serializing the same data will always produce the same byte sequence, enabling reliable signatures and hashes over JSON documents.
const fs = require('fs');
const path = require('path');
// Read and parse JSON file:
const data = JSON.parse(
  fs.readFileSync(path.join(__dirname, 'config.json'), 'utf8')
);
// Write JSON to file:
fs.writeFileSync(
  path.join(__dirname, 'output.json'),
  JSON.stringify(data, null, 2), // pretty-print
  'utf8'
);
// Async version with error handling:
async function readJSON(filePath) {
  try {
    const raw = await fs.promises.readFile(filePath, 'utf8');
    return JSON.parse(raw);
  } catch (err) {
    throw new Error(`Failed to read JSON from ${filePath}: ${err.message}`);
  }
}
// require() can import JSON directly in CommonJS:
const config = require('./config.json'); // auto-parsed
The official MIME type registered with IANA for JSON is application/json. The text/json variant is an unofficial alias used by some older implementations. Always use application/json in Content-Type and Accept headers. For JSON Patch documents, use application/json-patch+json. For JSON Merge Patch, use application/merge-patch+json. For JSON-LD, use application/ld+json. For NDJSON, use application/x-ndjson.
For deeply nested JSON, recursive algorithms can hit JavaScript's call stack limit (~10,000 frames). Use iterative approaches with an explicit stack for deep traversal. Alternatively, use JSON.stringify() with a replacer or a library like flat to flatten first. For query operations, prefer JSONPath which handles depth transparently. For UI display, use our JSON Tree View which renders nested JSON in a collapsible tree.
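An explicit-stack traversal looks like this (a sketch; countLeaves is an illustrative example, not a library function):

```javascript
// Iterative depth-first walk with an explicit stack: no recursion,
// so arbitrarily deep JSON cannot overflow the call stack.
function countLeaves(root) {
  let count = 0;
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (typeof node === 'object' && node !== null) {
      for (const value of Object.values(node)) stack.push(value);
    } else {
      count++; // primitive leaf
    }
  }
  return count;
}

// A 100,000-level-deep object that would crash a naive recursive walker:
let deep = { value: 1 };
for (let i = 0; i < 100000; i++) deep = { next: deep };
console.log(countLeaves(deep)); // 1
```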
JSON5 is an unofficial superset of JSON that adds JavaScript-style features: unquoted keys, single-quoted strings, trailing commas, comments (// and /* */), hexadecimal numbers (0xDEAD), and multi-line strings. JSON5 is useful for configuration files that humans edit manually (where comments and trailing commas improve usability). It is not a standard — never use JSON5 for API payloads or data interchange. The json5 npm package provides parsing and serialization.
There are several ways depending on the depth of merge needed:
const a = { x: 1, y: { z: 2 } };
const b = { y: { w: 3 }, v: 4 };
// Shallow merge (spread operator) — b overwrites a's 'y' entirely:
const shallow = { ...a, ...b };
// { x: 1, y: { w: 3 }, v: 4 } — z is lost!
// Object.assign — same as spread, also shallow:
const shallow2 = Object.assign({}, a, b);
// Deep merge (manual or with library):
// npm install deepmerge
import merge from 'deepmerge';
const deep = merge(a, b);
// { x: 1, y: { z: 2, w: 3 }, v: 4 } — z preserved
// structuredClone approach: deep-copy first, then merge in place
// (deepMergeInPlace is a hypothetical helper — implement it or use a library):
const result = structuredClone(a);
deepMergeInPlace(result, b);
Use our JSON Merge tool to merge JSON online with configurable deep/shallow strategy.
JSON Pointer (RFC 6901) is a string syntax for identifying a specific value within a JSON document. A pointer is a string of zero or more reference tokens separated by /. For example, /user/address/street navigates to doc.user.address.street. Array elements are accessed with their zero-based index: /items/0/name. The - character refers to the element after the last array item (used in JSON Patch add operations). The empty string "" refers to the root document. Tilde escaping: ~0 represents ~ and ~1 represents / in key names.
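A minimal resolver sketch, including the ~0/~1 unescaping (document contents are illustrative):

```javascript
// Minimal JSON Pointer (RFC 6901) resolver.
function resolvePointer(doc, pointer) {
  if (pointer === '') return doc; // empty pointer refers to the whole document
  return pointer
    .split('/')
    .slice(1) // drop the empty segment before the leading '/'
    .map((token) => token.replace(/~1/g, '/').replace(/~0/g, '~')) // unescape
    .reduce((node, token) => node[token], doc);
}

const doc = {
  user: { address: { street: 'Main St' } },
  items: [{ name: 'A' }],
  'a/b': 1,
};
console.log(resolvePointer(doc, '/user/address/street')); // 'Main St'
console.log(resolvePointer(doc, '/items/0/name'));        // 'A'
console.log(resolvePointer(doc, '/a~1b'));                // 1 ('~1' decodes to '/')
```

Note the unescape order matters: ~1 must be decoded before ~0, exactly as RFC 6901 specifies, so that "~01" decodes to "~1" rather than "/".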
TypeScript does not know the shape of JSON at runtime — JSON.parse() returns any. To get type safety, you must either cast to a known type (unsafe) or use a runtime validator. The recommended approach is to define a TypeScript interface and validate with Zod, Ajv, or io-ts, which generate both the TypeScript type and the runtime validator from a single schema definition. Our JSON to TypeScript converter generates TypeScript interfaces from a sample JSON document, which you can then combine with a Zod schema for runtime safety.
JSON Resume is an open source initiative that defines a standard JSON schema for representing a professional resume. The schema specifies fields for basics (contact info), work experience, education, skills, projects, and publications. A single resume.json file can be rendered into different visual themes using the JSON Resume command-line tool. It demonstrates a practical application of JSON Schema in a real-world context and is a popular open source project for developers learning schema design.
A JSON document can have any JSON value at its root — an object, array, string, number, boolean, or null are all valid. REST APIs almost always return an object at root (allowing easy addition of metadata like pagination). Returning a bare array at root is valid JSON but makes it harder to add envelope data without breaking clients. Some older APIs return arrays directly for list endpoints; modern API design guides (Google API Design Guide, Stripe) recommend always returning an object with a named array field: { "users": [...], "total": 100 }.
Professional developers use a combination of: (1) online tools like JSON Web Tools for quick validation, formatting, and conversion; (2) jq for command-line JSON processing and transformation; (3) Ajv or Zod for runtime schema validation in JavaScript/TypeScript; (4) Postman or Insomnia for API testing with JSON payloads; (5) VS Code with JSON schema support for editing config files with IntelliSense; (6) JSONPath editors for querying complex nested documents; (7) JSON diff tools for comparing API responses during debugging. Knowing these tools is as important as knowing the specification.
Use these tools to reinforce what you learned — validate, decode JWTs, query with JSONPath, and generate schemas.
Also useful: What is JSON? | JSON Schema Tutorial | JWT Tutorial | JSONPath Tutorial | JSON vs YAML