SQL Interview Questions for Intermediate Developers

On this page, I have compiled a handpicked selection of SQL interview questions designed to assess your proficiency and problem-solving skills. Whether you are preparing for a job interview or simply looking to enhance your SQL knowledge, these questions will provide valuable insights and help you refine your expertise.

Each question is accompanied by a detailed answer, ensuring you not only grasp the solution but also comprehend the underlying concepts and best practices. Additionally, where applicable, we highlight any syntax differences that may exist between various popular relational database management systems (RDBMS).

Let’s dive into the world of SQL and unlock the knowledge that will propel you towards success in your SQL career!

For a comprehensive understanding, I encourage you to explore the other pages in this interview question series.

Could you explain the concepts of UNION, MINUS, and INTERSECT commands in SQL?

In SQL, UNION, MINUS, and INTERSECT are set operations that allow you to combine or compare the results of multiple queries. These commands operate on multiple SELECT statements and enable you to perform operations on the sets of rows retrieved by those queries.

  1. UNION: The UNION command combines the results of two or more SELECT statements into a single result set. It merges the rows from the individual SELECT statements and removes any duplicate rows. Each SELECT statement must return the same number of columns, with compatible data types, in the same order. The UNION command is typically used when you want to retrieve rows that exist in either of the SELECT statements.
  2. MINUS (or EXCEPT): The MINUS command (the keyword used by Oracle; SQL Server, PostgreSQL, and SQLite use EXCEPT) returns the rows that exist in the result set of the first SELECT statement but not in the result set of the second SELECT statement. It subtracts the rows of the second SELECT statement from the first one. Like UNION, the columns and their data types must match between the SELECT statements. MINUS is useful when you want to find the difference between two sets of rows.
  3. INTERSECT: The INTERSECT command retrieves the common rows that exist in both the result sets of the two SELECT statements. It returns only the rows that are present in both result sets. The columns and their data types must be the same in both SELECT statements. INTERSECT is handy when you want to find the intersection or commonality between two sets of rows.

It’s important to note that the UNION, MINUS, and INTERSECT commands operate on the result sets of SELECT statements and not on individual tables. These set operations provide powerful capabilities to combine, compare, and analyze data from different sources or conditions.

Here’s an example of the syntax for each command:

-- UNION
SELECT column1, column2 FROM table1
UNION
SELECT column1, column2 FROM table2;

-- MINUS (use EXCEPT in SQL Server, PostgreSQL, and SQLite)
SELECT column1, column2 FROM table1
MINUS
SELECT column1, column2 FROM table2;

-- INTERSECT
SELECT column1, column2 FROM table1
INTERSECT
SELECT column1, column2 FROM table2;
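To see all three operators in action, here is a small runnable sketch using Python's built-in sqlite3 module. The tables and data are made up for illustration, and note that SQLite spells MINUS as EXCEPT:

```python
import sqlite3

# In-memory demo database with two overlapping result sets (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, name TEXT);
    CREATE TABLE table2 (id INTEGER, name TEXT);
    INSERT INTO table1 VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
    INSERT INTO table2 VALUES (2, 'Bob'), (3, 'Carol'), (4, 'Dave');
""")

# UNION merges both sets and removes duplicates -> 4 distinct rows.
union_rows = conn.execute(
    "SELECT id, name FROM table1 UNION SELECT id, name FROM table2"
).fetchall()

# EXCEPT (SQLite's spelling of MINUS) keeps rows that are only in table1.
except_rows = conn.execute(
    "SELECT id, name FROM table1 EXCEPT SELECT id, name FROM table2"
).fetchall()

# INTERSECT keeps only the rows present in both result sets.
intersect_rows = conn.execute(
    "SELECT id, name FROM table1 INTERSECT SELECT id, name FROM table2"
).fetchall()

print(len(union_rows))      # 4
print(except_rows)          # [(1, 'Alice')]
print(len(intersect_rows))  # 2
conn.close()
```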

Can you explain the concept of a cursor in SQL and provide guidance on how to use a cursor?

In SQL, a cursor is a database object that allows you to retrieve and manipulate data row by row from a result set. It provides a mechanism for sequential traversal and processing of query results within a database.

Here is a general outline of how to use a cursor:

  1. Declare the Cursor: To begin using a cursor, you need to declare it and define its characteristics, such as the SQL query it will execute and the result set it will retrieve. You specify the SELECT statement that forms the basis of the cursor, along with any parameters or conditions. For example:
    DECLARE cursor_name CURSOR FOR SELECT column1, column2 FROM TableName WHERE condition;


  2. Open the Cursor: Once the cursor is declared, you need to open it to execute the associated query and retrieve the result set. Opening the cursor establishes a connection between the cursor and the result set. For example:
    OPEN cursor_name;


  3. Fetch Rows: After opening the cursor, you can fetch rows from the result set one at a time. This process retrieves the current row and advances the cursor to the next row. You can use a FETCH statement to retrieve the data into variables for further processing. For example:
    FETCH NEXT FROM cursor_name INTO variable1, variable2;


  4. Process the Data: Once you have fetched a row from the result set, you can perform operations on the retrieved data using the variables. You can manipulate the data, perform calculations, or apply business logic as required.
  5. Repeat Fetching: To continue processing the remaining rows, you can repeat the FETCH statement until all rows in the result set have been processed. This is typically done in a loop structure. For example:
      WHILE @@FETCH_STATUS = 0
      BEGIN
          -- Process the data
          FETCH NEXT FROM cursor_name INTO variable1, variable2;
      END;


  6. Close the Cursor: Once you have finished processing the result set, it’s important to close the cursor to release associated resources. Closing the cursor terminates the connection with the result set. For example:
    CLOSE cursor_name;


  7. Deallocate the Cursor: After closing the cursor, you should deallocate it to free up memory resources. Deallocating the cursor removes it from the database session. For example:
    DEALLOCATE cursor_name;


It’s important to note that the specific syntax and functionality of cursors can vary depending on the database system you are using. Additionally, excessive or inefficient use of cursors can impact performance, so it’s recommended to consider alternative set-based operations when possible.
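The DECLARE/OPEN/FETCH syntax above is T-SQL-flavored. The same declare-fetch-process-close lifecycle can be sketched runnably with Python's sqlite3 module, whose cursor object plays an analogous role (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employees (id INTEGER, name TEXT, salary REAL);
    INSERT INTO Employees VALUES (1, 'Alice', 50000), (2, 'Bob', 60000);
""")

# Executing the query plays the role of DECLARE + OPEN: the cursor now
# holds the query's result set.
cur = conn.execute("SELECT id, name, salary FROM Employees")

# fetchone() is the FETCH NEXT step; None signals the end of the result set.
total = 0.0
row = cur.fetchone()
while row is not None:
    emp_id, name, salary = row   # process the current row
    total += salary
    row = cur.fetchone()

print(total)  # 110000.0

# close() corresponds to CLOSE/DEALLOCATE: it releases the cursor's resources.
cur.close()
conn.close()
```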

Can you provide an explanation of entities and relationships in the context of databases?

In the context of databases, entities and relationships are fundamental concepts used to model and represent real-world scenarios and their associations.

An entity refers to a distinct object, concept, or thing that is represented in a database. It can be a physical entity, such as a person or a product, or an abstract entity, such as an event or an order. Entities are typically represented as tables in a relational database, where each row represents an instance or a record of that entity, and each column represents a specific attribute or property of the entity.

On the other hand, relationships define the associations or connections between entities. They represent how entities interact or relate to each other. Relationships capture the dependencies, associations, or interactions between entities and provide valuable insights into the underlying data.

There are different types of relationships commonly used in database modeling:

  1. One-to-One (1:1): In a one-to-one relationship, each record in one entity is associated with exactly one record in another entity, and vice versa. For example, in a database representing employees and their contact information, each employee record may have a corresponding record in the contact information table, and vice versa.
  2. One-to-Many (1:N): In a one-to-many relationship, a record in one entity can be associated with multiple records in another entity, but each record in the second entity can be associated with only one record in the first entity. For instance, in a database modeling customers and orders, one customer can place multiple orders, but each order belongs to exactly one customer.
  3. Many-to-Many (N:N): In a many-to-many relationship, multiple records in one entity can be associated with multiple records in another entity. This type of relationship requires an intermediate table, often called a junction or associative table, to represent the associations. For example, in a database representing students and classes, multiple students can be enrolled in multiple classes, and the enrollment information is stored in the junction table.
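The many-to-many case can be made concrete with a small runnable sketch using Python's sqlite3 module; the Students/Classes/Enrollments schema and data below are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Students (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Classes  (id INTEGER PRIMARY KEY, title TEXT);
    -- Junction (associative) table: one row per (student, class) enrollment.
    CREATE TABLE Enrollments (
        student_id INTEGER REFERENCES Students(id),
        class_id   INTEGER REFERENCES Classes(id),
        PRIMARY KEY (student_id, class_id)
    );
    INSERT INTO Students VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO Classes  VALUES (10, 'Math'), (20, 'History');
    INSERT INTO Enrollments VALUES (1, 10), (1, 20), (2, 10);
""")

# Resolve the N:N relationship by joining through the junction table.
rows = conn.execute("""
    SELECT s.name, c.title
    FROM Students s
    JOIN Enrollments e ON e.student_id = s.id
    JOIN Classes c     ON e.class_id = c.id
    ORDER BY s.name, c.title
""").fetchall()

print(rows)  # [('Alice', 'History'), ('Alice', 'Math'), ('Bob', 'Math')]
conn.close()
```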

Entities and relationships form the foundation of entity-relationship modeling, which is widely used in database design and conceptualization. By understanding the entities and their relationships, you can effectively represent and organize data in a structured manner, enabling efficient storage, retrieval, and analysis of information.

Can you explain the concept of an alias in SQL?

In SQL, an alias is a temporary name assigned to a table or column in a query to make the output more meaningful or to simplify the syntax of the query. Aliases provide a way to rename tables or columns for the duration of a query without permanently changing their names in the database schema.

Here are some key points about aliases:

  1. Table aliases: Table aliases are used to assign a temporary name to a table in a query. They are helpful when working with complex queries involving multiple tables or when you want to give a shorter or more descriptive name to a table. Table aliases are specified after the table name using the AS keyword or simply by placing a space between the table name and the alias.
  2. Column aliases: Column aliases allow you to provide a temporary name for a column in the query result. They are particularly useful when performing calculations, aggregations, or joining columns with different names. Column aliases are specified after the column expression using the AS keyword or by placing a space between the column expression and the alias.
  3. Syntax simplification: Aliases can simplify the syntax of queries by providing shorter or more meaningful names for tables and columns. They make the query more readable and concise.
  4. Use in complex queries: Aliases are often used in complex queries involving self-joins or subqueries, where the same table is referenced multiple times or where columns need to be renamed for clarity or to avoid naming conflicts.

Here are examples of using aliases in SQL queries:

Table alias:

SELECT t1.column1, t2.column2
FROM TableName AS t1
JOIN AnotherTable AS t2 ON t1.id = t2.id;

Column alias:

SELECT column1 + column2 AS total_sum
FROM TableName;

By using aliases in SQL queries, you can improve the readability, simplify the syntax, and provide meaningful names for tables and columns without altering the underlying database schema.

Can you explain the concept of a view in SQL?

In SQL, a view is a virtual table that is derived from the result of a query. It does not store any data itself but instead provides a logical representation of data stored in one or more underlying tables. A view is created based on a query and can be treated like a regular table for querying, filtering, and manipulating data.

Here are some key points about views:

  1. Definition and Structure: A view is defined by a SELECT statement that specifies the columns, expressions, and conditions used to retrieve data from one or more tables. The result of the SELECT statement forms the structure and content of the view.
  2. Virtual Table: A view does not store data separately. Instead, it is a dynamically generated result set based on the underlying tables’ data. Any changes made to the underlying tables are reflected in the view.
  3. Data Abstraction: Views provide a way to present a simplified or customized view of the data to the users. They can be used to hide certain columns, restrict access to specific rows, or combine data from multiple tables into a single view.
  4. Data Manipulation: In addition to querying data, views can be used for data manipulation operations such as inserting, updating, or deleting data. However, certain views may have restrictions on the types of modifications allowed, especially if they involve multiple underlying tables.
  5. Security and Access Control: Views can be used to enforce security by limiting the data that users can access. By granting permissions to a view instead of the underlying tables, you can control what data users can see and interact with.
  6. Simplifying Complex Queries: Views are useful for simplifying complex queries. They allow you to encapsulate frequently used or complex query logic into a reusable and easily understandable view, reducing the complexity of individual queries.

To create a view, you use the CREATE VIEW statement and specify the SELECT statement that defines the view’s structure and data. For example:

CREATE VIEW view_name AS
SELECT column1, column2
FROM TableName
WHERE condition;

Views offer flexibility, data abstraction, and enhanced security in database systems. They allow for easier data access and manipulation, provide a simplified view of complex data structures, and aid in managing user permissions and access control.
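Here is a runnable sketch of the "virtual table" behavior using Python's sqlite3 module — the view stores no data of its own, so inserts into the base table show up in it immediately (table, view, and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO Orders VALUES (1, 'Alice', 120.0), (2, 'Bob', 30.0), (3, 'Alice', 45.0);
    -- The view stores no data; it re-runs its SELECT each time it is queried.
    CREATE VIEW big_orders AS
        SELECT id, customer, amount FROM Orders WHERE amount > 40;
""")

n_before = conn.execute("SELECT COUNT(*) FROM big_orders").fetchone()[0]
print(n_before)  # 2

# Changes to the base table are reflected in the view automatically.
conn.execute("INSERT INTO Orders VALUES (4, 'Carol', 99.0)")
n_after = conn.execute("SELECT COUNT(*) FROM big_orders").fetchone()[0]
print(n_after)   # 3
conn.close()
```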

Can you explain the concept of normalization in the context of databases?

Normalization is a process in database design that aims to organize and structure relational databases efficiently, reduce redundancy, and improve data integrity. It involves breaking down a database into multiple tables and applying a set of rules called normalization forms to ensure data is stored in a logical and non-repetitive manner.

The main goals of normalization are:

  1. Eliminating data redundancy: Redundancy occurs when the same data is duplicated in multiple places within a database. By normalizing the database, redundant data is minimized or eliminated, which reduces storage space and avoids inconsistencies that can arise from duplicate information.
  2. Minimizing data modification anomalies: Data modification anomalies occur when a change to data in one place results in inconsistencies or errors elsewhere in the database. Normalization helps to minimize such anomalies by structuring data in a way that ensures modifications can be made in one place and automatically propagate to related data.
  3. Improving data integrity: Normalization helps maintain data integrity by enforcing relationships and dependencies between tables through the use of primary keys and foreign keys. This ensures that data remains consistent and accurate throughout the database.

Normalization follows a series of normalization forms, each building upon the previous form. The most commonly used normalization forms are:

  • First Normal Form (1NF): Eliminates repeating groups and ensures atomicity by organizing data into separate columns and rows.
  • Second Normal Form (2NF): Builds upon 1NF by eliminating partial dependencies, ensuring that each non-key column is functionally dependent on the entire primary key.
  • Third Normal Form (3NF): Builds upon 2NF by eliminating transitive dependencies, ensuring that no non-key column depends on another non-key column.

There are higher levels of normalization, such as Fourth Normal Form (4NF) and Fifth Normal Form (5NF), which address more complex scenarios.

By applying normalization techniques, databases are designed to efficiently store and retrieve data, minimize data redundancy, and maintain data integrity. It is important to strike a balance between normalization and performance, as overly normalized databases may require more complex queries to retrieve information.
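As a concrete sketch of goal 2 (avoiding modification anomalies), the following runnable comparison uses Python's sqlite3 module; the customer/order tables and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: the customer's city repeats on every order row, so a change
# of address must be applied to many rows (a classic update anomaly).
conn.executescript("""
    CREATE TABLE OrdersFlat (
        order_id INTEGER, customer_name TEXT, customer_city TEXT, amount REAL
    );
    INSERT INTO OrdersFlat VALUES
        (1, 'Alice', 'Oslo',   10.0),
        (2, 'Alice', 'Oslo',   20.0),
        (3, 'Bob',   'Bergen',  5.0);
""")

# Normalized (3NF-style): customer attributes live in one place, referenced
# from the orders table by a foreign key.
conn.executescript("""
    CREATE TABLE Customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE Orders (
        order_id INTEGER, customer_id INTEGER REFERENCES Customers(id), amount REAL
    );
    INSERT INTO Customers VALUES (1, 'Alice', 'Oslo'), (2, 'Bob', 'Bergen');
    INSERT INTO Orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0);
""")

# One UPDATE now fixes the city everywhere, instead of touching every order row.
conn.execute("UPDATE Customers SET city = 'Trondheim' WHERE name = 'Alice'")
cities = conn.execute("""
    SELECT DISTINCT c.city
    FROM Orders o JOIN Customers c ON o.customer_id = c.id
    WHERE c.name = 'Alice'
""").fetchall()
print(cities)  # [('Trondheim',)]
conn.close()
```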

Can you explain the concept of denormalization?

Denormalization is a database optimization technique in which the structure of a relational database is intentionally modified to improve performance by reducing the number of joins and simplifying data retrieval. It involves adding redundant data or duplicating information from related tables into a single table, thereby trading off some normalization benefits for improved query performance.

Here are some key points about denormalization:

  1. Performance optimization: The primary goal of denormalization is to enhance the performance of database queries, especially in read-heavy scenarios, by reducing the number of table joins required to retrieve data. By eliminating or minimizing complex joins, denormalization can significantly speed up query execution.
  2. Data redundancy: Denormalization introduces redundancy by duplicating data across tables. Redundant information is stored in multiple tables to eliminate the need for joins, resulting in faster data retrieval. However, this redundancy increases storage requirements and poses challenges in maintaining data consistency.
  3. Simplified data retrieval: Denormalization simplifies the data retrieval process by eliminating the need for complex join operations across multiple tables. By consolidating related data into a single table, queries can be formulated more straightforwardly, resulting in improved query execution time.
  4. Impact on data modification: While denormalization improves query performance, it can complicate data modification operations, such as inserts, updates, and deletions. As redundant data is stored in multiple places, modifications need to be applied consistently across all denormalized copies to maintain data integrity.
  5. Considerations for denormalization: Denormalization is typically applied in scenarios where the benefits of improved query performance outweigh the trade-offs of increased storage and potential maintenance complexity. It is commonly used in data warehousing, reporting, and analytical systems where read-intensive operations dominate.

It’s important to note that denormalization should be applied judiciously and based on a thorough analysis of the specific performance requirements and trade-offs of the database system. Proper indexing, query optimization techniques, and careful consideration of the data access patterns are also crucial factors in achieving optimal database performance.
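The trade-off can be sketched runnably with Python's sqlite3 module: the denormalized copy below duplicates the customer name into each order row so reads need no join, at the cost of keeping the copies consistent (schema and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized source tables.
conn.executescript("""
    CREATE TABLE Customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO Orders VALUES (1, 1, 10.0), (2, 2, 20.0);
""")

# Denormalized copy: the join is paid once, up front, instead of on every read.
conn.executescript("""
    CREATE TABLE OrdersDenorm AS
        SELECT o.id, c.name AS customer_name, o.amount
        FROM Orders o JOIN Customers c ON o.customer_id = c.id;
""")

# Reads against the denormalized table are a plain single-table scan.
rows = conn.execute(
    "SELECT customer_name, amount FROM OrdersDenorm ORDER BY id"
).fetchall()
print(rows)  # [('Alice', 10.0), ('Bob', 20.0)]
conn.close()
```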

Can you explain the differences between the TRUNCATE, DELETE, and DROP statements in SQL?

In SQL, the TRUNCATE, DELETE, and DROP statements all remove data or objects from a database, but they have distinct purposes and effects.

  • TRUNCATE statement: The TRUNCATE statement is used to quickly delete all rows from a table, effectively removing all data within the table. Unlike the DELETE statement, which removes rows one by one, TRUNCATE performs the operation more efficiently by deallocating the data pages used by the table. TRUNCATE also resets any identity columns or sequence values in the table. It is important to note that TRUNCATE is a DDL (Data Definition Language) statement, and in many database systems (such as Oracle and MySQL) it cannot be rolled back; in others (such as SQL Server and PostgreSQL) it can be rolled back if executed inside an explicit transaction. Outside of a transaction, once TRUNCATE is executed the data is permanently deleted.


TRUNCATE TABLE table_name;
  • DELETE statement: The DELETE statement is used to remove one or more specific rows from a table based on specified conditions. It allows for more targeted deletion operations compared to TRUNCATE. DELETE is a DML (Data Manipulation Language) statement, and it can be rolled back using transaction control statements (e.g., ROLLBACK). When DELETE is executed, the specified rows are removed from the table, and it is possible to use a WHERE clause to filter which rows to delete. Deleted rows can also be captured in a trigger or logged in a transaction log for auditing purposes.


DELETE FROM table_name
WHERE condition;
  • DROP statement: The DROP statement is used to remove database objects entirely, such as tables, views, indexes, or even entire databases. It permanently deletes the specified object from the database schema. When DROP is executed on a table, all data, indexes, triggers, and associated objects are removed. It is a DDL statement that, in most database systems, cannot be rolled back outside of an explicit transaction, so caution must be exercised when using it.


DROP TABLE table_name;

To summarize, TRUNCATE is used to remove all data from a table quickly, DELETE is used to remove specific rows based on conditions, and DROP is used to remove entire database objects from the schema. Each statement serves different purposes and should be used accordingly, considering the impact and the specific requirements of the data manipulation task at hand.
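A runnable sketch of DELETE versus DROP using Python's sqlite3 module follows. Note that SQLite has no TRUNCATE keyword; an unqualified DELETE is its closest equivalent, which the sketch uses for that step (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, val TEXT);
    INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# DELETE with a WHERE clause removes only the matching rows; the table remains.
conn.execute("DELETE FROM t WHERE id = 1")
after_delete = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(after_delete)  # 2

# DELETE without WHERE empties the table (SQLite's stand-in for TRUNCATE).
conn.execute("DELETE FROM t")
after_clear = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(after_clear)   # 0

# DROP removes the table definition itself; querying it now raises an error.
conn.execute("DROP TABLE t")
dropped = False
try:
    conn.execute("SELECT * FROM t")
except sqlite3.OperationalError:
    dropped = True
print(dropped)       # True
conn.close()
```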

Can you explain the concepts of aggregate and scalar functions in SQL?

In SQL, aggregate and scalar functions are essential components used for data manipulation and analysis. They provide powerful tools to perform calculations and retrieve specific information from the database.

  1. Aggregate Functions: Aggregate functions operate on a set of values and return a single value that summarizes the data. These functions are commonly used in conjunction with the GROUP BY clause to perform calculations on groups of rows. Examples of aggregate functions include:
    • SUM: Calculates the sum of a numeric column.
    • AVG: Computes the average value of a numeric column.
    • COUNT: Counts the number of rows or non-null values in a column.
    • MAX: Returns the maximum value in a column.
    • MIN: Returns the minimum value in a column.

    Aggregate functions allow you to derive meaningful insights from data, such as calculating totals, averages, or identifying maximum and minimum values within a dataset.

  2. Scalar Functions: Scalar functions operate on a single value and return a modified or derived value based on the input. They can be used in various SQL statements, such as SELECT, WHERE, and UPDATE, to manipulate and transform data. Scalar functions can perform operations on different data types, including strings, numbers, dates, and more. Examples of scalar functions include:
    • CONCAT: Concatenates two or more strings together.
    • UPPER: Converts a string to uppercase.
    • LOWER: Converts a string to lowercase.
    • DATEPART: Extracts a specific part (year, month, day, etc.) from a date.
    • ROUND: Rounds a numeric value to a specified decimal place.

    Scalar functions allow you to perform data transformations, apply formatting, manipulate strings, perform calculations, and extract specific parts from values.

Both aggregate and scalar functions are integral to SQL’s data manipulation capabilities. They provide flexibility and power when it comes to deriving insights, performing calculations, and transforming data according to specific requirements.
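The contrast — many rows collapsed to one value versus each value transformed individually — can be sketched runnably with Python's sqlite3 module (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Sales (region TEXT, amount REAL);
    INSERT INTO Sales VALUES ('north', 100.0), ('north', 50.0), ('south', 75.0);
""")

# Aggregate functions collapse many rows into one value per group.
totals = conn.execute(
    "SELECT region, SUM(amount), COUNT(*) FROM Sales GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('north', 150.0, 2), ('south', 75.0, 1)]

# Scalar functions transform each row's value individually.
scalars = conn.execute(
    "SELECT UPPER(region), ROUND(amount / 3, 1) FROM Sales ORDER BY amount"
).fetchall()
print(scalars)  # [('NORTH', 16.7), ('SOUTH', 25.0), ('NORTH', 33.3)]
conn.close()
```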

Can you explain the concept of a user-defined function in SQL and discuss its various types?

In SQL, a user-defined function (UDF) is a programming construct that allows users to define their own custom functions to perform specific operations or calculations. A UDF encapsulates a set of SQL statements or code logic and can accept parameters, perform computations, and return a result.

There are three main types of user-defined functions in SQL:

  1. Scalar Functions: Scalar functions are UDFs that accept one or more input parameters and return a single value. They perform calculations on the input parameters and produce a result. Scalar functions can be used in SQL queries and expressions wherever a single value is expected. Examples of scalar functions include functions that calculate the square root of a number, convert data types, or perform string manipulations.
  2. Table-Valued Functions: Table-valued functions are UDFs that return a result set in the form of a table. They can accept input parameters and generate a table of data as output. Table-valued functions can be used in the FROM clause of a SELECT statement, allowing you to treat the function as if it were a table. They are useful for encapsulating complex logic and performing operations that return multiple rows and columns of data.
    • Inline Table-Valued Functions: Inline table-valued functions return a table variable directly without intermediate steps. They are defined using a single SELECT statement and can include joins, filtering, and other operations.
    • Multi-Statement Table-Valued Functions: Multi-statement table-valued functions involve multiple SQL statements to define the logic and structure of the returned table. They use a BEGIN…END block to enclose the statements and explicitly define the table variable.
  3. Aggregate Functions: Aggregate functions are specialized UDFs that perform calculations on a set of values and return a single result. They are typically used with the GROUP BY clause to compute summaries or aggregate values from multiple rows. Examples of aggregate functions include functions that calculate the sum, average, minimum, maximum, or count of a set of values.

By using user-defined functions, you can encapsulate frequently used logic, improve code reusability, and simplify complex calculations within SQL queries. Functions allow you to extend the functionality of SQL and tailor it to your specific requirements. Different database systems may have variations in the syntax and features supported for creating user-defined functions, so it’s important to consult the documentation of your specific database system for details.
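UDF syntax varies widely between database systems (SQL Server and PostgreSQL use CREATE FUNCTION, for example). As a runnable illustration of a scalar UDF, SQLite lets you register a Python callable and then invoke it from SQL like any built-in function; the `tax` function and invoice data are made up:

```python
import sqlite3

# A hypothetical scalar UDF: 25% tax on a net amount, rounded to 2 decimals.
def tax(amount):
    return round(amount * 0.25, 2)

conn = sqlite3.connect(":memory:")
# Register the callable under the SQL name "tax" (name, arg count, function).
conn.create_function("tax", 1, tax)

conn.executescript("""
    CREATE TABLE Invoices (id INTEGER, net REAL);
    INSERT INTO Invoices VALUES (1, 100.0), (2, 40.0);
""")

# The UDF can now be called anywhere a built-in scalar function could.
rows = conn.execute(
    "SELECT id, net, tax(net) FROM Invoices ORDER BY id"
).fetchall()
print(rows)  # [(1, 100.0, 25.0), (2, 40.0, 10.0)]
conn.close()
```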

Can you explain the concept of OLTP?

OLTP, or Online Transaction Processing, refers to a type of database system and processing methodology that focuses on managing and processing high volumes of transactional data in real-time. It is designed to handle day-to-day operational activities and supports frequent, short-lived transactions, typically involving individual data modifications or retrievals.

Here are some key points about OLTP:

  1. Transactional Processing: OLTP systems are optimized for handling transactional operations, such as inserting, updating, and deleting individual records in a database. They prioritize the processing of these transactions in real-time, ensuring the responsiveness and reliability of the system.
  2. Concurrent Access: OLTP systems are designed to handle multiple concurrent users or applications accessing the database simultaneously. They employ mechanisms such as locking and transaction isolation levels to maintain data consistency and prevent conflicts between concurrent transactions.
  3. Normalized Data Structure: OLTP databases are often structured using normalization techniques to minimize data redundancy and ensure data integrity. Normalization reduces data duplication by breaking down information into smaller, atomic units stored in related tables.
  4. Query Optimization: OLTP systems prioritize the efficiency of individual transactional operations and query processing. Indexing, query optimization techniques, and efficient data access paths are employed to provide fast response times for retrieving and modifying small portions of data.
  5. Real-time Decision Making: OLTP systems facilitate real-time decision making by providing up-to-date information on business operations. They enable fast and accurate data retrieval and support transactional workflows critical for day-to-day business processes.

OLTP systems are commonly used in various industries and applications where real-time transaction processing is essential, such as banking, e-commerce, inventory management, and airline reservation systems.

Can you highlight the distinctions between OLTP and OLAP?

OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) are two distinct approaches to processing and managing data, each serving different purposes in the realm of data processing and analysis.

As discussed earlier, OLTP focuses on transactional processing and is designed to handle real-time, day-to-day business operations. On the other hand, OLAP focuses on analytical processing and supports complex data analysis and reporting. Here are the key characteristics of OLAP:

  1. Purpose: OLAP systems are designed for analytical tasks such as multidimensional analysis, data mining, and business intelligence. They facilitate decision-making by allowing users to analyze large volumes of data, identify trends, and generate meaningful insights.
  2. Data Structure: OLAP systems typically employ a denormalized or partially denormalized database design. Data is organized into multidimensional structures, such as cubes, hierarchies, and dimensions, which enable efficient and flexible analysis of data from multiple perspectives.
  3. Workload: OLAP systems handle complex queries and aggregations that involve large datasets. They support ad-hoc queries, data slicing, drill-down, and other operations that enable in-depth analysis of data. Examples of OLAP operations include trend analysis, forecasting, and data mining for business intelligence.
  4. Response Time: OLAP systems prioritize providing timely and comprehensive responses to complex analytical queries, even if it requires longer query execution times compared to OLTP systems.

In summary, OLTP focuses on transactional processing for day-to-day operations, emphasizing real-time data processing and concurrency control. OLAP, on the other hand, focuses on analytical processing, supporting complex data analysis and decision-making through multidimensional structures and ad-hoc queries. Understanding the distinctions between OLTP and OLAP is crucial for designing appropriate data processing systems to meet specific operational or analytical requirements.

Can you explain the concept of collation in SQL and discuss the different types of collation sensitivity?

In SQL, collation refers to the set of rules that determine how character data is sorted and compared in a database. It defines the order, comparison, and case-sensitivity rules for string values.

Collation sensitivity refers to the level of distinction made between characters or strings based on their linguistic or cultural differences. There are three common types of collation sensitivity:

  1. Case-sensitive collation: In a case-sensitive collation, distinctions are made between uppercase and lowercase characters. For example, ‘A’ and ‘a’ are considered different characters, and the sorting order would reflect this distinction. Case-sensitive collations are often used in languages where case differences significantly impact sorting and comparison.
  2. Case-insensitive collation: In a case-insensitive collation, no distinction is made between uppercase and lowercase characters. Sorting and comparisons are performed without considering case differences. For example, ‘A’ and ‘a’ are treated as the same character. Case-insensitive collations are commonly used in languages where case differences do not affect sorting or comparison, or when case-insensitivity is desired for simplicity or convenience.
  3. Accent-sensitive collation: An accent-sensitive collation considers accent marks or diacritical marks as significant distinctions between characters. For example, ‘é’ and ‘e’ would be treated as different characters, and their sorting order would reflect this distinction. Accent-sensitive collations are commonly used in languages that have significant accent mark differences, such as French or Spanish.

Additionally, there are collations that combine different levels of sensitivity. For example, a case-insensitive collation may still be accent-sensitive, distinguishing between characters with or without accent marks.

The choice of collation sensitivity depends on the language, cultural requirements, and specific sorting or comparison needs of your database.
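Case sensitivity in particular is easy to demonstrate runnably with Python's sqlite3 module: SQLite's default BINARY collation is case-sensitive, while its built-in NOCASE collation compares ASCII letters case-insensitively (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Names (n TEXT);
    INSERT INTO Names VALUES ('alice'), ('Alice'), ('BOB');
""")

# Default BINARY collation is case-sensitive: 'alice' does not match 'Alice'.
case_sensitive = conn.execute(
    "SELECT COUNT(*) FROM Names WHERE n = 'alice'"
).fetchone()[0]
print(case_sensitive)    # 1

# NOCASE collation folds ASCII case, so both spellings match.
case_insensitive = conn.execute(
    "SELECT COUNT(*) FROM Names WHERE n = 'alice' COLLATE NOCASE"
).fetchone()[0]
print(case_insensitive)  # 2
conn.close()
```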


In conclusion, this page has covered a range of SQL interview questions specifically tailored for intermediate developers. By delving into these questions, you can enhance your SQL knowledge and confidently tackle interviews in the field.

Understanding the mentioned concepts in this page and their practical applications will empower you to handle complex SQL scenarios and demonstrate your expertise to potential employers. Feel free to explore our SQL Interview Questions page for related content and additional resources.
