SQL Interview Questions for Experienced PDF

SQL Interview Questions for 3, 5, 7, and 10 Years of Experience

This SQL interview questions and answers guide helps experienced database professionals prepare for advanced job interviews, with questions designed specifically for the 3, 5, 7, and 10 year experience levels. It includes challenging SQL questions about complex queries, performance tuning, and database optimization, the kind of work senior developers and database administrators handle every day.

The guide covers everything from advanced joins and subqueries to stored procedures and database design that companies expect from experienced candidates. Use this guide to practice advanced SQL skills and secure your next senior database position.

SQL Interview Questions and Answers for 3 years Experience

Que 1. How would you optimize a slow-running SQL query in a production environment?

Answer: Optimizing a slow query involves identifying performance bottlenecks and addressing them without breaking business logic. Key steps include:

  • Analyze the execution plan using EXPLAIN or EXPLAIN ANALYZE to understand index usage and join order
  • Ensure proper indexing on columns used in WHERE, JOIN, and ORDER BY clauses
  • Avoid using SELECT *; explicitly select required columns
  • Rewrite subqueries as joins where beneficial
  • Break down complex queries into smaller steps using temporary tables
  • Use appropriate filtering conditions early in the query to reduce scanned rows
  • Consider denormalization or materialized views for heavy read operations

Example:

EXPLAIN ANALYZE
SELECT order_id, customer_id
FROM orders
WHERE status = 'SHIPPED';

Que 2. What is the difference between INNER JOIN and LEFT JOIN in terms of performance and result set?

Answer:

  • INNER JOIN returns only matching rows from both tables based on the join condition
  • LEFT JOIN returns all rows from the left table and matches from the right table; unmatched rows contain NULLs

Performance impact:

  • INNER JOIN is generally faster because it processes only matching records
  • LEFT JOIN can be slower as it must retain all rows from the left table, even without matches

Aspect | INNER JOIN | LEFT JOIN
Matching behavior | Only matching rows | All left table rows, match or NULL
Performance | Usually faster | Can be slower
Null handling | Not applicable | Right table columns may be NULL
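
Example, assuming hypothetical customers and orders tables:

SELECT c.customer_id, o.order_id
FROM customers c
INNER JOIN orders o ON o.customer_id = c.customer_id; -- only customers that have orders

SELECT c.customer_id, o.order_id
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id;  -- every customer; order_id is NULL when there is no match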

Que 3. How would you find the second highest salary from an Employees table without using LIMIT or TOP?

Answer: You can use a nested scalar subquery:

SELECT MAX(salary) AS second_highest_salary
FROM employees
WHERE salary < (SELECT MAX(salary) FROM employees);

This approach works in most SQL dialects and avoids database-specific keywords like LIMIT or TOP.

Que 4. How do you detect and handle deadlocks in SQL?

Answer:
To detect a deadlock:

  • Use database system logs or monitoring tools (e.g., SQL Server Profiler, MySQL Performance Schema)
  • Most RDBMS automatically detect deadlocks and terminate one of the transactions

To handle:

  • Ensure consistent access order to resources in transactions (see the sketch below)
  • Keep transactions short to reduce locking time
  • Use appropriate isolation levels like READ COMMITTED instead of SERIALIZABLE when possible
  • Break large updates into smaller batches
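
Example: a minimal T-SQL sketch of consistent access order, using hypothetical accounts and audit_log tables. If every transaction touches accounts before audit_log, two transactions cannot each hold the lock the other one needs:

BEGIN TRANSACTION;
    UPDATE accounts  SET balance = balance - 100 WHERE account_id = 1; -- always lock accounts first
    UPDATE audit_log SET last_event = GETDATE()  WHERE account_id = 1; -- then audit_log
COMMIT;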

Que 5. How would you update millions of rows without locking the table for a long time?

Answer: Break the update into smaller batches to minimize locks:

WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) orders
    SET status = 'ARCHIVED'
    WHERE status = 'DELIVERED';
    
    IF @@ROWCOUNT = 0
        BREAK;
END

This pattern reduces blocking and helps maintain database responsiveness.

Que 6. What is the difference between WHERE and HAVING clauses in SQL?

Answer:

  • WHERE filters rows before grouping and aggregation
  • HAVING filters rows after grouping and aggregation

Example:

SELECT department_id, COUNT(*) AS emp_count
FROM employees
WHERE active = 1
GROUP BY department_id
HAVING COUNT(*) > 5;

Here, WHERE removes inactive employees before grouping, and HAVING ensures only departments with more than five employees are returned.

Que 7. How can you identify missing indexes in a database?

Answer:

  • Use database-specific DMVs or tools (see the example below):
      • SQL Server: sys.dm_db_missing_index_details
      • MySQL: Performance Schema and EXPLAIN output
  • Monitor slow query logs and identify queries with full table scans
  • Evaluate execution plans for high-cost operations without index usage
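
Example: a SQL Server sketch against the missing-index DMV (the columns shown are from the real view; join to sys.dm_db_missing_index_group_stats for usage counts):

SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns
FROM sys.dm_db_missing_index_details AS d;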

Que 8. How do you handle duplicate rows in a table?

Answer: You can use ROW_NUMBER() to identify and delete duplicates:

WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
    FROM customers
)
DELETE FROM cte WHERE rn > 1;

This approach retains the first occurrence and deletes subsequent duplicates. Note that deleting through a CTE is SQL Server syntax; in MySQL or PostgreSQL, delete by joining back to the table on a unique key instead.

Que 9. What is the difference between clustered and non-clustered indexes?

Answer:

  • Clustered index defines the physical order of data in the table and there can be only one per table
  • Non-clustered index stores a separate structure with pointers to the actual data rows

Feature | Clustered Index | Non-Clustered Index
Physical order | Data stored in index order | Data stored separately
Count per table | One | Many
Storage | Larger | Smaller
Performance | Faster for range queries | Faster for lookups on key

Que 10. How do you find queries causing the highest load on the database?

Answer:

  • Check slow query logs and identify statements with long execution times
  • Use database-specific monitoring tools (see the example below):
      • SQL Server: sys.dm_exec_query_stats
      • MySQL: slow_query_log and performance_schema
      • PostgreSQL: pg_stat_statements
  • Analyze execution plans and tune indexing, query structure, or caching strategies accordingly
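
Example: a PostgreSQL sketch using pg_stat_statements (the extension must be installed; the timing column is total_exec_time in PostgreSQL 13 and later, total_time in older releases):

SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;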

SQL Interview Questions and Answers for 5 years Experience

Que 11. How would you identify and resolve performance bottlenecks in a SQL query?

Answer: Performance bottlenecks are often caused by inefficient query design, missing indexes, or poor execution plans. A systematic approach includes:

  • Use the EXPLAIN or EXPLAIN ANALYZE statement to check the execution plan.
  • Identify full table scans, nested loops, or expensive joins.
  • Add appropriate indexes, especially on columns used in WHERE, JOIN, and GROUP BY clauses.
  • Rewrite subqueries as joins where possible to avoid unnecessary repeated scans.
  • Use indexing strategies such as covering indexes for frequently accessed queries.
  • Partition large tables when dealing with big datasets to limit the scanned data range.

Example using PostgreSQL:

EXPLAIN ANALYZE
SELECT customer_id, COUNT(*) 
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;

Que 12. Explain the difference between Clustered and Non-Clustered Indexes with examples.

Answer:

Feature | Clustered Index | Non-Clustered Index
Data Storage | Data is physically stored in the order of the index. | Index contains pointers to the physical data.
Number per table | Only one per table. | Multiple allowed.
Speed for range queries | Generally faster. | Slower compared to clustered for range queries.
Use Case | Ideal for primary keys and frequently sorted columns. | Useful for frequently searched columns not in the clustered index.

Example:

-- Clustered Index (default for PRIMARY KEY)
CREATE TABLE Employees (
    EmpID INT PRIMARY KEY, -- Clustered index
    Name VARCHAR(50),
    DepartmentID INT
);

-- Non-Clustered Index
CREATE NONCLUSTERED INDEX idx_Department
ON Employees(DepartmentID);

Que 13. How do you handle deadlocks in SQL, and what strategies can prevent them?

Answer: Deadlocks occur when two or more transactions hold locks and wait for each other indefinitely.

Prevention strategies:

  • Access tables in a consistent order across all transactions.
  • Keep transactions short and reduce the lock footprint.
  • Use a lower isolation level if possible (READ COMMITTED instead of SERIALIZABLE).
  • Avoid user interaction during a transaction.

Detection:

  • Enable deadlock tracing (e.g., in SQL Server, DBCC TRACEON(1204, 1222)).
  • Analyze the deadlock graph and rework queries or indexes.

Que 14. What is the difference between WHERE and HAVING clauses, and when would you use each?

Answer:

Criteria | WHERE | HAVING
Purpose | Filters rows before grouping. | Filters groups after aggregation.
Use with Aggregate Functions | Not allowed directly. | Allowed.
Performance | Applied first, improves performance by reducing rows early. | Applied after grouping, potentially slower.

Example:

-- WHERE filters before aggregation
SELECT department_id, COUNT(*) 
FROM employees
WHERE hire_date >= '2020-01-01'
GROUP BY department_id;

-- HAVING filters after aggregation
SELECT department_id, COUNT(*) 
FROM employees
GROUP BY department_id
HAVING COUNT(*) > 5;

Que 15. How would you optimize a query that involves multiple joins and large datasets?

Answer:

  • Ensure proper indexing on join keys in both tables.
  • Use selective WHERE clauses to reduce the dataset before joining.
  • Prefer INNER JOIN over LEFT JOIN unless null handling is required.
  • Break complex queries into smaller, temporary result sets using Common Table Expressions (CTEs).
  • Check for unnecessary columns in the SELECT list to avoid extra I/O.

Example:

WITH FilteredOrders AS (
    SELECT * 
    FROM Orders
    WHERE OrderDate >= '2024-01-01'
)
SELECT c.CustomerName, COUNT(*) 
FROM Customers c
JOIN FilteredOrders o ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerName;

Que 16. How do you implement pagination efficiently in SQL for large datasets?

Answer:

  • Use indexed columns in the ORDER BY clause.
  • Avoid large OFFSET values as they cause unnecessary scans.
  • Use keyset pagination for better performance.

Example of OFFSET pagination:

SELECT * 
FROM Orders
ORDER BY OrderID
OFFSET 50 ROWS FETCH NEXT 20 ROWS ONLY;

Example of keyset pagination (LIMIT syntax, as in MySQL/PostgreSQL):

SELECT * 
FROM Orders
WHERE OrderID > 1000
ORDER BY OrderID
LIMIT 20;

Que 17. How would you handle data deduplication in a table with millions of rows?

Answer:

  • Use ROW_NUMBER or RANK to identify duplicates.
  • Delete rows based on duplicate criteria.

Example:

WITH Duplicates AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
    FROM Customers
)
DELETE FROM Duplicates WHERE rn > 1;

Performance tips:

  • Ensure indexes exist on columns used for duplicate detection.
  • Deduplicate in batches to avoid locking the entire table.

Que 18. What is the difference between a CTE and a temporary table, and when would you use each?

Answer:

Criteria | CTE | Temporary Table
Lifetime | Exists only during query execution. | Exists until session ends or explicitly dropped.
Storage | In memory, unless large enough to spill to disk. | Stored in tempdb (SQL Server) or similar temp storage.
Recursion | Supports recursion. | Does not support recursion directly.
Reusability | Not reusable in multiple queries in the same batch. | Can be queried multiple times.

Use CTE for:

  • Recursive queries.
  • Improving readability in single queries.

Use Temporary Table for:

  • Storing intermediate results for multiple uses (example below).
  • Repeated complex joins or aggregations.
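
Example: a SQL Server sketch where one intermediate result (a hypothetical #OrderCounts temp table) is reused by two later statements:

SELECT CustomerID, COUNT(*) AS order_count
INTO #OrderCounts
FROM Orders
GROUP BY CustomerID;

SELECT * FROM #OrderCounts WHERE order_count > 10; -- first reuse
SELECT AVG(order_count) FROM #OrderCounts;         -- second reuse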

Que 19. How do you write an efficient query to find the second highest salary from an Employee table?

Answer: Multiple methods exist:

Using LIMIT and ORDER BY:

SELECT DISTINCT salary
FROM Employees
ORDER BY salary DESC
LIMIT 1 OFFSET 1;

Using ROW_NUMBER:

WITH Ranked AS (
    SELECT salary, ROW_NUMBER() OVER (ORDER BY salary DESC) AS rn
    FROM Employees
)
SELECT salary FROM Ranked WHERE rn = 2;

Que 20. How would you design a database to handle high write throughput without locking issues?

Answer:

  • Use horizontal partitioning (sharding) to distribute writes across multiple servers.
  • Optimize indexes to reduce write overhead.
  • Apply batch inserts or updates to reduce transaction overhead.
  • Use appropriate isolation levels to minimize locking.
  • Employ optimistic concurrency control where applicable (see the sketch below).
  • Consider write-optimized storage engines (e.g., InnoDB over MyISAM in MySQL).
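
Example: a minimal sketch of optimistic concurrency using a hypothetical version column. The update succeeds only if no one else changed the row since it was read, so no long-held locks are needed; the application retries when zero rows are affected:

UPDATE accounts
SET balance = 500,
    version = version + 1
WHERE account_id = 42
  AND version = 7; -- the version the application read; 0 rows affected means a concurrent update won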

SQL Interview Questions and Answers for 7 years Experience

Que 21. How would you optimize a query suffering from parameter sniffing issues in SQL Server?

Answer: Parameter sniffing occurs when SQL Server caches an execution plan based on the first parameter value and reuses it for subsequent executions, leading to poor performance for other parameter values.

Common solutions:

  • Use OPTION (RECOMPILE) to generate a fresh plan for each execution.
  • Use OPTIMIZE FOR to hint the optimizer toward a specific parameter value.
  • Assign parameters to local variables inside procedures to avoid direct sniffing.
  • Create multiple stored procedures optimized for different data patterns.

Example:

SELECT * FROM Orders
WHERE CustomerID = @CustID
OPTION (OPTIMIZE FOR (@CustID = 1001));

Que 22. Explain your approach to indexing a highly transactional table without causing write performance degradation.

Answer: In high-write workloads, indexes should be carefully chosen to avoid slowing down inserts, updates, and deletes.

Best practices:

  • Limit the number of indexes to those that serve critical queries.
  • Use filtered indexes for selective queries (example below).
  • Employ covering indexes to reduce lookups.
  • Schedule index rebuilds or reorganizations during low-traffic hours.
  • Consider partitioning large transactional tables for better performance.
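
Example: a SQL Server sketch of a filtered index on a hypothetical Orders table; only the small set of open orders is indexed, so most writes skip index maintenance:

CREATE NONCLUSTERED INDEX idx_orders_open
ON Orders (CustomerID, OrderDate)
WHERE Status = 'OPEN';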

Que 23. How do you identify and fix implicit conversions that cause index scans instead of seeks?

Answer: Implicit conversions happen when SQL Server automatically converts data types between query parameters and table columns, preventing index usage.

Steps:

  • Check the execution plan for CONVERT or CAST operations.
  • Ensure query parameter data types match table column definitions.
  • Modify application code or schema to standardize data types.

Example:

-- Causes implicit conversion
SELECT * FROM Orders WHERE OrderID = '100';

-- Correct
SELECT * FROM Orders WHERE OrderID = 100;

Que 24. How do you design an indexing strategy for a table used for both OLTP and OLAP workloads?

Answer: Balancing OLTP and OLAP requires indexing for both fast writes and analytical reads.

Approach:

  • Use a clustered index on a narrow, sequential key to optimize writes.
  • Add nonclustered indexes for common analytical queries.
  • Consider filtered indexes to target specific OLAP queries.
  • For large datasets, implement columnstore indexes for aggregated queries (sketch below).
  • Periodically review index usage statistics and adjust accordingly.
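
Example: a SQL Server sketch adding a nonclustered columnstore index for analytical scans over a hypothetical Sales table, alongside the row-store indexes that serve OLTP:

CREATE NONCLUSTERED COLUMNSTORE INDEX idx_sales_columnstore
ON Sales (OrderDate, CustomerID, Amount);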

Que 25. How would you troubleshoot a query with sudden performance degradation despite no code changes?

Answer:

  • Check for statistics updates; outdated stats can cause poor execution plans.
  • Investigate parameter sniffing or cached plan regression.
  • Look for recent data growth affecting indexes.
  • Review blocking or deadlocks impacting query execution.
  • Compare the current execution plan with a historical one to identify differences.

Que 26. How can you implement real-time reporting without impacting the OLTP database performance?

Answer:

  • Use replication or Change Data Capture (CDC) to feed a reporting database.
  • Offload reporting queries to a read replica.
  • Use indexed views for pre-aggregated data.
  • Schedule ETL jobs for near-real-time data into a separate OLAP system.

Que 27. What strategies do you use to manage partitioned tables effectively?

Answer:

  • Choose partition keys based on query patterns and data distribution.
  • Align index partitioning with table partitions.
  • Implement partition switching for fast data archiving or loading (as shown below).
  • Monitor partition size and distribution to avoid skew.
  • Merge small partitions to optimize performance when needed.
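
Example: a SQL Server sketch of partition switching, assuming a hypothetical OrdersArchive table with a schema identical to Orders. The switch is a metadata-only operation, so archiving a full partition is near-instant:

ALTER TABLE Orders
SWITCH PARTITION 1 TO OrdersArchive;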

Que 28. How would you detect and handle blocking issues in a production environment?

Answer:

  • Use system views like sys.dm_exec_requests and sys.dm_exec_sessions to identify blocking chains (example below)
  • Capture blocking session IDs and queries.
  • Kill or adjust long-running blocking transactions if necessary.
  • Implement shorter transactions and proper indexing to reduce blocking.
  • Use READ COMMITTED SNAPSHOT isolation to reduce lock contention.
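
Example: a SQL Server sketch listing currently blocked requests and their blockers:

SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;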

Que 29. How do you optimize queries that rely heavily on window functions over large datasets?

Answer:

  • Use indexes that match the PARTITION BY and ORDER BY clauses.
  • Limit the dataset with a WHERE clause before applying window functions.
  • Pre-aggregate data in CTEs or temp tables when possible.
  • For very large datasets, consider batch processing with ROW_NUMBER filters.

Example:

WITH SalesRank AS (
    SELECT OrderID, CustomerID,
           ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS rn
    FROM Orders
)
SELECT * FROM SalesRank WHERE rn = 1;

Que 30. How would you design a database for high availability and disaster recovery in SQL Server?

Answer:

  • Implement Always On Availability Groups for automatic failover.
  • Use database mirroring or log shipping as secondary failover options.
  • Maintain regular full, differential, and transaction log backups.
  • Store backups in geographically separate locations.
  • Periodically test restore procedures to validate recovery time objectives (RTO) and recovery point objectives (RPO).

SQL Interview Questions and Answers for 10 years Experience

Que 31. How do you handle performance tuning for a mission-critical SQL system that operates 24/7 without downtime?

Answer: In high-availability systems, tuning must be non-disruptive.

Approach:

  • Use live query statistics and execution plans to identify slow-running queries in real time.
  • Apply index changes online if supported by the RDBMS (e.g., ONLINE = ON in SQL Server, as shown below).
  • Implement query hints or plan guides to optimize without changing application code.
  • Partition large tables and indexes to reduce scan sizes.
  • Update statistics incrementally to avoid full-table reads.
  • Leverage read replicas or Always On secondaries for running heavy reports without affecting primary performance.
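
Example: a SQL Server sketch of an online rebuild (an Enterprise edition feature) for a hypothetical index; the table stays readable and writable during the operation:

ALTER INDEX idx_orders_customer ON Orders
REBUILD WITH (ONLINE = ON);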

Que 32. How would you approach database sharding in a large-scale, write-intensive system?

Answer: Sharding distributes data across multiple servers to reduce load and improve scalability.

Key considerations:

  • Choose a shard key that evenly distributes data and queries (e.g., customer_id instead of timestamp for time-heavy inserts).
  • Implement application-side shard routing logic or use middleware that supports shard management.
  • Maintain consistent schema across all shards.
  • Implement cross-shard query strategies carefully to avoid excessive network latency.
  • Plan for re-sharding when data distribution changes over time.

Que 33. How do you design an indexing strategy for a multi-terabyte table with both OLTP and analytical workloads?

Answer:

  • Create a clustered index on a narrow, monotonically increasing key for OLTP performance.
  • Use filtered nonclustered indexes for common selective OLTP queries.
  • For analytics, add columnstore indexes to speed up aggregations.
  • Employ partitioned indexes to allow index maintenance on specific ranges without locking the entire table.
  • Monitor sys.dm_db_index_usage_stats to remove unused indexes that increase write cost.
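
Example: a SQL Server sketch that surfaces indexes being written to but never read, which are candidates for removal:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.user_updates > 0
  AND s.user_seeks + s.user_scans + s.user_lookups = 0;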

Que 34. How do you investigate and fix a query with a parameter-sensitive plan problem in a critical stored procedure?

Answer:

  • Check the execution plan for discrepancies between parameter values.
  • Use OPTION (RECOMPILE) for queries that vary significantly based on input parameters.
  • Apply OPTIMIZE FOR UNKNOWN to generate a more generic plan when parameter distribution is skewed (sketch below).
  • Consider splitting the stored procedure into multiple versions optimized for different parameter value ranges.
  • Keep statistics updated with FULLSCAN for skewed columns to improve optimizer accuracy.
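
Example: a sketch of the generic-plan hint, assumed to run inside a stored procedure with a @CustID parameter:

SELECT OrderID, OrderDate
FROM Orders
WHERE CustomerID = @CustID
OPTION (OPTIMIZE FOR UNKNOWN);

-- Or, when compiling a fresh plan per execution is acceptable:
-- OPTION (RECOMPILE);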

Que 35. How do you implement and manage Change Data Capture (CDC) for near-real-time analytics?

Answer:

  • Enable CDC on source tables to track inserts, updates, and deletes (example below).
  • Use staging tables to store and transform captured data before pushing it to analytics systems.
  • Schedule incremental ETL jobs to load only changed data.
  • Purge CDC logs periodically to prevent excessive storage use.
  • Monitor latency between change capture and reporting to meet business SLAs.
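
Example: a SQL Server sketch enabling CDC on the current database and a hypothetical dbo.Orders table:

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL; -- NULL skips the gating role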

Que 36. How do you diagnose and resolve high I/O wait times in a database under peak load?

Answer:

  • Check wait statistics (e.g., sys.dm_os_wait_stats in SQL Server) to identify I/O bottlenecks (example below).
  • Examine missing indexes or inefficient queries causing large scans.
  • Move large or heavily accessed tables to faster storage tiers (e.g., SSD).
  • Use compression to reduce I/O volume.
  • Optimize tempdb configuration with multiple data files to reduce contention.
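
Example: a SQL Server sketch of the usual starting point, ranking wait types by accumulated wait time:

SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;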

Que 37. How would you implement a disaster recovery strategy with an RPO of under 5 minutes and RTO under 15 minutes?

Answer:

  • Use synchronous replication for high availability and asynchronous replication to a geographically distant site.
  • Implement frequent transaction log backups or log shipping with minimal intervals (sketch below).
  • Regularly test failover procedures to ensure RTO compliance.
  • Maintain automated scripts for DNS or application connection string updates during failover.
  • Keep a warm standby server ready with near-real-time data synchronization.
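
Example: a T-SQL sketch of a frequent log backup (hypothetical database name and path); scheduled every few minutes, this bounds data loss to the backup interval:

BACKUP LOG SalesDB
TO DISK = N'\\backupserver\backups\SalesDB_log.trn';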

Que 38. What strategies would you use to optimize a query with complex window functions over billions of rows?

Answer:

  • Filter the dataset before applying window functions.
  • Use indexes that match PARTITION BY and ORDER BY columns.
  • Pre-aggregate data in staging tables or materialized views.
  • Consider parallel query execution if supported by the RDBMS.
  • Break the query into smaller batches to process data incrementally.

Example:

WITH RankedSales AS (
    SELECT OrderID, CustomerID,
           ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS rn
    FROM Sales
    WHERE OrderDate >= '2024-01-01'
)
SELECT * FROM RankedSales WHERE rn <= 3;

Que 39. How do you ensure query plan stability in a system where execution plans fluctuate due to changing data patterns?

Answer:

  • Use plan guides or forced plan execution features (e.g., Query Store in SQL Server, as shown below).
  • Keep statistics updated regularly to prevent suboptimal plan choices.
  • Identify and address skewed data distributions that affect the optimizer’s estimates.
  • Pin frequently used queries to stable, well-performing plans and monitor their performance.
  • Review plan regressions periodically and adjust indexes or query design.
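
Example: a SQL Server sketch forcing a known-good plan through Query Store (the IDs come from the Query Store catalog views and are hypothetical here):

EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 101;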

Que 40. How do you design a hybrid transactional and analytical processing (HTAP) system without degrading OLTP performance?

Answer:

  • Use change streaming technologies (e.g., Debezium, CDC, replication) to feed an OLAP store.
  • Keep OLTP schema normalized for write efficiency while denormalizing OLAP structures for read efficiency.
  • Use indexed views or materialized views for pre-computed analytics (sketch below).
  • Offload heavy aggregations to a separate analytics engine such as Azure Synapse, BigQuery, or Snowflake.
  • Schedule intensive analytical jobs during off-peak OLTP hours when possible.
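
Example: a SQL Server sketch of an indexed view over a hypothetical dbo.Sales table (SCHEMABINDING and COUNT_BIG are required for indexed views; Amount is assumed NOT NULL):

CREATE VIEW dbo.v_daily_sales
WITH SCHEMABINDING
AS
SELECT OrderDate,
       COUNT_BIG(*) AS order_count,
       SUM(Amount)  AS total_amount
FROM dbo.Sales
GROUP BY OrderDate;
GO

CREATE UNIQUE CLUSTERED INDEX idx_v_daily_sales
ON dbo.v_daily_sales (OrderDate);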

PL/SQL Interview Questions and Answers for Experienced Professionals

Que 41. How do you optimize a PL/SQL block that processes millions of rows without degrading performance?

Answer: For large data volumes, avoid row-by-row processing (slow-by-slow). Use bulk processing to minimize context switching between SQL and PL/SQL.

Best practices:

  • Use BULK COLLECT to fetch multiple rows into a collection.
  • Use FORALL to perform bulk DML operations.
  • Limit bulk collection size to avoid excessive PGA memory usage.
  • Commit in controlled batches to prevent undo tablespace overflow.

Example:

DECLARE
    TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
    v_ids t_ids;
BEGIN
    SELECT employee_id BULK COLLECT INTO v_ids
    FROM employees
    WHERE department_id = 50;

    FORALL i IN v_ids.FIRST..v_ids.LAST
        UPDATE employees
        SET salary = salary * 1.1
        WHERE employee_id = v_ids(i);
END;

Que 42. Explain the difference between an autonomous transaction and a regular transaction in PL/SQL with use cases.

Answer:

Criteria | Regular Transaction | Autonomous Transaction
Commit Scope | Shares commit/rollback with the main transaction. | Commits independently of the main transaction.
Rollback Impact | Rollback affects all operations in the transaction. | Rollback affects only the autonomous block.
Use Cases | Standard DML operations. | Logging, audit trail inserts, sending notifications.

Example:

PRAGMA AUTONOMOUS_TRANSACTION;

Use when you need to commit changes regardless of the outcome of the main transaction, such as writing to an error log table.

Que 43. How do you handle exceptions in a PL/SQL package to ensure both error logging and transaction integrity?

Answer:

  • Use a WHEN OTHERS exception handler at appropriate levels to log errors.
  • Implement autonomous transactions for error logging to avoid rollback of log entries.
  • Re-raise the exception if it must be handled by the calling program.

Example:

EXCEPTION
    WHEN OTHERS THEN
        log_error(SQLCODE, SQLERRM); -- autonomous transaction
        ROLLBACK;
        RAISE;

Que 44. How do you debug performance issues in a PL/SQL procedure that runs slower over time?

Answer:

  • Use DBMS_PROFILER or DBMS_HPROF to identify bottlenecks at the line or function level (example below).
  • Check SQL queries within the procedure for suboptimal execution plans.
  • Review bind variable usage to prevent hard parsing.
  • Examine indexes and statistics on queried tables.
  • Look for unintentional nested loops or repeated queries in loops.
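
Example: a minimal sketch using DBMS_PROFILER (the profiler tables must be installed first, and process_payroll is a hypothetical procedure under investigation):

BEGIN
    DBMS_PROFILER.START_PROFILER('payroll_run_1'); -- tag this profiling run
    process_payroll;
    DBMS_PROFILER.STOP_PROFILER;                   -- flush line-level timings to the profiler tables
END;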

Que 45. What is the difference between a function and a procedure in PL/SQL, and when would you use each?

Answer:

Aspect | Function | Procedure
Return Value | Must return a value. | Does not return a value (can use OUT parameters).
SQL Usability | Can be used in SQL statements if it is deterministic and free from side effects. | Cannot be directly used in SQL.
Purpose | Calculation, returning single values. | Executing complex business logic, multiple operations.

Example:

  • Use a function to calculate tax (sketch below).
  • Use a procedure to generate monthly payroll and update multiple tables.
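
A minimal sketch of the tax function (hypothetical flat rate), callable directly from SQL:

CREATE OR REPLACE FUNCTION calc_tax(p_amount NUMBER) RETURN NUMBER IS
BEGIN
    RETURN p_amount * 0.18; -- assumed flat 18% rate for illustration
END;

-- Usable inside a query:
-- SELECT salary, calc_tax(salary) FROM employees;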

Que 46. How do you improve the performance of a cursor that returns millions of rows?

Answer:

  • Use BULK COLLECT with a LIMIT clause to fetch rows in batches.
  • Avoid unnecessary column retrieval by selecting only required fields.
  • Apply appropriate WHERE filters to minimize the dataset.
  • Use cursor variables for flexibility in passing queries dynamically.

Example:

OPEN c1;
LOOP
    FETCH c1 BULK COLLECT INTO v_data LIMIT 1000;
    EXIT WHEN v_data.COUNT = 0;
    -- process the current batch of up to 1000 rows here
END LOOP;
CLOSE c1;

Que 47. How do you manage package state in a multi-session environment to avoid unexpected behavior?

Answer:

  • Understand that package variables persist for a session’s lifetime.
  • Avoid using package state for session-independent data.
  • Use DBMS_SESSION.RESET_PACKAGE to clear package state when necessary.
  • Design stateless packages for scalability in application servers using connection pooling.

Que 48. How do you prevent mutating table errors in a row-level trigger?

Answer:

  • Use a compound trigger to separate row-level operations from statement-level logic.
  • Store affected row IDs in a collection during the row-level section, then process them after the statement completes.

Example:

CREATE OR REPLACE TRIGGER trg_update
FOR UPDATE ON employees
COMPOUND TRIGGER
    TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
    v_ids t_ids := t_ids();
AFTER EACH ROW IS
BEGIN
    v_ids.EXTEND;
    v_ids(v_ids.LAST) := :NEW.employee_id;
END AFTER EACH ROW;

AFTER STATEMENT IS
BEGIN
    -- process v_ids here
END AFTER STATEMENT;
END;

Que 49. How do you secure PL/SQL code from being viewed or altered by unauthorized users?

Answer:

  • Use the WRAP utility to obfuscate PL/SQL source code before deployment (example below).
  • Restrict object privileges to authorized roles.
  • Use database roles and VPD (Virtual Private Database) for access control.
  • Store sensitive logic in packages and avoid exposing it in ad-hoc queries.
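
Example: the wrap utility runs from the operating-system shell, not from SQL (hypothetical file names):

wrap iname=error_logger_body.sql oname=error_logger_body.plb

The generated .plb file is deployed instead of the readable source.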

Que 50. How do you implement error logging across multiple PL/SQL modules without repeating code?

Answer:

  • Create a centralized error logging package that contains procedures for writing errors to a log table.
  • Use PRAGMA AUTONOMOUS_TRANSACTION inside the logging procedure.
  • Call the logging procedure in exception handlers across different modules.

Example:

CREATE OR REPLACE PACKAGE error_logger IS
    PROCEDURE log_error(p_code NUMBER, p_msg VARCHAR2);
END error_logger;

CREATE OR REPLACE PACKAGE BODY error_logger IS
    PROCEDURE log_error(p_code NUMBER, p_msg VARCHAR2) IS
        PRAGMA AUTONOMOUS_TRANSACTION; -- commits the log entry independently of the caller
    BEGIN
        INSERT INTO error_log (error_code, error_message, log_time)
        VALUES (p_code, p_msg, SYSDATE);
        COMMIT;
    END;
END error_logger;