Oracle Database

Why Your Oracle SQL Is Slow And How to Fix It: A DBA's Introduction to SQL Tuning

SQL tuning is the iterative process of diagnosing and repairing SQL statements that fail to meet a performance standard. But knowing where to start is harder than it sounds. In this first post of a 12-part series based on the Oracle AI Database SQL Tuning Guide (26ai), I walk through the complete SQL tuning framework: the six tuning tasks every DBA must know, every major tool available, the difference between hard and soft parsing, and the application design decisions that create performance problems before a single query runs. With working code, real diagnostics, and DBA insight from production experience.


by Thiwanka Senarathna, April 1, 2026

I have been working with Oracle EBS environments for over six years, and the same conversation repeats itself in almost every organization: a developer or business user complains that "the system is slow," a manager escalates it, and someone points at the database. What happens next determines whether your team spends the next two hours resolving the problem or the next two weeks chasing the wrong cause.

This post is the start of a 12-part series where I go deep into Oracle SQL tuning based directly on the Oracle AI Database SQL Tuning Guide, Release 26ai (January 2026), which is the most current and comprehensive Oracle document on this topic. Each post in this series is written for working DBAs and developers, with verified code samples, real diagnostic queries, and the practical insight that comes from working with production Oracle environments.

Let us start at the beginning.

What SQL Tuning Actually Is

The Oracle documentation defines SQL tuning precisely: it is the iterative process of improving SQL statement performance to meet specific, measurable, and achievable goals.

Two words in that definition matter more than the others: iterative and measurable.

Iterative means this is not a one-time fix. You tune, you measure, you tune again. The optimizer changes as statistics age, data volumes grow, and execution environments shift. A query that runs in 0.3 seconds today may take 45 seconds six months from now: same SQL, different data, stale statistics.

Measurable means you must define success before you start. "Make it faster" is not a tuning goal. "Reduce response time from 12 seconds to under 2 seconds for the order entry transaction" is a tuning goal. Without a measurable target, you cannot know when you are done, and you cannot justify the effort to management.

It is also worth being precise about what SQL tuning is not. SQL tuning is about fixing problems in deployed applications; it is a reactive discipline. The proactive discipline, designing applications and schemas for good SQL performance before deployment, is SQL performance methodology, and we cover that in this post as well.

The Two Tuning Goals You Will Always Be Chasing

When a SQL statement becomes a problem, it has failed to meet a predetermined performance standard. After identifying the problem, your tuning session will have one of two goals:

Goal 1: Reduce user response time. This means decreasing the time between when a user issues a statement and receives a response. This is the goal for OLTP environments: the order entry screen that hangs for three minutes, the inventory lookup that times out.

Goal 2: Improve throughput. This means using the least amount of resources necessary to process all rows accessed by a statement. This is the goal for batch and reporting environments: the overnight ETL job that consumes every CPU on the database host, blocking every other user's queries.

The Oracle documentation uses a powerful example to illustrate why this distinction matters. Suppose two separate SQL statements both take three minutes to complete. In the first case, it is a customer-facing OLTP query that hangs a shopping cart. In the second, it is a parallel data warehouse query that monopolizes the CPU. The user response time is identical (three minutes), but the cause is completely different, and so is the correct solution. Getting this wrong wastes time and can make things worse.

The Six SQL Tuning Tasks Every DBA Must Know

Whether you are tuning proactively (regularly reviewing SQL with advisors) or reactively (fixing something that is on fire right now), a complete tuning session involves these six tasks. Skipping any one of them is how you end up solving the wrong problem.

Task 1: Identify High-Load SQL Statements

Before you can fix anything, you need to find the right SQL to fix. This means reviewing execution history to find the statements responsible for a large share of workload and system resources.

The most direct way to find high-load SQL from the command line is to query V$SQL:

sql

-- Find top 10 SQL statements by total elapsed time
-- Run as DBA or with SELECT ANY DICTIONARY privilege

SELECT sql_id,
       executions,
       elapsed_time,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_secs,
       buffer_gets,
       disk_reads,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS avg_buffer_gets,
       SUBSTR(sql_text, 1, 80) AS sql_text_sample
FROM   v$sql
WHERE  executions > 0
ORDER BY elapsed_time DESC
FETCH FIRST 10 ROWS ONLY;

For a broader workload view, query the Automatic Workload Repository:

sql

-- Find top SQL by CPU time from AWR (last 24 hours)
-- Requires Diagnostics Pack license

SELECT s.sql_id,
       s.executions_delta AS executions,
       ROUND(s.cpu_time_delta / 1e6, 2) AS cpu_secs,
       ROUND(s.elapsed_time_delta / 1e6, 2) AS elapsed_secs,
       ROUND(s.cpu_time_delta / NULLIF(s.executions_delta,0) / 1e6, 4) AS avg_cpu_secs,
       SUBSTR(t.sql_text, 1, 80) AS sql_text
FROM   dba_hist_sqlstat s
JOIN   dba_hist_sqltext  t ON s.sql_id = t.sql_id
JOIN   dba_hist_snapshot sn ON s.snap_id = sn.snap_id
WHERE  sn.begin_interval_time >= SYSDATE - 1
AND    s.executions_delta > 0
ORDER BY s.cpu_time_delta DESC
FETCH FIRST 10 ROWS ONLY;

Task 2: Gather Performance-Related Data

Once you have identified a problematic SQL statement, gather all the data relevant to understanding its performance: optimizer statistics on the tables and indexes involved, the current execution plan, any SQL profiles or baselines attached to the statement, and the structure of any objects it references.

Optimizer statistics are the single most important data point. If statistics do not exist or are stale, the optimizer is making decisions based on inaccurate information, and no amount of hint-based tuning will fix that root problem sustainably.

sql

-- Check statistics freshness for a specific table
SELECT table_name,
       num_rows,
       last_analyzed,
       ROUND(SYSDATE - last_analyzed) AS days_old,
       stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'HR'
AND    table_name = 'EMPLOYEES';

-- Check if a SQL statement has a SQL profile attached
SELECT name,
       sql_text,
       status,
       force_matching
FROM   dba_sql_profiles
WHERE  sql_text LIKE '%employees%'  -- adjust to match your SQL
ORDER BY created DESC;

Task 3: Determine the Root Cause

The Oracle documentation identifies five root causes for SQL performance problems. In my experience, every production performance issue maps to one of these and frequently to more than one simultaneously.

  • Inefficiently designed SQL: The SQL itself is written badly. You will see unnecessary full table scans, Cartesian joins, and UNION where UNION ALL would do.

  • Suboptimal execution plan: The optimizer chose the wrong access path. You will see the wrong join method, wrong index, or wrong join order.

  • Missing SQL access structures: There is no index or materialized view where one would help. You will see full table scans on large tables with selective predicates.

  • Stale optimizer statistics: DBMS_STATS cannot keep up with DML changes. You will see cardinality misestimates and bad join orders.

  • Hardware problems: Memory, I/O, or CPU constraints. You will see high wait events on I/O or memory allocation.

The most common mistake I see junior DBAs make is jumping straight to adding an index (missing access structures) without first checking whether the existing execution plan is correct and whether statistics are fresh. Adding an index to a table with stale statistics may make the plan worse.

Task 4: Define the Scope of the Problem

The Oracle documentation makes a subtle but critical point here: the scope of the solution must match the scope of the problem.

Consider two scenarios: a shared pool that is too small (causing cursors to age out rapidly and forcing repeated hard parses across all sessions), versus a single SQL statement that ignores a useful index. The first problem requires a database-level fix: increasing the shared pool via SHARED_POOL_SIZE, SGA_TARGET, or MEMORY_TARGET. The second requires a statement-level fix: a hint, a SQL profile, or schema restructuring.

Applying a database-level fix (such as changing an optimizer initialization parameter for the entire database) to a statement-level problem can damage performance for every other user. Always match the scope of the fix to the scope of the problem.
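Before applying a database-wide fix, it is worth confirming that parse pressure really is instance-wide. The following is a minimal diagnostic sketch; the statistic and namespace names are the standard ones documented for V$SYSSTAT and V$LIBRARYCACHE:

```sql
-- Instance-wide parse activity: a high ratio of hard parses to
-- total parses suggests a shared pool or cursor sharing problem,
-- not a single bad statement.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)',
                'parse count (hard)',
                'execute count');

-- Library cache health for SQL cursors: frequent reloads often
-- mean cursors are aging out of a shared pool that is too small.
SELECT namespace, gets, pins, reloads, invalidations
FROM   v$librarycache
WHERE  namespace = 'SQL AREA';
```

If hard parses are a small fraction of total parses and reloads are low, the problem is more likely statement-level, and a database-wide change is the wrong scope.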

Task 5: Implement Corrective Actions

The corrective actions depend on the root cause. Based on production experience, here are the most common fixes mapped to their root causes:

  • Inefficient SQL design: Rewrite using equijoins, remove functions from the WHERE clause, use UNION ALL instead of UNION, use bind variables.

  • Suboptimal execution plan: Apply a SQL profile or SQL plan baseline, test a hint, or gather fresh statistics.

  • Missing access structures: Add an index on a selective column, add a composite index for covering queries, or create a materialized view.

  • Stale statistics: Run DBMS_STATS.GATHER_TABLE_STATS with appropriate options.

  • Hard parse storm: Introduce bind variables, increase the shared pool size, or enable CURSOR_SHARING as an emergency measure.

sql

-- Example: Correct SQL design -- using bind variable instead of literal
-- BAD: causes a unique hard parse for every different name value
SELECT employee_id, first_name, last_name
FROM   hr.employees
WHERE  last_name LIKE 'King%';

-- GOOD: one parse, reused for all executions
-- (Use this pattern in your application code with a bind variable)
SELECT employee_id, first_name, last_name
FROM   hr.employees
WHERE  last_name LIKE :last_name_search;

The difference in scalability is measurable. The Oracle documentation cites a test on a four-CPU server showing the following user capacity under different parse conditions:

  • No parsing (pre-parsed statements): 270 users

  • Soft parsing all statements: 150 users

  • Hard parsing all statements: 60 users

  • Reconnecting for every transaction: 30 users

This is why bind variables are not optional in any serious Oracle application. The difference between 270 users and 60 users on the same hardware is entirely a function of parse strategy.

Task 6: Prevent SQL Performance Regressions

Fixing the problem once is not enough. You need to ensure that optimal execution plans persist through database upgrades, statistics refreshes, and data volume changes. The primary tools for this are SQL plan baselines (SQL Plan Management), SQL profiles, and properly maintained optimizer statistics.
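As a sketch of how to lock in a repaired plan, a known-good plan can be loaded from the cursor cache into a SQL plan baseline with DBMS_SPM; the SQL ID below is a placeholder you would replace with your own:

```sql
-- Capture the current (tuned) plan as a SQL plan baseline so it
-- persists through upgrades, statistics refreshes, and data growth.
SET SERVEROUTPUT ON

DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
    sql_id => 'your_sql_id_here'   -- replace with the tuned SQL ID
  );
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/

-- Verify the baseline was created and is enabled and accepted
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
ORDER BY created DESC;
```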

SQL Tuning Tools: Automated and Manual

Oracle provides two categories of tools for SQL tuning. Understanding what each tool does and what it cannot do prevents you from reaching for the wrong one.

Automated SQL Tuning Tools

Automatic Database Diagnostic Monitor (ADDM)

ADDM is self-diagnostic software that automatically analyzes AWR snapshots to find root causes of performance problems. It runs automatically after each AWR snapshot (by default, every hour) and produces findings and recommendations.

ADDM can identify high-load SQL, recommend running SQL Tuning Advisor on specific statements, and quantify the expected benefit of its recommendations in minutes saved per AWR interval. This makes ADDM the right first stop when investigating a performance problem you have just been alerted to: check the most recent ADDM finding before doing anything else.
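To act on that advice from SQL*Plus rather than Enterprise Manager, the advisor views can be queried directly (Diagnostics Pack required). This is a sketch; the exact columns available may vary slightly by release:

```sql
-- Most impactful recent ADDM findings
SELECT f.task_id,
       f.finding_id,
       f.type,
       f.impact,
       SUBSTR(f.message, 1, 120) AS finding
FROM   dba_advisor_findings f
JOIN   dba_advisor_tasks t ON f.task_id = t.task_id
WHERE  t.advisor_name = 'ADDM'
ORDER BY f.impact DESC NULLS LAST
FETCH FIRST 10 ROWS ONLY;
```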

SQL Tuning Advisor

SQL Tuning Advisor takes SQL statements as input and invokes the Automatic Tuning Optimizer to analyze them. It performs four types of analysis:

  • Checks for missing or stale statistics

  • Builds SQL profiles containing corrections for suboptimal optimizer estimates

  • Explores alternative access paths

  • Identifies SQL statements prone to suboptimal plans

The output is a set of recommendations, each with a rationale and expected benefit. You can accept recommendations to create SQL profiles, gather statistics, or restructure SQL.

sql

-- Run SQL Tuning Advisor on demand for a specific SQL_ID
-- Replace 'your_sql_id_here' with the actual SQL ID from V$SQL

DECLARE
  l_task_name VARCHAR2(30);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_id      => 'your_sql_id_here',
    scope       => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
    time_limit  => 300,  -- 5 minutes maximum
    task_name   => 'tune_my_sql',
    description => 'Tuning task for high-load query'
  );

  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_my_sql');
END;
/

-- View the tuning recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_my_sql')
FROM   DUAL;

SQL Access Advisor

SQL Access Advisor recommends which materialized views, indexes, and materialized view logs to create, drop, or retain based on your actual workload. It considers the trade-off between storage overhead and query performance before making recommendations.
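For a single statement, DBMS_ADVISOR.QUICK_TUNE is the quickest way to get an access-structure recommendation without building a full workload task. A hedged sketch; the task name and the query are illustrative placeholders:

```sql
-- Ask SQL Access Advisor for access structures for one statement
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'quick_access_task',
    'SELECT department_id, COUNT(*) FROM hr.employees GROUP BY department_id'
  );
END;
/

-- Review the resulting recommendations
SELECT rec_id, rank, benefit
FROM   user_advisor_recommendations
WHERE  task_name = 'quick_access_task';
```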

Automatic Indexing

Automatic Indexing is one of the most operationally significant features added in recent Oracle releases. The process runs in the background every 15 minutes and does the following:

  1. Identifies index candidates based on column usage patterns in SQL workload

  2. Creates candidate indexes as invisible and unusable (metadata only, so zero impact on DML)

  3. Test-parses workload SQL to determine which candidates the optimizer finds beneficial

  4. Test-executes SQL to verify actual performance improvement

  5. Makes beneficial indexes visible to the application workload

  6. Creates SQL plan baselines to protect against any regressions when new indexes become visible

  7. Drops unused automatic indexes after the configured retention period (default: 373 days)

sql

-- Enable Automatic Indexing (Oracle Database 19c or later; note it is
-- supported only on certain platforms, such as Exadata and Oracle Cloud)
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Report on current configuration settings
SELECT parameter_name, parameter_value
FROM   dba_auto_index_config;

-- Check automatically created indexes
SELECT index_name,
       table_name,
       auto,
       visibility,
       status,
       last_analyzed
FROM   dba_indexes
WHERE  auto = 'YES'
ORDER BY last_analyzed DESC;

-- Change retention period for unused auto indexes (default is 373 days)
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '180');

-- Drop a specific automatic index (allow recreate = TRUE)
EXEC DBMS_AUTO_INDEX.DROP_AUTO_INDEXES('HR', '"SYS_AI_c0cmdvbzgyq94"', TRUE);

-- Drop all automatic indexes for a schema (allow recreate)
EXEC DBMS_AUTO_INDEX.DROP_AUTO_INDEXES('HR', NULL, TRUE);

-- Mark the session as ad hoc so its SQL does not drive
-- automatic index creation
ALTER SESSION SET optimizer_session_type = 'ADHOC';

SQL Plan Management (Automatic SPM)

Automatic SPM detects and repairs plan performance regressions automatically. When a SQL statement executes with a plan that is new relative to what is stored in the Automatic SQL Tuning Set (ASTS), the execution performance is compared against historical plans.

sql

-- Configure Automatic SPM in background verification mode
-- (inspects AWR/ASTS, test executes alternatives, creates baselines)
EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'ON');

-- Configure Automatic SPM in real-time mode
-- (evaluates plan quality immediately during SQL execution)
EXEC DBMS_SPM.CONFIGURE('AUTO_SPM_EVOLVE_TASK', 'AUTO');

The difference between ON and AUTO is important: ON runs the evolve advisor in background as a maintenance task; AUTO runs it in real-time during query execution, creating plan baselines immediately when a regression is detected.

Manual SQL Tuning Tools

Execution Plans

The execution plan is the primary diagnostic tool in manual SQL tuning. Understanding what the optimizer chose and why is the foundation of every manual tuning effort.

sql

-- Generate and display an execution plan
EXPLAIN PLAN FOR
SELECT e.last_name, j.job_title, d.department_name
FROM   hr.employees e
JOIN   hr.departments d ON e.department_id = d.department_id
JOIN   hr.jobs j        ON e.job_id = j.job_id
WHERE  e.last_name LIKE 'A%';

-- Display the plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());

-- Display the actual plan used by a running or recently run statement
-- (reads V$SQL_PLAN; actual row counts appear only when STATISTICS_LEVEL
--  is ALL or the statement uses the GATHER_PLAN_STATISTICS hint)
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(
    sql_id        => 'your_sql_id',
    cursor_child_no => 0,
    format        => 'ALLSTATS LAST'
  )
);

The format ALLSTATS LAST is critical for tuning. It shows both the optimizer's estimated row counts and the actual row counts from the last execution. A large gap between estimated and actual rows at any step in the plan almost always indicates a statistics problem, and that is your root cause.
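When changing STATISTICS_LEVEL for the whole instance is not an option, the same estimated-versus-actual comparison can be captured per statement with the GATHER_PLAN_STATISTICS hint, for example:

```sql
-- Run the statement once with runtime statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */
       e.last_name, e.department_id
FROM   hr.employees e
WHERE  e.department_id = 50;

-- Then compare E-Rows (estimated) with A-Rows (actual) step by step
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```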

Optimizer Hints

Hints are instructions to the optimizer embedded in SQL comments. They are useful in test environments for verifying whether a specific access path would improve performance, and in production for emergency fixes while a proper solution is prepared.

sql

-- Force a specific index
SELECT /*+ INDEX(employees emp_department_ix) */
       employee_id, department_id
FROM   hr.employees
WHERE  department_id > 50;

-- Force a full table scan (useful for large range queries)
SELECT /*+ FULL(employees) */
       employee_id, last_name
FROM   hr.employees
WHERE  hire_date > DATE '2020-01-01';

-- Force a specific join method
SELECT /*+ USE_HASH(e d) */
       e.last_name, d.department_name
FROM   hr.employees e
JOIN   hr.departments d ON e.department_id = d.department_id;

Real-Time SQL Monitoring

SQL monitoring starts automatically when a statement runs in parallel, or when it has consumed at least 5 seconds of CPU or I/O time in a single execution. This is one of the most operationally useful tools for catching long-running queries in action. Note that it requires the Oracle Tuning Pack license.

sql

-- Generate an HTML SQL Monitor report for a specific SQL_ID
-- (save to file and open in a browser for the full visual view)
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
  sql_id       => 'your_sql_id',
  report_level => 'ALL',
  type         => 'HTML'
) AS report
FROM DUAL;

-- Check currently monitored SQL (active long-running statements)
SELECT sql_id,
       status,
       elapsed_time / 1e6 AS elapsed_secs,
       cpu_time     / 1e6 AS cpu_secs,
       buffer_gets,
       disk_reads,
       SUBSTR(sql_text, 1, 60) AS sql_text
FROM   v$sql_monitor
WHERE  status = 'EXECUTING'
ORDER BY elapsed_time DESC;

Application Tracing (SQL Trace + TKPROF)

SQL trace produces a detailed trace file with parse counts, physical and logical reads, and wait events for every SQL statement in a session. TKPROF converts the raw trace file into a human-readable report.

sql

-- Enable basic SQL tracing for the current session
EXEC DBMS_SESSION.SET_SQL_TRACE(sql_trace => TRUE);
-- or with more detail:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';
-- Level 12 = binds + waits (most useful for tuning)

-- After running your problematic queries, disable tracing:
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

-- Find the trace file location
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

Then from the OS command line:

bash

# Format the trace file with TKPROF
# Replace trace_file.trc with the actual filename
tkprof /path/to/trace_file.trc output.txt explain=apps/apps sort=exeela

# Key TKPROF output columns:
# call    = PARSE, EXECUTE, FETCH
# count   = number of times this call was made
# cpu     = CPU time in seconds
# elapsed = wall-clock time in seconds
# disk    = physical reads
# query   = consistent reads (logical)
# current = current mode reads
# rows    = rows processed

The SQL Transpiler — A New Tool in Oracle 23ai

The SQL Transpiler, introduced in Oracle Database 23ai and documented in the Oracle AI Database 26ai SQL Tuning Guide, automatically converts PL/SQL functions called from SQL into equivalent SQL expressions, without requiring any code changes.

Why does this matter for performance? Every time a SQL statement calls a PL/SQL function, the execution must switch from the SQL runtime to the PL/SQL runtime. For a query returning millions of rows, this context switching per row adds up to significant overhead. The SQL Transpiler eliminates this overhead by rewriting the function into pure SQL at compile time.

sql

-- Example: A PL/SQL function called from SQL
CREATE OR REPLACE FUNCTION get_month_abbreviation (
  date_value DATE
) RETURN VARCHAR2 IS
BEGIN
  RETURN TO_CHAR(date_value, 'MON', 'NLS_DATE_LANGUAGE=English');
END;
/

-- Enable the SQL Transpiler (disabled by default)
ALTER SESSION SET sql_transpiler = ON;
-- or at system level:
ALTER SYSTEM SET sql_transpiler = ON;

-- Query using the PL/SQL function
SELECT employee_id, first_name, last_name
FROM   hr.employees
WHERE  get_month_abbreviation(hire_date) = 'MAY';

-- Check whether transpilation occurred by viewing the execution plan
-- If transpiled, the predicate section will show the SQL expression,
-- NOT the PL/SQL function call name:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT => 'BASIC +PREDICATE'));

-- Transpiled predicate looks like this (function replaced with SQL):
-- filter(TO_CHAR(INTERNAL_FUNCTION("HIRE_DATE"),
--        'MON','NLS_DATE_LANGUAGE=English')='MAY')

-- NOT transpiled predicate looks like this (still using PL/SQL call):
-- filter("GET_MONTH_ABBREVIATION"("HIRE_DATE")='MAY')

The transpiler supports basic scalar types, string types, numeric types, date types, local variables, constants, and standard expressions. It does not support embedded SQL in the function, package variables, collections, or recursive function calls.

Automatic Error Mitigation: The Safety Net You Did Not Know You Had

Oracle 23ai includes a feature that most DBAs have never heard of: automatic error mitigation. When a SQL statement fails during the parse phase with an ORA-00600 internal error, the database automatically attempts to find an alternative execution approach and creates a SQL patch if it succeeds.

sql

-- Enable automatic error mitigation for the current session
ALTER SESSION SET sql_error_mitigation = 'ON';

-- After a query that previously failed with ORA-00600 succeeds,
-- check whether a SQL patch was automatically created:
SELECT name, signature, origin
FROM   dba_sql_patches
ORDER BY created DESC;

-- Check the mitigation actions the database performed:
SELECT sql_id,
       signature,
       problem_key,
       problem_type
FROM   dba_sql_error_mitigations;

When this works, it is completely transparent to the application: the ORA-00600 is never raised, the query executes normally, and a SQL patch is silently created to ensure the fix persists.

Application Design for SQL Performance: Getting It Right Before Deployment

The most cost-effective place to address SQL performance is before the application is deployed. Once an application is in production, every performance fix has a cost: developer time, testing cycles, change management overhead. Here are the key design and deployment guidelines from Oracle's documentation.

Good Connection Management

Connecting to the database is expensive and does not scale. The best practice is to minimize the number of concurrent connections and use connection pooling, particularly in web-based or multi-tier applications. Never design an application that opens a new database connection for every user request.

Cursor Reuse and Parse Minimization

Parse SQL statements once and execute them many times. The cost of hard parsing accumulates quickly in high-concurrency environments. Recall the benchmark figures from earlier: a system that hard-parses every statement can support only 60 users, while one that eliminates unnecessary parses can support 270 users on the same hardware.
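One way to find the worst offenders is to group V$SQL by FORCE_MATCHING_SIGNATURE: statements that differ only in literal values share a signature but generate distinct SQL IDs. A sketch; the threshold of 10 versions is an arbitrary starting point:

```sql
-- Literal-heavy SQL: many cursor versions per signature usually
-- means the application is not using bind variables.
SELECT force_matching_signature,
       COUNT(DISTINCT sql_id)       AS versions,
       SUM(executions)              AS total_execs,
       MIN(SUBSTR(sql_text, 1, 60)) AS sample_text
FROM   v$sql
WHERE  force_matching_signature <> 0
GROUP BY force_matching_signature
HAVING COUNT(DISTINCT sql_id) > 10
ORDER BY versions DESC
FETCH FIRST 10 ROWS ONLY;
```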

Testing Guidelines Before Production Deployment

  • Use realistic data volumes and distributions: Optimizer plans change dramatically based on data volume and cardinality.

  • Test with the correct optimizer mode: Plan choices depend on the optimizer mode (ALL_ROWS vs FIRST_ROWS).

  • Test single-user performance first: If one user cannot perform well under ideal conditions, many users cannot perform well under real conditions.

  • Obtain and document execution plans for all SQL: This is your baseline documentation; you need it when plans change later.

  • Perform multi-user testing for DML: Find locking conflicts and serialization problems before production.

  • Use production-equivalent hardware: I/O subsystem performance and memory size are critical variables.

  • Measure steady-state performance: Always include ramp-up and ramp-down phases in benchmarks, and measure only the steady state.

Application Rollout Strategy

The Oracle documentation acknowledges two common rollout approaches: the Big Bang (all users migrate at once) and the Trickle (gradual user migration). Both have risks. The Trickle approach allows observation of real user behaviour and real performance as the load builds, making it easier to identify and fix scalability problems before they affect all users. In environments where I have seen both approaches used, the Trickle strategy consistently produces a better outcome when adequate monitoring is in place.

Key Takeaways

This post covered the complete SQL tuning framework from Oracle's official SQL Tuning Guide. Here is a summary of the most important points:

The six tuning tasks (identify, gather, determine, scope, implement, prevent) are not optional steps. Skipping any one of them creates risk of solving the wrong problem or failing to prevent the problem from recurring.

Bind variables are not a nice-to-have. They are the difference between a database that supports 270 concurrent users and one that supports 60 users on identical hardware.

The scope of your fix must match the scope of the problem. A statement-level issue requires a statement-level solution. Database-level changes to solve statement-level problems create collateral damage.

Automated tools like ADDM, SQL Tuning Advisor, and Automatic Indexing should be your first stop, not your last resort. They are built to find the problems that are easy to miss manually.

The SQL Transpiler and Automatic Error Mitigation are new capabilities in Oracle 23ai that deliver real performance and reliability improvements with zero code changes. If you are on 23ai, know they exist and how to use them.

What Is Next in This Series

In Post 2, I will go inside Oracle's SQL processing engine, following a SQL statement from the moment you press Enter through syntax checking, semantic validation, the shared pool check, optimization, row source generation, and execution. This is the foundation for understanding why hard parses are expensive, how read consistency works, and why DDL is handled so differently from DML.

Reference

Source: Oracle AI Database SQL Tuning Guide, Release 26ai, Document G43585-02, January 2026.

Contributing Authors: Glenn Maxey, Frederick Kush, Nigel Bayliss, Maria Colgan, Tom Kyte, and others.
