Oracle DBA Interview Guide 2025 — Practical Q&A (32 Key Questions)
- AiTech
- Nov 5, 2025
1) We need to create a user whose password will expire every month
Create a profile with PASSWORD_LIFE_TIME = 30 (days ≈ monthly), then create the user and assign the profile.
Example:
CREATE PROFILE monthly_pwd LIMIT PASSWORD_LIFE_TIME 30;
CREATE USER app_user IDENTIFIED BY "StrongPass1!" PROFILE monthly_pwd;
GRANT CREATE SESSION TO app_user;
Notes: PASSWORD_LIFE_TIME is measured in days, so 30 approximates a month. For exact calendar-month enforcement, schedule a job that runs ALTER USER ... PASSWORD EXPIRE on the desired date; the profile remains the standard approach.
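For strict calendar-month expiry, a scheduler job along these lines could work (a sketch only — the job name and user are illustrative; validate on a test system first):

```sql
-- Sketch: expire app_user's password at 00:00 on the 1st of every month.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'EXPIRE_APP_USER_PWD',          -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN EXECUTE IMMEDIATE 'ALTER USER app_user PASSWORD EXPIRE'; END;]',
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1;BYHOUR=0;BYMINUTE=0',
    enabled         => TRUE);
END;
/
```

The creating user needs the CREATE JOB privilege and ALTER USER rights for this to run.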
2) Steps when Oracle inventory (oraInventory) is corrupted during patching activity
High-level steps:
Stop any running patching/opatch processes.
Check oraInst.loc to find inventory location.
Inspect oraInventory/ContentsXML and inventory.log.
Restore the inventory directory from backup (always take inventory backup before patching).
If no backup, run opatch lsinventory -invPtrLoc /etc/oraInst.loc to see errors, and try opatch detect/opatch version.
If the inventory is inconsistent, use OPatch reconcile options or rebuild the central inventory (test on a clone first). If unsure, open an SR with Oracle Support (MOS) — don’t proceed blindly. Always validate ORACLE_HOME ownership, permissions, SELinux context, and disk space.
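A minimal inventory safety routine before patching might look like this (paths assume a default /etc/oraInst.loc and a hypothetical backup directory; adjust for your environment):

```shell
# Find the central inventory location
cat /etc/oraInst.loc                    # e.g. inventory_loc=/u01/app/oraInventory

# Back it up BEFORE patching — cheap insurance against corruption
tar -czf /u01/backup/oraInventory_$(date +%Y%m%d).tar.gz /u01/app/oraInventory

# Sanity-check the inventory from the Oracle home
$ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /etc/oraInst.loc
```

If corruption strikes mid-patch, restoring this tarball is far faster and safer than attempting an inventory rebuild.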
3) Patching has been ongoing for 2 hours and not complete — where to look and common reasons
Where to look:
Patch logs: $ORACLE_HOME/cfgtoollogs/opatch/opatch<timestamp>.log and any patch-specific logs.
oraInventory/Logs and patch extract directory.
System resources: CPU, IO, memory, and disk space.
Background processes and the DB alert log.
Common reasons:
Waiting for DB to be quiesced or dependent sessions; long SQL executions (datapatch).
Insufficient disk space or slow I/O.
Locks held by sessions; missing prerequisites; incorrect patch version or OPatch mismatch.
Post-patch SQL (datapatch) waiting on recompiles or long-running DDL.
4) Database patching failed — steps to revert changes
Read patch logs and opatch lsinventory to determine partially applied components.
If OPatch supports rollback: run opatch rollback -id <patch_id> (follow documented procedure).
If partial DB SQL (datapatch) failed, fix SQL issues (invalid objects, grants), then re-run datapatch -verbose.
If rollback not safe, restore DB binaries/HOME from pre-patch backup (recommended approach): stop DB, switch to previous Oracle home (if you had a full home backup), restart and validate.
Always validate application connectivity, listeners, and DB health after rollback.
Open MOS SR for complex situations; capture logs, inventory, and alert logs.
5) Prechecks before patching activity
Verify backup of DB binaries / Oracle Home and RMAN full backup & controlfile/archivelogs.
Confirm opatch version is supported for the patch.
Check opatch lsinventory, opatch prereq if available.
Validate disk space, permissions, and oraInventory accessibility.
Ensure database state required by patch (mounted/up/down) and take pre-patch DB export if needed.
Check compatibility matrix and pre-patch notes from vendor (MOS or patch README).
Validate listener/config and run health checks (ADDM, invalid objects).
Ensure change window, rollback plan, and communication ready.
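Several of these prechecks can be scripted ahead of the change window; the commands below are standard OPatch usage (the staging path is hypothetical):

```shell
# From the unzipped patch directory (path is illustrative)
cd /u01/stage/12345678

# Confirm OPatch version and current inventory state
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lsinventory

# Conflict check against the Oracle home before applying
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

# Confirm free space in the home
df -h $ORACLE_HOME
```

Running these a day early leaves time to resolve conflicts or stage a newer OPatch before the window opens.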
6) How to untar the Oracle Home backup
From shell:
cd /u01/backup
tar -xvf oracle_home_backup.tar # for uncompressed tar
# or if compressed:
tar -xvzf oracle_home_backup.tar.gz
# or for .tar.xz
tar -xvJf oracle_home_backup.tar.xz
Verify file ownership and permissions after extraction (chown -R oracle:oinstall $ORACLE_HOME) and run ls -l to confirm.
7) Datapatch has failed — possible reasons
Invalid PL/SQL objects or missing grants required by SQL in the patch.
Database not at required patch level or incompatible OPatch.
ORACLE_HOME mismatch (datapatch runs against wrong home).
Patch applied to binaries but SQL change failed (DB not in required mode).
Missing or invalid timezone/data dictionary version.
Lack of privileges or insufficient TEMP/UNDO space.
Fixes: inspect the datapatch logs, recompile invalid objects (utlrp.sql), ensure datapatch runs from the correct home, and re-run datapatch -verbose.
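A typical remediation sequence, sketched (standard paths; validate on test first):

```shell
# 1) Recompile invalid objects, then list anything still invalid
sqlplus / as sysdba <<'EOF'
@?/rdbms/admin/utlrp.sql
SELECT owner, object_name, object_type
FROM   dba_objects WHERE status = 'INVALID';
EOF

# 2) Re-run datapatch from the correct Oracle home
$ORACLE_HOME/OPatch/datapatch -verbose
```

If objects remain invalid because of missing grants, fix the grants first — re-running datapatch over broken dependencies just reproduces the failure.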
8) How to multiplex controlfile
Multiplexing Control File steps:
Update CONTROL_FILES parameter in SPFILE/PFILE to include new file paths:
ALTER SYSTEM SET CONTROL_FILES='/u01/oradata/DB/control01.ctl','/u02/oradata/DB/control02.ctl' SCOPE=SPFILE;
Shutdown database and copy existing controlfile to the new path (or use OS cp with correct ownership).
Startup the database; on restart Oracle opens and maintains all listed control files.
Alternative: RMAN can create a copy with BACKUP AS COPY CURRENT CONTROLFILE FORMAT '<path>', but the safest method remains: set the parameter, copy the file while the DB is down, start up.
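Putting the steps together, the whole sequence can be sketched as follows (file paths are illustrative):

```shell
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET CONTROL_FILES=
  '/u01/oradata/DB/control01.ctl',
  '/u02/oradata/DB/control02.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
EOF

# Copy the existing controlfile to the new location while the DB is down
cp /u01/oradata/DB/control01.ctl /u02/oradata/DB/control02.ctl
chown oracle:oinstall /u02/oradata/DB/control02.ctl

sqlplus / as sysdba <<'EOF'
STARTUP
SELECT name FROM v$controlfile;   -- should now list both copies
EOF
```

The final query is the verification step: both paths must appear, or the SPFILE change did not take effect.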
9) How to increase the redo size
To increase online redo log size:
Add new larger log groups and drop old groups after switching logs:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/DB/redo04.log') SIZE 2G;
ALTER SYSTEM SWITCH LOGFILE; -- ensure rotation
-- Once safe, drop old group:
ALTER DATABASE DROP LOGFILE GROUP 1;
Repeat for the remaining old groups (a group can be dropped only when its status is INACTIVE and it has been archived). In RAC each redo thread has its own groups — coordinate across nodes. As a common rule of thumb, size redo so log switches occur roughly every 15–20 minutes at peak.
10) How to restore control file without backup
If no physical controlfile backup exists:
First try an RMAN autobackup: RMAN> SET DBID <dbid>; RESTORE CONTROLFILE FROM AUTOBACKUP; (works only if autobackup was enabled).
If a controlfile trace exists (from an earlier ALTER DATABASE BACKUP CONTROLFILE TO TRACE), edit the trace script, STARTUP NOMOUNT, and run its CREATE CONTROLFILE statement.
If neither exists, a CREATE CONTROLFILE statement can be hand-written listing every datafile and redo log, but this is error-prone and potentially destructive — involve Oracle Support and follow documented recovery steps.
11) How to restore pfile/spfile without backup
For PFILE: recreate manually from saved parameters or use CREATE PFILE FROM SPFILE; if SPFILE is still present.
For SPFILE lost but PFILE exists: CREATE SPFILE FROM PFILE='/u01/app/oracle/product/.../dbs/initDB.ora';
If both lost: reconstruct minimal PFILE with required parameters (DB_NAME, CONTROL_FILES, MEMORY_TARGET, PROCESSES, etc.), start in NOMOUNT to recreate SPFILE:
STARTUP NOMOUNT PFILE='/path/to/new_init.ora';
CREATE SPFILE FROM PFILE='/path/to/new_init.ora';
Always keep copies of spfile/pfile under version control.
12) Primary DB has 2 DRs — one in sync, one out of sync. Can we perform switchover? What is switchover_status on primary?
Switchover requirement: not all standbys must be synchronized — only the switchover target must be. If one DR is out of sync, you can still switch over to the in-sync standby; the lagging DR will resynchronize against the new primary afterward. Factor in business RTO/RPO before proceeding.
SWITCHOVER_STATUS in V$DATABASE on the primary is typically TO STANDBY (ready to switch), SESSIONS ACTIVE (ready once active sessions are handled), or NOT ALLOWED. Check with SELECT SWITCHOVER_STATUS FROM V$DATABASE; and validate with DGMGRL (VALIDATE DATABASE) or V$DATAGUARD_STATUS.
13) DR has huge gap — how to make it in sync
Options depend on gap size:
Small gap: let apply process catch up: ensure log transport and apply are running; check V$ARCHIVED_LOG and V$DATAGUARD_STATS.
Large gap / broken apply: copy the missing archived logs from the primary (or its fast recovery area) to the standby, register them (ALTER DATABASE REGISTER LOGFILE '...';), then restart apply: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Very large gap: roll the standby forward with an RMAN incremental taken FROM SCN of the standby's current SCN and applied with RECOVER DATABASE NOREDO (block change tracking on the primary speeds up the backup).
For extreme cases: rebuild the standby from a fresh backup or RMAN DUPLICATE ... FOR STANDBY.
Always ensure network bandwidth and disk space are available.
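The very-large-gap roll-forward can be sketched as follows (the SCN and staging path are illustrative; validate the full procedure on a test standby first):

```sql
-- On the standby (SQL*Plus): note how far redo has been applied
SELECT current_scn FROM v$database;

-- On the primary (RMAN): back up only blocks changed since that SCN
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/stage/fwd_%U';

-- Ship /stage/* to the standby host, then on the standby (RMAN):
CATALOG START WITH '/stage/';
RECOVER DATABASE NOREDO;
-- Finally restart managed recovery so normal redo apply resumes.
```

NOREDO tells RMAN to apply only the incremental blocks, which is exactly what closes the gap without replaying weeks of archivelogs.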
14) DR drill failed during switchover — steps to make DB available
Identify failure reason (logs, alert.log, Data Guard errors).
If primary still available, abort drill and revert to original roles (failback) if needed.
If target DB partially opened, you may need to:
Bring instance down cleanly and restart in correct mode (STARTUP MOUNT / ALTER DATABASE OPEN as appropriate).
If controlfile/state mismatch, restore controlfile or use RECOVER as needed.
If role change partially applied, consult DGMGRL to FAILOVER or SWITCHOVER to consistent state.
If stuck, restore from the last good backup or recreate the standby from the primary.
Always test DR drills in non-prod and document runbooks.
15) How to restore 2-node RAC database
High-level:
If restoring to same cluster: ensure ASM disks are available and GNS/SCAN configured.
Restore ASM diskgroup metadata if needed.
Restore Oracle binaries (correct Oracle home) on both nodes (or use single shared ORACLE_HOME).
Use RMAN to restore the SPFILE, controlfile, and datafiles into ASM:
CONNECT TARGET /
SET DBID <dbid>;
STARTUP NOMOUNT;
RUN {
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;
Recreate OCR and CRS if clusterware lost; run crsctl and srvctl to configure services.
Validate cluster services, listeners, and start instances on both nodes.
Cluster restore requires careful coordination of ASM, OCR, and CRS — test the procedure in a DR environment first.
16) If we truncate a table, how do we restore data?
Truncate is DDL: it deallocates the segments and generates almost no undo, so it cannot be rolled back — and Flashback Table does NOT work across a TRUNCATE. Recovery options:
RMAN table-level recovery (12c+): RECOVER TABLE ... UNTIL SCN/TIME ... AUXILIARY DESTINATION restores just that table through an auxiliary instance.
Flashback Database (if enabled) to just before the truncate — this rewinds the whole database, so it is usually done on a clone or standby and the table exported back.
Tablespace point-in-time recovery (TSPITR), or restore the database to an alternate host and export the table.
Restore from the latest Data Pump export if one exists.
Prevention: take regular exports of critical tables and restrict TRUNCATE privileges.
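On 12c and later, RMAN can pull a single truncated table out of backups via an auxiliary instance; a sketch (table name, SCN, and paths are illustrative):

```sql
-- RMAN, 12c+: recover one table to just before the truncate,
-- into a new table name so nothing existing is overwritten
RECOVER TABLE hr.employees
  UNTIL SCN 1234567
  AUXILIARY DESTINATION '/u01/aux'
  REMAP TABLE hr.employees:employees_recovered;
```

The AUXILIARY DESTINATION needs enough space for SYSTEM, UNDO, and the tablespace holding the table, since RMAN builds a temporary instance there.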
17) Prerequisites for index creation activity
Ensure sufficient tablespace for index segments.
Consider index type (B-tree, bitmap) appropriate for cardinality and workload.
Check concurrent DML impact — use ONLINE index creation for minimal downtime: CREATE INDEX ... ONLINE; (Enterprise Edition required).
Statistics collection after creation (DBMS_STATS.GATHER_INDEX_STATS) or gather table stats.
For partitioned tables, create local/global indexes based on partitioning strategy.
Validate sorting space (TEMP) size for large index builds.
18) Precheck and postcheck for DR drill activity
Precheck:
Validate archived logs transfer and apply status.
Check V$DATAGUARD_STATS, V$ARCHIVE_DEST_STATUS.
Ensure standby redo logs configured and sufficient space.
Verify network connectivity and DNS/SCAN resolution.
Confirm backups and restore point availability.
Postcheck:
Validate role transition (V$DATABASE role).
Check datafile consistency and confirm the apply lag is zero.
Run application smoke tests and data integrity checks.
Validate scheduled jobs, services, and listener endpoints.
19) How to increase the speed of RMAN backup
Use parallelism: CONFIGURE DEVICE TYPE DISK PARALLELISM N, or allocate multiple channels, so backup pieces are written concurrently.
Use FRA and fast disks (SSD) to reduce I/O bottleneck.
Use compression (AS COMPRESSED BACKUPSET) or dedup appliances.
Use incremental backups (BACKUP INCREMENTAL LEVEL 1) so only changed blocks are read and written.
Use block change tracking (ALTER DATABASE ENABLE BLOCK CHANGE TRACKING) to speed incremental backups.
Offload to a backup appliance (e.g., Data Domain) or spread the workload across multiple RMAN channels/media servers.
Schedule backups in low activity windows and ensure good network bandwidth for remote backups.
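Most of these points condense into a one-time configuration (a sketch — the channel count and BCT file path depend on your CPU/IO headroom and layout):

```sql
-- RMAN prompt: persistent parallelism + compression
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

-- SQL*Plus: speed up incrementals by tracking changed blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/DB/bct.f';
```

With block change tracking enabled, a level 1 backup reads only the tracked changed blocks instead of scanning every datafile.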
20) How to increase speed of patching activity
Pre-stage patches to local fast storage on each server.
Validate OPatch version and prechecks to avoid mid-run failures.
Use parallel patching when supported (multiple nodes in RAC can be patched in parallel with orchestrated approach).
Ensure low DB activity (quiesce sessions), stop non-essential services, and increase CPU/I/O priority for patching process.
Use automation/orchestration (Ansible/shell scripts) for repeatable steps.
Pre-validate prerequisites to reduce human wait time.
21) What grants required to take export/import
For Data Pump (expdp/impdp):
EXP_FULL_DATABASE and IMP_FULL_DATABASE roles for full database export/import.
For schema-level export, the schema owner can run it with just CREATE SESSION plus READ/WRITE on the dump DIRECTORY object; exporting other schemas requires EXP_FULL_DATABASE (or DATAPUMP_EXP_FULL_DATABASE).
Every Data Pump job needs a DIRECTORY object with READ/WRITE granted; for network exports/imports (NETWORK_LINK), the database-link user needs equivalent privileges on the remote side.
Always follow least privilege: grant the full roles temporarily and revoke them afterward.
22) How to patch Oracle RAC database
High-level RAC patch flow:
Read the patch README and MOS notes for RAC-specific instructions.
Prechecks on all nodes (inventory, opatch version, backups).
Apply patch binaries to each node's ORACLE_HOME (opatchauto can orchestrate this, and rolling application keeps the service available).
Run datapatch once against the database (from one node) and verify the result on all nodes.
For Grid Infrastructure patches, patch the GI home first using opatchauto run as root.
Validate services with srvctl status and cluster state with crsctl stat res -t.
Run post-patch validations on every node (listener, services, ASM, CRS).
Prefer a rolling patch strategy — patch one node at a time — to avoid downtime.
23) What is the use of prepatch.sh script
prepatch.sh typically:
Performs environment checks prior to applying a patch (prereq checks).
Verifies paths, permissions, OPatch version, available space, and running services that must be stopped.
Collects logs and system info to help the patch process or later troubleshooting.
It’s a safety step that reduces the chance of mid-patch failures.
24) Startup sequence of Oracle RAC
Typical sequence:
Start Clusterware (CRS) on each node: crsctl start crs
Start ASM instances (if using ASM) and mount diskgroups.
Start Oracle Grid Infrastructure components (OCR, Voting disks must be available).
Start database instances via srvctl start database -d <DB> or direct startup on each node.
Verify services: crsctl status resource -t and srvctl status database -d <DB>.
Order matters: Clusterware → ASM → database instances → services.
25) Background processes used in ASM
Key ASM background processes:
RBAL — coordinates rebalance activity for diskgroups.
ARBn — rebalance slaves that actually relocate the extents.
GMON — maintains disk membership within diskgroups.
ASMB — runs in the database instance and manages its communication with the ASM instance.
An ASM instance also runs the usual PMON/SMON/LGWR-style processes (and LMON/LMSn in RAC); exact names vary by release — check ps -ef | grep asm_ or the V$ASM_* views on your version.
26) Prechecks before adding disk into diskgroup
Verify disk device discovery and permissions (oracle user access).
Ensure disks have consistent size and are unpartitioned or correctly partitioned.
Check the ASM disk header/candidate status with oracleasm (ASMLib), kfod, or asmcmd, or query V$ASM_DISK for CANDIDATE disks.
Validate free space and rebalance impact.
Confirm redundancy level of diskgroup and impact on rebalance time.
Ensure backup or snapshot before changes.
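Once the prechecks pass, the add itself is a single statement; REBALANCE POWER controls how aggressively extents move (a sketch — diskgroup and disk names are illustrative):

```sql
-- As SYSASM on the ASM instance
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK05'
  REBALANCE POWER 4;

-- Watch the rebalance complete before declaring success
SELECT group_number, operation, power, est_minutes
FROM   v$asm_operation;
```

A higher POWER finishes the rebalance sooner but competes with database I/O, so pick it based on the window you have.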
27) Wait events in Oracle with reason and resolution (common ones)
log file sync — sessions waiting at commit for redo to be flushed; resolution: faster redo storage, batch commits, tune LGWR.
buffer busy waits — contention on hot blocks; resolution: reduce hot-block contention, use ASSM, consider reverse-key or partitioned indexes.
db file sequential read — single-block (typically index) reads; resolution: review indexing and storage performance.
db file scattered read / direct path read — multiblock or direct-path I/O pressure; resolution: check full-scan SQL, storage throughput, and ASM rebalance activity.
DB CPU — time on CPU, not a wait; resolution: tune top SQL and plans, or add CPU.
In the answer, explain the root cause first, then the action that reduces the wait.
28) Scenario: Full backup Mon, incremental Tue, hourly archivelog backups; a datafile corrupted — which backups/logs to restore from? Step-by-step
Assuming you have RMAN full backup (Mon), incremental (Tue), and archived logs:
Identify the corrupted datafile: check the alert log and V$DATABASE_BLOCK_CORRUPTION, and run RMAN> VALIDATE DATAFILE <n>;
Restore from the most recent usable backup; RMAN chooses the restore source automatically.
Example RMAN flow:
RMAN> SQL 'ALTER DATABASE DATAFILE <n> OFFLINE';
RMAN> RESTORE DATAFILE <n>;    -- restores from Monday's full (or a newer copy)
RMAN> RECOVER DATAFILE <n>;    -- applies Tuesday's incremental, then the hourly archivelogs
RMAN> SQL 'ALTER DATABASE DATAFILE <n> ONLINE';
Recovery rolls the file forward through the incremental and every archived log up to the present, so no committed data is lost.
Validate afterward (RMAN> VALIDATE DATAFILE <n>;) and run DB health checks.
Key: use the most recent backup chain so the minimum amount of redo must be applied.
29) What is “End of REDO” in DR drill activity
“End of Redo” (EOR) is a marker the primary writes into its final redo during a role transition (switchover, or terminal recovery during failover). It signals the standby that no further redo will arrive for the current incarnation; once the standby has applied through the EOR marker, it is fully current and the role change can complete safely. During a DR drill, confirm in the standby alert log that redo has been applied through end-of-redo before converting roles.
30) How to take export backup at two locations
Use Data Pump with multiple DIRECTORY objects and run two jobs, or copy the dump file after creation.
Option A — two jobs:
expdp user/password DIRECTORY=dir1 DUMPFILE=exp1.dmp ...
expdp user/password DIRECTORY=dir2 DUMPFILE=exp2.dmp ...
Option B — create dump to one directory then copy OS file to secondary location (nfs/object store):
cp /u01/exports/exp1.dmp /backup/location/exp1.dmp
Or use Oracle Cloud Object Storage integration to directly write to cloud.
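There is also a single-job variant: Data Pump accepts a directory-object prefix on each dump file, so one job can write to two destinations at once (a sketch — directory objects and file names are illustrative):

```shell
# One job, two destinations: dir1 and dir2 are pre-created DIRECTORY objects
expdp system/password SCHEMAS=app_user PARALLEL=2 \
      DUMPFILE=dir1:exp_a%U.dmp,dir2:exp_b%U.dmp \
      LOGFILE=dir1:exp.log
```

Note this stripes the export across both locations rather than duplicating it — each location holds only part of the dump set, so both are needed for import.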
31) How to take RMAN backup to two locations
Configure multiple channels and backup to different locations in the same job:
RUN {
ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/backup1/%U';
ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT '/backup2/%U';
BACKUP DATABASE PLUS ARCHIVELOG;
RELEASE CHANNEL c1;
RELEASE CHANNEL c2;
}
Or take a backup to one location and then COPY the backup pieces to a secondary location (or use BACKUP TO for tape + disk). Alternatively configure Media Manager for duplication.
32) What are RAC-related parameters in Data Pump
The Data Pump parameters that matter in RAC are:
CLUSTER=Y|N — whether worker processes may run on other RAC instances (default Y).
SERVICE_NAME — with CLUSTER=Y, restricts workers to the instances behind a given service.
PARALLEL — number of worker processes, which CLUSTER/SERVICE_NAME then distribute across nodes.
DIRECTORY — with CLUSTER=Y the dump directory must be on storage visible from every node (ACFS, NFS, or a cluster filesystem); if it is node-local, set CLUSTER=N so all workers stay on the invoking instance.
For consistent exports across instances, use FLASHBACK_SCN or FLASHBACK_TIME.
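A parfile pulling the RAC-relevant parameters together (a sketch — the service name and directory object are illustrative assumptions):

```shell
# expdp_rac.par -- run with: expdp system/password PARFILE=expdp_rac.par
# DIRECTORY must point at storage visible from every node when CLUSTER=Y.
cat > expdp_rac.par <<'EOF'
SCHEMAS=app_user
DIRECTORY=shared_dump_dir
DUMPFILE=app%U.dmp
PARALLEL=4
CLUSTER=Y
SERVICE_NAME=dp_svc
EOF
```

Defining a dedicated service (dp_svc here) on a subset of instances is a clean way to keep Data Pump workers off the busiest nodes.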
Final notes & interview tips
Always mention version specifics in answers when asked (e.g., 19c/21c/23ai/etc.) — behavior and commands can vary.
Emphasize backups, testing in non-prod, and having playbooks for patching and DR drills.
In the interview, when you give commands, preface with “validate on test first” and mention alert.log and MOS as sources.
If asked for a live demo, be ready to show RMAN script examples, srvctl/crsctl commands, and sample datapatch logs.