Aug 22, 2022
Archived Redo Log- Files

The Oracle database can run in one of two modes: ARCHIVELOG mode and NOARCHIVELOG mode. The difference between these two modes is simply what happens to a redo log file when Oracle goes to reuse it. “Will we keep a copy of that redo or should Oracle just overwrite it, losing it forever?” is an important question to answer. Unless you keep this file, you can’t recover data from a backup to that point in time.

Say you take a backup once a week on Saturday. Now, on Friday afternoon, after you have generated hundreds of redo logs over the week, your hard disk fails. If you have not been running in ARCHIVELOG mode, the only choices you have right now are as follows:

•\ Drop the tablespace(s) associated with the failed disk. Any tablespace that had a file on that disk must be dropped, including the contents of that tablespace. If the SYSTEM tablespace (Oracle’s data dictionary) or some other important system-related tablespace like your UNDO tablespace is affected, you can’t do this. You will have to use the next option instead.

•\ Restore last Saturday’s data and lose all of the work you did that week.

Neither option is very appealing. Both imply that you lose data. If you had been executing in ARCHIVELOG mode, on the other hand, you simply would have found another disk and restored the affected files from Saturday’s backup onto it. Then, you would have applied the archived redo logs and, ultimately, the online redo logs to them (in effect replaying the week’s worth of transactions in fast-forward mode). You lose nothing. The data is restored to the point of the failure.

People frequently tell me they don’t need ARCHIVELOG mode for their production systems. I have yet to meet anyone who was correct in that statement. I believe that a system is not a production system unless it is in ARCHIVELOG mode. A database that is not in ARCHIVELOG mode will, someday, lose data. It is inevitable; you will lose data (not might, but will) if your database is not in ARCHIVELOG mode.

“We are using RAID-5, so we are totally protected” is a common excuse. I’ve seen cases where, due to a manufacturing error, all disks in a RAID set froze, all at about the same time. I’ve seen cases where the hardware controller introduced corruption into the datafiles, so people safely protected corrupt data with their RAID devices. RAID also does nothing to protect you from operator error, one of the most common causes of data loss. RAID does not mean the data is safe; it might be more available and it might be safer, but data stored solely on a RAID device will be lost someday; it is only a matter of time.

“If we had the backups from before the hardware or operator error and the archives were not affected, we could have recovered.” The bottom line is that there is no excuse for not being in ARCHIVELOG mode on a system where the data is of any value. Performance is no excuse; properly configured archiving adds little to no overhead. This, plus the fact that a fast system that loses data is useless, means that even if archiving added 100 percent overhead, you would still need to do it. A feature is overhead only if you can remove it and lose nothing important; overhead is like icing on the cake. Preserving your data and making sure you don’t lose it isn’t overhead—it’s the DBA’s primary job!

Only a test or perhaps a development system should execute in NOARCHIVELOG mode, and even then, most development systems should be run in ARCHIVELOG mode for two reasons:

•\ This is how you will process the data in production; you want development to act and react as your production system would.
•\ In many cases, the developers pull their code out of the data dictionary, modify it, and compile it back into the database. The development database holds the current version of the code. If the development database suffers a disk failure in the afternoon, what happens to all of the code you compiled and recompiled all morning? It’s lost.

Don’t let anyone talk you out of being in ARCHIVELOG mode. You spent a long time developing your application, so you want people to trust it. Losing their data will not instill confidence in your system.

Note There are some cases in which a large DW could justify being in NOARCHIVELOG mode—if it made judicious use of READ ONLY tablespaces and was willing to fully rebuild any READ WRITE tablespace that suffered a failure by reloading the data.
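If you want to see which mode your database is currently running in, and switch a test database over to ARCHIVELOG mode, a minimal sketch of the commands follows (run as SYSDBA on a test system first; configuring the archive destination is omitted here and is something you would set up for your own environment):

SQL> select log_mode from v$database;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

The database must be mounted but not open when you issue ALTER DATABASE ARCHIVELOG, which is why the shutdown/startup mount sequence is required.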

Aug 22, 2022
Online Redo Log- Files-1

Every Oracle database has at least two online redo log file groups. Each redo log group consists of one or more redo log members (redo is managed in groups of members). The individual redo log file members of these groups are true mirror images of each other.

These online redo log files are fixed in size and are used in a circular fashion. Oracle will write to log file group 1, and when it gets to the end of this set of files, it will switch to log file group 2 and overwrite the contents of those files from start to end.

When it has filled log file group 2, it will switch back to log file group 1 (assuming we have only two redo log file groups; if we have three, it would, of course, proceed to the third group). This is shown in Figure 3-4.

Figure 3-4.  Writing to log file groups
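To see the redo log groups and the members in each group in your own database, you can query V$LOG and V$LOGFILE; a quick sketch (the output will obviously vary from database to database):

SQL> select group#, members, bytes, status from v$log;
SQL> select group#, member from v$logfile order by group#;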

The act of switching from one log file group to another is called a log switch. It is important to note that a log switch may cause a temporary “pause” in a poorly configured database. Since the redo logs are used to recover transactions in the event of a failure, we must be certain we won’t need the contents of a redo log file before we are able to use it. If Oracle isn’t sure that it won’t need the contents of a log file, it will suspend operations in the database momentarily and make sure that the data in the cache that this redo “protects” is safely written (checkpointed) onto disk. Once Oracle is sure of that, processing will resume and the redo file will be reused.

We’ve just started to talk about a key database concept: checkpointing. To understand how online redo logs are used, you’ll need to know something about checkpointing, how the database buffer cache works, and what a process called the database block writer (DBWn) does. The database buffer cache and DBWn are covered in more detail later on, but we’ll skip ahead a little anyway and touch on them now.

The database buffer cache is where database blocks are stored temporarily. This is a structure in Oracle’s SGA. As blocks are read, they are stored in this cache, hopefully so we won’t have to physically reread them later.

The buffer cache is first and foremost a performance tuning device. It exists solely to make the very slow process of physical I/O appear to be much faster than it is.

When we modify a block by updating a row on it, these modifications are done in memory to the blocks in the buffer cache. Enough information to redo or to replay this modification is stored in the redo log buffer, another SGA data structure.

When we COMMIT our modifications, making them permanent, Oracle does not go to all of the blocks we modified in the SGA and write them to disk. Rather, it just writes the contents of the redo log buffer out to the online redo logs.

As long as that modified block is in the buffer cache and not on disk, we need the contents of that online redo log in case the database fails. If, at the instant after we committed, the power was turned off, the database buffer cache would be wiped out. If this happens, the only record of our change is in that redo log file.

Upon restart of the database, Oracle will actually replay our transaction, modifying the block again in the same way we did and committing it for us. So, as long as that modified block is cached and not written to disk, we can’t reuse (overwrite) that redo log file. 

This is where DBWn comes into play. This Oracle background process is responsible for making space in the buffer cache when it fills up and, more important, for performing checkpoints. A checkpoint is the writing of dirty (modified) blocks from the buffer cache to disk.

Oracle does this in the background for us. Many things can cause a checkpoint to occur, the most common being a redo log switch. As we filled up log file 1 and switched to log file 2, Oracle initiated a checkpoint.

At this point, DBWn started writing to disk all of the dirty blocks that are protected by log file group 1. Until DBWn flushes all of these blocks protected by that log file, Oracle can’t reuse (overwrite) it. If we attempt to use it before DBWn has finished its checkpoint, we’ll get a message like this in our database’s ALERT log: … 

Thread 1 cannot allocate new log, sequence 16
Checkpoint not complete
  Current log# 3 seq# 15 mem# 0: /opt/oracle/oradata/CDB/redo03.log
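If you suspect you are hitting this, one quick check is the STATUS column of V$LOG; a group reported as ACTIVE has not yet had its checkpoint completed and cannot be reused. A minimal sketch:

SQL> select group#, sequence#, archived, status from v$log;
SQL> alter system checkpoint;

ALTER SYSTEM CHECKPOINT asks DBWn to write out the dirty blocks immediately; it can be useful in a test, but it is not a fix for an undersized or misconfigured set of redo logs.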

Jul 22, 2022
Password Files- Files

The password file is an optional file that permits the remote SYSDBA or administrator access to the database. When you attempt to start Oracle, there is no database available that can be consulted to verify passwords. When you start Oracle on the local system (i.e., not over the network, but from the machine the database instance will reside on), Oracle will use the OS to perform the authentication.

When Oracle was installed, the person performing the installation was asked to specify an OS group for the administrators. Normally, on UNIX/Linux, this group will be DBA by default, and ORA_DBA on Windows. It can be any legitimate group name on that platform, however. That group is “special,” in that any user in that group can connect to Oracle “as SYSDBA” without specifying a username or password.

However, what happens if you attempt to connect as SYSDBA over the network to a remote database?

$ sqlplus sys/foo@localhost:1521/CDB as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

OS authentication won’t work over the network for SYSDBA, even if the (very insecure) parameter REMOTE_OS_AUTHENT is set to true. So, OS authentication won’t work, and, as discussed earlier, if you’re trying to start up an instance to mount and open a database, then by definition there’s no database yet in which to look up authentication details. It is the proverbial chicken and egg problem.

Enter the password file. The password file stores a list of usernames and passwords that are allowed to remotely authenticate as SYSDBA over the network. Oracle must use this file to authenticate them, not the normal list of passwords stored in the database.

So, let’s correct our situation. First, verify that the REMOTE_LOGIN_PASSWORDFILE parameter is set to the default of EXCLUSIVE, meaning only one database uses a given password file:

SQL> show parameter remote_login_passwordfile
NAME TYPE VALUE
remote_login_passwordfile string EXCLUSIVE

Note Other valid values for this parameter are NONE, meaning there is no password file (there are no remote SYSDBA connections), and SHARED (more than one database can use the same password file).

The next step is to use the command-line tool (on UNIX/Linux and Windows) named orapwd to create and populate the initial password file:

$ orapwd

Usage: orapwd file= entries= force= asm= dbuniquename= format=
             sysbackup= sysdg= syskm= delete= input_file=

Usage: orapwd describe file=

where

There must be no spaces around the equal-to (=) character.

The command we’ll use when logged into the operating system account that owns the Oracle software is

$ orapwd file=orapw$ORACLE_SID password=foo entries=20

This creates a password file named orapwCDB in my case (my ORACLE_SID is CDB). That’s the naming convention for this file on most UNIX/Linux platforms (see your installation/OS admin guide for details on the naming of this file on your platform), and it resides in the $ORACLE_HOME/dbs directory. On Windows, this file is named PWD%ORACLE_SID%.ora, and it’s located in the %ORACLE_HOME%\database directory. You should navigate to the correct directory prior to running the command to create that file, or move that file into the correct directory afterward.

Now, currently the only user in that file is SYS, even if there are other SYSDBA accounts on that database (they are not in the password file yet). Using that knowledge, however, we can for the first time connect as SYSDBA over the network:

$ sqlplus sys/foo@localhost:1521/CDB as sysdba
SQL> show user
USER is "SYS"

Note If you experience an ORA-12505 “TNS:listener does not currently know of SID given in connect Descriptor” error during this step, that means that the database listener is not configured with a static registration entry for this server. The DBA has not permitted remote SYSDBA connections when the database instance is not up. You would need to configure static server registration in your listener.ora configuration file. Please search for “Configuring Static Service Information” (in quotes) on the OTN (Oracle Technology Network) documentation search page for the version of the database you are using for details on configuring this static service. If you encounter an ORA-12528 “TNS:listener: all appropriate instances are blocking new connections” error, you can also configure the tnsnames.ora file with the UR=A parameter that will allow you to connect to a blocked instance.

We have been authenticated, so we are in. We can now successfully start up, shut down, and remotely administer this database using the SYSDBA account. Now, we have another user, TKYTE, who has been granted SYSDBA, but will not be able to connect remotely yet:

$ sqlplus tkyte/foobar@PDB1 as sysdba

ERROR:

ORA-01017: invalid username/password; logon denied

The reason for this is that TKYTE is not yet in the password file. In order to get TKYTE into the password file, we need to “regrant” that account the SYSDBA privilege:

$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
SQL> grant sysdba to tkyte;
Grant succeeded.

SQL> exit
$ sqlplus tkyte/foobar@PDB1 as sysdba
SQL>

This created an entry in the password file for us, and Oracle will now keep the password in sync. If TKYTE alters their password, the old one will cease working for remote SYSDBA connections, and the new one will start working. The same process is repeated for any user who was a SYSDBA but is not yet in the password file.
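You can verify which accounts currently have an entry in the password file by querying V$PWFILE_USERS (later releases add columns for SYSBACKUP, SYSDG, and SYSKM as well):

SQL> select username, sysdba, sysoper from v$pwfile_users;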

Jun 22, 2022
The Storage Hierarchy in an Oracle Database- Files-2

The relationship between segments, extents, and blocks is shown in Figure 3-1.

Figure 3-1.  Segments, extents, and blocks

A segment is made up of one or more extents, and an extent is a logically contiguous allocation of blocks. A database may have up to six different block sizes in it.

Note  This feature of multiple block sizes was introduced for the purpose of making transportable tablespaces usable in more cases. The ability to transport a tablespace allows a DBA to move or copy the already formatted datafiles from one database and attach them to another—for example, to immediately copy all of the tables and indexes from an Online Transaction Processing (OLTP) database to a data warehouse (DW). However, in many cases, the OLTP database might be using a small block size, such as 2KB or 4KB, whereas the DW would be using a much larger one (8KB or 16KB). Without support for multiple block sizes in a single database, you wouldn’t be able to transport this information. Tablespaces with nondefault block sizes should be used only to facilitate transporting tablespaces; they are not generally useful for anything else.

There will be the database default block size, which is the size specified in the initialization file during the CREATE DATABASE command. The SYSTEM tablespace will have this default block size always, but you can then create other tablespaces with nondefault block sizes of 2KB, 4KB, 8KB, 16KB, and, depending on the operating system, 32KB. The total number of block sizes is six if and only if you specified a nonstandard block size (not a power of two) during database creation. Hence, for all practical purposes, a database will have at most five block sizes: the default size and then four other nondefault sizes.

Any given tablespace will have a consistent block size, meaning that every block in that tablespace will be the same size. A multisegment object, such as a table with a LOB column, may have each segment in a tablespace with a different block size, but any given segment (which is contained in a tablespace) will consist of blocks of exactly the same size.
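You can see the block size of every tablespace in your database via DBA_TABLESPACES and, assuming you have first configured a matching buffer cache (for example, DB_16K_CACHE_SIZE), you can create a tablespace with a nondefault block size. A sketch (the datafile name and sizes are illustrative):

SQL> select tablespace_name, block_size from dba_tablespaces;
SQL> alter system set db_16k_cache_size = 64m;
SQL> create tablespace ts_16k datafile '/u01/oradata/CDB/ts_16k_01.dbf' size 100m blocksize 16k;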

Most blocks, regardless of their size, have the same general format, which looks something like Figure 3-2.

Figure 3-2.  The structure of a block

Exceptions to this format include LOB segment blocks and hybrid columnar compressed blocks in Exadata storage, for example, but the vast majority of blocks in your database will resemble the format in Figure 3-2. The block header contains information about the type of block (table block, index block, and so on), transaction information regarding active and past transactions on the block when relevant (only blocks that are transaction managed have this information; a temporary sort block, for example, would not), and the address (location) of the block on disk.

The next two block components are found on the most common types of database blocks, those of heap-organized tables. We’ll cover database table types in much more detail in Chapter 10, but suffice it to say that most tables are of this type.

The table directory, if present, contains information about the tables that store rows in this block (data from more than one table may be stored on the same block). The row directory contains information describing the rows that are to be found on the block.

This is an array of pointers to where the rows are to be found in the data portion of the block. These three pieces of the block are collectively known as the block overhead, which is space used on the block that is not available for your data, but rather is used by Oracle to manage the block itself.

The remaining two pieces of the block are straightforward: there may be free space on a block, and then there will generally be used space that is currently storing data.

Now that you have a cursory understanding of segments, which consist of extents, which consist of blocks, let’s take a closer look at tablespaces and then at exactly how files fit into the big picture.

May 22, 2022
The Storage Hierarchy in an Oracle Database- Files-1

A database is made up of one or more tablespaces. A tablespace is a logical storage container in Oracle that comes at the top of the storage hierarchy and is made up of one or more datafiles.

These files might be cooked files in a file system, raw partitions, ASM-managed database files, or files on a clustered file system. A tablespace contains segments, as described next.

Segments

Segments are the major organizational structure within a tablespace. Segments are simply your database objects that consume storage—typically objects such as tables, indexes, undo segments, and so on.

Most times, when you create a table, you create a table segment. When you create a partitioned table, you are not creating a table segment; rather, you create a segment per partition.

When you create an index, you normally create an index segment, and so on. Every object that consumes storage is ultimately stored in a single segment. There are undo segments, temporary segments, cluster segments, index segments, and so on.

Note It might be confusing to read “Every object that consumes storage is ultimately stored in a single segment.” You will find many CREATE statements that create multisegment objects. The confusion lies in the fact that a single CREATE statement may ultimately create objects that consist of zero, one, or more segments!

For example, CREATE TABLE T ( x int primary key, y clob ) will create four segments: one for the TABLE T, one for the index that will be created in support of the primary key, and two for the CLOB (one segment for the CLOB is the LOB index, and the other segment is the LOB data itself). On the other hand, CREATE TABLE T ( x int, y date ) cluster MY_CLUSTER will create zero segments (the cluster is the segment in this case). We’ll explore this concept further in Chapter 10.
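If you run that first CREATE TABLE and then query USER_SEGMENTS, you should see the four segments described above, assuming segment creation was not deferred (the SEGMENT CREATION IMMEDIATE clause forces allocation right away; the system-generated LOB and index segment names will differ in your database):

SQL> create table t ( x int primary key, y clob ) segment creation immediate;
SQL> select segment_name, segment_type from user_segments;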

Extents

Segments consist of one or more extents. An extent is a logically contiguous allocation of space in a file. (Files themselves, in general, are not contiguous on disk; otherwise, we would never need a disk defragmentation tool! Also, with disk technologies such as Redundant Array of Independent Disks (RAID), you might find that a single file spans many physical disks.) Traditionally, every segment starts with at least one extent. Oracle 11g Release 2 introduced the concept of a “deferred” segment—a segment that will not immediately allocate an extent—so in that release and going forward, a segment might defer allocating its initial extent until data is inserted into it.

When an object needs to grow beyond its initial extent, it will request another extent be allocated to it. This second extent will not necessarily be located right next to the first extent on disk—it may very well not even be allocated in the same file as the first extent.

The second extent may be located very far away from the first extent, but the space within an extent is always logically contiguous in a file. Extents vary in size from one Oracle data block (explained shortly) to gigabytes.
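You can observe this for yourself by querying DBA_EXTENTS for a segment and noting the file and starting block of each extent; a sketch, assuming a table named T in your own schema:

SQL> select extent_id, file_id, block_id, blocks
       from dba_extents
      where owner = user and segment_name = 'T'
      order by extent_id;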

Blocks

Extents, in turn, consist of blocks. A block is the smallest unit of space allocation in Oracle. Blocks are where your rows of data, or index entries, or temporary sort results are stored. A block is what Oracle typically reads from and writes to disk. Blocks in Oracle are generally one of four common sizes: 2KB, 4KB, 8KB, or 16KB (although 32KB is also permissible in some cases; there are restrictions in place as to the maximum size by operating system).

Note Per Oracle’s database reference documentation, the minimum block size is 2048, and larger block sizes must be a multiple of the operating system physical block size.

May 22, 2022
Datafiles- Files

Datafiles, along with redo log files, are the most important type of files in the database. This is where all of your data will ultimately be stored. Every database has at least one datafile associated with it and typically has many more than one.

You can view the datafiles for your database by querying the data dictionary. The names of the datafiles will vary depending on whether you’re using RAC (with datafiles on ASM disk) and/or Oracle Managed Files (OMF).

For example, here are the datafiles in a RAC container database, using ASM disks, and the OMF naming convention:

SQL> select name from v$datafile;

NAME
+DATA/CDB/DATAFILE/system.257.1064287113
+DATA/CDB/DATAFILE/sysaux.258.1064287137
+DATA/CDB/DATAFILE/undotbs1.259.1064287153

+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/system.265.1064287787
+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/sysaux.266.1064287787
+DATA/CDB/DATAFILE/users.260.1064287153
+DATA/CDB/86B637B62FE07A65E053F706E80A27CA/DATAFILE/undotbs1.267.1064287787
+DATA/CDB/DATAFILE/undotbs2.269.1064288035
+DATA/CDB/BB1C6AAC137C6A4DE0536638A8C06678/DATAFILE/system.274.1064288517
+DATA/CDB/BB1C6AAC137C6A4DE0536638A8C06678/DATAFILE/sysaux.275.1064288517

The prior output shows three SYSTEM datafiles (where the Oracle data dictionary is stored). One is for the root container database, and the other two are associated with pluggable databases.

The long random string in the directory path is the GUID (unique identifier) associated with the pluggable databases when using OMF files.

Listed next are the datafiles in a single instance container database (not using OMF):

SQL> select name from v$datafile;

NAME

/opt/oracle/oradata/CDB/system01.dbf
/opt/oracle/oradata/CDB/sysaux01.dbf
/opt/oracle/oradata/CDB/undotbs01.dbf
/opt/oracle/oradata/CDB/pdbseed/system01.dbf
/opt/oracle/oradata/CDB/pdbseed/sysaux01.dbf
/opt/oracle/oradata/CDB/users01.dbf
/opt/oracle/oradata/CDB/pdbseed/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB1/system01.dbf
/opt/oracle/oradata/CDB/PDB1/sysaux01.dbf
/opt/oracle/oradata/CDB/PDB1/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB1/users01.dbf
/opt/oracle/oradata/CDB/PDB2/system01.dbf
/opt/oracle/oradata/CDB/PDB2/sysaux01.dbf
/opt/oracle/oradata/CDB/PDB2/undotbs01.dbf
/opt/oracle/oradata/CDB/PDB2/users01.dbf

The datafiles in this directory /opt/oracle/oradata/CDB belong to the root container.
The datafiles in subdirectories belong to pluggable databases. Each database will usually (minimally) have the following datafiles:

•\ SYSTEM: Stores the Oracle data dictionary
•\ SYSAUX: Contains other non-dictionary objects
•\ UNDO: Stores the undo segments used for rollback operations
•\ USERS: A default tablespace to be used for application data (if no other tablespace is designated)
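To see which datafiles belong to which of these tablespaces, a simple dictionary query works (DBA_DATA_FILES reports on the container you are connected to):

SQL> select tablespace_name, file_name, bytes/1024/1024 as mb
       from dba_data_files
      order by tablespace_name;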

After a brief review of file system types, we’ll discuss how Oracle organizes these files and how data is organized within them. To understand this, you need to know what tablespaces, segments, extents, and blocks are. These are the units of allocation that Oracle uses to hold objects in the database, and I describe them in detail shortly.

Apr 22, 2022
Tagging Trace Files- Files-2

PROBLEM_ID PROBLEM_KEY LAST_INCIDENT LASTINC_TIME

There was recently an ORA-700 error in my database. I can now see what was affected by that error by issuing the show incident command:

adrci> show incident

INCIDENT_ID PROBLEM_KEY CREATE_TIME
———— ————————— ————————-
1       ORA 700 [pga physmem limit]  2021-01-24 19:33:31.863000 +00:00
2402    ORA 700 [pga physmem limit]  2021-01-24 19:34:50.006000 +00:00
4803    ORA 700 [pga physmem limit]  2021-01-24 19:35:20.619000 +00:00
7204    ORA 700 [pga physmem limit]  2021-01-24 19:42:03.463000 +00:00
9605    ORA 700 [pga physmem limit]  2021-01-24 19:49:46.391000 +00:00

I can see there are several incidents, and I can identify the information related to each incident via the show tracefile command:

adrci> show tracefile -I 2402
diag/rdbms/cdb/CDB/incident/incdir_2402/CDB_ora_5317_i2402.trc

This shows me the location of the trace file for the incident number listed. Further, I can see a lot of detail about the incident if I so choose:

adrci> show incident -mode detail -p "incident_id=2402"
ADR Home = /opt/oracle/diag/rdbms/cdb/CDB:

INCIDENT INFO RECORD 1

INCIDENT_ID 2402

STATUS ready

PROBLEM_ID 1

CLOSE_TIME

FLOOD_CONTROLLED none

ERROR_FACILITY ORA

ERROR_NUMBER 700

ERROR_ARG1 pga physmem limit

And, finally, I can create a “package” of the incident that is useful for support. The package will contain everything a support analyst needs to begin working on the problem.
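A minimal sketch of that packaging step, using ADRCI's Incident Packaging Service (IPS), might look like the following; the incident number comes from the SHOW INCIDENT output above, the package number used in the second command is reported by the first, and the /tmp destination is just an example:

adrci> ips create package incident 2402
adrci> ips generate package 1 in /tmp

The generated zip file in /tmp is what you would then attach to your service request.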

This section is not intended to be a full overview or introduction to the ADRCI utility, which is documented fully in the Oracle Database Utilities manual. Rather, I just wanted to introduce the existence of the tool—a tool that makes using trace files easy.

The database information is important to have when you go to http://support.oracle.com to file the service request or to search to see if what you are experiencing is a known problem. In addition, you can see the Oracle instance on which the error occurred. It is quite common to have many instances running concurrently, so isolating the problem to a single instance is useful.

Here’s another section of the trace file to be aware of:

*** SESSION ID:(266.55448) 2021-02-15T15:39:09.939349+00:00
*** CLIENT ID:() 2021-02-15T15:39:09.939354+00:00
*** SERVICE NAME:(SYS$USERS) 2021-02-15T15:39:09.939358+00:00
*** MODULE NAME:(sqlplus@localhost (TNS V1-V3)) 2021-02-15T15:39:09.939364+00:00
*** ACTION NAME:() 2021-02-15T15:39:09.939368+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-02-15T15:39:09.939372+00:00
*** CONTAINER ID:(1) 2021-02-15T15:39:09.939376+00:00

This part of the trace file shows the session information available in the columns ACTION and MODULE from V$SESSION. Here, we can see that it was a SQL*Plus session that caused the error to be raised (you and your developers can and should set the ACTION and MODULE information; some environments such as Oracle Forms and APEX already do this for you).

Additionally, we have the SERVICE NAME. This is the actual service name used to connect to the database—SYS$USERS, in this case—indicating we didn’t connect via a TNS service. If we logged in using user/pass@CDB, we might see

*** SERVICE NAME:(CDB) 2021-02-15T15:55:42.704845+00:00

Here, CDB is the service name (not the TNS connect string; rather, it’s the ultimate service registered in a TNS listener to which it connected). This is also useful in tracking down which process or module is affected by this error.

Lastly, before we get to the actual error, we can see the session ID (266 in this example), session serial number (55448 in this example), and related date/time information (all releases) as further identifying information:

*** SESSION ID:(266.55448) 2021-02-15T15:39:09.939349+00:00

From here, you can dig further into the trace file and try to determine what is causing the problem. The other important pieces of information are the error code (typically 600, 3113, or 7445) and other arguments associated with the error code. Using these, along with some of the stack trace information that shows the set of Oracle internal subroutines that were called in order, we might be able to find an existing bug (and workarounds, patches, and so on).

Typically, you’ll do a Google search of any relevant error messages. If you don’t readily find an answer, you can create a service request with Oracle Support and attach the trace files to the request. Oracle Support can help you identify if you’ve hit a bug or provide options on workarounds.

Mar 22, 2022
Tagging Trace Files- Files-1

There is a way to “tag” your trace file so that you can find it even if you are not permitted access to V$PROCESS and V$SESSION. Assuming you have access to read the trace directory, you can use the session parameter tracefile_identifier.
With this, you can add a uniquely identifiable string to the trace file name, for example:

SQL> alter session set tracefile_identifier = 'Look_For_Me';
Session altered.

SQL> !ls /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*
ls: cannot access /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*: No such file or directory

SQL> exec dbms_monitor.session_trace_enable
PL/SQL procedure successfully completed.

SQL> !ls /opt/oracle/diag/rdbms/cdb/CDB/trace/*Look_For_Me*
/opt/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_15972_Look_For_Me.trc
/opt/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_15972_Look_For_Me.trm

The * character is a wildcard instructing ls to list any files that include the string Look_For_Me in the file name. As you can see, the trace file is still named in the standard <ORACLE_SID>_ora_<process id> format, but it also has the unique string we specified in it, making it easy to find “our” trace file. The trace file ends with the extension .trc. There’s also a corresponding trace map file (with the extension .trm) which contains structural information about the trace file. Usually, you’ll only be interested in the contents of the .trc file.

Trace Files Generated in Response to Internal Errors

I’d like to close this section with a discussion about those other kinds of trace files—the ones we did not expect that were generated as a result of an ORA-00600 or some other internal error. Is there anything we can do with them?

The short answer is that, in general, they are not for you and me. They are useful to Oracle Support. However, they can be helpful when we file a service request with Oracle Support. That point is crucial: if you are getting internal errors, the only way they will ever be corrected is if you file a service request (SR). If you just ignore them, they will not get fixed by themselves, except by accident.

For example, in Oracle 10g Release 1, if you create the following table and run the query, you may well get an internal error (or not; it was filed as a bug and is corrected in later patch releases):

SQL> create table t ( x int primary key );
Table created.

SQL> insert into t values ( 1 );
1 row created.

SQL> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.

SQL> select count(x) over () from t;

ERROR at line 2:

ORA-00600: internal error code, arguments: [12410], [], [], [], [], [], [], []

Now, suppose you are the DBA and all of a sudden this trace file pops up in the trace area. Or you are the developer and your application raises an ORA-00600 error, and you want to find out what happened. There is a lot of information in that trace file (some 35,000 lines, in fact), but in general it’s not useful to you and me. We would generally just compress the trace file and upload it as part of our service request processing.

A command-line tool, in conjunction with a user interface via Enterprise Manager, allows you to review the trace information in the ADR and package and transmit it to Oracle Support. The Automatic Diagnostic Repository Command Interpreter (ADRCI) utility allows you to review “problems” (critical errors in the database) and incidents ­(occurrences of those critical errors) and to package them up for transmission to support. The packaging step includes retrieving not only the trace information but also details from the database alert log and other configuration/test case information. For example, I set up a situation in my database that raised a critical error. (No, I won’t say what it is. You have to generate your own critical errors.) I knew I had a “problem” in my database because the ADRCI tool told me so:

$ adrci
adrci> show problem
ADR Home = /opt/oracle/diag/rdbms/cdb/CDB:

Dec 22, 2021
Pluggable Databases- Files

Pluggable databases are designed to be a set of files you can move from one root container database to another.

That is, we can unplug a pluggable database, and upon plugging it back into either the same root container database or some other root container database, we would have our original pluggable database back—with all of the application schemas, users, metadata, grants, data, and even our pluggable database parameter settings (settings that were not inherited from the root container).

This is achieved by storing pluggable database–specific parameter settings in a data dictionary table: SYS.PDB_SPFILE$.

You can view parameters modifiable at the PDB level via this query:

SQL> SELECT name, value
FROM v$system_parameter
WHERE ispdb_modifiable = 'TRUE'
ORDER BY name;

NAME VALUE

adg_account_info_tracking
allow_deprecated_rpcs
allow_rowid_column_type
approx_for_aggregation
approx_for_count_distinct
approx_for_percentile
aq_tm_processes
asm_diskstring
auto_start_pdb_services
awr_pdb_autoflush_enabled
bitmap_merge_area_size
blank_trimming
blockchain_table_max_no_drop

You set PDB-level parameters by using the ALTER SYSTEM … CONTAINER clause or by connecting to the pluggable database and issuing the ALTER SYSTEM command:

$ sqlplus / as sysdba

SQL> alter session set container=pdb1;
SQL> alter system set statistics_level=all;

It is in this fashion that pluggable databases can override a parameter setting for some parameters in an SPFILE and have those parameter settings travel with them as they move from root container database to root container database.
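The unplug and plug operations themselves are short command sequences. A sketch, assuming a pluggable database named PDB1 and an XML manifest location of your choosing (run as SYSDBA in the root container):

SQL> alter pluggable database pdb1 close immediate;
SQL> alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';
SQL> drop pluggable database pdb1 keep datafiles;

and then, in the same or another container database:

SQL> create pluggable database pdb1 using '/tmp/pdb1.xml' nocopy;
SQL> alter pluggable database pdb1 open;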

Parameter File Wrap-Up

In this section, we covered the basics of managing Oracle initialization parameters and parameter files.

We looked at how to set parameters, view parameter values, and have those settings persist across database restarts.

We explored the two types of database parameter files: legacy PFILEs (simple text files) and the newer SPFILEs.

Using SPFILEs is recommended for all databases because of the ease of administration and clarity they bring.

The ability to have a single source of parameter “truth” for the database, along with the ability of the ALTER SYSTEM command to persist the parameter values, makes SPFILEs a compelling feature. I started using them the instant they became available and haven’t looked back.
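As a quick reminder of the mechanics discussed in this section, a couple of representative commands (the parameter, value, and path are purely illustrative):

SQL> alter system set pga_aggregate_target = 1g scope=both;
SQL> create pfile='/tmp/init_backup.ora' from spfile;

SCOPE=BOTH changes the running instance and records the change in the SPFILE so it persists across restarts; CREATE PFILE FROM SPFILE gives you a readable text copy of the current settings.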

Oct 22, 2021
Trace Files- Files

Trace files are a source of debugging information. When the server encounters a problem, it generates a trace file full of diagnostic information.

When a developer executes DBMS_MONITOR.SESSION_TRACE_ENABLE, the server generates a trace file full of performance-related information.

Trace files are available to us because Oracle is a heavily instrumented piece of software. By “instrumented,” I mean that the programmers who wrote the database kernel put in debugging code—lots and lots of it. And they left it in, on purpose.

I’ve met many developers who consider debugging code to be overhead—something that must be ripped out before an application goes into production in a vain attempt to squeeze every ounce of performance out of the code. Later, of course, they discover that their code has a bug or it isn’t running as fast as it should (which end users tend to call a bug as well; to an end user, poor performance is a bug).

At that point, they really wish that the debug code was still in the code (or had been in there if it never was), especially since you can’t drop debug code into the production system. You have to test any new code before putting it into a production environment, and that’s not something you do at the drop of a hat.

The Oracle database (and Application Server and Oracle applications and various tools such as Application Express (APEX)) is heavily instrumented. Signs of this instrumentation in the database are

•\ V$ views: Most V$ views contain “debug” information. V$WAITSTAT, V$SESSION_EVENT, and many others exist solely to let us know what is going on deep in the kernel.

•\ The AUDIT command: This command allows you to specify what events the database should record for later analysis.

•\ Resource Manager (DBMS_RESOURCE_MANAGER): This feature lets you micromanage resources (CPU, I/O, and the like) within the database. What makes a Resource Manager in the database possible is that it has access to all of the runtime statistics describing how the resources are being used.

•\ Oracle events: These enable you to ask Oracle to produce trace or diagnostic information as needed.

•\ DBMS_TRACE: This facility within the PL/SQL engine exhaustively records the call tree of stored procedures, exceptions raised, and errors encountered.

•\ Database event triggers: These triggers, such as ON SERVERERROR, allow you to monitor and log any condition you feel is “exceptional” or out of the ordinary. For example, you can log the SQL that was running when an “out of temp space” error was raised.

•\ DBMS_MONITOR: This is used to capture the exact SQL, wait events, and other performance/behavior-related diagnostic information generated by running your application (a brief example of enabling and disabling this trace follows this list). The SQL Trace facility is also available in an extended fashion via the 10046 Oracle event.
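Turning the DBMS_MONITOR trace facility on and off for your own session is a one-liner in each direction (waits and binds are optional parameters; the values shown are just an example):

SQL> exec dbms_monitor.session_trace_enable( waits => true, binds => false )
     ...run the application code you want traced...
SQL> exec dbms_monitor.session_trace_disable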

Instrumentation is vital in application design and development, and the Oracle database becomes better instrumented with each release. Oracle 10g took code instrumentation in the kernel to a whole new level with the introduction of the Automatic Workload Repository (AWR) and Active Session History (ASH) features.

Oracle 11g took that further with options such as the Automatic Diagnostic Repository (ADR) and the SQL Performance Analyzer (SPA). Oracle 12c advanced even further with the addition of a DDL log to track all DDL operations in a database (something that shouldn’t be happening in many typical production databases day to day) and the debug log to track exceptional conditions in the database.

In this section, we’re going to focus on the information you can find in various types of trace files. We’ll cover what they are, where they are stored, and what we can do with them.

There are two general types of trace files, and what we do with each kind is very different:

•\ Trace files you expected and want: These are, for example, the result of enabling DBMS_MONITOR.SESSION_TRACE_ENABLE. They contain diagnostic information about your session and will help you tune your application to optimize its performance and diagnose any bottlenecks it is experiencing.

•\ Trace files you were not expecting but the server generated as the result of an ORA-00600 “Internal Error,” ORA-03113 “End of file on communication channel,” or ORA-07445 “Exception Encountered” type of error: These traces contain diagnostic information that is most useful to an Oracle Support analyst and, beyond showing where in our application the internal error was raised, are of limited use to us.
