Oct 22, 2023
Fast Recovery Area (FRA)- Files

The FRA in Oracle is a location where the database will manage many of the files related to database backup and recovery. In this area (an area being a part of a disk set aside for this purpose; a directory, for example), you could find the following:

•\ RMAN backup pieces (full and/or incremental backups)
•\ RMAN image copies (byte-for-byte copies of datafiles and controlfiles)
•\ Online redo logs
•\ Archived redo logs
•\ Multiplexed control files
•\ Flashback logs

Oracle uses this area to manage these files, so the server will know what is on disk and what is not on disk (and perhaps on tape elsewhere). Using this information, the database can perform operations like a disk-to-disk restore of a damaged datafile or the flashing back (a “rewind” operation) of the database to undo an operation that should not have taken place. For example, you could use the FLASHBACK DATABASE command to put the database back the way it was five minutes ago (without doing a full restore of the database and a point-in-time recovery). That would allow you to “undrop” that accidentally dropped user account.
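
As a rough sketch of what that five-minute rewind might look like (this assumes flashback logging is already enabled and that you can briefly take the database down to MOUNT mode; the interval is purely illustrative):

SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to timestamp (systimestamp - interval '5' minute);
SQL> alter database open resetlogs;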

To set up an FRA, you need to set two parameters: db_recovery_file_dest_size and db_recovery_file_dest. You can view the current settings of these parameters via

SQL> show parameter db_recovery_file_dest

NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------------
db_recovery_file_dest          string      /opt/oracle/fra
db_recovery_file_dest_size     big integer 100G
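
If the FRA is not yet configured, you can set (or later resize) these two parameters with ALTER SYSTEM; a minimal sketch, with an illustrative path and size (note that db_recovery_file_dest_size must be set before db_recovery_file_dest):

SQL> alter system set db_recovery_file_dest_size = 100g scope=both;
SQL> alter system set db_recovery_file_dest = '/opt/oracle/fra' scope=both;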

The Fast Recovery Area is more of a logical concept. It is a holding area for the file types discussed in this chapter. Its use is optional—you don’t need to use it, but if you want to use some advanced features, such as the Flashback Database, you must use this area to store the information.

Data Pump Files

Data Pump is a file format used by at least two tools in Oracle. External tables can load and unload data in the Data Pump format, and the import/export tools IMPDP and EXPDP use this file format.

They are cross-platform (portable) binary files that contain metadata (not stored in CREATE/ALTER statements, but rather in XML) and possibly data.

That they use XML as a metadata representation structure is actually relevant to you and me as end users of the tools. IMPDP and EXPDP have some sophisticated filtering and translation capabilities.

This is in part due to the use of XML and to the fact that a CREATE TABLE statement is not stored as a CREATE TABLE, but rather as a marked-up document.

This permits easy implementation of a request like “Please replace all references to tablespace FOO with tablespace BAR.” IMPDP just has to apply a simple XML transformation to accomplish this. FOO, when it refers to a tablespace, appears inside tablespace markup (something like <TABLESPACE>FOO</TABLESPACE>, or some other similar representation), so the tool knows exactly which occurrences of FOO to change.
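
From the end user's perspective, that capability surfaces as simple IMPDP parameters. A hypothetical import performing the tablespace swap just described might look like this (the dump file name and connect string are illustrative):

$ impdp scott/tiger@PDB1 directory=TMP dumpfile=exp.dmp remap_tablespace=FOO:BAR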

In Chapter 15, we’ll take a closer look at these tools. Before we get there, however, let’s see how we can use this Data Pump format to quickly extract some data from database A and move it to database B. We’ll be using an “external table in reverse” here.

External tables give us the ability to read flat files—plain old text files—as if they were database tables. We have the full power of SQL to process them. Normally, they are read-only and designed to get data from outside Oracle in. But external tables can also go the other way: they can be used to get data out of the database in the Data Pump format to facilitate moving the data to another machine or another platform. To start this exercise, we’ll need a DIRECTORY object, telling Oracle the location to unload to:

SQL> create or replace directory tmp as '/tmp';
Directory created.

SQL> create table all_objects_unload
       organization external
       ( type oracle_datapump
         default directory TMP
         location( 'allobjects.dat' )
       )
       as
       select * from all_objects;

Table created.

And that literally is all there is to it: we have a file in /tmp named allobjects.dat that contains the contents of the query select * from all_objects. We can peek at this information:

SQL> !strings /tmp/allobjects.dat | head
x86_64/Linux 2.4.xx
AL32UTF8
19.00.00.00.00
001:001:000001:000001
i<?xml version="1.0" encoding="UTF-8"?...

That’s just the head, or top, of the file. Now, using scp (or any binary-safe file transfer mechanism), you can move that file to any other platform where you have Oracle installed. There, you simply issue a CREATE DIRECTORY statement (to tell the database where the file is) and a CREATE TABLE statement, such as this:

SQL> create table t
     ( OWNER          VARCHAR2(30),
       OBJECT_NAME    VARCHAR2(30),
       SUBOBJECT_NAME VARCHAR2(30),
       OBJECT_ID      NUMBER,
       DATA_OBJECT_ID NUMBER,
       OBJECT_TYPE    VARCHAR2(19),
       CREATED        DATE,
       LAST_DDL_TIME  DATE,
       TIMESTAMP      VARCHAR2(19),
       STATUS         VARCHAR2(7),
       TEMPORARY      VARCHAR2(1),
       GENERATED      VARCHAR2(1),
       SECONDARY      VARCHAR2(1)
     )
     organization external
     ( type oracle_datapump
       default directory TMP
       location( 'allobjects.dat' )
     );

You would be set to read that unloaded data using SQL immediately. That is the power of the Data Pump file format: immediate transfer of data from system to system, over “sneakernet” if need be. Think about that the next time you’d like to take a subset of data home to work with over the weekend while testing.

Even if the database character sets differ (they did not in this example), the Data Pump format allows Oracle to recognize the differing character sets and deal with them. Character set conversion can be performed on the fly as needed to make the data “correct” in each database’s representation.

Again, we’ll come back to the Data Pump file format in Chapter 15, but this section should give you an overall feel for what it is about and what might be contained in the file.

Sep 22, 2023
Flat Files- Files

Flat files have been around since the dawn of electronic data processing. We see them literally every day. The text alert log described previously is a flat file. I found the following definition for “flat file” on the Internet and feel it pretty much wraps things up:

An electronic record that is stripped of all specific application (program) formats. This allows the data elements to be migrated into other applications for manipulation. This mode of stripping electronic data prevents data loss due to hardware and proprietary software obsolescence.

A flat file is simply a file in which each “line” is a “record,” with the fields on each line delimited by some character, typically a comma or a pipe (vertical bar). Flat files are easily read by Oracle using either the legacy data loading tool SQL*Loader (SQLLDR) or external tables. In fact, I will cover this in detail in Chapter 15.

Occasionally, a user will request data in a flat file format such as one of the following:

•\ Character (or comma) separated values (CSV): These are easily imported into tools such as spreadsheets.
•\ HyperText Markup Language (HTML): Used to render pages in web browsers.
•\ JavaScript Object Notation (JSON): A standard text-based format for representing structured data, commonly used for transmitting data in web applications.

I’ll briefly demonstrate generating each of the file types in the following sections.

Generating a CSV File

You can easily generate CSV flat files from SQL*Plus. You can either use the -m 'csv on' switch from the operating system prompt or use SET MARKUP CSV ON after starting a SQL*Plus session, for example:

$ sqlplus scott/tiger@PDB1
SQL> set markup csv on delimiter , quote off
SQL> select * from dept;

DEPTNO,DNAME,LOC
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
30,SALES,CHICAGO
40,OPERATIONS,BOSTON

Here is an example of using the -m ‘csv on’ switch at the command line:

$ sqlplus -markup 'csv on quote off' scott/tiger@PDB1
SQL> select * from emp where rownum < 3;

EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO
7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30

Tip When you use the -markup ‘csv on’ switch, SQL*Plus sets variables such as ROWPREFETCH, STATEMENTCACHE, and PAGESIZE to optimize I/O performance.
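
To capture that CSV output in an actual flat file rather than just on the screen, you would typically combine the CSV markup with SPOOL; a minimal sketch (the output path is illustrative):

SQL> set markup csv on
SQL> set feedback off
SQL> spool /tmp/dept.csv
SQL> select * from dept;
SQL> spool off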

Generating HTML

Similar to generating a CSV file, you can generate an HTML file from SQL*Plus by setting SET MARKUP HTML ON before running your query:

SQL> set markup html on
SQL> select * from dept;

DEPTNO DNAME ... (HTML table output truncated)

You can also specify the -markup 'html on' switch at the command line to generate HTML output:

$ sqlplus -markup 'html on' scott/tiger@PDB1

In this manner, you can easily produce HTML output based on the contents of a table.
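
Generating JSON

The MARKUP command shown above covers CSV and HTML; one way to produce JSON output is to generate the JSON text directly in SQL with the SQL/JSON generation functions, such as JSON_OBJECT (available in 12.2 and above). A minimal sketch, with the output formatting approximate:

SQL> select json_object('deptno' value deptno,
                        'dname'  value dname,
                        'loc'    value loc) as dept_json
     from dept where rownum < 3;

DEPT_JSON
{"deptno":10,"dname":"ACCOUNTING","loc":"NEW YORK"}
{"deptno":20,"dname":"RESEARCH","loc":"DALLAS"}
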
Aug 22, 2023
The Process Global Area and User Global Area- Memory Structures

The PGA is a process-specific piece of memory. In other words, it is memory specific to a single operating system process or thread.

This memory is not accessible by any other process or thread in the system. It is typically allocated via either of the C runtime calls malloc() or mmap(), and it may grow (or even shrink) at runtime.

The PGA is never allocated in Oracle’s SGA; it is always allocated locally by the process or thread—the P in PGA stands for process or program; it is not shared.

The UGA is, in effect, your session’s state. It is memory that your session must always be able to get to. The location of the UGA is dependent on how you connect to Oracle.

If you connect via a shared server, the UGA must be stored in a memory structure that every shared server process has access to—and that’s the SGA. In this way, your session can use any one of the shared servers, since any of them can read and write your session’s data.

On the other hand, if you are using a dedicated server connection, there’s no need for universal access to your session state, and the UGA becomes virtually synonymous with the PGA; it will, in fact, be contained in the PGA of your dedicated server.

When you look at the system statistics, you’ll find the UGA reported in the PGA in dedicated server mode (the PGA will be greater than or equal to the UGA memory used; the PGA memory size will include the UGA size as well).
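
For example, you can see your own session's current PGA and UGA consumption with a query like the following (a sketch against V$MYSTAT and V$STATNAME; the values will of course vary). In dedicated server mode, the session pga memory figure will be at or above the session uga memory figure:

SQL> select a.name, b.value
     from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
     and a.name like '%ga memory%';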

So, the PGA contains process memory and may include the UGA. The other areas of PGA memory are generally used for in-memory sorting, bitmap merging, and hashing. It would be safe to say that, besides the UGA memory, these are the largest contributors by far to the PGA.

There are two ways to manage memory in the PGA: manual and automatic. The manual method should not be used (unless you’re on an old version of Oracle and don’t have a choice).

The automatic PGA memory management is the recommended technique that you should use. The automatic method is much simpler and more efficient in managing memory. The manner in which memory is allocated and used differs greatly in each case, so we’ll discuss each in turn.

Jul 22, 2023
Manual PGA Memory Management- Memory Structures

There are only a few scenarios that I can think of where manual PGA memory management may be appropriate. One is if you’re running an old version of Oracle and do not have the option to upgrade.

Another is when you run large batch jobs during periods when they are the only activity in the instance; in that case, manual PGA memory management may provide performance benefits.

Therefore, it’s possible you may find yourself in an environment where this type of PGA memory management is used. Other than those scenarios, the manual management style should be avoided.

In manual PGA memory management, the following are the parameters that have the largest impact on the size of your PGA, outside of the memory allocated by your session for PL/SQL tables and other variables:

•\ SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk (using disk space in the temporary tablespace the user is assigned to).

•\ SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and SORT_AREA_RETAINED_SIZE is 256KB, your server process would use up to 512KB of memory to sort data during the initial processing of the query. When the sort is complete, the sorting area would “shrink” down to 256KB, and any sorted data that does not fit in that 256KB would be written out to the temporary tablespace.

•\ HASH_AREA_SIZE: The amount of memory your server process can use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory, and anything that didn’t fit in the hash area region of memory would be stored in the temporary tablespace by the join key.

Note Before using the parameters in the prior list, you must set the WORKAREA_SIZE_POLICY parameter to MANUAL.

These parameters control the amount of space Oracle will use to sort or hash data in memory before using the temporary tablespace on disk, and how much of that memory segment will be retained after the sort is done. The SORT_AREA_SIZE (minus SORT_AREA_RETAINED_SIZE) calculated value is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE value will be in your UGA.
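
A hedged sketch of enabling manual PGA memory management for a single session follows; the sizes are purely illustrative and correspond to the 512KB/256KB example above:

SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 524288;
SQL> alter session set sort_area_retained_size = 262144;
SQL> alter session set hash_area_size = 1048576;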

Here are the important things to remember about using the *_AREA_SIZE parameters:

•\ These parameters control the maximum amount of memory used by a SORT, HASH, or BITMAP MERGE operation.

•\ A single query may have many operations taking place that use this memory, and multiple sort/hash areas could be created. Remember that you may have many cursors opened simultaneously, each with its own SORT_AREA_RETAINED needs. So, if you set the sort area size to 10MB, you could use 10, 100, 1000, or more megabytes of RAM in your session. These settings are not session limits; rather, they are limits on a single operation, and your session could have many sorts in a single query or many queries open that require a sort.

•\ The memory for these areas is allocated on an “as needed” basis. If you set the sort area size to 1GB, it doesn’t mean you’ll allocate 1GB of RAM. It only means that you’ve given the Oracle process the permission to allocate that much memory for a sort/hash operation.

Now that we’ve briefly reviewed manual PGA memory management, let’s move on to what you should be using, automatic PGA memory management.

Jun 22, 2023
Tablespaces- Files

As noted earlier, a tablespace is a container—it holds segments. Each segment belongs to exactly one tablespace. A tablespace may have many segments within it. All of the extents for a given segment will be found in the tablespace associated with that segment. Segments never cross tablespace boundaries.

A tablespace itself has one or more datafiles associated with it. An extent for any given segment in a tablespace will be contained entirely within one datafile. However, a segment may have extents from many different datafiles.

Graphically, a tablespace might look like Figure 3-3.

Figure 3-3.  A tablespace containing two datafiles, three segments, and four extents

Figure 3-3 shows a tablespace named USER_DATA. It consists of two datafiles, user_data01.dbf and user_data02.dbf. It has three segments allocated to it: T1, T2, and I1 (probably two tables and an index).

The tablespace has four extents allocated in it, and each extent is depicted as a logically contiguous set of database blocks. Segment T1 consists of two extents, one extent in each file. Segments T2 and I1 each have one extent depicted. If we need more space in this tablespace, we could either resize the datafiles already allocated to the tablespace or we could add a third datafile to it.
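
For instance, either of the following statements would add space to the tablespace in Figure 3-3 (the file paths and sizes are purely illustrative):

SQL> alter database datafile '/u01/oradata/user_data02.dbf' resize 500m;
SQL> alter tablespace user_data add datafile '/u01/oradata/user_data03.dbf' size 250m;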

A tablespace is a logical storage container in Oracle. As developers, we will create segments in tablespaces. We will never get down to the raw file level—we don’t specify that we want our extents to be allocated in a specific file (we can, but in general we don’t). Rather, we create objects in tablespaces and Oracle takes care of the rest. If at some point in the future, the DBA decides to move our datafiles around on disk to more evenly distribute I/O, that is OK with us. It will not affect our processing at all.

Storage Hierarchy Summary

In summary, the hierarchy of storage in Oracle is as follows:

1. A database is made up of one or more tablespaces.

2. A tablespace is made up of one or more datafiles. These files might be cooked files in a file system, raw partitions, ASM-managed database files, or a file on a clustered file system. A tablespace contains segments.

3. A segment (TABLE, INDEX, and so on) is made up of one or more extents. A segment exists in a tablespace, but may have data in many datafiles within that tablespace.

4. An extent is a logically contiguous set of blocks on disk. An extent is in a single tablespace and, furthermore, is always in a single file within that tablespace.

5. A block is the smallest unit of allocation in the database. A block is the smallest unit of I/O used by a database on datafiles.

May 22, 2023
A Brief Review of File System Mechanisms- Files

There are three file system mechanisms for storing your data in Oracle. By your data, I mean your data dictionary, redo, undo, tables, indexes, LOBs, and so on—the data you personally care about at the end of the day. Briefly, they are

•\     “Cooked” operating system (OS) file systems: These are files that appear in the file system just like your word processing documents do. You can see them in Windows Explorer; you can see them in UNIX/Linux as the result of an ls command. You can use simple OS utilities such as xcopy on Windows or cp on UNIX/Linux to move them around. Cooked OS files are historically the most popular method for storing data in Oracle. In RAC environments, the ASM disk type is used (more on that in a moment). Cooked file systems are typically buffered as well, meaning that the OS will cache information for you as you read and, in some cases, write to disk.

•\     Automatic Storage Management (ASM): ASM is a file system designed exclusively for use by the database. An easy way to think about it is as a database file system. You won’t store your shopping list in a text file on this particular file system—you’ll store only database-related information here: tables, indexes, backups, control files, parameter files, redo logs, archives, and more. But even in ASM, the equivalent of a datafile exists; conceptually, data is still stored in files, but the file system is ASM. ASM is designed to work in either a single machine or clustered environment. Since Oracle 11g Release 2, ASM provides not only this database file system but optionally a clustered file system as well, which is described next.

•\     Clustered file system: This is specifically for a RAC (clustered) environment and provides what looks like a cooked file system that is shared by many nodes (computers) in a clustered environment. A traditional cooked file system is usable by only one computer in a clustered environment. So, while it is true that you could NFS mount or Samba share (a method of sharing disks in a Windows/UNIX/ Linux environment similar to NFS) a cooked file system among many nodes in a cluster, it represents a single point of failure. If the node owning the file system and performing the sharing failed, that file system would be unavailable. In releases of Oracle prior to 11g Release 2, the Oracle Cluster File System (OCFS) is Oracle’s offering in this area and is currently available for Windows and UNIX/ Linux only. Other third-party vendors provide certified clustered file systems that work with Oracle as well. Oracle 11g Release 2 provides another option in the form of the Oracle Automatic Storage Management Cluster File System (ACFS). A clustered file system brings the comfort of a cooked file system to a clustered environment.

The interesting thing is that a database might consist of files from any or all of the preceding file systems—you don’t need to pick just one. You could have a database whereby portions of the data were stored in conventional cooked file systems, some on raw partitions, others in ASM, and yet other components in a clustered file system.

This makes it rather easy to move from technology to technology or to just get your feet wet in a new file system type without moving the entire database into it. Now, since a full discussion of file systems and all of their detailed attributes is beyond the scope of this book, we’ll dive back into the Oracle file types. Regardless of whether the file is stored on cooked file systems, in raw partitions, within ASM, or on a clustered file system, the following concepts always apply.

Note  Raw partitions are deprecated in Oracle 11g and are no longer supported at all in Oracle 12c.

Apr 22, 2023
Trace File Wrap-Up- Files

You now know the two types of general trace files, where they are located, and how to find them. Hopefully, you’ll use trace files mostly for tuning and increasing the performance of your application, rather than for filing service requests.

As a last note, Oracle Support does have access to many undocumented “events” that are very useful for dumping out tons of diagnostic information whenever the database hits any error. For example, if you are getting an ORA-01555 Snapshot Too Old that you absolutely feel you should not be getting, Oracle Support can guide you through the process of setting such diagnostic events to help you track down precisely why that error is getting raised, by creating a trace file every time that error is encountered.

Alert Log File

The alert file (also known as the alert log) is the diary of the database. It is a simple text file written to from the day the database is “born” (created) to the end of time (when you erase it). In this file, you’ll find a chronological history of your database—the log switches; the internal errors that might be raised; when tablespaces were created, taken offline, put back online; and so on. It is an incredibly useful file for viewing the history of a database. I like to let mine grow fairly large before “rolling” (archiving) it.

The more information, the better, I believe, for this file. I will not describe everything that goes into an alert log—that’s a fairly broad topic. I encourage you to take a look at yours, however, and see the wealth of information it holds.

To determine the location of the text-based alert log for your database, run the following query:

SQL> select value from v$diag_info, v$instance where name = 'Diag Trace';

VALUE

/opt/oracle/diag/rdbms/cdb/CDB/trace

The name of the alert log will be of this format:

alert_<instance_name>.log

You can generate the location and name of the alert log with the following query:

SQL> select value || '/alert_' || instance_name || '.log'
     from v$diag_info, v$instance
     where name = 'Diag Trace';

VALUE||'/ALERT_'||INSTANCE_NAME||'.LOG'
---------------------------------------------------
/opt/oracle/diag/rdbms/cdb/CDB/trace/alert_CDB.log

If you want to view the location of the XML-based alert log, run this query:
SQL> select value from v$diag_info where name = 'Diag Alert';

It’s worth noting that there is an internal table, X$DBGALERTEXT, that you can query from SQL*Plus which derives its information from the alert log. This table requires SYS privileges to view. For example, as SYS, you can query the alert log for ORA- errors as follows:

$ sqlplus / as sysdba
SQL> select record_id,
            to_char(originating_timestamp,'DD.MM.YYYY HH24:MI:SS'),
            message_text
     from x$dbgalertext
     where message_text like '%ORA-%';

Tip Oracle Support has a note on how to edit, read, and query the alert log (Doc ID 1072547.1).

In addition to querying the alert log with SQL, you can easily view it using the ADRCI tool. That tool lets you find, view, and tail the alert log (interactively displaying new records as they appear). Also, Enterprise Manager provides access to the alert log.
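
For example, a quick ADRCI session to tail the alert log or search it for errors might look like this (a sketch; you may first need to point ADRCI at the correct home with SET HOMEPATH if more than one database shares the ADR base):

$ adrci
adrci> show alert -tail 20
adrci> show alert -p "message_text like '%ORA-%'"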

Generally speaking, when the need arises to view the alert log, most DBAs will navigate directly to the alert log location. Once there, they’ll use operating system utilities such as vi, tail, and grep to extract information.

Tip With Oracle 19c and above, there’s also an attention.log file that contains information regarding critical events in your database (startup, shutdown, invalid memory parameters, and so on). It is located in the $ORACLE_BASE/diag/rdbms/<dbname>/<instance_name>/log directory.

Nov 22, 2022
Control Files- Files

Control files are fairly small files (usually megabytes in size) that contain a directory of the other files Oracle needs.

The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are. You can view the location of the control files via the following:

SQL> show parameter control_files

NAME                 TYPE        VALUE
-------------------- ----------- ------------------------------
control_files        string      /opt/oracle/oradata/CDB/control01.ctl,
                                 /opt/oracle/oradata/CDB/control02.ctl

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the db_name parameter in the parameter file), the timestamp of the database as it was created, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy should exist, and the copies should be stored on separate disks to avoid losing them in case you have a disk failure. It is not fatal to lose your control files—it just makes recovery that much harder.

Control files are something a developer will probably never have to actually deal with. To a DBA, they are an important part of the database, but to a software developer they are not really relevant.

Redo Log Files

Redo log files are crucial to the Oracle database. These are the transaction logs for the database. They are generally used only for recovery purposes, but they can be used for the following as well:

•\ Instance recovery after a system crash
•\ Media recovery after a datafile restore from backup
•\ Standby database processing
•\ Input into GoldenGate—redo log mining processes for information sharing (a fancy way of saying replication)
•\ Inspection of historical database transactions by administrators, via the Oracle LogMiner utility

Their main purpose in life is to be used in the event of an instance or media failure or as a method of maintaining a standby database for failover. If the power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. If your disk drive containing your datafile fails permanently, Oracle will use archived redo logs, as well as online redo logs, to recover a backup of that drive to the correct point in time. Additionally, if you “accidentally” drop a table or remove some critical information and commit that operation, you can restore a backup and have Oracle restore it to the point just before the accident using these online and archived redo log files.

Virtually every operation you perform in Oracle generates some amount of redo to be written to the online redo log files. When you insert a row into a table, the end result of that insert is written to the redo logs. When you delete a row, the fact that you deleted that row is written. When you drop a table, the effects of that drop are written to the redo log. The data from the table you dropped is not written; however, the recursive SQL that Oracle performs to drop the table does generate redo. For example, Oracle will delete a row from the SYS.OBJ$ table (and other internal dictionary objects), and this will generate redo, and if various modes of supplemental logging are enabled, the actual DROP TABLE statement will be written into the redo log stream.

Some operations may be performed in a mode that generates as little redo as possible. For example, I can create an index with the NOLOGGING attribute. This means that the initial creation of the index data will not be logged, but any recursive SQL Oracle performed on my behalf will be. For example, the insert of a row into SYS.OBJ$ representing the existence of the index will be logged, as will all subsequent modifications of the index using SQL inserts, updates, and deletes. But the initial writing of the index structure to disk will not be logged.
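
For example, a hypothetical index creation in that minimal-redo mode is simply:

SQL> create index big_table_idx on big_table(object_name) nologging;

Bear in mind that such an operation is not fully protected by redo; the DBA would need to back up the affected datafiles (or be prepared to rebuild the index) after a NOLOGGING operation.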

I’ve referred to two types of redo log file: online and archived. We’ll take a look at each in the sections that follow. In Chapter 9, we’ll take another look at redo in conjunction with undo segments, to see what impact they have on you as a developer. For now, we’ll just concentrate on what they are and what their purpose is.

Oct 22, 2022
Temp Files- Files

Temporary datafiles (temp files) in Oracle are a special type of datafile. Oracle will use temporary files to store the intermediate results of large sort operations and hash operations, as well as to store global temporary table data, or resultset data, when there is insufficient memory to hold it all in RAM. Temporary tablespaces can also hold the UNDO generated by operations performed on global temporary tables. Permanent data objects, such as a table or an index, will never be stored in a temp file, but the contents of a temporary table and its indexes would be. So, you’ll never create your application tables in a temp file, but you might store data there when you use a temporary table.

Temp files are treated in a special way by Oracle. Normally, every change you make to an object will be recorded in the redo logs; these transaction logs can be replayed at a later date to “redo a transaction,” which you might do during recovery from failure. Temp files are excluded from this process. Specifically, transactions in global temporary tables (located in temp files) never have REDO generated for them, although they can have UNDO generated. Thus, there may be REDO generated when working with temporary tables, since UNDO is always protected by REDO, as you will see in detail in Chapter 9. The UNDO generated for global temporary tables is to support rolling back work you’ve done in your session, either due to an error processing data or because of some general transaction failure. A DBA never needs to back up a temporary datafile, and, in fact, attempting to do so would be a waste of time, as you can never recover a temporary datafile.

Note In Oracle 12c and above, the UNDO generated for global temporary tables may be stored in the temporary tablespace. By default, UNDO will be generated into the permanent UNDO tablespace, just like prior releases. The TEMP_UNDO_ENABLED parameter, settable at the system or session level, may be set to TRUE to enable the UNDO generated for global temporary tables to be stored in a temp file. In this manner, no REDO will be generated for these operations. We will investigate this further in Chapter 9.

One of the nuances of true temp files is that if the OS permits it, the temporary files will be created sparse—that is, they will not actually consume disk storage until they need to. You can see that easily in this example (on Oracle Linux):

SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  6.5G   41G  14% /

SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge.dbf' size 2g;

Tablespace created.

SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  6.5G   41G  14% /

SQL> !ls -l /tmp/temp_huge.dbf
-rw-r-----. 1 oracle oinstall 2147491840 Feb 15 18:06 /tmp/temp_huge.dbf

Note The UNIX/Linux command df shows “disk free” space. This command showed that I have 41GB free in the file system containing /tmp before I added a 2GB temp file to the database. After I added that file, I still had 41GB free in the file system.

Apparently, it didn’t take much storage to hold that file. If we look at the ls output, it appears to be a normal 2GB file, but it is, in fact, consuming only a few kilobytes of storage currently. So we could actually create hundreds of these 2GB temporary files, even though we have roughly 41GB of disk space free. Sounds great—free storage for all! The problem is, as we start to use these temp files and they start expanding out, we would rapidly hit errors stating “no more space.” Since the space is allocated or physically assigned to the file as needed by the OS, we stand a definite chance of running out of room (especially if after we create the temp files, someone else fills up the file system with other stuff).

How to solve this differs from OS to OS. On UNIX/Linux, you can use dd to fill the file with data, causing the OS to physically assign disk storage to the file, or use cp to create a nonsparse file, for example:

SQL> !cp --sparse=never /tmp/temp_huge.dbf /tmp/temp_huge_not_sparse.dbf

SQL> !df -h /tmp
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   50G  8.5G   39G  19% /

SQL> drop tablespace temp_huge including contents and datafiles;

Tablespace dropped.

SQL> create temporary tablespace temp_huge tempfile '/tmp/temp_huge_not_sparse.dbf' reuse;

Tablespace created.

After copying the sparse 2GB file to /tmp/temp_huge_not_sparse.dbf and creating the temporary tablespace using that temp file with the REUSE option, we are assured that temp file has allocated all of its file system space, and our database actually has 2GB of temporary space to work with.
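
Alternatively, instead of copying the sparse file, the dd approach mentioned a moment ago could have been used to build the nonsparse file directly; a sketch (the block size times the count works out to 2GB):

$ dd if=/dev/zero of=/tmp/temp_huge_not_sparse.dbf bs=1048576 count=2048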

Note In my experience, Windows NTFS does not create Oracle temp files as sparse files, unlike the UNIX/Linux variants discussed here. On the plus side, if you have to create a 15GB temporary tablespace on UNIX/Linux and have temp file support, you’ll find it happens very fast (instantaneously); just make sure you have 15GB free and reserve it in your mind.

Sep 22, 2022
Online Redo Log- Files-2

So, when this message appeared, processing was suspended in the database while DBWn hurriedly finished its checkpoint. Oracle gave all the processing power it could to DBWn at that point in the hope it would finish faster.

This is a message you never want to see in a nicely tuned database instance. If you do see it, you know for a fact that you have introduced artificial, unnecessary waits for your end users. This can always be avoided. The goal (and this is for the DBA, not the developer necessarily) is to have enough online redo log files allocated so that you never attempt to reuse a log file before the checkpoint (initiated by the log switch) completes. If you see this message frequently, it means a DBA has not allocated sufficient online redo logs for the application, or that DBWn needs to be tuned to work more efficiently.

Different applications will generate different amounts of redo log. A decision support system (DSS, query only) or DW system will naturally generate significantly less online redo logging than an OLTP (transaction processing) system would, day to day. A system that does a lot of image manipulation in Binary Large Objects (BLOBs) in the database may generate radically more redo than a simple order entry system. An order entry system with 100 users will probably generate a tenth the amount of redo 1000 users would generate. Thus, there is no “right” size for your redo logs, although you do want to ensure they are large enough for your unique workload.
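
One way to get a feel for your own workload's redo pattern is to count log switches per hour from V$LOG_HISTORY; a minimal sketch (you need access to that view, and the counts are only meaningful relative to your current redo log file size):

SQL> select trunc(first_time,'HH24') as hour, count(*) as log_switches
     from v$log_history
     group by trunc(first_time,'HH24')
     order by 1;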

You must take many things into consideration when setting both the size of and the number of online redo logs. Many of them are beyond the scope of this book, but I’ll list some of them to give you an idea:

•\ Peak workloads: You’d like your system to not have to wait for checkpoint-not-complete messages, to not get bottlenecked during your peak processing. You should size your redo logs not for average hourly throughput, but rather for your peak processing. If you generate 24GB of log per day, but 10GB of that log is generated between 9:00 am and 11:00 am, you’ll want to size your redo logs large enough to carry you through that two-hour peak. Sizing them for an average of 1GB per hour would probably not be sufficient.

•\ Lots of users modifying the same blocks: Here, you might want large redo log files. Since everyone is modifying the same blocks, you’d like to update them as many times as possible before writing them out to disk. Each log switch will fire a checkpoint, so you’d like to switch logs infrequently. This may, however, affect your recovery time.

•\ Mean time to recover: If you must ensure that a recovery takes as little time as possible, you may be swayed toward smaller redo log files, even if the previous point is true. It will take less time to process one or two small redo log files than a gargantuan one upon recovery. The overall system will run slower than it absolutely could day to day perhaps (due to excessive checkpointing), but the amount of time spent in recovery will be shorter. There are other database parameters that may also be used to reduce this recovery time, as an alternative to the use of small redo log files.
