Jul 22, 2024
Database Block Buffer Cache- Memory Structures

So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is potentially huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk.

This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we’ll starve other processes (e.g., we won’t leave enough room for a dedicated server to create its PGA, and we won’t even get started).

We have three places to store cached blocks from individual segments in the SGA:

• Default pool: The location where all segment blocks are normally cached. This is the original—and, previously, the only—buffer pool.
• Keep pool: An alternate buffer pool where by convention you assign segments that are accessed fairly frequently, but still get aged out of the default buffer pool due to other segments needing space.
• Recycle pool: An alternate buffer pool where by convention you assign large segments that you access very randomly and which would therefore cause excessive buffer flushing of many blocks from many segments. There’s no benefit to caching such segments because by the time you wanted the block again, it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so they would not cause those blocks to age out of the cache.

Note that in the keep and recycle pool descriptions I used the phrase “by convention.” There is nothing in place to ensure that you use either the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks.
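For reference, segments are assigned to a pool via the BUFFER_POOL storage attribute, and the keep and recycle pools are sized with their own parameters. A minimal sketch follows; the table names and sizes are purely illustrative, not from the original text:

SQL> alter system set db_keep_cache_size = 64m scope=both;
SQL> alter system set db_recycle_cache_size = 32m scope=both;
SQL> alter table lookup_codes  storage ( buffer_pool keep );
SQL> alter table audit_history storage ( buffer_pool recycle );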

The goal here was to give the DBA the ability to segregate segments to hot, warm, and “do not care to cache” areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves.

The cache would keep them in memory since they were very popular blocks. If you had some segments that were fairly popular but not really hot, these would be considered the warm blocks. These segments’ blocks could get flushed from the cache to make room for blocks you used infrequently (the “do not care to cache” blocks).

To keep these warm segments’ blocks cached, you could do one of the following:

•\ Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.

•\ Assign the “do not care to cache” segments to the recycle pool, keeping the recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decrease the overhead of managing them all).

Having to do one of these two things increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. ­Remember also that there is no sharing among them, so if the keep pool has lots of unused space, it won’t give it to the overworked default or recycle pool.

All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been looked at (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice).

There are up to four more optional caches, the DB_nK_CACHE_SIZE, to consider in addition to the default, keep, and recycle pools. These caches were added in support of multiple block sizes in the database. A database can have a default block size, which is the size of the blocks stored in the default, keep, or recycle pool, as well as up to four nondefault block sizes, as explained in Chapter 3.
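For example, to cache 16KB blocks in a database whose default block size is something else, you would size the corresponding cache and then create a tablespace with that block size. A sketch, with the size and file name chosen only for illustration:

SQL> alter system set db_16k_cache_size = 64m scope=both;
SQL> create tablespace ts_16k
     datafile '/opt/oracle/oradata/ts_16k_01.dbf' size 1g
     blocksize 16k;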

The blocks in these buffer caches are managed in the same way as the blocks in the original default pool—there are no special algorithm changes for them either. Let’s now move on to see how the blocks are managed in these pools.

Mar 22, 2024
Determining How the Memory Is Allocated- Memory Structures-2

This script sorts the big table T using automatic PGA memory management. Then, for that session, it captures all of the PGA/UGA memory settings as well as the sort-to-disk activity. In addition, the UNION ALL adds system-level metrics about the same (total PGA memory, total UGA memory, and so on). I ran that script against a database started with the following initialization settings:

memory_target=0
pga_aggregate_target=300m
sga_target=1500m

These settings show I was using automatic PGA memory management with a PGA_AGGREGATE_TARGET of 300MB, meaning I wanted Oracle to use up to about 300MB of PGA memory for sorting.

I set up another script to be run in other sessions to generate a large sorting load on the machine. This script loops and uses a built-in package, DBMS_ALERT, to see if it should continue processing. If it should, it runs the same big query, sorting the entire T table. When the simulation finished, a session could signal all of the sorting processes, the load generators, to “stop” and exit. Here’s the script (stored in a file named gen_load.sql) used to perform the sort:
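The script itself is not reproduced here; a minimal sketch of what it might look like, based on the description above (the alert name, the 'stop' message handling, and the exact shape of the sort query are illustrative assumptions), is:

declare
    l_msg    varchar2(1800);
    l_status number;
begin
    dbms_alert.register( 'WAITING' );               -- listen for the "stop" signal
    for i in 1 .. 999999
    loop
        -- poll (without blocking) to see if we have been told to stop
        dbms_alert.waitone( 'WAITING', l_msg, l_status, 0 );
        exit when l_status = 0;
        -- otherwise, sort the entire big table T once more
        for x in ( select count(*) cnt
                     from ( select * from t order by 1, 2, 3, 4 ) )
        loop
            null;
        end loop;
    end loop;
end;
/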

To observe the differing amounts of RAM allocated to the session I was measuring, I initially ran the SELECT in isolation—as the only session. I captured the statistics and saved them into the SESS_STATS table, along with the count of active sessions. Then I added 25 sessions to the system (i.e., I ran the preceding benchmark script, gen_load.sql, with its for i in 1 .. 999999 loop in 25 new sessions). I waited a short period of time—one minute—for the system to adjust to this new load, and then I created a new session and ran the single sort query from earlier, capturing the metrics the first time through the loop. I did this repeatedly, for up to 300 concurrent users.

Tip On the GitHub source code site for this book, you can download the scripts used for this experiment. In the ch04 directory, the run.sql script automates the test described in this section.

It should be noted that I asked the database instance to do an impossible thing here. At 300 users, we would be very close to the PGA_AGGREGATE_TARGET setting just by having them all logged in, let alone actually doing any work! This drives home the point that the PGA_AGGREGATE_TARGET is just that: a target, not a directive. We can and will exceed this value for various reasons.

Now we are ready to report on the findings; for reasons of space, we’ll stop the output at 275 users, since the data starts to get quite repetitive:

SQL> column active format 999
SQL> column pga format 999.9
SQL> column "tot PGA" format 999.9
SQL> column pga_diff format 999.99
SQL> column "temp write" format 9,999
SQL> column "tot writes temp" format 99,999,999
SQL> column writes_diff format 9,999,999
SQL> select active,

Before we analyze the results, let’s look at the query I used for reporting. My query uses a feature called pivot to pivot a resultset. Here’s an alternate way to write lines 11 through 22 of that SQL query (without the pivot feature):

select active,
       max( decode( name, 'session pga memory', val ) ) pga,
       max( decode( name, 'total: session pga memory', val ) ) as "tot PGA",
       max( decode( name, 'physical writes direct temporary tablespace', val ) ) as "temp write",

This part of the query retrieved the records from the table of metrics when there were fewer than 275 active sessions, converted the memory metrics (UGA/PGA memory) from bytes into megabytes, and then pivoted—turned rows into columns—on the four interesting metrics. Once we got those four metrics into a single record, we used analytics (the LAG() function specifically) to add to each row the prior row’s total observed PGA and total observed I/O to temp, so we could easily see the incremental differences in these values. Back to the data—as you can see, when I had a few active sessions, my sorts were performed entirely in memory.

For an active session count of 1 to somewhere less than 50, I could sort entirely in memory. However, by the time I had 50 users logged in and actively sorting, the database started reining in the amount of memory I was allowed to use at a time. It would take a couple of minutes for the amount of PGA in use to fall back within acceptable limits (the 300MB request), but at these low concurrent user levels it eventually would.

The amount of PGA memory allocated to the session we were watching dropped from 15.2MB to 7.7MB and settled at around 5.2MB (remember, parts of that PGA are not for work area (sorting) allocations, but are for other operations; just the act of logging in created a 0.5MB PGA allocation).

The total PGA in use by the system remained within tolerable limits until somewhere around 126 users. At that point, I started to exceed the PGA_AGGREGATE_TARGET on a regular basis and continued to do so until the end of the test. I gave the database instance in this case an impossible task; the very act of having 126 users, most executing PL/SQL, plus the sort they were all requesting, just did not fit into the 300MB of RAM I had targeted. It simply could not be done.

Each session therefore used as little memory as possible, but had to allocate as much memory as it needed. By the time I finished this test, the active sessions were using a total of about 560MB of PGA memory—as little as they could.

Automatic PGA memory management was designed specifically to allow a small community of users to use as much RAM as possible when it is available. In this mode, it backs off on this allocation as the load increases and increases the amount of RAM allocated for individual operations as the load decreases.

Mar 22, 2024
Determining How the Memory Is Allocated- Memory Structures-1

Questions that come up frequently are “How is this memory allocated?” and “What will be the amount of RAM used by my session?” These are hard questions to answer for the simple reason that the algorithms for serving out memory under the automatic scheme are not documented and can and will change from release to release. When using things that begin with “A”—for automatic—you lose a degree of control, as the underlying algorithms decide what to do and how to control things.

We can make some observations based on information from Oracle Support notes 147806.1 and 223730.1:

• The PGA_AGGREGATE_TARGET is a goal of an upper limit. It is not a value that is preallocated when the database is started up. You can observe this by setting the PGA_AGGREGATE_TARGET to a value much higher than the amount of physical memory you have available on your server. You will not see any large allocation of memory as a result (one caveat: if you’ve set MEMORY_TARGET and then set PGA_AGGREGATE_TARGET to a value larger than MEMORY_TARGET, on instance startup Oracle throws an ORA-00838 error and won’t let you start your instance).

• The amount of PGA memory available for a given session is derived from the setting of PGA_AGGREGATE_TARGET. The algorithm for determining the maximum size used by a process varies by database version. The amount of PGA memory a process is allocated is typically a function of the amount of memory available and the number of processes competing for space.

• As the workload on your instance goes up (more concurrent queries, concurrent users), the amount of PGA memory allocated to your work areas will go down. The database will try to keep the sum of all PGA allocations under the threshold set by PGA_AGGREGATE_TARGET. This is analogous to having a DBA sit at a console all day, setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters based on the amount of work being performed in the database. We will directly observe this behavior shortly in a test.

OK, so how can we observe the different work area sizes being allocated to our session? By running some test scripts to observe the memory used by our session and the amount of I/O to temp we performed. I performed the following test on an Oracle Linux machine with four CPUs and dedicated server connections. We begin by creating a table to hold the metrics we’d like to monitor (the following code is placed in a file named stats.sql):
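The stats.sql script itself is not reproduced here; as a rough sketch, it creates the SESS_STATS table referenced earlier (the exact column definitions are an assumption based on the descriptions that follow):

SQL> create table sess_stats
     ( name   varchar2(64),
       value  number,
       active number
     );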

The columns in this table that we’ll be using for the metrics represent the following:

• NAME: The name of the statistic we are gathering (PGA and UGA information from V$SESSTAT for the current session, plus all of the memory information for the database instance as well as temporary tablespace writes).

• VALUE: The value of the given metric.

• ACTIVE: The number of other sessions doing work in the instance. Before we start, we assume an “idle” instance; we are the only user session right now, hence the value of zero.

Next, create a table T as follows:
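The exact DDL is not shown here; a minimal sketch of building a big table T from the data dictionary (the doubling inserts are an assumption, simply to make T large enough to force sorting to disk) might be:

SQL> create table t as select * from all_objects;
SQL> insert into t select * from t;
SQL> insert into t select * from t;
SQL> commit;
SQL> exec dbms_stats.gather_table_stats( user, 'T' );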

Jan 22, 2024
Automatic PGA Memory Management- Memory Structures

In almost all cases, you should be using automatic PGA memory management. The entire goal of automatic PGA memory management is to maximize the use of RAM while at the same time not using more RAM than you want. You can enable the automatic management of the PGA in two ways:

• Set MEMORY_TARGET to zero, and then set PGA_AGGREGATE_TARGET to a nonzero value. The PGA_AGGREGATE_TARGET parameter controls how much memory the instance should allocate, in total, for all work areas used to sort or hash data. Its default value varies by version and may be set by various tools such as the DBCA. In this mode, WORKAREA_SIZE_POLICY is set to AUTO (which is its default value).

• Use the AMM feature by setting MEMORY_TARGET to a nonzero value, and leave PGA_AGGREGATE_TARGET set to zero. This effectively lets Oracle manage the allocation of memory to the PGA. However, if you’re in a Linux environment that uses HugePages, you should not be using the AMM method to manage memory (more on this in the “System Global Area (SGA) Memory Management” section of this chapter).

The prior two techniques are discussed in the following subsections.

Setting PGA_AGGREGATE_TARGET

Most of the databases I’ve worked on in the past several years use automatic PGA memory management and automatic SGA memory management. For my test database, automatic PGA memory management and automatic SGA memory management are enabled as follows (you’ll use memory sizes appropriate for your environment, given the workload and amount of physical memory available):

$ sqlplus / as sysdba
SQL> alter system set memory_target=0 scope=spfile;
SQL> alter system set pga_aggregate_target=300M scope=spfile;
SQL> alter system set sga_target=1500M scope=spfile;

Then restart the instance to instantiate the parameters (using startup force here, which performs a shutdown abort and then restarts the instance):

SQL> startup force;

You don’t have to enable both PGA and SGA memory management together (as shown in the prior example). You could enable one for automatic management and leave the other for manual management. I usually don’t implement it that way, but you can do it.

Also, some places I’ve worked have set the PGA_AGGREGATE_LIMIT parameter as well. In most scenarios, you don’t need to set this parameter, as it will default to a reasonable value. If for some reason you need more control, then feel free to set it. Keep in mind that if you set this parameter too low, you’ll get an ORA-00093 error, and your instance will fail to start. In this situation, you’ll need to create a text-based init.ora file, restart your instance, and re-create your spfile (see Chapter 3 for details on how to do this).
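If you do decide to set it, a sketch of doing so (the 2G value is only an illustration) would be:

SQL> alter system set pga_aggregate_limit=2G scope=both;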

Setting MEMORY_TARGET

The automatic memory management of the PGA is enabled as follows (adjust the memory sizes per your environment):

$ sqlplus / as sysdba
SQL> alter system set memory_target=1500M scope=spfile;
SQL> alter system set pga_aggregate_target=0 scope=spfile;
SQL> alter system set sga_target=0 scope=spfile;

And at this point, you can restart your instance to instantiate the parameters. If you want to give Oracle recommendations on minimal values to use for the SGA_TARGET and PGA_AGGREGATE_TARGET, you can set these to a nonzero value (as long as the sum of these is less than the value of MEMORY_TARGET):

SQL> alter system set sga_target=500M scope=spfile;
SQL> alter system set pga_aggregate_target=400M scope=spfile;

Note This PGA_AGGREGATE_TARGET parameter is optional for pluggable databases. When this parameter is set in a pluggable database, it specifies the target aggregate PGA size for the pluggable database.
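For example, a sketch of setting it from within a pluggable database (PDB1 is just an example name, and the 200M value is illustrative) might look like:

SQL> alter session set container=PDB1;
SQL> alter system set pga_aggregate_target=200M;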

Now that we’ve covered enabling automatic PGA memory management, let’s next look at how the PGA memory is allocated.

Dec 22, 2023
Change Tracking File- Files

The change tracking file is an optional file for use with Oracle 10g Enterprise Edition and above. The sole purpose of this file is to track which blocks have been modified since the last incremental backup.

With this, the Recovery Manager (RMAN) tool can back up only the database blocks that have actually been modified without having to read the entire database.

As Oracle is running, and as blocks are modified, Oracle optionally maintains a file that tells RMAN what blocks have been changed. Creating this change tracking file is rather simple and is accomplished via the ALTER DATABASE command:

$ mkdir -p /opt/oracle/product/btc
$ sqlplus / as sysdba
SQL> alter database enable block change tracking using file '/opt/oracle/product/btc/changed_blocks.btc';
Database altered.

Caution I’ll say this from time to time throughout the book: please bear in mind that commands that set parameters, modify the database, or make fundamental changes should not be done lightly and definitely should be tested prior to performing them on your “real” system. The preceding command will, in fact, cause the database to do more work. It will consume resources.

To turn off and remove the block change tracking file, you’d use the ALTER DATABASE command once again:

SQL> alter database disable block change tracking;
Database altered.

Note that this command will erase the block change tracking file. It does not just disable the feature—it removes the file as well.

Note On certain operating systems, such as Windows, you might find that if you run my example—creating a block change tracking file and then disabling it—the file appears to still exist. This is an OS-specific issue—it does not happen on many operating systems. It will happen only if you CREATE and DISABLE the change tracking file from a single session. The session that created the block change tracking file will leave that file open, and some operating systems will not permit you to erase a file that has been opened by a previous process (e.g., the session process that created the file). This is harmless; you just need to remove the file yourself later.

You can enable this new block change tracking feature in either ARCHIVELOG or NOARCHIVELOG mode. But remember, a database in NOARCHIVELOG mode, where the redo log generated daily is not retained, can’t recover all changes in the event of a media (disk or device) failure! A NOARCHIVELOG mode database will lose data someday. We will cover these two database modes in more detail in Chapter 9.
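A quick way to check which mode a database is running in is a simple query against V$DATABASE (the output will be either ARCHIVELOG or NOARCHIVELOG):

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG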

Flashback Logs

Flashback logs are used to support the FLASHBACK DATABASE command. Flashback logs contain “before images” of modified database blocks that can be used to return the database to the way it was at some prior point in time.

Nov 22, 2023
Flashback Database- Files

The FLASHBACK DATABASE command was introduced to speed up the otherwise slow process of point-in-time database recovery. It can be used in place of a full database restore and a rolling forward using archive logs, and it is primarily designed to speed up the recovery from an “accident.”

For example, let’s take a look at what a DBA might do to recover from an accidentally dropped schema, in which the right schema was dropped, just in the wrong database (it was meant to be dropped in the test environment). The DBA immediately recognizes the mistake they have made and shuts down the database right away. Now what?

Prior to the FLASHBACK DATABASE capability, what would probably happen is this:

1. The DBA would shut down the database.

2. The DBA would restore the last full backup of the database from tape (typically), generally a long process. Typically, this would be initiated with RMAN via RESTORE DATABASE UNTIL <point in time>.

3. The DBA would restore all archive redo logs generated since the backup that were not available on the system.

4. Using the archive redo logs (and possibly information in the online redo logs), the DBA would roll the database forward and stop rolling forward at a point in time just before the erroneous DROP USER command. Steps 3 and 4 in this list would typically be initiated with RMAN via RECOVER DATABASE UNTIL <point in time> (see the sketch after this list).

5. The database would be opened with the RESETLOGS option.
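As a rough sketch of what such an RMAN point-in-time restore and recovery might look like (the timestamp is purely illustrative):

RMAN> run {
        set until time "to_date('2023-11-01 14:00:00','yyyy-mm-dd hh24:mi:ss')";
        restore database;
        recover database;
      }
RMAN> alter database open resetlogs;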

This was a nontrivial process with many steps and would generally consume a large piece of time (time when no one could access the database, of course). The causes of a point-in-time recovery like this are many: an upgrade script gone awry, an upgrade gone bad, an inadvertent command issued by someone with the privilege to issue it (a mistake, probably the most frequent cause), or some process introducing data integrity issues into a large database (again, an accident; maybe it was run twice instead of just once, or maybe it had a bug). Whatever the reason, the net effect was a large period of downtime.

The steps to recover, assuming you configured the Flashback Database capability, would be as follows:

1. The DBA shuts down the database.

2. The DBA startup-mounts the database and issues the Flashback Database command, using either an SCN (the Oracle internal clock), a restore point (which is a pointer to an SCN), or a timestamp (wall clock time), which would be accurate to within a couple of seconds (see the sketch after this list).

3. The DBA opens the database with resetlogs.
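A minimal sketch of those steps, assuming a restore point named before_change already exists (the name is illustrative):

SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to restore point before_change;
SQL> alter database open resetlogs;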

When flashing back, you can flash back either the entire container database (including all pluggable databases) or just a particular pluggable database. When flashing back a pluggable database, you would use restore points that you created for the entire database or for a specific pluggable database. Flashing back a pluggable database does not require that you open the database with resetlogs, but it does require that you issue the RECOVER DATABASE command while connected to the pluggable database you flashed back.

To use the flashback feature, your database must be in archivelog mode, and you must have a Fast Recovery Area (FRA) set up (because the flashback logs are stored in the FRA). To use normal restore points, you must enable flashback logging in the database. Guaranteed restore points do not require flashback logging in the database.

To view the flashback logging status, run the following query:

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
NO

To enable flashback logging, do as follows in the root container:

$ sqlplus / as sysdba
SQL> alter database flashback on;

The final point here is that you need to set up the flashback capability before you ever need to use it. It is not something you can enable after the damage is done; you must make a conscious decision to use it, whether you have it on continuously or whether you use it to set restore points.
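For instance, a guaranteed restore point could be created ahead of a planned risky change with something like this (the restore point name is illustrative):

SQL> create restore point before_upgrade guarantee flashback database;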

Oct 22, 2023
Fast Recovery Area (FRA)- Files

The FRA in Oracle is a location where the database will manage many of the files related to database backup and recovery. In this area (an area being a part of a disk set aside for this purpose; a directory, for example), you could find the following:

• RMAN backup pieces (full and/or incremental backups)
• RMAN image copies (byte-for-byte copies of datafiles and controlfiles)
• Online redo logs
• Archived redo logs
• Multiplexed control files
• Flashback logs

Oracle uses this new area to manage these files, so the server will know what is on disk and what is not on disk (and perhaps on tape elsewhere). Using this information, the database can perform operations like a disk-to-disk restore of a damaged datafile or the flashing back (a “rewind” operation) of the database to undo an operation that should not have taken place. For example, you could use the FLASHBACK DATABASE command to put the database back the way it was five minutes ago (without doing a full restore of the database and a point-in-time recovery). That would allow you to “undrop” that accidentally dropped user account.

To set up an FRA, you need to set two parameters: db_recovery_file_dest_size and db_recovery_file_dest. You can view the current settings of these parameters via

SQL> show parameter db_recovery_file_dest

NAME                          TYPE        VALUE
----------------------------- ----------- ---------------
db_recovery_file_dest         string      /opt/oracle/fra
db_recovery_file_dest_size    big integer 100G
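To set (or change) them, two ALTER SYSTEM commands suffice; a sketch using the values shown above (note that the size must be set before the destination):

SQL> alter system set db_recovery_file_dest_size=100G scope=both;
SQL> alter system set db_recovery_file_dest='/opt/oracle/fra' scope=both;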

The Fast Recovery Area is more of a logical concept. It is a holding area for the file types discussed in this chapter. Its use is optional—you don’t need to use it, but if you want to use some advanced features, such as the Flashback Database, you must use this area to store the information.

Data Pump Files

Data Pump is a file format used by at least two tools in Oracle. External tables can load and unload data in the Data Pump format, and the import/export tools IMPDP and EXPDP use this file format.

Data Pump files are cross-platform (portable) binary files that contain metadata (not stored in CREATE/ALTER statements, but rather in XML) and possibly data.

That they use XML as a metadata representation structure is actually relevant to you and me as end users of the tools. IMPDP and EXPDP have some sophisticated filtering and translation capabilities.

This is in part due to the use of XML and to the fact that a CREATE TABLE statement is not stored as a CREATE TABLE, but rather as a marked-up document.

This permits easy implementation of a request like “Please replace all references to tablespace FOO with tablespace BAR.” IMPDP just has to apply a simple XML transformation to accomplish the same. FOO, when it refers to a TABLESPACE, would be surrounded by <TABLESPACE>FOO</TABLESPACE> tags (or some other similar representation).
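As a concrete, if hypothetical, illustration, that same request is expressed with IMPDP’s REMAP_TABLESPACE parameter (the dump file name is an example; the TMP directory object is created later in this section):

$ impdp scott/tiger directory=TMP dumpfile=mydata.dmp remap_tablespace=FOO:BAR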

In Chapter 15, we’ll take a closer look at these tools. Before we get there, however, let’s see how we can use this Data Pump format to quickly extract some data from database A and move it to database B. We’ll be using an “external table in reverse” here.

External tables give us the ability to read flat files—plain old text files—as if they were database tables. We have the full power of SQL to process them. They are read-only and designed to get data from outside Oracle in. External tables can go the other way: they can be used to get data out of the database in the Data Pump format to facilitate moving the data to another machine or another platform. To start this exercise, we’ll need a DIRECTORY object, telling Oracle the location to unload to:

SQL> create or replace directory tmp as '/tmp';
Directory created.

SQL> create table all_objects_unload
     organization external
     ( type oracle_datapump
       default directory TMP
       location( 'allobjects.dat' )
     )
     as
     select * from all_objects
     /
Table created.

And that literally is all there is to it: we have a file in /tmp named allobjects.dat that contains the contents of the query select * from all_objects. We can peek at this information:

SQL> !strings /tmp/allobjects.dat | head
x86_64/Linux 2.4.xx
AL32UTF8
19.00.00.00.00
001:001:000001:000001
i<?xml version="1.0" encoding="UTF-8"?...

That’s just the head, or top, of the file. Now, using a binary SCP, you can move that file to any other platform where you have Oracle installed and by issuing a CREATE DIRECTORY statement (to tell the database where the file is) and a CREATE TABLE statement, such as this:

SQL> create table t
     ( OWNER          VARCHAR2(30),
       OBJECT_NAME    VARCHAR2(30),
       SUBOBJECT_NAME VARCHAR2(30),
       OBJECT_ID      NUMBER,
       DATA_OBJECT_ID NUMBER,
       OBJECT_TYPE    VARCHAR2(19),
       CREATED        DATE,
       LAST_DDL_TIME  DATE,
       TIMESTAMP      VARCHAR2(19),
       STATUS         VARCHAR2(7),
       TEMPORARY      VARCHAR2(1),
       GENERATED      VARCHAR2(1),
       SECONDARY      VARCHAR2(1)
     )
     organization external
     ( type oracle_datapump
       default directory TMP
       location( 'allobjects.dat' )
     );

You would be set to read that unloaded data using SQL immediately. That is the power of the Data Pump file format: immediate transfer of data from system to system, over “sneakernet” if need be. Think about that the next time you’d like to take a subset of data home to work with over the weekend while testing.

Even if the database character sets differ (they did not in this example), Oracle has the ability now to recognize the differing character sets due to the Data Pump format and deal with them. Character set conversion can be performed on the fly as needed to make the data “correct” in each database’s representation.

Again, we’ll come back to the Data Pump file format in Chapter 15, but this section should give you an overall feel for what it is about and what might be contained in the file.

Sep 22, 2023
Flat Files- Files

Flat files have been around since the dawn of electronic data processing. We see them literally every day. The text alert log described previously is a flat file. I found the following definition for “flat file” on the Internet and feel it pretty much wraps things up:

An electronic record that is stripped of all specific application (program) formats. This allows the data elements to be migrated into other applications for manipulation. This mode of stripping electronic data prevents data loss due to hardware and proprietary software obsolescence.

A flat file is simply a file whereby each “line” is a “record,” and each line has some text delimited, typically by a comma or pipe (vertical bar). Flat files are easily read by Oracle using either the legacy data loading tool SQLLDR or external tables. In fact, I will cover this in detail in Chapter 15.

Occasionally, a user will request data in a flat file format such as one of the following:

• Character (or comma) separated values (CSV): These are easily imported into tools such as spreadsheets.
• HyperText Markup Language (HTML): Used to format pages displayed in web browsers.
• JavaScript Object Notation (JSON): A standard text-based format for representing structured data, commonly used for transmitting data in web applications.

I’ll briefly demonstrate generating each of the file types in the following sections.

Generating a CSV File

You can easily generate CSV flat files from SQL*Plus. You can either use the -m 'csv on' switch from the operating system prompt or use SET MARKUP CSV ON after starting a SQL*Plus session, for example:

$ sqlplus scott/tiger@PDB1
SQL> set markup csv on delimiter , quote off
SQL> SELECT * FROM dept;

DEPTNO,DNAME,LOC
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
30,SALES,CHICAGO
40,OPERATIONS,BOSTON

Here is an example of using the -m 'csv on' switch at the command line:

$ sqlplus -markup 'csv on quote off' scott/tiger@PDB1
SQL> select * from emp where rownum < 3;

EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO
7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30

Tip When you use the -markup 'csv on' switch, SQL*Plus sets variables such as ROWPREFETCH, STATEMENTCACHE, and PAGESIZE to optimize I/O performance.

Generating HTML

Similar to generating a CSV file, you can generate an HTML file from SQL*Plus by setting SET MARKUP HTML ON before running your query:

SQL> set markup html on
SQL> select * from dept;

The query output is rendered as an HTML table (the tags are omitted here for brevity). You can also specify the -markup 'html on' switch at the command line to generate HTML output:

$ sqlplus -markup 'html on' scott/tiger@PDB1

In this manner, you can easily produce HTML output based on the contents of a table.
Aug 22, 2023
The Process Global Area and User Global Area- Memory Structures

The PGA is a process-specific piece of memory. In other words, it is memory specific to a single operating system process or thread.

This memory is not accessible by any other process or thread in the system. It is typically allocated via either of the C runtime calls malloc() or mmap(), and it may grow (or even shrink) at runtime.

The PGA is never allocated in Oracle’s SGA; it is always allocated locally by the process or thread—the P in PGA stands for process or program; it is not shared.

The UGA is, in effect, your session’s state. It is memory that your session must always be able to get to. The location of the UGA is dependent on how you connect to Oracle.

If you connect via a shared server, the UGA must be stored in a memory structure that every shared server process has access to—and that’s the SGA. In this way, your session can use any one of the shared servers, since any of them can read and write your session’s data.

On the other hand, if you are using a dedicated server connection, there’s no need for universal access to your session state, and the UGA becomes virtually synonymous with the PGA; it will, in fact, be contained in the PGA of your dedicated server.

When you look at the system statistics, you’ll find the UGA reported in the PGA in dedicated server mode (the PGA will be greater than or equal to the UGA memory used; the PGA memory size will include the UGA size as well).
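One way to see this for your own session is a simple query against V$MYSTAT and V$STATNAME (a sketch):

SQL> select sn.name, round( ms.value/1024/1024, 1 ) mbytes
       from v$mystat ms, v$statname sn
      where ms.statistic# = sn.statistic#
        and sn.name in ( 'session pga memory', 'session uga memory' );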

So, the PGA contains process memory and may include the UGA. The other areas of PGA memory are generally used for in-memory sorting, bitmap merging, and hashing. It would be safe to say that, besides the UGA memory, these are the largest contributors by far to the PGA.

There are two ways to manage memory in the PGA: manual and automatic. The manual method should not be used (unless you’re on an old version of Oracle and don’t have a choice).

The automatic PGA memory management is the recommended technique that you should use. The automatic method is much simpler and more efficient in managing memory. The manner in which memory is allocated and used differs greatly in each case, so we’ll discuss each in turn.

Jul 22, 2023
Manual PGA Memory Management- Memory Structures

There are only a few scenarios that I can think of where manual PGA memory management may be appropriate. One is if you’re running an old version of Oracle and do not have the option to upgrade.

Another situation may be that you run large batch jobs during periods when they are the only activity in the instance; in that case, manual PGA management may provide performance benefits.

Therefore, it’s possible you may find yourself in an environment where this type of PGA memory management is used. Other than those scenarios, the manual management style should be avoided.

In manual PGA memory management, the following are the parameters that have the largest impact on the size of your PGA, outside of the memory allocated by your session for PL/SQL tables and other variables:

• SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk (using disk space in the temporary tablespace the user is assigned to).

• SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and SORT_AREA_RETAINED_SIZE is 256KB, your server process would use up to 512KB of memory to sort data during the initial processing of the query. When the sort is complete, the sorting area would “shrink” down to 256KB, and any sorted data that does not fit in that 256KB would be written out to the temporary tablespace.

• HASH_AREA_SIZE: The amount of memory your server process can use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory, and anything that didn’t fit in the hash area region of memory would be stored in the temporary tablespace by the join key.

Note Before using the parameters in the prior list, you must set the WORKAREA_SIZE_POLICY parameter to MANUAL.

These parameters control the amount of space Oracle will use to sort or hash data in memory before using the temporary tablespace on disk, and how much of that memory segment will be retained after the sort is done. The SORT_AREA_SIZE (minus SORT_AREA_RETAINED_SIZE) calculated value is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE value will be in your UGA.
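As a sketch, enabling manual management and setting a deliberately large 1GB sort area for a session might look like this (the values are only an illustration):

SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 1073741824;
SQL> alter session set sort_area_retained_size = 1073741824;
SQL> alter session set hash_area_size = 1073741824;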

Here are the important things to remember about using the *_AREA_SIZE parameters:

• These parameters control the maximum amount of memory used by a SORT, HASH, or BITMAP MERGE operation.

• A single query may have many operations taking place that use this memory, and multiple sort/hash areas could be created. Remember that you may have many cursors opened simultaneously, each with its own SORT_AREA_RETAINED_SIZE needs. So, if you set the sort area size to 10MB, you could use 10, 100, 1000, or more megabytes of RAM in your session. These settings are not session limits; rather, they are limits on a single operation, and your session could have many sorts in a single query or many queries open that require a sort.

• The memory for these areas is allocated on an “as needed” basis. If you set the sort area size to 1GB as we did, it doesn’t mean you’ll allocate 1GB of RAM. It only means that you’ve given the Oracle process the permission to allocate that much memory for a sort/hash operation.

Now that we’ve briefly reviewed manual PGA memory management, let’s move on to what you should be using, automatic PGA memory management.
