Jul 22, 2024
Database Block Buffer Cache- Memory Structures

So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is potentially huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk.

This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we’ll starve other processes (e.g., we won’t leave enough room for a dedicated server to create its PGA, and we won’t even get started).

We have three places to store cached blocks from individual segments in the SGA:

• Default pool: The location where all segment blocks are normally cached. This is the original—and, previously, the only—buffer pool.
• Keep pool: An alternate buffer pool where by convention you assign segments that are accessed fairly frequently, but still get aged out of the default buffer pool due to other segments needing space.
• Recycle pool: An alternate buffer pool where by convention you assign large segments that you access very randomly and which would therefore cause excessive buffer flushing of many blocks from many segments. There's no benefit to caching such segments because by the time you wanted the block again, it would have been aged out of the cache. You would separate these segments out from the segments in the default and keep pools so they would not cause those blocks to age out of the cache.

Note that in the keep and recycle pool descriptions I used the phrase “by convention.” There is nothing in place to ensure that you use either the keep pool or the recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks.

The goal here was to give the DBA the ability to segregate segments to hot, warm, and “do not care to cache” areas. The theory was that objects in the default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves.

The cache would keep them in memory since they were very popular blocks. If you had some segments that were fairly popular but not really hot, these would be considered the warm blocks. These segments’ blocks could get flushed from the cache to make room for blocks you used infrequently (the “do not care to cache” blocks).

To keep these warm segments’ blocks cached, you could do one of the following:

• Assign these segments to the keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.

• Assign the "do not care to cache" segments to the recycle pool, keeping the recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decrease the overhead of managing them all).

Having to do one of these two things increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing among them, so if the keep pool has lots of unused space, it won't give it to the overworked default or recycle pool.

All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been looked at (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice).
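For what it's worth, the mechanics of using the pools are straightforward. Here's a sketch (the table names are hypothetical, and each alternate pool must be granted memory via its own parameter before it will cache anything; with automatic SGA memory management in place, these parameters can typically be set online):

SQL> alter system set db_keep_cache_size = 64m;
SQL> alter system set db_recycle_cache_size = 32m;
SQL> alter table hot_lookup_table storage ( buffer_pool keep );
SQL> alter table big_random_table storage ( buffer_pool recycle );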

There are up to four more optional caches, the DB_nK_CACHE_SIZE, to consider in addition to the default, keep, and recycle pools. These caches were added in support of multiple block sizes in the database. A database can have a default block size, which is the size of the blocks stored in the default, keep, or recycle pool, as well as up to four nondefault block sizes, as explained in Chapter 3.
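For example, before you can create a tablespace that uses a nondefault 16KB block size, the matching cache must exist. A sketch, assuming an 8KB default block size and Oracle Managed Files:

SQL> alter system set db_16k_cache_size = 64m;
SQL> create tablespace ts_16k datafile size 100m blocksize 16k;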

The blocks in these buffer caches are managed in the same way as the blocks in the original default pool—there are no special algorithm changes for them either. Let’s now move on to see how the blocks are managed in these pools.

Jun 22, 2024
Fixed SGA- Memory Structures

The fixed SGA is a component of the SGA that varies in size from platform to platform and from release to release. It is “compiled” into the Oracle binary itself at installation time (hence the name “fixed”). The fixed SGA contains a set of variables that point to the other components of the SGA, as well as variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control, and it is generally very small. Think of this area as a “bootstrap” section of the SGA—something Oracle uses internally to find the other bits and pieces of the SGA.
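You can see its size on your own instance. For example (the value will vary by platform and release):

SQL> select name, bytes from v$sgainfo where name = 'Fixed SGA Size';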

Redo Buffer

The redo buffer is where data that needs to be written to the online redo logs will be cached temporarily, before it is written to disk. Since a memory-to-memory transfer is much faster than a memory-to-disk transfer, use of the redo log buffer can speed up database operations. The data will not reside in the redo buffer for very long. In fact, LGWR initiates a flush of this area to disk in one of the following scenarios:

• Every three seconds
• Whenever a COMMIT or ROLLBACK is issued
• When LGWR is asked to switch log files
• When the redo buffer gets one-third full or contains 1MB of cached redo log data

Oracle recommends setting the redo log buffer size to a minimum of 8MB. If you're using flashback functionality and have an SGA greater than 4GB, then Oracle recommends setting the log buffer to at least 64MB. If you're using Data Guard with asynchronous redo transport and have a high redo generation rate, then Oracle recommends setting this to at least 256MB.
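For example, to follow the Data Guard recommendation (LOG_BUFFER is a static parameter, so the change takes effect only after a restart):

SQL> alter system set log_buffer = 256m scope=spfile;
SQL> startup force;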

A large system with lots of concurrent transactions might benefit somewhat from a large redo log buffer because while LGWR (the process responsible for flushing the redo log buffer to disk) is writing a portion of the log buffer, other sessions could be filling it up. In general, a long-running transaction that generates a lot of redo will benefit the most from a larger than normal log buffer, as it will be continuously filling up part of the redo log buffer while LGWR is busy writing out some of it (we’ll cover the phenomenon of writing uncommitted data at length in Chapter 9). The larger and longer the transaction, the more benefit it could receive from a generous log buffer.

The default size of the redo buffer, as controlled by the LOG_BUFFER parameter, varies widely by operating system, database version, and other parameter settings. Rather than try to explain what the most common default size is (there isn’t such a thing), I’ll refer you to the documentation for your release of Oracle (the Oracle Database Reference guide). My default LOG_BUFFER—given the instance we just started earlier with a 1.5GB SGA—is shown by the following query:

SQL> select value, isdefault from v$parameter where name = 'log_buffer';

The size is about 7MB. The minimum size of the default log buffer is OS dependent. If you’d like to find out what that is, just set your LOG_BUFFER to 1 byte and restart your database. For example, on my Oracle Linux instance, I see the following:

SQL> alter system set log_buffer=1 scope=spfile;
SQL> startup force;
SQL> show parameter log_buffer

The smallest log buffer I can really have, regardless of my settings, is going to be 1.680MB on this system.

Note For most database applications, the default value for the LOG_BUFFER parameter is sufficient. If you see a large number of waits associated with the 'log buffer space' wait event, then consider increasing the LOG_BUFFER parameter.
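One quick way to check for that wait event instance-wide is a query such as the following:

SQL> select event, total_waits, time_waited
       from v$system_event
      where event = 'log buffer space';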

May 22, 2024
PGA and UGA Wrap-Up- Memory Structures-2

The parameters that have the greatest effect on the overall size of the SGA are as follows:

• JAVA_POOL_SIZE: Controls the size of the Java pool.
• SHARED_POOL_SIZE: Controls the size of the shared pool (to some degree).
• LARGE_POOL_SIZE: Controls the size of the large pool.
• STREAMS_POOL_SIZE: Controls the size of the Streams pool.
• DB_*_CACHE_SIZE: Eight of these CACHE_SIZE parameters control the sizes of the various buffer caches available.
• LOG_BUFFER: Controls the size of the redo log buffer (to some degree).
• SGA_TARGET: Used with automatic SGA memory management and can be changed online.
• SGA_MAX_SIZE: Used to control the maximum size of the SGA.
• MEMORY_TARGET: Used with automatic memory management (both PGA and SGA are automatically sized).
• MEMORY_MAX_TARGET: Used to control the maximum amount of memory Oracle should strive to use across both the PGA and SGA under automatic memory management. This is really just a target; the PGA may exceed the optimum size if the number of users increases beyond some level, or if one or more sessions allocate large untunable bits of memory, as demonstrated earlier.
• INMEMORY_SIZE: Sets the size of the in-memory area in the SGA used for the in-memory column store. The default is zero, which means the in-memory column store is not in use.
• MEMOPTIMIZE_POOL_SIZE: Sets the memory size for the Memoptimized Rowstore feature.
• DB_FLASH_CACHE_SIZE: Sets the size of the flash cache used in conjunction with DB_FLASH_CACHE_FILE.

Using ASMM is simply a matter of setting the SGA_TARGET parameter to the desired SGA size, leaving out the other SGA-related parameters altogether. The database instance will take it from there, allocating memory to the various pools as needed and even taking memory away from one pool to give to another over time. This is the most common way of sizing the SGA.

When using AMM, you simply set the MEMORY_TARGET. The database instance will then decide the optimal SGA size and PGA size—and those components will be set up appropriately and do their own automatic memory management within their respective boundaries. Further, the database can and will resize the SGA and PGA allocations as the workload changes over time.

Regardless of the type of memory management you use, you’ll find that memory is allocated to the various pools in the SGA in units called granules. A single granule is an area of memory of 4MB, 8MB, or 16MB in size. The granule is the smallest unit of allocation, so if you ask for a Java pool of 5MB and your granule size is 4MB, Oracle will actually allocate 8MB to the Java pool (8 being the smallest number greater than or equal to 5 that is a multiple of the granule size of 4).

The size of a granule is determined by the size of your SGA (this sounds recursive to a degree, as the size of the SGA is dependent on the granule size). You can view the granule sizes used for each pool by querying V$SGA_DYNAMIC_COMPONENTS. In fact, we can use this view to see how the total SGA size might affect the size of the granules.

First, let's set the SGA size, restart the instance, and observe the granule sizes. I'll connect to the root container of my database when running the following commands:

Note If your instance is currently running, the STARTUP FORCE command will shut down the instance (abort mode) and restart it.

Note This is the SGA information for the Oracle instance started with the initialization parameter file in the previous example. We specified the SGA and PGA sizes ourselves in that parameter file. Therefore, we are using ASMM and automatic PGA memory management, but not the AMM setting, which would have sized and resized our PGA/SGA settings for us.

In this example, I used ASMM and controlled the size of the SGA via the single parameter SGA_TARGET. When my SGA size is under about 1GB, the granule is 4MB. When the SGA size is increased to some threshold over 1GB (it will vary slightly from operating system to operating system and even from release to release), I see an increased granule size:

SQL> alter system set sga_target = 1512m scope=spfile;
SQL> startup force
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
sga_target                           big integer 1520M
Now when we look at the SGA components:
SQL> select component, granule_size from v$sga_dynamic_components;

COMPONENT                                GRANULE_SIZE

As you can see, at 1.5GB of SGA, my pools will be allocated using 16MB granules, so any given pool size will be some multiple of 16MB. Notice, too, that the 1512MB I requested was itself rounded up to 1520MB, the next multiple of the 16MB granule size. With this in mind, let's look at each of the major SGA components in turn.

May 22, 2024
PGA and UGA Wrap-Up- Memory Structures-1

So far, we have looked at two memory structures: the PGA and the UGA. You should understand now that the PGA is private to a process. It is the set of variables that an Oracle dedicated or shared server needs to have independent of a session.

The PGA is a “heap” of memory in which other structures may be allocated. The UGA is also a heap of memory in which various session-specific structures may be defined.

The UGA is allocated from the PGA when you use a dedicated server to connect to Oracle and from the SGA under a shared server connection.

This implies that when using a shared server, you must size your SGA’s large pool to have enough space to cater to every possible user that will ever connect to your database concurrently.
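If you size it manually rather than relying on automatic SGA memory management, that means setting LARGE_POOL_SIZE generously; the value here is purely illustrative:

SQL> alter system set large_pool_size = 600m;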

So, the SGA of a database supporting shared server connections is generally much larger than the SGA for a similarly configured dedicated server mode-only database. We’ll cover the SGA in more detail next.

The System Global Area

Every Oracle instance has one big memory structure referred to as the System Global Area (SGA). This is a large, shared memory structure that every Oracle process will access at one point or another. It varies in size from dozens of megabytes on small test systems, to a few gigabytes on medium-to-large systems, up to hundreds of gigabytes for really big systems.

On UNIX/Linux, the SGA is a physical entity you can "see" from the OS command line. It is physically implemented as a shared memory segment—a stand-alone piece of memory to which processes may attach. It is possible to have an SGA on a system without having any Oracle processes; the memory stands alone. It should be noted, however, that if you have an SGA without any Oracle processes, this is an indication that the database crashed in some fashion. It is an unusual situation, but it can happen. This is what an SGA "looks like" on Oracle Linux, as reported by the ipcs shared memory utility:

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes       nattch
0x00000000 32768      oracle     600        9138176     116
0x00000000 32769      oracle     600        1560281088  58
0x00000000 32770      oracle     600        7639040     58
0x322379e0 32771      oracle     600        12288       58

One SGA is represented here, and the report shows the OS account that owns the SGA (oracle for all of these in this example) and the size of the SGA. On Windows, you really can’t see the SGA as a distinct entity the way you can in UNIX/Linux. Because on the Windows platform Oracle executes as a single process with a single address space, the SGA is allocated as private memory to the oracle.exe process. If you use the Windows Task Manager or some other performance tool, you can see how much memory oracle.exe has allocated, but you can’t see the SGA vs. any other piece of allocated memory.

Note Unless you have my parameter settings and you are running my exact same version of Oracle on my exact same OS, you will almost certainly see different numbers than I do. The SGA sizing is very version/OS/parameter dependent.

Within Oracle itself, you can see the SGA regardless of the platform, using another magic V$ view called V$SGASTAT. It might look as follows:

SQL> compute sum of bytes on pool
SQL> break on pool skip 1
SQL> select pool, name, bytes from v$sgastat order by pool, name;

The SGA is broken up into various pools. Here are the major ones you’ll see:

• Shared pool: The shared pool contains shared cursors, stored procedures, state objects, dictionary caches, and many dozens of other bits of data. If a user executes a SQL statement, then Oracle will use the shared pool.
• Database buffer cache (block buffers): Data blocks read from disk as users query and modify data. Contains the most recently used data blocks.
• Fixed SGA: Contains internal housekeeping information regarding the state of the instance and database.
• Redo log buffer: A circular buffer that contains information regarding changes to the database. These changes are written to the online redo logs on disk. This information is used for database recovery.
• Java pool: The Java pool is a fixed amount of memory allocated for the JVM running in the database. The Java pool may be resized online while the database is up and running.
• Large pool: Optional memory area used by shared server connections for session memory, by parallel execution features for message buffers, and by RMAN backup for disk I/O buffers. This pool is resizable online.
• Streams pool: This is a pool of memory used by data sharing tools such as Oracle GoldenGate, Oracle Streams, and Data Pump. This pool is resizable online. If the Streams pool is not configured and you use the Streams functionality, Oracle will use up to ten percent of the shared pool for streams memory.
• Flashback buffer: Optional memory area used when Flashback Database is enabled. The recovery write process will copy modified blocks from the buffer cache to the flashback buffer, which are written to Flashback Database logs on disk.
• Shared I/O pool: Used for I/O operations on SecureFile Large Objects. This area is used for SecureFile deduplication, encryption, and compression.
• In-memory area: Optional memory area that allows tables and partitions to be stored in a columnar format. Useful for analytic operations that operate on a few columns returning many rows (as opposed to an OLTP application that returns a few rows with many columns).
• Memoptimize pool: Optional memory component that optimizes key-based queries.
• Database Smart Flash Cache: Optional memory extension to the database buffer cache for Linux and Solaris systems. Resides on solid-state storage devices that use flash memory.

A typical SGA might look as shown in Figure 4-1. The optional memory components are indicated by the dashed outlines.

Figure 4-1.  Typical SGA

Apr 22, 2024
Using PGA_AGGREGATE_TARGET to Control Memory Allocation- Memory Structures

Earlier, I wrote that “in theory” we can use the PGA_AGGREGATE_TARGET to control the overall amount of PGA memory used by the instance. We saw in the last example that this is not a hard limit, however. The instance will attempt to stay within the bounds of the PGA_AGGREGATE_TARGET, but if it can’t, it won’t stop processing; rather, it will just be forced to exceed that threshold.

Another reason this limit is a “theory” is because work areas, though large contributors to PGA memory, are not the only contributors to PGA memory. Many factors contribute to PGA memory allocation, and only the work areas are under the control of the database instance. If you create and execute a PL/SQL block of code that fills in a large array with data in dedicated server mode where the UGA is in the PGA, Oracle can’t do anything but allow you to do it.

Consider the following quick example. We’ll create a package that can hold some persistent (global) data in the server:

$ sqlplus eoda/foo@PDB1

SQL> create or replace package demo_pkg
as
    type array is table of char(2000) index by binary_integer;
    g_data array;
end;
/

Package created.

Now we’ll measure the amount of memory our session is currently using in the PGA/UGA (I used a dedicated server in this example, so the UGA is a subset of the PGA memory):

SQL> select a.name, to_char(b.value, '999,999,999') bytes,
            to_char(round(b.value/1024/1024,1), '99,999.9') mbytes
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';

Initially, we are using about 6MB of PGA memory in our session (as a result of compiling a PL/SQL package, running this query, etc.). Now, we’ll run our query against T again using the same 300MB PGA_AGGREGATE_TARGET (this was done in a recently restarted and otherwise idle instance; we are the only session requiring memory right now):

SQL> set autotrace traceonly statistics;
SQL> select * from t order by 1,2,3,4;

72616 rows selected.

As you can see, the sort was done entirely in memory, and in fact if we peek at our session’s PGA/UGA usage, we can see how much we used:

SQL> select a.name, to_char(b.value, '999,999,999') bytes,
            to_char(round(b.value/1024/1024,1), '99,999.9') mbytes
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';

We see about 17MB of RAM being used. Now we’ll fill up that CHAR array we have in the package (a CHAR datatype is blank-padded so each of these array elements is exactly 2000 characters in length):
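The fill itself is just a loop; a sketch (the element count here is an assumption: 200,000 elements of 2000 bytes each is roughly 400MB, comfortably past the 300MB PGA_AGGREGATE_TARGET):

begin
    for i in 1 .. 200000
    loop
        demo_pkg.g_data(i) := 'x';   -- each element is blank-padded to 2000 bytes
    end loop;
end;
/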

Now, that is memory allocated in the PGA that the instance itself can’t control. We already exceeded the PGA_AGGREGATE_TARGET set for the entire instance in this single session—and there is quite simply nothing the database can do about it. It would have to fail our request if it did anything, and it will do that only when the OS reports back that there is no more memory to give (ORA-04030). If we wanted, we could allocate more space in that array and place more data in it, and the instance would just have to do it for us.

However, the instance is aware of what we have done. It does not ignore the memory it can't control; it simply recognizes that the memory is being used and backs off the size of memory allocated for work areas accordingly. So if we rerun the same sort query, we see that this time we sorted to disk—the instance did not give us the 12MB or so of RAM needed to do this in memory since we had already exceeded the PGA_AGGREGATE_TARGET:

SQL> set autotrace traceonly statistics;
SQL> select * from t order by 1,2,3,4;

67180 rows selected.

So, because some PGA memory is outside of Oracle's control, it is easy to exceed the PGA_AGGREGATE_TARGET simply by allocating lots of really large data structures in our PL/SQL code. I am not recommending you do this by any means. I'm just pointing out that the PGA_AGGREGATE_TARGET is more of a request than a hard limit.

Mar 22, 2024
Determining How the Memory Is Allocated- Memory Structures-2

This script sorts the big table T using automatic PGA memory management. Then, for that session, it captures all of the PGA/UGA memory settings as well as the sort-to-disk activity. In addition, the UNION ALL adds system-level metrics about the same (total PGA memory, total UGA memory, and so on). I ran that script against a database started with the following initialization settings:

memory_target=0
pga_aggregate_target=300m
sga_target=1500m

These settings show I was using automatic PGA memory management with a PGA_AGGREGATE_TARGET of 300MB, meaning I wanted Oracle to use up to about 300MB of PGA memory for sorting.

I set up another script to be run in other sessions to generate a large sorting load on the machine. This script loops and uses a built-in package, DBMS_ALERT, to see if it should continue processing. If it should, it runs the same big query, sorting the entire T table. When the simulation finished, a session could signal all of the sorting processes, the load generators, to “stop” and exit. Here’s the script (stored in a file named gen_load.sql) used to perform the sort:
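In outline, the script looks something like this (a sketch based on the description; the alert name is hypothetical, and the real script is available on the book's GitHub site, per the Tip below):

declare
    l_msg    varchar2(1800);
    l_status integer;
begin
    dbms_alert.register( 'STOP_LOAD' );
    for i in 1 .. 999999
    loop
        -- poll for the "stop" signal; a status of 0 means an alert was received
        dbms_alert.waitone( 'STOP_LOAD', l_msg, l_status, timeout => 0 );
        exit when l_status = 0;
        -- otherwise, generate load: sort the entire T table
        for x in ( select * from t order by 1,2,3,4 )
        loop
            null;
        end loop;
    end loop;
end;
/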

To observe the differing amounts of RAM allocated to the session I was measuring, I initially ran the SELECT in isolation—as the only session. I captured the statistics and saved them into the SESS_STATS table, along with the count of active sessions. Then I added 25 sessions to the system (i.e., I ran the preceding benchmark script, gen_load.sql, with the for i in 1 .. 999999 loop in 25 new sessions). I waited a short period of time—one minute—for the system to adjust to this new load, and then I created a new session and ran the single sort query from earlier, capturing the metrics the first time through the loop. I did this repeatedly, for up to 300 concurrent users.

Tip On the GitHub source code site for this book, you can download the scripts used for this experiment. In the ch04 directory, the run.sql script automates the test described in this section.

It should be noted that I asked the database instance to do an impossible thing here. At 300 users, we would be very close to the PGA_AGGREGATE_TARGET setting just by having them all logged in, let alone actually doing any work! This drives home the point that the PGA_AGGREGATE_TARGET is just that: a target, not a directive. We can and will exceed this value for various reasons.

Now we are ready to report on the findings; for reasons of space, we'll stop the output at 275 users, since the data starts to get quite repetitive:

SQL> column active format 999
SQL> column pga format 999.9
SQL> column “tot PGA” format 999.9
SQL> column pga_diff format 999.99
SQL> column “temp write” format 9,999
SQL> column “tot writes temp” format 99,999,999
SQL> column writes_diff format 9,999,999
SQL> select active,

Before we analyze the results, let’s look at the query I used for reporting. My query uses a feature called pivot to pivot a resultset. Here’s an alternate way to write lines 11 through 22 of that SQL query (without the pivot feature):

select active,
       max( decode( name, 'session pga memory', val ) ) pga,
       max( decode( name, 'total: session pga memory', val ) ) as "tot PGA",
       max( decode( name, 'physical writes direct temporary tablespace', val ) ) as "temp write",

This part of the query retrieved the records from the table of metrics when there were fewer than 275 active sessions, converted the metrics for memory (UGA/PGA memory) from bytes into megabytes, and then pivoted—turned rows into columns—on the four interesting metrics. Once we got those four metrics in a single record, we used analytics (the LAG() function specifically) to add to each row the prior rows' total observed PGA and total observed I/O to temp so we could easily see the incremental differences in these values. Back to the data—as you can see, when I had a few active sessions, my sorts were performed entirely in memory.

For an active session count of 1 to somewhere less than 50, I could sort entirely in memory. However, by the time I had 50 users logged in and actively sorting, the database started reining in the amount of memory I was allowed to use at a time. It would have taken a couple of minutes before the amount of PGA being used fell back within acceptable limits (the 300MB request), but at these low concurrent user levels it eventually would have.

The amount of PGA memory allocated to the session we were watching dropped from 15.2MB to 7.7MB and settled on around 5.2MB (remember, parts of that PGA are not for work area (sorting) allocations, but are for other operations; just the act of logging in created a .5MB PGA allocation).

The total PGA in use by the system remained within tolerable limits until somewhere around 126 users. At that point, I started to exceed the PGA_AGGREGATE_TARGET on a regular basis and continued to do so until the end of the test. I gave the database instance in this case an impossible task; the very act of having 126 users, most executing PL/SQL, plus the sort they were all requesting, just did not fit into the 300MB of RAM I had targeted. It simply could not be done.

Each session therefore used as little memory as possible, but had to allocate as much memory as it needed. By the time I finished this test, the active sessions were using a total of about 560MB of PGA memory—as little as they could.

Automatic PGA memory management was designed specifically to allow a small community of users to use as much RAM as possible when it was available. In this mode, it backs off on this allocation as the load increases, and it increases the amount of RAM allocated for individual operations as the load decreases.

Mar 22, 2024
Determining How the Memory Is Allocated- Memory Structures-1

Questions that come up frequently are “How is this memory allocated?” and “What will be the amount of RAM used by my session?” These are hard questions to answer for the simple reason that the algorithms for serving out memory under the automatic scheme are not documented and can and will change from release to release. When using things that begin with “A”—for automatic—you lose a degree of control, as the underlying algorithms decide what to do and how to control things.

We can make some observations based on information from Oracle Support notes 147806.1 and 223730.1:

• The PGA_AGGREGATE_TARGET is a goal of an upper limit. It is not a value that is preallocated when the database is started up. You can observe this by setting the PGA_AGGREGATE_TARGET to a value much higher than the amount of physical memory you have available on your server. You will not see any large allocation of memory as a result (one caveat: if you've set MEMORY_TARGET, and then set PGA_AGGREGATE_TARGET to a value larger than MEMORY_TARGET, on instance startup Oracle throws an ORA-00838 error and won't let you start your instance).

• The amount of PGA memory available for a given session is derived from the setting of PGA_AGGREGATE_TARGET. The algorithm for determining the maximum size used by a process varies by database version. The amount of PGA memory a process is allocated is typically a function of the amount of memory available and the number of processes competing for space.

• As the workload on your instance goes up (more concurrent queries, concurrent users), the amount of PGA memory allocated to your work areas will go down. The database will try to keep the sum of all PGA allocations under the threshold set by PGA_AGGREGATE_TARGET. This is analogous to having a DBA sit at a console all day, setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters based on the amount of work being performed in the database. We will directly observe this behavior shortly in a test.

OK, so how can we observe the different work area sizes being allocated to our session? By running some test scripts to observe the memory used by our session and the amount of I/O to temp we performed. I performed the following test on an Oracle Linux machine with four CPUs and dedicated server connections. We begin by creating a table to hold the metrics we’d like to monitor (the following code is placed in a file named stats.sql):
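A sketch of that table, consistent with the column descriptions that follow:

create table sess_stats
( name   varchar2(66),
  value  number,
  active number
);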

The columns in this table that we'll be using for the metrics represent the following:

• NAME: The name of the statistic we are gathering (PGA and UGA information from V$SESSTAT for the current session, plus all of the memory information for the database instance as well as temporary tablespace writes).

• VALUE: The value of the given metric.

• ACTIVE: The number of other sessions doing work in the instance. Before we start, we assume an "idle" instance; we are the only user session right now, hence the value of zero.

Next, create a table T as follows:
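A sketch (building T from ALL_OBJECTS is an assumption on my part; any table of 70,000 or so rows will do):

create table t
as
select * from all_objects;

exec dbms_stats.gather_table_stats( user, 'T' );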

Jan 22, 2024
Automatic PGA Memory Management- Memory Structures

In almost all cases, you should be using automatic PGA memory management. The entire goal of automatic PGA memory management is to maximize the use of RAM while at the same time not using more RAM than you want. You can enable the automatic management of the PGA in two ways:

• Set MEMORY_TARGET to zero, and then set PGA_AGGREGATE_TARGET to a nonzero value. The PGA_AGGREGATE_TARGET parameter controls how much memory the instance should allocate, in total, for all work areas used to sort or hash data. Its default value varies by version and may be set by various tools such as the DBCA. In this mode, WORKAREA_SIZE_POLICY is set to AUTO (which is its default value).

• Use the AMM feature by setting MEMORY_TARGET to a nonzero value, and leave PGA_AGGREGATE_TARGET set to zero. This effectively lets Oracle manage the allocation of memory to the PGA. However, if you're in a Linux environment that uses HugePages, you should not be using the AMM method to manage memory (more on this in the "System Global Area (SGA) Memory Management" section of this chapter).

The prior two techniques are discussed in the following subsections.

Setting PGA_AGGREGATE_TARGET

Most of the databases I’ve worked on in the past several years use automatic PGA memory management and automatic SGA memory management. For my test database, automatic PGA memory management and automatic SGA memory management are enabled as follows (you’ll use memory sizes appropriate for your environment, given the workload and amount of physical memory available):

$ sqlplus / as sysdba
SQL> alter system set memory_target=0 scope=spfile;
SQL> alter system set pga_aggregate_target=300M scope=spfile;
SQL> alter system set sga_target=1500M scope=spfile;

Then restart the instance to instantiate the parameters (STARTUP FORCE shuts the instance down in abort mode and restarts it):

SQL> startup force;

You don’t have to enable both PGA and SGA memory management together (as shown in the prior example). You could enable one for automatic management and leave the other for manual management. I usually don’t implement it that way, but you can do it.

Also, some places I've worked have set the PGA_AGGREGATE_LIMIT parameter as well. In most scenarios, you don't need to set this parameter, as it will default to a reasonable value. If for some reason you need more control, then feel free to set it. Keep in mind that if you set this parameter too low, you'll get an ORA-00093 error, and your instance will fail to start. In this situation, you'll need to create a text-based init.ora file, restart your instance, and re-create your spfile (see Chapter 3 for details on how to do this).
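Setting it is a one-liner; the value here is illustrative only:

SQL> alter system set pga_aggregate_limit = 4g;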

Setting MEMORY_TARGET

The automatic memory management of the PGA is enabled as follows (adjust the memory sizes per your environment):

$ sqlplus / as sysdba
SQL> alter system set memory_target=1500M scope=spfile;
SQL> alter system set pga_aggregate_target=0 scope=spfile;
SQL> alter system set sga_target=0 scope=spfile;

And at this point, you can restart your instance to instantiate the parameters. If you want to give Oracle recommendations on minimal values to use for the SGA_TARGET and PGA_AGGREGATE_TARGET, you can set these to a nonzero value (as long as the sum of these is less than the value of MEMORY_TARGET):

SQL> alter system set sga_target=500M scope=spfile;
SQL> alter system set pga_aggregate_target=400M scope=spfile;

Note This PGA_AGGREGATE_TARGET parameter is optional for pluggable databases. When this parameter is set in a pluggable database, it specifies the target aggregate PGA size for the pluggable database.
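For example, a sketch using the PDB1 pluggable database from the earlier connection example:

SQL> alter session set container = PDB1;
SQL> alter system set pga_aggregate_target = 200m;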

Now that we’ve covered enabling automatic PGA memory management, let’s next look at how the PGA memory is allocated.
