David Kurtz

This blog contains things about PeopleSoft, mostly performance related, that DBAs might find interesting.
Or then again they might not! The non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

PSFT_PLAN: A Sample Oracle Database Resource Manager Plan for PeopleSoft

Mon, 2024-03-11 06:55
In the cloud (or any virtualised environment), performance is instrumented as cost.  This is also true in any other on-premises environment, but the feedback takes a lot longer to arrive!
  • If you never run out of CPU, then you have probably bought/rented/allocated/licensed too many CPUs.
  • If you do run out of CPU, then you should use the database resource manager to prioritise the processes that are most important to the business.
  • If you don't enable the resource manager, you will have less visibility of when you do run out of CPU.
At the very least, you can use one of the sample resource manager plans installed in the Oracle database by default.  
This article proposes a resource plan for PeopleSoft systems.  It can be used as a starting point before enhancing it with your own specific requirements.  

Resource Plan Design Goals
The purpose of a database resource plan is to prioritise important/urgent processes over less important/less urgent processes by allocating CPU to the higher priority processes, and by restricting CPU, other resources, and the degree of parallelism for lower priority processes.
The design of a resource plan should reflect what the business defines as important.  
Consumer Groups
A resource plan consists of several consumer groups with different priorities and resource allocations. Each priority level defined by the business becomes a consumer group in the resource plan.  A consumer group can be allocated to one of 8 priority levels in a resource plan.  Multiple consumer groups can exist at the same priority level with different CPU guarantees (adding up to not more than 100%) and can include other limits.
I have made some assumptions about process priorities in a typical PeopleSoft system, and have grouped and ranked them in the table below starting with the highest priority.  Not all customers run all these processes.  Consumer groups and mappings that are not needed can be omitted. There are gaps in the priority levels to allow for other definitions to be introduced.
Priority 1 - SYS_GROUP (100% CPU guarantee): Oracle system processes.  Defined automatically.
Priority 2 - PSFT_GROUP (100%): Any process that connects to the database as either SYSADM (the default PeopleSoft owner ID) or PS has a higher priority than other processes unless other rules apply.  The online application (other than ad hoc query) falls into this category so that the online user experience is safeguarded before other PeopleSoft processes.  This includes remote call Cobol processes, but not remote call Application Engine, which should be run in the component processor.
Priority 4 - BATCH_GROUP (100%): Process scheduler processes, and processes run by the process schedulers.
Priority 5 - NVISION_GROUP (100%): nVision (NVSRUN) and nVision report book (RPTBOOK) processes.
Priority 6 - Ad hoc queries are allocated to one of three consumer groups with the same priority, but different CPU guarantees:
  • PSQUERY_ONLINE_GROUP (90%): on-line PS/Queries and nVision reports run through the PIA.
  • PSQUERY_BATCH_GROUP (9%): PS/Queries run on the process scheduler using the PSQUERY Application Engine.  A 4-hour maximum runtime limit is defined.
  • NVSRUN_GROUP (1%): nVision run through the 3-tier nVision client.
Priority 8 - LOW_GROUP (1%): Other low-priority processes.
Priority 8 - LOW_LIMITED_GROUP (1%): Other low-priority processes, but whose maximum query time is limited.
Priority 8 - OTHER_GROUPS (1%): All other processes.  Defined automatically.
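The full scripts that build PSFT_PLAN are on GitHub, but the shape of each definition is the same.  The following is just a minimal sketch of how one of the groups above and its plan directive might be created with DBMS_RESOURCE_MANAGER; the comments and the choice of group are illustrative.

BEGIN
  DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'PSFT_PLAN'
   ,comment => 'Sample resource plan for PeopleSoft');

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'PSFT_GROUP'
   ,comment        => 'PeopleSoft processes (SYSADM/PS)');

  --100% CPU guarantee at priority level 2
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'PSFT_PLAN'
   ,group_or_subplan => 'PSFT_GROUP'
   ,comment          => 'PeopleSoft online and general processing'
   ,mgmt_p2          => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/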
Consumer Group Mapping Priority
Sessions are allocated to the consumer groups.  They can be allocated explicitly, or via mapping rules that use various session attributes. As the attributes are set or changed, the consumer group will be set according to the matching rules.  
I have set the following attributes to be mapped in the following order of precedence.  The more specific mappings take precedence over the more generic ones.
Priority 2 - Module, Action: The PIA instrumentation sets attributes MODULE to the component name and ACTION to the page name.  Specific component pages are allocated to specific consumer groups.
Priority 3 - Module: Specific scheduled processes are allocated by name to specific consumer groups.  PeopleSoft instrumentation puts this name in the MODULE attribute.
Priority 4 - Client Program: Batch and query processes are identified by program name and allocated to certain consumer groups.
Priority 5 - Oracle User: Anything that connects to the database as either SYSADM or PS is allocated to the PSFT_GROUP.  So other mapping rules must take precedence over this mapping.
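The attribute precedence itself can be set with DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING_PRI.  Every mapping attribute must be given a distinct priority, so the sketch below simply pushes the attributes that this plan does not use below the ones in the list above; the exact priority values are illustrative and may need adjusting for your database version.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING_PRI(
    explicit              => 1  --an explicit switch always wins
   ,module_name_action    => 2  --specific component pages
   ,module_name           => 3  --specific scheduled processes
   ,client_program        => 4  --batch and query executables
   ,oracle_user           => 5  --SYSADM/PS catch-all
   ,client_os_user        => 6
   ,client_machine        => 7
   ,service_name          => 8
   ,service_module        => 9
   ,service_module_action => 10
   ,client_id             => 11);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/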
Required PeopleSoft Configuration
The PSFT_PLAN sample resource manager plan relies on MODULE and ACTION being set by the PeopleSoft Application.  Therefore, the following additional configuration is required.
  • Enable PeopleSoft instrumentation: Set EnableAEMonitoring=1 in ALL PeopleSoft application server and process scheduler domains so that PeopleSoft processes set MODULE and ACTION information in the session attributes (using DBMS_APPLICATION_INFO).  
  • Install instrumentation trigger for PeopleSoft (psftapi.sql).  Not all PeopleSoft processes are instrumented.  COBOL, SQR, and nVision do not set MODULE or ACTION.  When a PeopleSoft process is started by the process scheduler, the first thing it does is set its own status to 7, meaning that it is processing.  This script creates a database trigger that fires on that DML and sets the session attributes MODULE to the name of the process and ACTION to the process instance number.  Application Engine processes may then subsequently update these values again.
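I cannot reproduce psftapi.sql here, but the idea behind the instrumentation trigger is roughly as follows.  This is a simplified sketch, not the delivered script: it fires when a scheduled process sets its run status to 7 on PSPRCSRQST and calls DBMS_APPLICATION_INFO; the trigger name is made up.

CREATE OR REPLACE TRIGGER sysadm.gfc_set_module_action
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND old.runstatus != '7')
BEGIN
  --MODULE = process name, ACTION = process instance number
  dbms_application_info.set_module(
    module_name => :new.prcsname
   ,action_name => 'PI='||:new.prcsinstance);
EXCEPTION
  WHEN OTHERS THEN NULL; --never let instrumentation break the process
END;
/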
Consumer Group Mappings
Consumer groups are matched to session attributes.  The highest priority matching mapping is applied.  Mappings can be matched to literal values, or with LIKE or REGEXP_LIKE operations.
Mapping priority 2 - MODULE_ACTION:
  • QUERY_MANAGER.QUERY_VIEWER -> PSQUERY_ONLINE_GROUP (priority 6)
Mapping priority 3 - MODULE:
  • RPTBOOK, NVSRUN -> NVISION_GROUP (priority 5)
  • PSQRYSRV% -> PSQUERY_ONLINE_GROUP (priority 6)
  • PSAE.PSQUERY.% -> PSQUERY_BATCH_GROUP (priority 6)
Mapping priority 4 - CLIENT_PROGRAM:
  • PSRUNRMT -> PSFT_GROUP (priority 2)
  • psae%, PSAESRV%, PSDSTSRV%, PSMSTPRC%, PSRUN@%, PSSQR%, pssqr%, sqr% -> BATCH_GROUP (priority 4)
  • PSQRYSRV% -> PSQUERY_ONLINE_GROUP (priority 6)
  • PSNVSSRV% -> NVSRUN_GROUP (priority 6)
  • SQL Developer, sqlplus%, Toad% -> LOW_GROUP / LOW_LIMITED_GROUP (priority 8)
Mapping priority 5 - ORACLE_USER:
  • PS, SYSADM -> PSFT_GROUP (priority 2)
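Each row of this table corresponds to a call to DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING.  A sketch of a few of the mappings above (the full set is in the scripts on GitHub):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  --anything connecting as SYSADM falls back to PSFT_GROUP
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER
   ,value          => 'SYSADM'
   ,consumer_group => 'PSFT_GROUP');

  --application engine server processes are batch
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.CLIENT_PROGRAM
   ,value          => 'PSAESRV%'
   ,consumer_group => 'BATCH_GROUP');

  --scheduled PS/Queries run by the PSQUERY application engine
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.MODULE_NAME
   ,value          => 'PSAE.PSQUERY.%'
   ,consumer_group => 'PSQUERY_BATCH_GROUP');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/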
Resource Plan Script
Two SQL scripts are available on GitHub.
Other Options

There are other resource manager options that are either not illustrated in the sample plan, or that are commented out.  They may be worth considering in some situations.

  • PeopleSoft does not use parallel query by default, but if you do use it, you may well want to limit which processes use how much parallelism.  Consumer groups can specify a limit to the parallel query degree.
    • If you use the resource plan to restrict the degree of parallelism, and you also plan to vary the number of CPUs in a cloud environment, then I suggest creating a resource plan for each number of CPUs and switch between the plans by changing the setting of  the RESOURCE_MANAGER_PLAN parameter.

  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    'PSFT_PLAN', 'NVISION_GROUP', 'nVision Reports.'
    ,mgmt_p5 => 100
    ,parallel_degree_limit_p1=>2
  );
  • A parallel query may queue waiting to obtain sufficient parallel query server processes.  A timeout can be specified to limit that wait and to determine the behaviour when the timeout is reached.  The query can either be cancelled, raising error ORA-07454, or run at a reduced degree of parallelism.

  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    'PSFT_PLAN', 'PSQUERY_ONLINE_GROUP'
    ,mgmt_p6 => 90
    ,parallel_queue_timeout=>900
    ,pq_timeout_action=>'RUN'
  );
  • A consumer group can restrict queries that run for a long time, or that are expected to run for a long time based on their optimizer cost.  They can be switched to the CANCEL_SQL group after a number of seconds, and they will terminate with ORA-00040: active time limit exceeded - call aborted.  This has only been specified for the LOW_LIMITED_GROUP, and for the PSQUERY_BATCH_GROUP for scheduled queries, because the message is captured by the process scheduler and logged.  It has not been specified for PSQUERY_ONLINE_GROUP because this error is not handled well by the online application.  Just the Oracle error message will be displayed to the user without further explanation, which is neither friendly nor helpful.  Instead, there are PeopleSoft configuration options to limit query runtime.
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    'PSFT_PLAN', 'PSQUERY_BATCH_GROUP'
    ,mgmt_p6 => 1
    ,switch_group => 'CANCEL_SQL'
    ,switch_time => 14400
    ,switch_estimate => TRUE 
    ,switch_for_call => TRUE
    );
  • Sometimes customers have different priorities, or different priorities at different times, that cannot be satisfied by a single resource plan.  In that case, different resource plans can be activated at different times by different scheduler windows.
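For example, a scheduler window that switches to a different (hypothetical) batch-oriented plan overnight might be created like this; the window name, schedule, duration and plan name are all illustrative.

BEGIN
  --while the window is open, the resource manager switches to the named plan
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'PSFT_OVERNIGHT_WINDOW'
   ,resource_plan   => 'PSFT_NIGHT_PLAN'
   ,repeat_interval => 'FREQ=DAILY;BYHOUR=19;BYMINUTE=0;BYSECOND=0'
   ,duration        => INTERVAL '10' HOUR
   ,comments        => 'Overnight batch window with batch-oriented resource plan');
END;
/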
Other Online Resources

What PS/Query is that?

Tue, 2024-02-20 06:05

Sometimes, performance analysis will turn up a problem SQL query that is probably a PS/Query. However, I need to know which PS/Query it is should I wish to alter it or talk to the user who wrote it. 

Is it a PS/Query?

It is quite easy to spot SQL queries that are generated from queries defined in the PS/Query tool. These are typical characteristics:

  • Single character row source aliases (eg. A, B, D) 
  • The same row source with a suffix 1 (eg. D1) for query security records.
  • Effective date/sequence subqueries are always correlated back to the same table.
  • Order by column position number rather than column names or aliases.
Sometimes, you may find SQL that looks like a PS/Query coming from other parts of PeopleSoft because a developer has copied the text of a PS/Query, usually into an Application Engine step.
SELECT A.EMPLID, A.ATTENDANCE, A.COURSE, B.DESCR, D.NAME, A.SESSION_NBR,
TO_CHAR(A.STATUS_DT,'YYYY-MM-DD'),B.COURSE
FROM PS_TRAINING A, PS_COURSE_TBL B, PS_PERSONAL_DTA_VW D, PS_PERS_SRCH_QRY D1
WHERE D.EMPLID = D1.EMPLID
AND D1.ROWSECCLASS = 'HCDPALL'
AND ( A.COURSE = :1
AND A.ATTENDANCE IN ('S','W')
AND A.COURSE = B.COURSE
AND A.EMPLID = D.EMPLID )

The text of a PS/Query is not stored in the database.  Instead, as with other objects in PeopleSoft, it is held as various rows in PeopleTools tables.  The PSQRY% tables are used to generate the SQL on demand.  We can query these tables to identify the query.  

PSQRYRECORD holds a row for every record referenced in the query (not including effective date/sequence subqueries).  My usual tactic is to write a SQL query on PSQRYRECORD, like the one below, that looks for PS/Queries that reference these tables with these table aliases (see PeopleSoft for the Oracle DBA, Chapter 11).  
REM findqry.sql
REM (c)Go-Faster Consultancy 2012

SELECT a.oprid, a.qryname
FROM   psqryrecord a
,      psqryrecord b
,      psqryrecord d
WHERE  a.oprid = b.oprid
AND    a.qryname = b.qryname
AND    a.oprid = d.oprid
AND    a.qryname = d.qryname
AND    a.corrname = 'A'
AND    a.recname = 'TRAINING'
AND    b.corrname = 'B'
AND    b.recname = 'COURSE_TBL'
AND    d.corrname = 'D'
AND    d.recname = 'PERSONAL_DTA_VW'
/
The example PS/Query above is TRN003__COURSE_WAITING_LIST from the HCM demo database.  However, my query on PSQRYRECORD found another PS/Query with the same 3 records using the same row source aliases.  It is worth looking at queries on the same tables as they often suffer from the same problems, and you might want to make the same fix.  
Another source of results for this query (though not this time) can be when users copy a public PS/Query to a private one so they can alter it in isolation.
OPRID                          QRYNAME
------------------------------ ------------------------------
                               TRN002__SESSION_ROSTER
                               TRN003__COURSE_WAITING_LIST

Writing the query on PSQRYRECORD to find queries, which is always slightly different each time, is quite boring.  So I have written a script that will dynamically generate the SQL to identify a PS/Query.

Start with a SQL_ID
A SQL tuning activity will usually identify the SQL_ID and plan hash value of a statement.  If you are lucky, AWR will have captured the text and execution plan.  If not, you may have to try looking for a different SQL_ID that produces the same execution plan.  From the statement text, it is easy to see whether it might be a PS/Query.  
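If AWR did capture it, something like this standard query will display the captured text and plan, including the query block and object alias section used below; the substitution variable is just a placeholder for the SQL_ID.

SELECT *
FROM   table(dbms_xplan.display_awr('&&sql_id', NULL, NULL, 'ADVANCED'));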
In this example, I have cut the SQL statement and execution plan back to show just the tables and indexes referenced.
SQL_ID c3h6vf2w5fxgp
--------------------
SELECT …
FROM PSTREELEAF B, PSTREENODE C, PS_OPER_UNIT_TBL A, PS_PRODUCT_TBL G 
…
UNION SELECT …
FROM PSTREENODE D,PS_TREE_NODE_TBL E, PSTREELEAF F 
…

--------------------------------------------------------------------------------------------------------------------------
|   Id  | Operation                                   | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------------------
|  *  7 |        INDEX STORAGE FAST FULL SCAN         | PSBPSTREELEAF    |   426K|    19M|       |  1178   (1)| 00:00:01 |
|    10 |          TABLE ACCESS BY INDEX ROWID BATCHED| PS_PRODUCT_TBL   |     1 |    41 |       |     3   (0)| 00:00:01 |
|  * 11 |           INDEX RANGE SCAN                  | PS_PRODUCT_TBL   |     1 |       |       |     2   (0)| 00:00:01 |
|  * 14 |              INDEX RANGE SCAN (MIN/MAX)     | PS_PRODUCT_TBL   |     1 |    21 |       |     2   (0)| 00:00:01 |
|  * 15 |       TABLE ACCESS STORAGE FULL             | PSTREENODE       |   135K|  5709K|       |   663   (1)| 00:00:01 |
|  * 17 |       INDEX STORAGE FAST FULL SCAN          | PS_OPER_UNIT_TBL |  1791 | 35820 |       |     4   (0)| 00:00:01 |
|  * 20 |       INDEX RANGE SCAN (MIN/MAX)            | PS_PSTREENODE    |     1 |    33 |       |     3   (0)| 00:00:01 |
|  * 23 |       INDEX RANGE SCAN (MIN/MAX)            | PSAPSTREELEAF    |     1 |    32 |       |     3   (0)| 00:00:01 |
|  * 26 |       INDEX RANGE SCAN (MIN/MAX)            | PS_OPER_UNIT_TBL |     1 |    20 |       |     2   (0)| 00:00:01 |
|    33 |          TABLE ACCESS INMEMORY FULL         | PS_TREE_NODE_TBL | 35897 |  1647K|       |     6   (0)| 00:00:01 |
|  * 35 |          TABLE ACCESS STORAGE FULL          | PSTREENODE       |   167K|  9670K|       |   663   (1)| 00:00:01 |
|- * 36 |       INDEX RANGE SCAN                      | PS_PSTREELEAF    |     1 |    39 |       |  1267   (1)| 00:00:01 |
|    37 |      INDEX STORAGE FAST FULL SCAN           | PS_PSTREELEAF    |   480K|    17M|       |  1267   (1)| 00:00:01 |
|  * 40 |       INDEX RANGE SCAN (MIN/MAX)            | PS_PSTREENODE    |     1 |    33 |       |     3   (0)| 00:00:01 |
|  * 43 |       INDEX RANGE SCAN (MIN/MAX)            | PS_TREE_NODE_TBL |     1 |    28 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

…
   7 - SEL$1 / B@SEL$1
  10 - SEL$1 / G@SEL$1
  11 - SEL$1 / G@SEL$1
…
  15 - SEL$1 / C@SEL$1
  17 - SEL$1 / A@SEL$1
…
  33 - SEL$6 / E@SEL$6
  35 - SEL$6 / D@SEL$6
  36 - SEL$6 / F@SEL$6
  37 - SEL$6 / F@SEL$6
…

I use this query on DBA_HIST_SQL_PLAN to extract the tables that have single-character row source aliases that correspond to PeopleSoft records, and put them into PLAN_TABLE. I use this table because it is delivered by Oracle as a global temporary table, so it is always there and I can make use of it even if I only have read-only access.

INSERT INTO plan_table (object_name, object_alias) 
with p as ( --plan lines with single letter aliases
SELECT DISTINCT object_owner, object_type, object_name, regexp_substr(object_alias,'[[:alpha:]]',2,1) object_alias
from dba_hist_sql_plan p
, ps.psdbowner d
where p.sql_id = '&&sql_id' --put SQL ID here--
and p.object_name IS NOT NULL
and p.object_owner = d.ownerid
and regexp_like(object_alias,'"[[:alpha:]]"') --single character aliases
), r as ( --PeopleSoft table records and the table name
select r.recname, DECODE(r.sqltablename,' ','PS_'||r.recname,r.sqltablename) sqltablename
from psrecdefn r
where r.rectype = 0 --PeopleSoft table records
)
select r.recname, object_alias --referenced table
from p, r
where p.object_type like 'TABLE%'
and p.object_name = r.sqltablename
union --a query plan may reference an index and not the table
select r.recname, object_alias --table for referenced index
from p, r
, all_indexes i
where p.object_type like 'INDEX%'
and i.index_name = p.object_name
and i.owner = p.object_owner
and i.table_name = r.sqltablename
order by 2,1
/
I now have a list of records and row source aliases:
RECNAME         O
--------------- -
OPER_UNIT_TBL   A
PSTREELEAF      B
PSTREENODE      C
PSTREENODE      D
TREE_NODE_TBL   E
PSTREELEAF      F
PRODUCT_TBL     G

Next, I can run this anonymous PL/SQL block to dynamically build the SQL query on PSQRYRECORD (one reference for every table) and execute it to find the matching PS/Queries.

DECLARE 
  l_sep1 VARCHAR2(20);
  l_sep2 VARCHAR2(20);
  l_counter INTEGER := 0;
  l_sql CLOB := 'SELECT r1.oprid, r1.qryname';
  l_where CLOB;
  
  TYPE t_query IS RECORD (oprid VARCHAR2(30), qryname VARCHAR2(30));
  TYPE a_query IS TABLE OF t_query INDEX BY PLS_INTEGER;
  l_query a_query;
BEGIN
  FOR i IN(
    SELECT *
    FROM plan_table
    ORDER BY object_alias
  ) LOOP
    l_counter := l_counter + 1;
    dbms_output.put_line(i.object_alias||':'||i.object_name);
    IF l_counter = 1 THEN
      l_sep1 := ' FROM ';
      l_sep2 := ' WHERE ';
    ELSE
      l_sep1 := ' ,';
      l_sep2 := ' AND ';
      l_where := l_where||' AND r1.oprid = r'||l_counter||'.oprid AND r1.qryname = r'||l_counter||'.qryname';
    END IF;
    l_sql := l_sql||l_sep1||'psqryrecord r'||l_counter;
    l_where := l_where||l_sep2||'r'||l_counter||'.corrname = '''||i.object_alias||''' AND r'||l_counter||'.recname = '''||i.object_name||'''';
  END LOOP;
  l_sql := l_sql||l_where||' ORDER BY 1,2';
  dbms_output.put_line(l_sql);

  EXECUTE IMMEDIATE l_sql BULK COLLECT INTO l_query;

  FOR indx IN 1 .. l_query.COUNT
  LOOP
    DBMS_OUTPUT.put_line (indx||':'||l_query(indx).oprid||'.'||l_query(indx).qryname);
  END LOOP;
END;
/

The seven records found in my execution plan become a query that references PSQRYRECORD 7 times, once for each record, joined on operator ID and query name.

SELECT r1.oprid, r1.qryname 
FROM psqryrecord r1 ,psqryrecord r2 ,psqryrecord r3 ,psqryrecord r4 ,psqryrecord r5 ,psqryrecord r6 ,psqryrecord r7 
WHERE r1.corrname = 'A' AND r1.recname = 'OPER_UNIT_TBL'
AND r1.oprid = r2.oprid AND r1.qryname = r2.qryname AND r2.corrname = 'B' AND r2.recname = 'PSTREELEAF' 
AND r1.oprid = r3.oprid AND r1.qryname = r3.qryname AND r3.corrname = 'C' AND r3.recname = 'PSTREENODE' 
AND r1.oprid = r4.oprid AND r1.qryname = r4.qryname AND r4.corrname = 'D' AND r4.recname = 'PSTREENODE' 
AND r1.oprid = r5.oprid AND r1.qryname = r5.qryname AND r5.corrname = 'E' AND r5.recname = 'TREE_NODE_TBL' 
AND r1.oprid = r6.oprid AND r1.qryname = r6.qryname AND r6.corrname = 'F' AND r6.recname = 'PSTREELEAF' 
AND r1.oprid = r7.oprid AND r1.qryname = r7.qryname AND r7.corrname = 'G' AND r7.recname = 'PRODUCT_TBL' 
ORDER BY 1,2
The query finds several queries. I can look at the public PS/Queries in the Query Manager tool.  I can also see which users' private queries exist.
NB. You can only open public queries (where OPRID is a single space) or your own private queries.  In the Query Manager, you cannot see a private query owned by another user.
…
3: .PS_TREE_PRODUCT
4: .QUERY_PRODUCT_TREE
5: .RM_TREE_PRODUCT
6:XXXXXX.PS_TREE_PRODUCT_XX
…
The new findqry.sql script is available on Github.

Reducing the Operating System Priority of PeopleSoft Processes

Thu, 2024-01-25 08:38

PeopleSoft for the Oracle DBA

I wrote about controlling the operating system priority of processes in PeopleSoft Tuxedo domains in Chapters 13 and 14 of PeopleSoft for the Oracle DBA, but I think it is worth a note here.

On Linux and Unix systems, the nice command can be used to lower the operating system scheduling priority of a process (or a privileged user can increase the priority). When a server has no free CPU, processes with a lower priority get less time on the CPU. However, when there is free CPU available, the scheduling priority does not affect the amount of CPU that the process can utilise. 

On Unix, the priority of a Tuxedo server process can be adjusted using the -n server command line option in the configuration. The parameters to this option are simply passed through to the nice(2) function. Hence, this option does not work on Windows.

PSPRCSRV        SRVGRP=BASE
                SRVID=101
                MIN=1
                MAX=1
                RQADDR="SCHEDQ"
                REPLYQ=Y
                CLOPT="-n 4 -sInitiateRequest -- -C psprcs.cfg -CD HR88 -PS PSUNX -A start -S PSPRCSRV"
The operating system priority of a process is inherited from its parent. Therefore, lowering the priority of the Process Scheduler running under Tuxedo will also lower the priority of the batch processes that it spawns. 
  • Therefore, stand-alone Application Engine processes (psae) and Cobol processes inherit the priority of the process scheduler server process (PSPRCSRV).
  • However, if the Application Engine server process (PSAESRV) is used, its priority can be set directly. 
There are some potential uses for this approach.
  • If the process scheduler is co-resident with the application server, then it could be run at a lower priority to ensure the online users get preferential allocation of CPU, and that online performance does not suffer excessively at the hands of the batch.
  • A system might have two websites: one for self-service and the other for the 'back-office' users. You could configure separate application servers for each site, and run the self-service application server at a lower priority. 

In PeopleSoft, I prefer to create additional variables in the configuration file (psprcs.cfg).

[Process Scheduler]
;=========================================================================
; General settings for the Process Scheduler
;=========================================================================
PrcsServerName=PSUNX
;-------------------------------------------------------------------------
;Reduce priority of Process Scheduler server process, set to 0 if not needed
Niceness=4
...
From PeopleTools 8.4, the Application Engine server process is configured by default. The priority of the AE server processes can then be controlled independently of the process scheduler by creating a separate variable in the PSAESRV section of the configuration file.  However, it is generally better to use standalone PSAE, unless you have many short-lived application engine processes, as in CRM (see Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable).   
[PSAESRV]
;=========================================================================
; Settings for Application Engine Tuxedo Server
;=========================================================================
;-------------------------------------------------------------------------
;Reduce priority of application engine server process, set to 0 if not needed
Niceness=5
...
In this example, I have reduced the priorities of both the process scheduler and AE servers, but the process scheduler is left with a higher priority than the AE servers. The new variables can then be referenced in the Tuxedo template file (psprcsrv.ubx).
{APPENG}
#
# PeopleSoft Application Engine Server
#
PSAESRV         SRVGRP=AESRV
                SRVID=1
                MIN={$PSAESRV\Max Instances}
                MAX={$PSAESRV\Max Instances}
                REPLYQ=Y
                CLOPT="-n {$PSAESRV\Niceness} -- -C {CFGFILE} -CD {$Startup\DBName} -S PSAESRV"
{APPENG}
...
PSPRCSRV        SRVGRP=BASE
                SRVID=101
                MIN=1
                MAX=1
                RQADDR="SCHEDQ"
                REPLYQ=Y
                CLOPT="-n {$Process Scheduler\Niceness} -sInitiateRequest -- -C {CFGFILE} -CD {$Startup\DBName} -PS {$Process Scheduler\PrcsServerName} -A start -S PSPRCSRV"
When the domain is configured in psadmin, the variables are resolved in the Tuxedo configuration file (psprcsrv.ubb).  The -n option can be seen in the server command-line options (CLOPT).
#
# PeopleSoft Application Engine Server
#
PSAESRV         SRVGRP=AESRV
                SRVID=1
                MIN=1
                MAX=1
                REPLYQ=Y
                CLOPT="-n 5 -- -C psprcs.cfg -CD HR88 -S PSAESRV"
...
PSPRCSRV        SRVGRP=BASE
                SRVID=101
                MIN=1
                MAX=1
                RQADDR="SCHEDQ"
                REPLYQ=Y
                CLOPT="-n 4 -sInitiateRequest -- -C psprcs.cfg -CD HR88 -PS PSUNX -A start -S PSPRCSRV"

Prioritising Scheduled Processes by Operator ID/Run Control

Fri, 2023-11-10 05:13

Batch processing is like opera (and baseball) - "It ain't over till the fat lady sings".  Users care about when it starts and when it finishes.  If the last process finishes earlier, then that is an improvement in performance. 

This note describes a method of additionally prioritising processes queued to run on the process scheduler in PeopleSoft by their requesting operator ID and run control.  Where processing consists of more instances of the same process than can run concurrently, it can be used to make the process scheduler run longer-running processes before shorter-running processes that were scheduled earlier, thus completing batch processing earlier.

In PeopleSoft, without customisation, it is only possible to prioritise processes queued to run on the process scheduler by assigning a priority to the process definition or their process category.  Higher priority processes are selected to be run in preference to lower priorities.  Otherwise, processes are run in the order of the time at which they are requested to run.

Problem Statement
During an overnight batch, many nVision report books are scheduled to run on the Windows process schedulers by one of several specific batch operator IDs.  Many more reports are scheduled than can run concurrently, so some execute while others queue.  Inevitably, the reports have widely varying execution times.  The maximum concurrency of the nVision report book (RPTBOOK) process definition has been set, and the Oracle database resource manager has also been configured, to prevent too many of these processes from overloading the database.

CPU Utilisation of Batch
This chart shows the database activity when the batch runs.  We often see what has come to be called the 'long tail' while we wait for just a few long-running processes to complete.


The next chart shows the processing time of each nVision report process.  The blue bars run from when each process started to when it ended, ordered by start time.  The clear boxes below run from the time when it was requested to when it started, thus showing the period for which the processes were queued on a process scheduler but were blocked because the maximum number of processes were already processing.
Process Map (without prioritisation)
All these processes run with the same priority because they are the same process definition.  Some long-running jobs execute earlier in the batch simply because they were scheduled earlier, but others that started later, run on beyond the end of the batch.  
It would be better if the longest-running processes were executed earlier, irrespective of the order in which they were requested.  Then the shorter processes can run later as slots on the scheduler become free, and all processes should finish both closer together and earlier.
There is nothing delivered in the PeopleSoft process scheduler configuration that will let you assign different priorities to different executions of the same process.  In PeopleSoft, only three priorities are defined for PRCSPRIORITY (1=Low, 5=Medium, 9=High) on the process definition and the server category.  These priorities are transferred to the process request queue (PSPRCSQUE.PRCSPRTY).  
If we could put our own priority into that column, we could control the priority of the request.
Solution
Introduce a database trigger that fires on insert into PSPRCSQUE and sets a priority specified on a new metadata table.  PRCSPRTY is not validated by PeopleSoft, and therefore any value can be specified.  
The files are available in a GitHub repository davidkurtz/psprcspty. The exact metadata will vary with the use case and requirements, but I have provided some examples of how it might be generated.

Metadata Table: PS_XX_GFCPRCSPRTY

We need a table that will hold the priority for each combination of process type, process name, operator ID, and run control ID.  A corresponding record should be created using the Application Designer project in the GitHub repository.

create table sysadm.ps_xx_gfcprcsprty
(prcstype  VARCHAR2(30 CHAR) NOT NULL
,prcsname  VARCHAR2(12 CHAR) NOT NULL
,oprid     VARCHAR2(30 CHAR) NOT NULL
,runcntlid VARCHAR2(30 CHAR) NOT NULL
,prcsprty  NUMBER NOT NULL
--------------------optional columns
,avg_duration NUMBER NOT NULL
,med_duration NUMBER NOT NULL
,max_duration NUMBER NOT NULL
,cum_duration NUMBER NOT NULL
,tot_duration NUMBER NOT NULL
,num_samples  NUMBER NOT NULL
) tablespace ptapp;

create unique index sysadm.ps_xx_gfcprcsprty
on sysadm.ps_xx_gfcprcsprty(prcstype, prcsname, oprid, runcntlid) 
tablespace psindex compress 3;
Trigger Before Insert into PSPRCSQUE

As processes are scheduled in PeopleSoft, a row is inserted into the process scheduler queue table PSPRCSQUE.  A trigger will be created on this table that fires before each insert.  It will look for a matching row on the metadata table, PS_XX_GFCPRCSPRTY, for the combination of process type, process name, operator ID, and run control ID.  If found, the trigger will assign the specified priority to the process request.  Otherwise, it will take no action.

CREATE OR REPLACE TRIGGER sysadm.psprcsque_set_prcsprty
BEFORE INSERT ON sysadm.psprcsque
FOR EACH ROW
WHEN (new.prcsname = 'RPTBOOK')
DECLARE
  l_prcsprty NUMBER;
BEGIN
  SELECT prcsprty
  INTO   l_prcsprty
  FROM   ps_xx_gfcprcsprty
  WHERE  prcstype = :new.prcstype
  AND    prcsname = :new.prcsname
  AND    oprid = :new.oprid
  AND    runcntlid = :new.runcntlid;
 
  :new.prcsprty := l_prcsprty;
EXCEPTION
  WHEN no_data_found THEN NULL;
  WHEN others THEN NULL;
END;
/
show errors

In this case, I am only assigning priorities to RPTBOOK processes, so I have added a when clause to the trigger so that it only fires for RPTBOOK process requests.  This can either be changed for other processes or removed entirely.

Priority Metadata

How the priorities should be defined will depend on the specific use case.  In some cases, you may choose to create a set of metadata that remains unchanged.

In this case, the objective is that the processes that take the longest to run should be executed first.  Therefore, I decided that the priority of each nVision report book process (by operator ID and run control ID) will be determined by the median elapsed execution time in the last two months.  The priorities are allocated such that the sum of the median execution times for each priority will be as even as possible.  

I have created a PL/SQL procedure GFCPRCSPRIORITY to truncate the metadata table and then repopulate it using a query on the process scheduler table (although an Application Engine program could have been written to do this instead).  The procedure is executed daily, thus providing a feedback loop: if the run time varies over time, or new processes are added to the batch, it will be reflected in the priorities.

REM nvision_prioritisation_by_cumulative_runtime.sql

set serveroutput on
create or replace procedure sysadm.gfcprcspriority as
  PRAGMA AUTONOMOUS_TRANSACTION; --to prevent truncate in this procedure affecting calling session
  l_hist INTEGER := 61 ; --consider nVision processes going back this many days
begin
  EXECUTE IMMEDIATE 'truncate table ps_xx_gfcprcsprty';

--populate priorty table with known nVision processes 
insert into ps_xx_gfcprcsprty
  (prcstype, prcsname, oprid, runcntlid
  ,avg_duration, med_duration, max_duration, cum_duration, tot_duration, num_samples
  ,prcsprty) --explicit column list: the select list below is not in table column order
with r as (
select r.prcstype, r.prcsname, r.prcsinstance, r.oprid, r.runcntlid, r.runstatus, r.servernamerun
, CAST(r.rqstdttm AS DATE) rqstdttm
, CAST(r.begindttm AS DATE) begindttm
, CAST(r.enddttm AS DATE) enddttm
from psprcsrqst r
  inner join ps.psdbowner p on r.dbname = p.dbname -- in test exclude any history copied from another database
where r.prcstype like 'nVision%' --limit to nVision processes
and r.prcsname like 'RPTBOOK' -- limit to report books
and r.enddttm>r.begindttm  --it must have run to completion
and r.oprid IN('NVISION','NVISION2','NVISION3','NVISION4')  --limit to overnight batch operator IDs
and r.begindttm >= TRUNC(SYSDATE)+.5-l_hist --consider process going back l_hist days from midday today
and r.runstatus = '9' --limit to successful processes
and r.begindttm BETWEEN ROUND(r.begindttm)-5/24 AND ROUND(r.begindttm)+5/24 --started between 7pm and 5am
), x as (
select r.*, CEIL((enddttm-begindttm)*1440) duration --rounded up to the next minute
from r
), y as (
select prcstype, prcsname, oprid, runcntlid
, AVG(duration) avg_duration
, MEDIAN(CEIL(duration)) med_duration 
, MAX(duration) max_duration
, SUM(CEIL(duration)) sum_duration
, COUNT(*) num_samples
from x
group by prcstype, prcsname, oprid, runcntlid
), z as (
select y.* 
, sum(med_duration) over (order by med_duration rows between unbounded preceding and current row) cum_duration 
, sum(med_duration) over () tot_duration
from y
)
select prcstype, prcsname, oprid, runcntlid 
, avg_duration, med_duration, max_duration, cum_duration, tot_duration, num_samples
--, CEIL(LEAST(tot_duration,cum_duration)/tot_duration*3)*4-3 prcsprty  --3 priorities
, CEIL(LEAST(tot_duration,cum_duration)/tot_duration*9) prcsprty  --9 priorities
--, DENSE_RANK() OVER (order by med_duration) prcsprty --unlimited priorities
from z
order by prcsprty, cum_duration;

  dbms_output.put_line(sql%rowcount||' rows inserted');
  commit;

end gfcprcspriority;
/
show errors
In testing, I found that using just the 3 delivered levels of priority was not sufficiently granular to prioritise the jobs adequately, so I chose to use 9 levels (1 to 9).  The process priority on PSPRCSQUE is not validated, so I can use any value.  I also found I could just rank the processes from 1 to n by duration, and that would also work.

It is possible to create additional priority levels for process categories (see also More Process Priority Levels for the Process Scheduler), but that still only works for prioritising different processes over each other.
Metadata

This is the metadata produced on a test system by the above query.  It will vary depending on what has been run recently, and how it performed.  There are more, shorter processes in the lower priority groups, and fewer, longer processes in the higher priority groups.  

PRCSTYPE                       PRCSNAME     OPRID        RUNCNTLID                        PRCSPRTY
------------------------------ ------------ ------------ ------------------------------ ----------
…
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_XXX1                          1
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_XXX3                          1
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_LLLL8                         1
…
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_LLLL9                         2
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_STAT8                       2
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_STAT1                       2
…
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_TEMPXX                        6
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_3                           6
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_17                          6
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_STAT4                       7
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_28                          7
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_INCXXX                        8
nVision-ReportBook             RPTBOOK      NVISION2     NVS_RPTBK_MORYYY1                       8
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_24                          9
nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_16                          9
A Test Script

This test script inserts some dummy rows into PSPRCSQUE to check whether a priority is assigned by the trigger. The insert is then rolled back.

INSERT INTO psprcsque (prcsinstance, prcstype, prcsname, oprid, runcntlid)
VALUES (-42, 'nVision-ReportBook', 'RPTBOOK', 'NVISION','NVS_RPTBOOK_17');
INSERT INTO psprcsque (prcsinstance, prcstype, prcsname, oprid, runcntlid)
VALUES (-43, 'nVision-ReportBook', 'RPTBOOK', 'NVISION','NVS_RPTBOOK_STAT1');
select prcsinstance, prcstype, prcsname, oprid, runcntlid, prcsprty from psprcsque where prcsinstance IN(-42,-43);
rollback;

You can see that it was successful because priorities 2 and 7 were assigned.

PRCSINSTANCE PRCSTYPE                       PRCSNAME     OPRID        RUNCNTLID                        PRCSPRTY
------------ ------------------------------ ------------ ------------ ------------------------------ ----------
         -43 nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_STAT1                       2
         -42 nVision-ReportBook             RPTBOOK      NVISION      NVS_RPTBOOK_17                          7
Monitoring Script

This query in script process_prioritisation_by_cumulative_runtime_report.sql reports on the average, median, and cumulative median execution time for each nVision process that ran to success during the overnight processing window, as calculated by the procedure GFCPRCSPRIORITY and stored in PS_XX_GFCPRCSPRTY.  It also compares that to the priority and last actual run time for that process.

Example Output
                                                                                                      Cum.                                            
                                                                        Average   Median            Median    Total         Last Run   Actual                   
                                                                  Prcs Duration Duration Duration Duration Duration     Num  Process Duration Duration Duration Priorty
PRCSTYPE             PRCSNAME   OPRID        RUNCNTLID            Prty   (mins)   (mins)   (mins)   (mins)   (mins) Samples Priority   (mins)    Diff    % Diff    Diff
-------------------- ---------- ------------ -------------------- ---- -------- -------- -------- -------- -------- ------- -------- -------- -------- -------- -------
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_4           9    90.65      131      209     1834     1997      23        6      189       58       44       3 
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_16          9   159.17      163      209     1997     1997      23        9      177       14        9       0

nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_14          8    89.26      127      215     1703     1997      23        6      167       40       31       2
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_24          8   115.87      117      165     1576     1997      23        9      144       27       23      -1

nVision-ReportBook   RPTBOOK    NVISION2     NVS_RPTBK_MORYYY1       7    93.13       85      165     1459     1997      23        8      158       73       86      -1
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_28          7    88.30       80      172     1374     1997      23        8      108       28       35      -1

nVision-ReportBook   RPTBOOK    NVISION2     NVS_RPTBK_INCXXX        6    83.61       79      149     1294     1997      18        7      118       39       49      -1
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_17          6    70.96       69      105     1143     1997      23        7       81       12       17      -1
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_STAT4       6    68.00       72       81     1215     1997       8        7       81        9       13      -1

nVision-ReportBook   RPTBOOK    NVISION2     NVS_RPTBK_MMMMMM        5    52.45       46      119      914     1997      22        5       91       46      100       0
nVision-ReportBook   RPTBOOK    NVISION2     NVS_RPTBK_TEMPXX        5    50.48       49      104      963     1997      23        5       94       45       92       0
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_3           5    55.52       55       99     1018     1997      23        6       79       24       44      -1
nVision-ReportBook   RPTBOOK    NVISION      NVS_RPTBOOK_1           5    47.70       56      137     1074     1997      23        4       56        0        0       1
…
Monitoring Query

The query in prcsmap.sql is used to produce the data for a map of the processes, showing request time, time spent queuing, and time spent executing.  It is the basis of the second chart above. I normally run this in SQL Developer and export the data as an Excel workbook.  There is an example spreadsheet in the Github repository.

9 levels of Prioritisation
With prioritisation, we can see that the long-running jobs with higher priority ran earlier.  

We can also see that some of the higher-priority jobs that are scheduled later are running earlier than those scheduled earlier, and are thus finishing earlier.

Batch load with process prioritisation

There is no longer any tail of processing.  Instead, load drops quickly at the end of the batch, and the batch as a whole finishes earlier.

Querying the PeopleSoft Message Log with SQL

Thu, 2023-04-27 10:41

It is easy to access the PeopleSoft message log in the Process Monitor component, but it can be a little difficult to work with in SQL because it is stored in multiple tables.

This started when I wanted to generate a PeopleSoft log message as a single string of text, so I could investigate shared pool memory errors by searching for ORA-04031 errors.  Ultimately, the string 'ORA-04031' is stored in PS_MESSAGE_LOGPARM, but I wanted to see the whole error message.

  • Each process has a request record on PSPRCSRQST and can have many messages.
  • Each message is stored in the message log table PS_MESSAGE_LOG.
  • The text of each message is stored in the message catalogue table PSMSGCATDEFN.  It can have up to 9 substitution strings (%1, %2, etc).
  • A message can have up to 9 parameters stored on PS_MESSAGE_LOGPARM that are substituted into the message string.

I assemble the message text in a PL/SQL function exactly as PeopleTools programs do, substituting the variables in the message string from the message catalogue with the parameter values.  The PL/SQL function is put into the SQL query as a common table expression so that I don't have to create a function or package in the database.  The function returns the full message text in a CLOB, thus I can then easily manipulate the message string in SQL.

In this case, I wrote a SQL query to search for ORA-04031 (see psmsglogora4031.sql on Github), but the same PL/SQL function can be used in various queries.  

It can be slow to search the generated message string.  It can be faster to search PS_MESSAGE_LOGPARM directly. 
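For example, a direct search of the parameter table for the error string might look like this; it uses the same PeopleTools tables as the function below.

SELECT l.process_instance, l.message_seq, l.dttm_stamp_sec, p.message_parm
FROM   ps_message_log l
,      ps_message_logparm p
WHERE  p.process_instance = l.process_instance
AND    p.message_seq      = l.message_seq
AND    p.message_parm LIKE '%ORA-04031%'
ORDER BY l.dttm_stamp_sec
/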

WITH FUNCTION psmsgtext(p_process_instance INTEGER, p_message_seq INTEGER) RETURN CLOB IS
  l_message_log ps_message_log%ROWTYPE;
  l_message_text CLOB;
BEGIN
  SELECT *
  INTO   l_message_log
  FROM   ps_message_log 
  WHERE  process_instance = p_process_instance
  AND    message_seq = p_message_seq;

  SELECT message_text
  INTO   l_message_text
  FROM   psmsgcatdefn
  WHERE  message_set_nbr = l_message_log.message_set_nbr
  AND    message_nbr     = l_message_log.message_nbr;

  --dbms_output.put_line(l_message_text);
  FOR i IN (
    SELECT *
    FROM   ps_message_logparm
    WHERE  process_instance = p_process_instance
    AND    message_seq = p_message_seq
    ORDER BY parm_seq
  ) LOOP
    --dbms_output.put_line(i.message_parm);
    l_message_text := REPLACE(l_message_text,'%'||i.parm_seq,i.message_parm);
  END LOOP;

  --and tidy up the unused replacements at the end
  RETURN REGEXP_REPLACE(l_message_text,'%[1-9]','');
END;
x as (
select r.prcstype, r.prcsname, r.oprid, r.runcntlid
, l.*, psmsgtext(l.process_instance, l.message_seq) message_text
from ps_message_log l
LEFT OUTER JOIN psprcsrqst r ON r.prcsinstance = l.process_instance
WHERE …
)
select *
from x
ORDER BY dttm_stamp_sec
/
Now, I can easily produce a report of messages, like this:


Process              Process         Operator                             Process  Msg                                   Msg         Msg
Type                 Name            ID         Run Control              Instance  Seq JOBID           PROGRAM_NAME     Set#  Msg#   Sev DTTM_STAMP_SEC
-------------------- --------------- ---------- ---------------------- ---------- ---- --------------- --------------- ----- ----- ----- ----------------------------
MESSAGE_TEXT
------------------------------------------------------------------------------------------------------------------------------------------------------
nVision-ReportBook   RPTBOOK         VP1        NVS_XXXXXXX_99            1234567    1 PRCS SCHDL      psprschd           65    70     0 01/04/2023 20.37.21
Process Request shows status of 'INITIATED' or 'PROCESSING' but no longer running

nVision-ReportBook   RPTBOOK         VP1        NVS_XXXXXXX_99            1234567    2 PRCS SCHDL      psprschd           65    73     0 01/04/2023 20.37.23
PSNT1 failed to post files to the report repository.  Server scheduled to try again on 2023-04-01-20.37.41.239539.  See log
...

Clearing PeopleTools Physical Cache Files after Database Restore or Database Flashback

Fri, 2023-04-14 03:49

I've written previously about how to clear the physical cache files on a PeopleTools process, but I have found myself explaining it a few times recently, so I am going to post about it again.

When you refresh the database of a PeopleSoft system, you need to clear the physical cache files of the PeopleTools processes.  The files are outside the database and they no longer reflect what is inside the database.  This includes all application servers and process schedulers and anywhere where Application Designer or a client process is used.

It is common to refresh a database when testing a PeopleSoft system.  For example, to copy production to a performance test environment.  It is also increasingly common to use Oracle database flashback during testing.  A guaranteed restore point is taken, a test is performed, and then the database is flashed back to that restore point.  Flashback returns the whole database not just to the same logical state, but also the same physical state.  Block for block, the entire database is physically the same as when the restore point was taken.  Thus a test is completely repeatable with the same initial conditions.  However, the database instance will have been restarted during the flashback, so the content of the database memory will have been cleared.

It is also common, after the flashback to then make small changes or corrections, take a new restore point and repeat the test.  Some of those changes might include Application Designer projects that will then be loaded into the physical cache.  Flashing the database back won't change the physical cache files stored outside the database, so they need to be cleared too.  Otherwise, they may have higher version numbers than the objects in the database, and caching won't work correctly.  When you retest, your changes may not be loaded and executed by PeopleTools processes.

The officially approved method is to go around each server and use either the purge option in the psadmin utility or manually delete the files.

However, since at least PeopleTools 5, it has been possible to invalidate all physical cache files on all servers by updating the LASTREFRESHDTTM on the single row in table PSSTATUS.  Any cached object older than the value of LASTREFRESHDTTM will be purged from the cache when the process that references that cache is started.  Therefore, if immediately after a restore or flashback that value is updated to the current system time, all caches will be purged as the processes are restarted.

UPDATE PSSTATUS
SET    LASTREFRESHDTTM = SYSDATE
/
COMMIT
/

Programmatically Suspending and Restarting the Process Scheduler

Wed, 2023-04-12 05:26
I found this question on a message forum, wrote a note, and forgot about it:

Anyone have any tricks on suspending the process schedulers programmatically?  I just tried using the following but the weird thing was I saw at least one of my process schedulers unsuspend itself, so I must be missing an update to a table.

UPDATE PSSERVERSTAT set SERVERSTATUS = '2' where SERVERNAME like '%PSUNX%'

The process scheduler writes its status to SERVERSTATUS so that it can be seen in the Process Monitor.  Instructions to the process scheduler are read from SERVERACTION, so this is the column that must be updated.  Both columns have a set of XLAT values that translate the status.

PeopleSoft Field Name   Description
SERVERSTATUS            Server Status
                        0=Error
                        1=Down
                        2=Suspended
                        3=Running
                        4=Purging
                        5=Running With No Report Node
                        6=Suspended - Disk Low
                        7=Suspended - Offline
                        8=Running - Report Rep. Full
                        9=Overloaded
SERVERACTION            Process Server Action
                        0=None
                        1=Stop
                        2=Suspended
                        3=Restart
                        4=Purge
See PSSERVERSTAT.

You can see how PeopleSoft does this by tracing the Process Monitor component as it issues commands to the process scheduler.  Hence you can issue commands as follows:
  • Stop
update psserverstat
set    serveraction = 1 /*Stop*/
where  serverstatus = 3 /*Running*/
and    servername = ...
/
commit
/
  • Suspend
update psserverstat
set    serveraction = 2 /*Suspend*/
where  serverstatus = 3 /*Running*/
and    servername = ...
/
commit
/
  • Restart (after suspension)
update psserverstat
set    serveraction = 3 /*Restart*/
where  serverstatus = 2 /*Suspended*/
and    servername = ...
/
commit
/
  • Startup (if the Tuxedo domain is running)
update psserverstat
set    serveraction = 3
where  servername = ...
/
commit
/

Oracle SQL Tracing Processes from Startup

Tue, 2023-04-11 08:35

Sometimes, ASH and AWR are not enough.  SQL may not be sampled by ASH if it is short-lived, and even if it is sampled, the SQL may not be captured by AWR.  Sometimes, in order to investigate a problem effectively, it is necessary to use database session SQL trace.  

It is easy to trace a process initiated by the process scheduler with a trigger (see Enabling Oracle Database Trace on PeopleSoft processes with a Trigger).
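The idea of that trigger, in outline, is to fire when a scheduled process sets its run status to 7 (processing) on PSPRCSRQST and enable trace for just that session.  This is only a sketch of the approach described in that article; the trigger name and the hard-coded process name are made up, and the trace level is a matter of choice.

CREATE OR REPLACE TRIGGER sysadm.gfc_trace_on_process_start
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND old.runstatus != '7'
      AND new.prcsname = 'XXX_MY_PROCESS') --hypothetical process name
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = '''||:new.prcsname||'''';
  --level 8 = SQL plus wait events
  EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 8''';
EXCEPTION
  WHEN OTHERS THEN NULL; --never let tracing break the process
END;
/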

Another tactic is to use an AFTER LOGON trigger with logic to look at the program name.  The program name can be read using SYS_CONTEXT().  If it matches what I am looking for, I can enable session trace.

Here is an example I used for the OpenXML nVision server, PSNVSSRV:

  • I want to trace SQL and not any wait events or bind variables.  Therefore, I will set event 10046 at level 1.
  • I also set a tracefile_identifier that will be included in the trace file name, so I can more easily identify the trace file.
REM additional SQL trace triggers
CREATE OR REPLACE TRIGGER sysadm.gfc_nvision_trace_on_logon
AFTER LOGON
ON sysadm.schema
DECLARE
  l_process_instance INTEGER;
  l_program          VARCHAR2(64 CHAR);
  l_sql              VARCHAR2(100);
BEGIN

  SELECT sys_context('USERENV', 'CLIENT_PROGRAM_NAME')
  INTO   l_program
  FROM   dual;

  IF l_program like 'PSNVSSRV%' THEN --then this is a NVISION session
    EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''PSNVSSRV''';
    EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 1''';
  END IF;

EXCEPTION 
  WHEN OTHERS THEN NULL;
END;
/
show errors
ALTER TRIGGER sysadm.gfc_nvision_trace_on_logon ENABLE;
See also Reading Trace files with SQL

Adding Flags to Trace Level Overrides in Process Definitions

Mon, 2022-10-17 06:41

A trace level set in a process definition in PS_PRCSDEFN takes precedence over a trace level set in the process scheduler configuration file (psprcs.cfg).

I often set the process scheduler trace level for Application Engine to 1152 to enable batch timings to both the database batch timings tables and the AE trace file, but then I often find that a trace is left enabled on a few processes to aid performance analysis of a troublesome process.

This script updates the trace level set in the parameter list in the process definition to include the bit flags set by 1152 (to enable batch timings).  

  • The current trace level is extracted with regular expression substring functions.
  • A bitwise OR is performed between the current trace level and the desired settings.  There is no single function to do this in Oracle SQL, but it can be calculated simply (see Oracle blog: There is no BITOR() in Oracle SQL); a worked example follows this list.  
  • The old trace value is replaced with the new one in the parameter list with a regular expression replace function.
  • The version number on the process definition is also updated, as it would be if it were updated by the process definition component in the PIA.  Thus it is correctly re-cached by the process scheduler; the scheduler does not need to be recycled, nor does the cache need to be cleared.
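The bitwise OR is just arithmetic.  Taking the PTDEFSECINRL example from the output below, where the current trace level is 15 and the required flags are 1152, a quick check in SQL:

--x OR y = x + y - BITAND(x,y)
--1152 OR 15 = 1152 + 15 - 0 = 1167
SELECT 1152 + 15 - BITAND(1152, 15) AS new_trace_level FROM dual;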

The script is available on Github.

REM fixprcstracelevel.sql
set pages 99 lines 200 serveroutput on
spool fixprcstracelevel append
ROLLBACK;
DECLARE
  l_counter INTEGER := 0;
  l_trace_expr VARCHAR2(20); /*expression containing TRACE keyword and value*/
  l_req_trace_level INTEGER := 1152; /*trace value set in the scheduler config*/
  l_cur_trace_level INTEGER; /*current trace level*/
  l_new_trace_level INTEGER; /*new calculated trace level*/
  l_parmlist ps_prcsdefn.parmlist%TYPE;
BEGIN
  for i in (
    SELECT t.*
    FROM   ps_prcsdefn t
    WHERE  UPPER(t.parmlist) LIKE '%-%TRACE%'
    AND    prcstype LIKE 'Application Engine'
--  AND    parmlisttype IN('1','2','3')
  ) LOOP
    l_trace_expr := REGEXP_SUBSTR(i.parmlist,'\-trace[ ]*[0-9]+',1,1,'i');
    l_cur_trace_level := TO_NUMBER(REGEXP_SUBSTR(l_trace_expr,'[0-9]+',1,1,'i'));
    l_new_trace_level := l_req_trace_level+l_cur_trace_level-bitand(l_cur_trace_level,l_req_trace_level);
    l_parmlist := REGEXP_REPLACE(i.parmlist,l_trace_expr,'-TRACE '||l_new_trace_level,1,1,'i');

    IF l_new_trace_level = l_cur_trace_level THEN
      dbms_output.put_line(i.prcstype||':'||i.prcsname||':'||i.parmlist||'=>No Change');
    ELSE
      l_counter := l_counter + 1;
      IF l_counter = 1 THEN
        UPDATE psversion
        SET    version = version+1
        WHERE  objecttypename IN('SYS','PPC');

        UPDATE pslock
        SET    version = version+1
        WHERE  objecttypename IN('SYS','PPC');
      END IF;
      dbms_output.put_line(l_counter||':'||i.prcstype||' '||i.prcsname||':'||i.parmlist||'=>'||l_parmlist);
      UPDATE ps_prcsdefn
      SET    version = (SELECT version FROM psversion WHERE objecttypename = 'PPC')
      ,      parmlist = l_parmlist
      WHERE  prcstype = i.prcstype
      AND    prcsname = i.prcsname;
    END IF;
  END LOOP;
  COMMIT;
END;
/
spool off
The script reports the old and new parameter list setting for each process definition altered.  
Below is a sample output:
Application Engine:GL_JEDIT:-TRACE 1159=>No Change
1:Application Engine PTDEFSECINRL:-toolstacepc 2048 -toolstracesql 15 -TRACE 15=>-toolstacepc 2048 -toolstracesql 15 -TRACE 1167
  • TRACE for GL_JEDIT is already 1159, so no change is required.
  • TRACE for PTDEFSECINRL was changed from 15 to 1167.

Reporting View Hierarchies

Wed, 2021-08-11 10:31
It is a feature of PeopleSoft that it uses lots of views to present data within the application, and frequently views reference other views.  There are examples of this design going 5 levels deep in HR and deeper in Financials.  When faced with a SQL execution plan for such a view you often wonder which view referenced which table.

However, the Oracle database describes "dependencies between procedures, packages, functions, package bodies, and triggers accessible to the current user, including dependencies on views created without any database links" in the view ALL_DEPENDENCIES.

This hierarchical query (depend_hier.sql) on ALL_DEPENDENCIES will report the structure of views within views. 

REM depend_hier.sql
undefine view_name
set pages 999 lines 176 long 50000
break on name skip 1 on owner
ttitle 'Dependency Hierarchy'
column my_level format a5 heading 'Level'
column owner format a12
column name format a18
column type format a7
column referenced_type format a7 heading 'Refd|Type'
column referenced_owner format a6 heading 'Refd|Owner'
column referenced_name format a18 heading 'Refd|Name'
column referenced_link_name format a10 heading 'Refd|Link'
column dependency_type heading 'Dep|Type'
column text heading 'View Text' format a80 wrap on
spool depend_hier.&&view_name..lst
with d as (
select * from all_dependencies
union all
select null, null, null, owner, view_name, 'VIEW', null, null
from all_views
where owner = 'SYSADM' and view_name = UPPER('&&view_name')
)
select LPAD(TO_CHAR(level),level,'.') my_level
, d.type, d.owner, d.name
, d.referenced_type, d.referenced_owner, d.referenced_name, d.referenced_link_name
, d.dependency_type
, v.text
from d
left outer join all_views v
on v.owner = d.referenced_owner
and v.view_name = d.referenced_name
connect by nocycle
d.name = prior d.referenced_name
and d.owner = prior d.referenced_owner
start with d.owner IS NULL and d.name IS NULL
/
spool off
ttitle off
For example, this is the report for PS_POSN_HISTORY3 from a demo HCM database. It is only three levels deep. "POSN_HISTORY3 is the third of three nested views which retrieve position incumbent history.  It selects job records with effective dates before position exits to obtain exit salaries."
Wed Aug 11                                                                                                                                                             page    1
Dependency Hierarchy

Refd Refd Refd Refd Dep
Level TYPE OWNER NAME Type Owner Name Link Type View Text
----- ------- ------------ ------------------ ------- ------ ------------------ ---------- ---- --------------------------------------------------------------------------------
1 VIEW SYSADM PS_POSN_HISTORY3 SELECT A.Position_Nbr ,A.Position_Entry_Dt ,A.Emplid ,A.EMPL_RCD ,B.EFFDT ,B.EFF
SEQ ,B.Sal_Admin_Plan ,B.Grade ,B.Step ,B.Comprate ,B.Comp_Frequency ,B.Currency
_Cd ,' ' ,' ' ,' ' ,' ' FROM PS_POSN_HISTORY2 A ,PS_JOB B WHERE B.EmplID = A.Emp
lID AND B.EMPL_RCD = A.EMPL_RCD AND B.Position_Nbr = A.Position_Nbr AND B.EffDt
= ( SELECT MAX(C.EffDt) FROM PS_JOB C WHERE C.EmplID = B.EmplID AND C.EMPL_RCD =
B.EMPL_RCD AND (C.EffDt < A.Position_End_Dt OR (C.EffDt = A.Position_End_Dt AND
C.EffSeq = A.EffSeq - 1))) AND B.Effseq = ( SELECT MAX(C.Effseq) FROM PS_JOB C
WHERE C.EmplID = B.EmplID AND C.EMPL_RCD = B.EMPL_RCD AND (C.EffDt < A.Position_
End_Dt OR (C.EffDt = A.Position_End_Dt AND C.EffSeq = A.EffSeq - 1)))


.2 VIEW SYSADM PS_POSN_HISTORY3 TABLE SYSADM PS_JOB HARD
.2 VIEW VIEW SYSADM PS_POSN_HISTORY2 HARD SELECT A.Position_Nbr , A.Position_Entry_Dt , A.Emplid , A.EMPL_RCD , B.EffDt ,
B.EffSeq , B.Action FROM PS_POSN_HISTORY A , PS_JOB B WHERE A.EmplID = B.EmplID
AND A.Empl_Rcd = B.Empl_Rcd AND B.EffDt = ( SELECT MIN(C.EffDt) FROM PS_JOB C WH
ERE C.EmplID = B.EmplID AND C.Empl_Rcd = B.Empl_Rcd AND (C.EffDt > A.Position_En
try_Dt OR (C.EffDt = A.Position_Entry_Dt AND C.EffSeq > A.EffSeq)) AND ((C.Posit
ion_Nbr <> A.Position_Nbr) OR (C.HR_STATUS <> 'A'))) AND B.EffDt<=TO_DATE(TO_CHA
R(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND B.EffSeq = ( SELECT MIN(D.EffSeq) FROM
PS_JOB D WHERE D.EmplID = B.EmplID AND D.Empl_Rcd = B.Empl_Rcd AND D.EffDt = B.
EffDt AND ((D.Position_Nbr <> A.Position_Nbr) OR (D.HR_STATUS <> 'A')))


..3 VIEW SYSADM PS_POSN_HISTORY2 TABLE SYSADM PS_JOB HARD
..3 VIEW VIEW SYSADM PS_POSN_HISTORY HARD SELECT A.Position_Nbr ,A.Position_Entry_Dt ,A.Emplid ,A.EMPL_RCD ,A.EFFSEQ ,A.EF
FDT ,A.Sal_Admin_Plan ,A.Grade ,A.Step ,A.CompRate ,A.Comp_Frequency ,A.Currency
_Cd ,' ' ,' ' ,' ' ,' ' FROM PS_Job A WHERE A.Position_Entry_Dt = A.Effdt AND A.
Effseq = ( SELECT MIN(B.Effseq) FROM PS_Job B WHERE B.Emplid = A.Emplid AND B.EM
PL_RCD = A.EMPL_RCD AND B.Effdt = A.Effdt AND B.Position_Nbr = A.Position_Nbr)


...4 VIEW SYSADM PS_POSN_HISTORY TABLE SYSADM PS_JOB HARD
We can see from the report that view PS_POSN_HISTORY3 references view PS_POSN_HISTORY2, which in turn references view PS_POSN_HISTORY.
Each of the views also contains multiple references to PS_JOB that perform various effective date/sequence sub-queries.  Where there are multiple references to the same object, there is still only one dependency row.
The script is available on Github as a part of my psscripts repository.

Oracle Active Data Guard in PeopleSoft with Oracle 19c DML Redirection

Fri, 2021-08-06 08:50

Active Data Guard is a configuration option that became available in Oracle 11g where a standby database is maintained as a synchronised physical replica of the primary database and is also open for read-only SQL queries.

PeopleSoft added configuration to direct certain read-only components and processes to an ADG standby using secondary connections in the application servers and process schedulers.  However, in PeopleSoft, all scheduled processes update at least the process scheduler request tables, even if they make no updates to application tables.  This cannot be done on a read-only standby database and must be directed back to the primary database.

PeopleBooks sets out a method for Implementing Active Data Guard (this link is to the PeopleTools 8.58 documentation).  It uses a second schema to which PeopleSoft application servers and process schedulers connect using a second access profile.  The second schema contains synonyms for each table and view that either point to the corresponding object in the original schema on the ADG standby or, if the object is going to be updated by the application, point via database links to the corresponding object in the primary database.  This approach requires knowledge of which tables are updated during otherwise read-only processing, a lot of scripting to generate all the synonyms and grants, and ongoing maintenance as new objects are added to the database.

However, that approach is rendered obsolete by Active Data Guard DML redirection, a new feature in 19c.  This post explains how to configure PeopleSoft to make use of Active Data Guard on Oracle 19c with DML redirect.

With DML redirection enabled, updates on the secondary database are automatically redirected back to the primary database via a SQL*Net connection between the databases (not unlike a database link), and then they will be replicated back to the standby database like any other change.  PeopleSoft no longer needs to be configured specially to handle updated tables differently.  Consequently, the PeopleSoft ADG configuration is massively simplified.

Processes no longer have to be strictly read-only to run on the ADG database.  If a process makes only a small number of updates, DML redirection can handle them.

Database Initialisation Parameters
The parameter ADG_REDIRECT_DML should be set to true on both the primary and active data guard standby databases.
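For example, it can be set dynamically (a minimal sketch; adjust the scope to the way you manage initialisation parameters, and set it on both databases):
ALTER SYSTEM SET adg_redirect_dml = TRUE SCOPE=BOTH;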
PeopleSoft Connection Configuration
A second row must be added to PS.PSDBOWNER to map the TNS name of the standby database to the database owner ID, in this case SYSADM.
SQL>select * from ps.psdbowner;

DBNAME OWNERID
-------- --------
FINPRD SYSADM
FINADG SYSADM
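A minimal sketch of adding that mapping (assuming the standby TNS alias FINADG used in this example; the DBNAME value must match the name with which the domains connect to the standby):
INSERT INTO ps.psdbowner (dbname, ownerid) VALUES ('FINADG', 'SYSADM');
COMMIT;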
Application Server Configuration
To allow read-only components in the PIA (such as Query Manager) to be redirected to the standby database, all Application Server domains should still be configured to connect to both the primary database and the standby database, as described in the PeopleSoft documentation.
[Startup]
;=========================================================================
; Database Signon settings
;=========================================================================
DBName=FINPRD
DBType=ORACLE
UserId=PSAPPS
UserPswd={V2}FEhk7rIFt2f0GRYaH6B9la8DXXMNtsz1kPZ+
ConnectId=PEOPLE
ConnectPswd={V2}Mw3RFr0MHFBpJHbqXh7Dx9qCsO7TFT4G
StandbyDBName=FINADG
StandbyDBType=ORACLE
StandbyUserId=PSAPPS
StandbyUserPswd={V2}Ski/r2xYCvbTbBhXOGfH8HO7zCRxoDFK5rmb
Process Scheduler Configuration
The approach for Process Schedulers is slightly different.
Application Engine Limitation in Active Data Guard
It is a documented limitation of Application Engine that only PSAESRV Application Engine server processes can connect via the second connection to the standby database.  If a standalone PSAE process attempts this, the connection will fail.  
See:
  • Oracle Support Note: E-AE: Application Engine Process Might Stay in Initiated Status if PSAESRV Disabled on PeopleSoft ADG Enabled Environment (Doc ID 1641764.1)
    • This was raised as Bug 18482301: PSAE may stay in initiated status on ADG configured environment.  It was closed as 'not a bug'.
  • Using Two Temporary Tablespaces in PeopleSoft.
The failure appears in the Application Engine log as a sign-on error:
Error in sign on
Database Type: 7 (ORACLE)
Database Name: HCM91
Server Name:
OperID:
ConnectID: people
Process Instance: 0
Reason: Invalid user ID or password for database signon. (id=)
Note: Attempt to authenticate using GUID 6a1ced41-2fe0-11e2-9183-be3e31d6e740
Invalid command line argument list.
process command line: -CT ORACLE -CD HCM91 -GUID 6a1ced41-2fe0-11e2-9183-be3e31d6e740 -SS NO -SN NO
GUID command line : -CT ORACLE -CD HCM91 -CO "PS" -CP Unavailable -R 1 -I 852 -AI AEMINITEST -OT 6 -FP
"C:\app\pt\appserv\prcs\HCM91PSNT\log_output\AE_AEMINITEST_852\" -OF 1
To continue to use stand-alone PSAE processes, as recommended in the PeopleTools Performance Guidelines Red Paper (Doc ID: 747389.1), it is necessary to configure separate process schedulers that connect only to the Active Data Guard standby database, and processes will need to be redirected appropriately by process categories to these schedulers.
System Settings
Process Categories can be used to restrict certain processes to certain process schedulers.  In the Process Scheduler System Settings component, create a new category ADGOnly.
Process Scheduler Process Category Admin
New Process Schedulers for ADG
New process schedulers should be created for running ADG-only jobs.  They will only be connected to the standby database; the standby connection should be left blank.  All of the SQL updates made by the process scheduler processes will be handled by DML redirection.  You will need at least two of these schedulers so that they do not become a single point of failure.
[Startup]
;=========================================================================
; Database Signon settings
;=========================================================================
DBName=FINADG
DBType=ORACLE
UserId=PSAPPS
UserPswd={V2}YoAQq7Ut4WBHJL89N9gv9E0AWwLaecGZ4qep
ConnectId=PEOPLE
ConnectPswd={V2}OlSYHuFMZa2c8uonfYkKk+3+APYvTU9N
StandbyDBName=
StandbyDBType=
StandbyUserId=
StandbyUserPswd=
These schedulers will ONLY run:
  • Processes in the new ADGOnly process category.  
  • A single LOADCACHE category process will be permitted.
  • The maximum concurrency of all other categories will be 0.
  • Other process types, such as SQR, might be needed if they are to be run on the ADG standby.
They should not run the master process scheduler process; that should only run on schedulers connected to the primary database.
ADG Process Scheduler Configuration
Existing Process Schedulers
All process categories apply to all process schedulers.  The concurrency of the ADG category should be set to 0 on all other process schedulers to prevent it from running there.  These schedulers will remain connected to the primary database only.  They will not be connected to the secondary database.
[Startup]
;=========================================================================
; Database Signon settings
;=========================================================================
DBName=FINPRD
DBType=ORACLE
UserId=PSAPPS
UserPswd={V2}EcxeV3mit3GMT5kDfz/z+s0L9B1aUb6ix04f
ConnectId=PEOPLE
ConnectPswd={V2}NtXafW7hlcGY016bhazl2kqqvlSNYMK1
StandbyDBName=
StandbyDBType=
StandbyUserId=
StandbyUserPswd=
Process Definition
Normally, the processes that are to be run on ADG should be marked read-only and then they will be scheduled using the secondary connection.    However, Application Engine cannot be run in standalone PSAE mode via the second connection.  So, the ADG process schedulers are configured to connect directly to the ADG database via the primary connection.  Thus, the read-only flag has no effect.  Instead, all read-only processes should be moved to the ADGOnly category.
Configuration by SQL
It may be easier to update the process scheduler configuration by SQL:
  • The concurrency of the ADGOnly category is set to the maximum API-aware concurrency on the ADG process schedulers, and to 0 on all other schedulers.
  • PSQUERY, any read-only processes, and any processes already in the ADGOnly category are all marked as being both read-only and in the ADGOnly category.
update pslock 
set version = version + 1
where objecttypename IN('SYS','PPC')
/
update psversion
set version = version + 1
where objecttypename IN('SYS','PPC')
/
update ps_servercategory c
set maxconcurrent = CASE WHEN servername like 'PSUNX_A%'
THEN (SELECT s.maxapiaware FROM ps_serverdefn s
WHERE s.servername = c.servername)
ELSE 0 END
where prcscategory = 'ADGOnly'
/
update ps_serverdefn
set version = (SELECT version from psversion where objecttypename = 'PPC')
, lastupddttm = systimestamp
/
update ps_prcsdefn
set version = (SELECT version from psversion where objecttypename = 'PPC')
, prcsreadonly = 1
, prcscategory = 'ADGOnly'
, lastupddttm = systimestamp
where prcsreadonly = 1
or prcscategory = 'ADGOnly'
or prcsname = 'PSQUERY'
/
select prcstype, prcsname, prcscategory, prcsreadonly
from ps_prcsdefn
where prcsreadonly = 1
or prcscategory = 'ADGOnly'
or prcsname = 'PSQUERY'
/
select * from ps_servercategory
where prcscategory IN('ADGOnly')
order by 2,1
/
commit
/
Sample Output
PSOFT-FINADG>Select prcstype, prcsname, prcscategory, prcsreadonly
2 From ps_prcsdefn
3 where prcsreadonly = 1
4 or prcscategory = 'ADGOnly'
5 or prcsname = 'PSQUERY'
6 /

PRCSTYPE PRCSNAME PRCSCATEGORY P
------------------------------ ------------ ------------------------------ -
Application Engine AEMINITEST ADGOnly 1
COBOL SQL PTPDBTST ADGOnly 1
SQR Report PTSQRTST ADGOnly 1
Application Engine FB_GEN_EXTR ADGOnly 1
SQR Report XRFWIN ADGOnly 1
SQR Report SWPAUDIT ADGOnly 1
SQR Report SYSAUDIT ADGOnly 1
SQR Report XRFAPFL ADGOnly 1
SQR Report XRFAEPL ADGOnly 1
SQR Report XRFPGDZ ADGOnly 1
SQR Report DDDAUDIT ADGOnly 1
SQR Report XRFEVPC ADGOnly 1
SQR Report XRFFLPC ADGOnly 1
SQR Report XRFFLPN ADGOnly 1
SQR Report XRFFLRC ADGOnly 1
SQR Report XRFIELDS ADGOnly 1
SQR Report XRFMENU ADGOnly 1
SQR Report XRFPANEL ADGOnly 1
SQR Report XRFPCFL ADGOnly 1
SQR Report XRFPNPC ADGOnly 1
SQR Report XRFRCFL ADGOnly 1
SQR Report XRFRCPN ADGOnly 1
Application Engine PSQUERY ADGOnly 1
Application Engine PSCONQRS ADGOnly 1

SYSADM-FINADG>select * from ps_servercategory
2 where prcscategory IN('ADGOnly')
3 order by 2,1
4 /

SERVERNA PRCSCATEGORY P MAXCONCURRENT
-------- ------------------------------ - -------------

PSNT ADGOnly 5 0
PSUNX ADGOnly 5 0
PSUNX_A1 ADGOnly 5 5
PSQUERY Application Engine Amendment
If you have PS/Queries that reference a remote database via a database link then you may experience errors when you run them via the scheduled Application Engine process PSQUERY on an ADG database.
Using a database link, even if only for a query, starts a database transaction, and then DML redirection cannot work because it also needs to start a transaction.  Equally, if a transaction is already open through DML redirection, the transaction associated with the database link cannot be started.  The two transactions are mutually exclusive.
PeopleTools 8.58.07 - Application Engine
Copyright (c) 1988-2021 Oracle and/or its affiliates.
All Rights Reserved

PeopleTools SQL Trace value: 159 (0x9f): /cs/psoft/cfg/FINPRD/appserv/prcs/FINPRDA1/log_output/AE_PSQUERY_xxxxxxxx/AE_PSQUERY_xxxxxxxx.trc
PeopleTools PeopleCode Trace value: 64 (0x40): /cs/psoft/cfg/FINPRD/appserv/prcs/FINPRDA1/log_output/AE_PSQUERY_xxxxxxxx/AE_PSQUERY_xxxxxxxx.trc

File: /vob/peopletools/src/pssys/qpm.cppSQL error. Stmt #: 8495 Error Position: 3055 Return: 16000 - ORA-16000: database or pluggable database open for read-only access
Failed SQL stmt: SELECT …
<a query that references a remote database via a database link>

Error in running query because of SQL Error, Code=16000, Message=ORA-16000: database or pluggable database open for read-only access (50,380)

PeopleCode Exit(1) Abort invoked by Application at PSQUERY.MAIN.ExecQry. (108,543)

Process xxxxxxxx ABENDED at Step PSQUERY.MAIN.ExecQry (PeopleCode) -- RC = 16 (108,524)

Process %s ABENDED at Step %s.%s.%s (Action %s) -- RC = %s
So we need to close the transactions by making the PSQUERY Application Engine program commit.  Application Engine explicitly commits after each step, so I have added an extra step that does nothing other than cause Application Engine to issue a commit.

Disable Query Statistics
I have found that query statistics can cause a problem when the query is run on ADG.
PeopleTools 8.58.07 - Application Engine
Copyright (c) 1988-2021 Oracle and/or its affiliates.
All Rights Reserved

PeopleTools SQL Trace value: 159 (0x9f): /cs/psoft/cfg/FINPRD/appserv/prcs/FINPRDA2/log_output/AE_PSQUERY_19356674/AE_PSQUERY_19356674.trc

File: /vob/peopletools/src/pssys/qdmutil.cppSQL error. Stmt #: 4608 Error Position: 91 Return: 16397 - ORA-16397: statement redirection from Oracle Active Data Guard standby database to primary database failed
Failed SQL stmt: SELECT EXECCOUNT, AVGEXECTIME, AVGFETCHTIME, LASTEXECDTTM, AVGNUMROWS, OPRID, QRYNAME FROM PSQRYSTATS WHERE OPRID = :1 AND QRYNAME = :2 FOR UPDATE OF EXECCOUNT

Application Engine program PSQUERY ended normally
They can be disabled in the PIA:
  • Navigate to: PeopleTools, Utilities, Administration, Query Administration, 
  • Go to the last tab (Settings), 
  • Uncheck Run Query Statistics.
Enable Query Execution Logging
Query stats (logged in the table PSQRYSTATS) are not particularly useful because they only compute average metrics over an undefined period.  Instead, query execution logging (to the table PSQRYEXECLOG) is far more useful for performance analysis because you know who ran the query, when it ran, and how many rows it retrieved.  It is enabled for individual queries.  It can be enabled across the board with the following SQL.
update pslock
set version = version + 1
where objecttypename IN('SYS','QDM')
/
update psversion
set version = version + 1
where objecttypename IN('SYS','QDM')
/
update psqrydefn
set version = (SELECT version from psversion where objecttypename = 'QDM')
, execlogging = 'Y'
, lastupddttm = systimestamp
where execlogging != 'Y'
/

Running nVision in a Mixed OpenXML/Excel mode

Mon, 2021-04-12 05:58
This blog post follows on from a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.
OpenXML is an alternative method for running PeopleSoft nVision introduced in PeopleTools 8.54.  It is recommended by Oracle and is the default configuration in the process scheduler.  It generally outperforms Excel because it is a lightweight process and it evades Excel's concurrency limitations.
Only one Excel nVision report will run concurrently on a Windows server because Excel itself is a single-threaded process (since Microsoft Excel 2010).  If you need to run 10 concurrent nVision reports, you would need 10 Windows servers, each running a process scheduler that will run a single nVision report in Excel mode.  One of the major advantages of OpenXML is that it is not subject to this restriction.
What is the Problem?
I have worked with PeopleSoft Financials customers where it became apparent that some of their nVision layouts did not run correctly on OpenXML, and until they could be redeveloped they had to run on Excel.  The problems seem to be around pagination and nPlosion.  Also, some variables are not populated when the layouts are run in OpenXML.  For example, the report on the left was run in OpenXML, and the one on the right was run in Excel.
I have also found a very few layouts that, contrary to expectation, perform better on Excel than OpenXML.
Reverting to Excel is the only other option, but to do so across the board would have a significant performance impact.  However, my experience suggests that customers can run the majority of their layouts perfectly successfully on OpenXML, but a relatively small proportion still has to be run on Excel.  
A mixed mode of operation is therefore a good option, where nVision is run on OpenXML by default, but certain reports and report books are run on Excel.  It avoids having to train end-users to run certain reports on certain process schedulers.  
This blog sets out a method by which reports that use certain nVision layouts are automatically redirected to process schedulers that run Excel nVision, using a combination of process categories and two database triggers that adjust the request records before they are picked up by the process schedulers.
Excel nVision Process Schedulers and OpenXML nVision process schedulers
Whether nVision is run in Excel or OpenXML mode is determined by the UseExcelAutomation variable in the nVision section of the process scheduler configuration file (psprcs.cfg).  It applies to all nVision processes run by that scheduler.  However, this variable is not well documented in the PeopleTools manuals.

[nVision]
;=========================================================================
; General settings for nVision
;=========================================================================

UseExcelAutomation=2
Today, most PeopleSoft systems run most of their batch processes on Linux/Unix servers.  nVision is one of the few PeopleSoft process types that had to run on Windows, and that is still the case for Excel nVision.  It is typical for these PeopleSoft systems to have several Windows servers dedicated to nVision execution.  To run in mixed mode, it is necessary to have an Excel nVision process scheduler and an OpenXML nVision process scheduler on each server.
Process Categories 
In the past, I have suggested creating additional process types, but I think it is easier, and more in keeping with the process scheduler structure, to create additional process definitions and process categories.  It is necessary to create two new process categories (in PeopleTools -> Process Scheduler -> System Settings).
Servers
Process Schedulers (or Servers) should be created in pairs for each physical server available.  In this example, I have created PSNT_X1 to run the OpenXML category and PSNT_E1 to run the Excel category.  
Note that:
  • Maximum concurrency for the Excel server has been reduced to 1.
  • The maximum number of API aware processes for PSNT_E1 has also been reduced to 1.
  • The 'Default' category is disabled on both PSNT_E1 and PSNT_X1.
  • These servers only run nVision and PSJob process types.
  • They load balance within the same operating system (although this is not appropriate for PSNT_X1 if you also run OpenXML on other non-Windows process schedulers).

Process Definitions
The NVSRUN and RPTBOOK process definitions have been duplicated as NVSRUNE and RPTBOOKE respectively.  NVSRUN and RPTBOOK are allocated to the nVisionOpenXML process category, but NVSRUNE and RPTBOOKE are allocated to the nVisionExcel process category.
The whole configuration can be set up manually through the Process Scheduler components. However, I have also written a couple of scripts to manage the configuration that should ensure consistency.
Scripts
When a process request is submitted, either in the PeopleSoft application or by an enterprise scheduling tool (such as AutoSYS or Control-M), request records are inserted into:
  • PSPRCSQUE, which is polled by the process schedulers when looking for work.
  • PSPRCSRQST, which reports the status of processes and is exposed in the Process Monitor component in PeopleSoft.
These two tables are siblings, both keyed on PRCSINSTANCE.  Both have the following columns (see the example query after this list):
  • PRCSNAME determines the name of the process to be run.
  • PRCSCATEGORY specifies the category of the process in the request.
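For example, an illustrative query (not one of the delivered or redirect scripts) that compares the process name and category recorded in both tables for nVision requests:
SELECT r.prcsinstance, r.prcstype, r.prcsname, r.prcscategory
,      q.prcsname que_prcsname, q.prcscategory que_prcscategory
FROM   psprcsrqst r
,      psprcsque q
WHERE  q.prcsinstance = r.prcsinstance
AND    r.prcstype LIKE 'nVision%'
ORDER BY r.prcsinstance
/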
Triggers
I have created a pair of almost identical database triggers that fire on insert into each of these tables and make certain changes to requests for nVision layouts that should be run on Excel:
  • Change the process name from RPTBOOK to RPTBOOKE, or from NVSRUN to NVSRUNE.
  • Change the process category from nVisionOpenXML to nVisionExcel.  It is changing the process category that makes the difference to which process scheduler picks up the request, but the name is also changed to maintain consistency.
  • If the process request specifies a particular process scheduler (specified in SERVERNAMERQST) that cannot run the updated process category, then the requested server name is blanked out.
If any one layout in a ReportBook is listed as requiring Excel, then the whole ReportBook is moved to Excel because the report book runs as a single process.  This may become a reason to split ReportBooks where only some layouts have to run on Excel.
Script gfc_nvsion_excel_redirect_triggers.sql creates the meta-data table PS_NVS_REDIR_EXCEL (sketched after the list below), and the triggers on PSPRCSRQST and PSPRCSQUE.
  • The triggers will only fire when requests for nVision-Report/NVSRUN or nVision-ReportBook/RPTBOOK are submitted.
  • There is an almost identical trigger on each table (apart from the conditionally compiled debug code).  I will only exhibit the trigger on PSPRCSRQST in this post.
  • Normally, one would not code WHEN OTHERS THEN NULL exception handlers in PL/SQL.  However, I do this in triggers on process scheduler tables to prevent any error raised by the trigger from causing the scheduler to crash.
  • Conditionally compiled debug code in the trigger is not compiled if the mydebug flag is set to FALSE.
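For reference, this is roughly what the PS_NVS_REDIR_EXCEL metadata table looks like.  This is only a sketch: the actual DDL is created by the script, and only the LAYOUT_ID and EFF_STATUS columns referenced by the triggers and the metadata script are assumed here (column lengths are illustrative).
CREATE TABLE sysadm.ps_nvs_redir_excel
(layout_id   VARCHAR2(30)  NOT NULL /*nVision layout that must run on Excel*/
,eff_status  VARCHAR2(1)   NOT NULL /*A=active, I=inactive*/
,CONSTRAINT ps_nvs_redir_excel_pk PRIMARY KEY (layout_id)
)
/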

ALTER SESSION SET PLSQL_CCFLAGS = 'mydebug:FALSE';

CREATE OR REPLACE TRIGGER sysadm.gfc_nvision_excel_redirect_rqst
BEFORE INSERT ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.prcstype IN('nVision-Report','nVision-ReportBook')
AND new.prcsname IN('RPTBOOK','NVSRUN')

)
DECLARE
l_excel INTEGER := 0;
l_maxconcurrent INTEGER; /*max concurrency of the new category on the requested server*/
BEGIN
$IF $$mydebug $THEN dbms_output.put_line('Entering Trigger psoft.gfc_nvision_excel_redirect_rqst'); $END

IF :new.prcstype = 'nVision-ReportBook' THEN
--check for reportbook running report that uses layout on Excel list
SELECT 1
INTO l_excel
FROM psnvsbookrequst b
, ps_nvs_report n
, ps_nvs_redir_excel e
WHERE b.oprid = :new.oprid
AND b.run_cntl_id = :new.runcntlid
AND b.eff_status = 'A'
AND n.business_unit = b.business_unit
AND n.report_id = b.report_id
AND n.layout_id = e.layout_id
AND e.eff_status = 'A'
AND rownum=1;
ELSE
--look in command line for report running layout on Excel list
SELECT 1
INTO l_excel
FROM psprcsparms p
, ps_nvs_report n
, ps_nvs_redir_excel e
WHERE p.prcsinstance = :new.prcsinstance
AND n.report_id = substr(regexp_substr(p.parmlist,'-NRN[^ ]+'),5)
AND n.layout_id = e.layout_id
AND e.eff_status = 'A'
AND rownum=1;
END IF;

--update name of process if to be run on Excel
$IF $$mydebug $THEN dbms_output.put_line('found Excel nVision layout for oprid='||:new.oprid||', runcntlid='||:new.runcntlid); $END
IF :new.prcsname IN('RPTBOOK') THEN
:new.prcsname := 'RPTBOOKE';
ELSE
:new.prcsname := :new.prcsname||'E';
END IF;

--get category of new process definition
SELECT d.prcscategory
INTO :new.prcscategory
FROM ps_prcsdefn d
WHERE d.prcstype = :new.prcstype
AND d.prcsname = :new.prcsname;

--get max concurrency of new category on new server
SELECT maxconcurrent
INTO l_maxconcurrent
FROM ps_servercategory
WHERE prcscategory = :new.prcscategory
AND servername = :new.servernamerqst;

--if request assigned to server where it cannot run blank out server assignment and allow load balancing to determine it
IF l_maxconcurrent = 0 THEN
:new.servernamerqst := ' ';
END IF;

$IF $$mydebug $THEN dbms_output.put_line('set process name to '||:new.prcsname); $END
EXCEPTION
WHEN no_data_found THEN
$IF $$mydebug $THEN dbms_output.put_line('No excel redirect found'); $ELSE NULL; $END
WHEN others THEN
$IF $$mydebug $THEN dbms_output.put_line('Other Error'); $ELSE NULL; $END
END;
/
show errors
MetaData
The script gfc_nvsion_excel_redirect_metadata.sql populates a list of nVision layouts that must run on Excel into the metadata table PS_NVS_REDIR_EXCEL. You have to enter your own list of nVision layouts here.
REM gfc_nvsion_excel_redirect_metadata.sql
REM (c)Go-Faster Consultancy 2021
REM load metadata of layouts that have to run on Excel rather than OpenXML

spool gfc_nvsion_excel_redirect_metadata

INSERT INTO ps_nvs_redir_excel VALUES ('EXCELNVS','A');
commit;

spool off
Other Scripts
  • excel_only_reportbooks.sql determines which nVision ReportBooks contain some, but not all, layouts that have to be run on Excel.  These are candidates to be split up.
                                                              ReportBooks with both Excel and OpenXML nVision layouts

Number of
All Excel
OPRID RUN_CNTL_ID Layouts Layouts Excel Layouts OpenXML Layouts
---------- ------------------------------ ------- ---------- ------------------------------ ----------------------------------------------------------------------
BATCH ADHOC_NVISION 8 1 GLXXXO21 GLXYZD03, GLXXXO03, GLXXXO05, GLXXXO22, GLXXXO23, GLXXXO31, GLXXXO32
BATCH ADHOC_09062016 3 1 ZYXVIS14 ZYXVBS14, ZYXVIS12
BATCH GLXXX_GLXXXO02_ABDC_YYY 2 1 GLXXXO02 GLXXXO28
BATCH GLXXX_GLXXXO21_ABDC_YYY 3 2 GLXXXO21, GLXXXO98 GLXXXO71
BATCH GLXXX_ZYXB4080_M000_ZZZ 10 2 ZYXVBS64, ZYXVIS14 ZYXVBS04, ZYXVBS14, ZYXVBS14_LED, ZYXVBS16, ZYXVBS16_LED, ZYXVBS54, ZB
UVIS04, ZYXVIS16
  • nvision_processsmonitor.sql reports on nVision ReportBooks scheduled in the last 24 hours, their process category and the excel layouts within them, so that you can verify that the configuration is working.
  •                                                        Process         Server   Server   Server
    PRCSINSTANCE RUNDTTM PRCSNAME Category Request Run Assign RUNCNTLID STA EXCEL_LAYOUT_IDS
    ------------ ---------------------------- ------------ --------------- -------- -------- -------- ------------------------------ --- ------------------------------
    12345680 31-MAR-21 07.42.51.000000 AM RPTBOOKE nVisionExcel PSNT_E1 PSNT_E1 GLXXX_GLXXXO10_ABDC_YYY OK GLXXXO10
    12345681 31-MAR-21 07.43.25.000000 AM RPTBOOKE nVisionExcel PSNT_E1 PSNT_E1 GLXXX_GLXXXO21_ABDC_YYY OK GLXXXO21, GLXXXO98
    12345683 31-MAR-21 08.06.42.000000 AM RPTBOOKE nVisionExcel PSNT_E1 PSNT_E1 GLXXX_GLXXXO98_ADHOC OK GLXXXO98
    12345684 31-MAR-21 08.32.12.000000 AM RPTBOOKE nVisionExcel PSNT_E1 PSNT_E1 GLXXX_GLXXXO21_ABDC_YYY OK GLXXXO21, GLXXXO98
    12345685 31-MAR-21 09.18.23.000000 AM FBRPTBK nVisionOpenXML PSNT_X1 PSNT_X1 GLXXX_GLXYAD06_ABDC_YYY OK
    12345686 31-MAR-21 09.20.01.000000 AM RPTBOOK nVisionOpenXML PSNT_X1 PSNT_X1 GLXXX_GLXYBP00_ABDC_YYY OK
    12345687 31-MAR-21 09.22.21.000000 AM RPTBOOK nVisionOpenXML PSNT_X1 PSNT_X1 GLXXX_GLXYAD03_ABDC_YYY OK
    12345688 31-MAR-21 09.23.11.000000 AM RPTBOOK nVisionOpenXML PSNT_X1 PSNT_X1 GLXXX_GLVLAD03_ABDC_XXX OK
    12345689 31-MAR-21 09.24.18.000000 AM RPTBOOK nVisionOpenXML PSNT_X1 PSNT_X1 GLXXX_ZYXAB001_M000_ZZZ OK
    All the scripts mentioned in this blog have been included in my nVision Github repository.

    Tuning Dynamically Generated SQL from PeopleSoft COBOL and Application Engine

    Tue, 2020-12-01 04:58

    When working on a performance issue, you may reach the point where you want to introduce one or more hints into a SQL statement.  It is often tempting to simply add the hints to the source code.  Doing so may appear to be simpler.  That is often the case with Application Engine programs, however, it is much more difficult in PeopleSoft COBOL programs.  

    A strong argument against such code change is that having made it, you have also to functionally test the change and push it through the entire release management process to get it into production.  Then, should you ever want to change or remove the hints, you have to go through the entire process again.

    Oracle provides several mechanisms to define a database object containing a hint or set of hints and to apply them to matching SQL statements.  These mechanisms work best with SQL that uses bind variables rather than literals.  If an otherwise identical SQL statement has different literal values then it has a different SQL_ID and is treated by the database as a new statement.  SQL Plan Baselines, SQL Patches and SQL Profiles match to the specific text of a SQL statement.  Different literal values will prevent matching and defeat these mechanisms.  These techniques must still be tested and migrated in a controlled manner, but they have no functional impact and so only testing of performance is needed.

    SQL Profiles can also perform forced matching, where statements that are similar except for literal values are matched.  However, note that they also require licencing of Tuning Pack.

    Some parts of PeopleTools and the PeopleSoft applications are better than others at producing sharable SQL with bind variables.  Most of the SQL generated by the component processor uses bind variables.  In Application Engine, if the ReUse Statement property is not set, which it is not by default, the %BIND fields are substituted with their literal values in the SQL statement.  However, if the property is set then %BIND fields become bind variables.  Over the years much more PeopleSoft code has been delivered with this attribute enabled.  Doing so has significant performance benefits (see Performance Benefits of ReUse Statement Flag in Application Engine).  
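    As a hypothetical illustration (the record PS_EXAMPLE_TMP and the statement are invented for this sketch), the same Application Engine %Bind resolves differently depending on the ReUse Statement property:
    REM Application Engine step SQL: UPDATE %Table(EXAMPLE_TMP) SET PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE)
    REM With ReUse Statement = No, the %Bind is resolved to a literal, so each run produces a different SQL_ID
    UPDATE PS_EXAMPLE_TMP SET PROCESS_INSTANCE = 12345678 WHERE PROCESS_INSTANCE = 0;
    REM With ReUse Statement = Yes, the %Bind becomes a bind variable, so the SQL text is sharable
    UPDATE PS_EXAMPLE_TMP SET PROCESS_INSTANCE = :1 WHERE PROCESS_INSTANCE = 0;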

    Where, under normal circumstances, I might use a baseline or patch to inject a hint or profile of hints into a particular SQL statement (i.e. where I am dealing with a single SQL_ID), if the statement has literal values that change, then each statement has a different SQL_ID.  I have experimented with setting CURSOR_SHARING to FORCE at session-level for a specific scheduled process, but I have always had very poor experiences with that approach.  It invariably causes more problems than it solves.  Instead, I use force matched SQL Profiles.

    The PeopleTools documentation sets out situations where ReUse Statement cannot be set.  This includes dynamically generated code where %BIND(…,NOQUOTES) is used to embed a piece of SQL held in a variable.  This is a very common technique in PeopleSoft; often dynamically generated code is driven by the application configuration.  

    We also see a similar design in PeopleSoft's COBOL programs.  Static statements are loaded from the stored SQL statements table (PS_SQLSTMT_TBL) and do use bind variables, but dynamic statements are assembled at runtime from static fragments in the COBOL code and any variable values are embedded as literals rather than using bind variables.

    Forced matching will allow a SQL profile to match a statement that is the same except for different literal values.   However, dynamic SQL statements can change in ways that are beyond that, including:

    • Different instances of working storage tables can be used by different instances of the same process.
    • Different columns can be used in select and group by clauses.
    • Different criteria can be introduced. 
    • A different number of terms in an IN() condition.

    Occasionally, and I really mean very occasionally when I have exhausted other alternatives, I have dynamically created groups of SQL Profiles (still with forced matching) to cover every permutation of the variations of the dynamic SQL statement.

    Example
    Here is a dynamic statement from such a COBOL process, FSPCCURR.  This General Ledger Revaluation process adjusts the base currency value of the account balances by creating adjusting entries for the accounts being revalued. It creates corresponding entries for any gain or loss that results from the revaluation.  It was captured by AWR, and I have extracted it with DBMS_XPLAN.DISPLAY_AWR.
    The ledger table (PS_LEDGER) is joined to a working storage table of tree selectors (PS_TREE_SEL10_R001) and working storage table (PS_CURR_WRK_RT001) and the result is put into another working storage table (PS_CURR_WRK_TBL001).
    INSERT INTO PS_CURR_WRK_TBL001 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFI
    LIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,BOOK_CODE,BUDGET_REF,CHARTFIELD1,C
    HARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEA
    R,FUND_CODE,GL_ADJUST_TYPE,LEDGER,OPERATING_UNIT,POSTED_BASE_AMT,POSTED_
    TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,POSTED_TRAN_AMT,POSTED_TRAN_CR
    ,POSTED_TRAN_DR,PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATIST
    ICS_CODE,RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,
    TARGET_CURRENCY) SELECT A.ACCOUNT,003,AFFILIATE,' ',' ','
    ',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,'
    ',A.CURRENCY_CD,DATE_CODE,DEPTID,2020,FUND_CODE,GL_ADJUST_TYPE,'X_UKCORE
    ',OPERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),0,0,0,0,0,
    0002858795,PRODUCT,PROGRAM_CODE,PROJECT_ID,' ',R.RATE_DIV,R.RATE_MULT,'
    ',ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02
    ),ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02 )
    - SUM(A.POSTED_BASE_AMT),'USD' FROM PS_LEDGER A , PS_TREE_SEL10_R001 B
    , PS_CURR_WRK_RT001 R WHERE A.LEDGER='X_UKCORE' AND A.FISCAL_YEAR =
    2020 AND A.ACCOUNTING_PERIOD IN ( 0, 1, 2, 3) AND B
    .PROCESS_INSTANCE=0002858795 AND B .CHARTFIELD='ACCOUNT' AND
    A.ACCOUNT>=B .RANGE_FROM_10 AND A.ACCOUNT<=B .RANGE_TO_10 AND
    A.BUSINESS_UNIT='96013' AND A.CURRENCY_CD <> 'GBP' AND FROM_CUR =
    A.CURRENCY_CD AND TO_CUR = 'GBP' AND R.PROCESS_INSTANCE = 0002858795
    GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIE
    LD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,DEPTID,FUND_CODE,GL_ADJUST_TYPE,
    OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT
    However, there are several copies of each of these working storage tables, and different concurrent instances of this process may be allocated different copies. 
    There is also an IN() condition that lists the accounting periods to be processed. 
    So the statement can vary.  Here is another version of what is essentially the same statement with different literal values, different tables, and for a different accounting period (this time period 5).  The parts in bold are the ones that vary from statement to statement and that are not literal values.
    INSERT INTO PS_CURR_WRK_TBL001 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFI
    LIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,BOOK_CODE,BUDGET_REF,CHARTFIELD1,C
    HARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEA
    R,FUND_CODE,GL_ADJUST_TYPE,LEDGER,OPERATING_UNIT,POSTED_BASE_AMT,POSTED_
    TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,POSTED_TRAN_AMT,POSTED_TRAN_CR
    ,POSTED_TRAN_DR,PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATIST
    ICS_CODE,RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,
    TARGET_CURRENCY) SELECT A.ACCOUNT,005,AFFILIATE,' ',' ','
    ',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,'
    ',A.CURRENCY_CD,DATE_CODE,DEPTID,2020,FUND_CODE,GL_ADJUST_TYPE,'XXXX',OP
    ERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),0,0,0,0,0,0002
    991789,PRODUCT,PROGRAM_CODE,PROJECT_ID,' ',R.RATE_DIV,R.RATE_MULT,'
    ',ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02
    ),ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02 )
    - SUM(A.POSTED_BASE_AMT),'AUD' FROM PS_LEDGER A , PS_TREE_SEL10_R B ,
    PS_CURR_WRK_RT002 R WHERE A.LEDGER='XXXX' AND A.FISCAL_YEAR = 2020 AND
    A.ACCOUNTING_PERIOD IN ( 0, 1, 2, 3, 4, 5) AND B
    .PROCESS_INSTANCE=0002991789 AND B .CHARTFIELD='ACCOUNT' AND
    A.ACCOUNT>=B .RANGE_FROM_10 AND A.ACCOUNT<=B .RANGE_TO_10 AND
    A.BUSINESS_UNIT='13051' AND A.CURRENCY_CD <> 'AUD' AND FROM_CUR =
    A.CURRENCY_CD AND TO_CUR = 'AUD' AND R.PROCESS_INSTANCE = 0002991789
    GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIE
    LD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,DEPTID,FUND_CODE,GL_ADJUST_TYPE,
    OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT
    If I want to use SQL Profiles to introduce hints to control the execution plan, then I will need a different profile for every possible permutation.
    I start by using Carlos Sierra's coe_xfr_sql_profile.sql script.  This is a part of Oracle's SQLTEXPLAIN (SQLT) tool. It generates a SQL script that generates a SQL profile to reproduce a given execution plan for a given SQL statement that was captured by AWR.
    From there it is not a big jump to add a SQL statement that generates all the permutations of the variations in the SQL (other than bind variables) and to create a profile inside a loop.  The exact details will vary depending on the behaviour of the program.  However, in this particular example I need:
    • Different SQL profiles will be needed for each accounting period because there will be a different list of accounting periods in the IN() condition.  Subquery factors n and n1 produce a list of accounting periods.
    WITH n AS (SELECT rownum n FROM dual CONNECT BY level<=12
    ), n1 AS (
    SELECT n1.n period, LISTAGG(n2.n,', ') WITHIN GROUP (ORDER BY n2.n) periods
    FROM n n1, n n2
    WHERE n2.n <= n1.n
    AND n1.n <= 12
    GROUP BY n1.n
    )
    SELECT * FROM n1
    /

    PERIOD PERIODS
    ---------- ----------------------------------------
    1 1
    2 1, 2
    3 1, 2, 3
    4 1, 2, 3, 4
    5 1, 2, 3, 4, 5
    6 1, 2, 3, 4, 5, 6
    7 1, 2, 3, 4, 5, 6, 7
    8 1, 2, 3, 4, 5, 6, 7, 8
    9 1, 2, 3, 4, 5, 6, 7, 8, 9
    10 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    11 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
    12 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    • Lists of the various tables used for working storage can be queried from the PeopleSoft data dictionary, PSRECDEFN.  I can see that there are 5 versions of the current work table that the process can choose from.  Note that these are ordinary tables, so there are 5 different records in PSRECDEFN. 
    SELECT TO_NUMBER(substr(recname,13)) seq
    , decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
    FROM psrecdefn
    WHERE recname like 'CURR_WRK_TBL%'
    /

    SEQ TABLE_NAME
    ---------- ------------------
    PS_CURR_WRK_TBL
    1 PS_CURR_WRK_TBL001
    2 PS_CURR_WRK_TBL002
    3 PS_CURR_WRK_TBL003
    4 PS_CURR_WRK_TBL004

    • However, if I was working on a temporary record used in an Application Engine program, I would need to look up the number of instances of that record.  
      • The number of non-shared temporary record tables is the sum of all the instances defined on each application engine program to which the record is allocated, plus the number of global instances, up to a maximum of 99.  Instance 0 is the shared instance.  The number can be altered in development and the additional tables built by Application Designer.  This can require additional SQL Profiles be built. 
    WITH n AS (
    SELECT rownum-1 n FROM dual CONNECT BY LEVEL<=100
    ), c AS (
    SELECT c.recname
    , n.n instance
    , DECODE(r.sqltablename, ' ', 'PS_'||r.recname,r.sqltablename)||DECODE(n.n,0,'',n.n) table_name
    FROM n
    , pstemptblcntvw c
    , psrecdefn r
    , psoptions o
    WHERE r.recname = c.recname
    AND n.n <= c.temptblinstances+o.temptblinstances
    )
    SELECT instance, table_name
    FROM c
    WHERE recname = 'WRK_XREF_CAL'
    /

    INSTANCE TABLE_NAME
    ---------- ----------------------------------------------------------
    0 PS_WRK_XREF_CAL
    1 PS_WRK_XREF_CAL1
    2 PS_WRK_XREF_CAL2
    3 PS_WRK_XREF_CAL3
    4 PS_WRK_XREF_CAL4
    5 PS_WRK_XREF_CAL5
    6 PS_WRK_XREF_CAL6
    • In this particular example, I know that every permutation of all three tables could occur in every accounting period, so I simply Cartesian join all the subquery factors.  
      • In other cases, only some permutations may occur.  This must be handled in the code that is written.  Literal values do not need to be considered because the profile will be created with force matching.
      • In Application Engine, although you often see the same instance of different temporary records used in the same process, there is nothing to prevent different instances of different records being used, and so all permutations must be considered.
    • I will also concatenate the ID for each table and the accounting period to produce an ID string that I can use in the name of the SQL profile.
    'TS'||ts.seq||'_RT'||rt.seq||'_TBL'||wk.seq||'_'||n1.period
    The profile generated by coe_xfr_sql_profile.sql will contain the complete profile of hints for the SQL statement captured by AWR.  That is officially the only way to guarantee a particular execution plan.  However, as in this example, I could specify just the hints that I want to introduce into the statement, effectively treating the SQL Profile as if it were a SQL Patch.
    Here is the modified script.  Note the sections in bold.
    • The SQL statement in the FOR clause returns all the permutations of the variations in the SQL statement in an implicit cursor.  
    • Table names are concatenated into the SQL text from the columns in the implicit cursor.
    • Single quotation marks are doubled so that the string contains the single quotation mark.
    • It is important not to add or remove any spaces when introducing these changes.
    • Profiles are dropped and created inside the loop.  Force Matching is enabled.
    REM coe_xfr_sql_profile_FSPCCURR_tree_rate_curr_wrk.sql
    SPO coe_xfr_sql_profile_FSPCCURR_tree_rate_curr_wrk.log;

    WHENEVER SQLERROR CONTINUE
    REM WHENEVER SQLERROR EXIT SQL.SQLCODE;
    VAR signature NUMBER;

    DECLARE
    sql_txt CLOB;
    h SYS.SQLPROF_ATTR;
    e_no_sql_profile EXCEPTION;
    PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
    BEGIN
    FOR i IN(
    WITH n AS (SELECT rownum n FROM dual CONNECT BY level<=12
    ), n1 AS (
    SELECT n1.n period, LISTAGG(n2.n,', ') WITHIN GROUP (ORDER BY n2.n) periods
    FROM n n1, n n2
    WHERE n2.n <= n1.n
    GROUP BY n1.n
    ), ts AS (
    SELECT TO_NUMBER(substr(recname,13)) seq
    , decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
    FROM psrecdefn
    WHERE recname like 'TREE_SEL10_R%'
    ), rt AS (
    SELECT TO_NUMBER(substr(recname,12)) seq
    , decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
    FROM psrecdefn
    WHERE recname like 'CURR_WRK_RT%'
    ), wk AS (
    SELECT TO_NUMBER(substr(recname,13)) seq
    , decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
    FROM psrecdefn
    WHERE recname like 'CURR_WRK_TBL%'
    )
    SELECT 'TS'||ts.seq||'_RT'||rt.seq||'_TBL'||wk.seq||'_'||n1.period id
    , ts.table_name ts_table_name
    , rt.table_name rt_table_name
    , wk.table_name wk_table_name
    , n1.period, n1.periods
    FROM n1, ts, rt, wk
    ) LOOP
    sql_txt := 'INSERT INTO '||i.wk_table_name||' (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,
    BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEAR,
    FUND_CODE,GL_ADJUST_TYPE,LEDGER,OPERATING_UNIT,POSTED_BASE_AMT,POSTED_TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,
    POSTED_TRAN_AMT,POSTED_TRAN_CR,POSTED_TRAN_DR,PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATISTICS_CODE,
    RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,TARGET_CURRENCY)
    SELECT A.ACCOUNT,011,AFFILIATE,
    '' '','' '','' '',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,'' '',A.CURRENCY_CD,DATE_CODE,DEPTID,2016,
    FUND_CODE,GL_ADJUST_TYPE,''X_UKCORE'',OPERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),0,0,0,0,0,
    0001234567,PRODUCT,PROGRAM_CODE,PROJECT_ID,'' '',R.RATE_DIV,R.RATE_MULT,'' '',
    ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02
    ),ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02 )
    - SUM(A.POSTED_BASE_AMT),''GBP'' FROM PS_LEDGER A , '||i.ts_table_name||' B
    , '||i.rt_table_name||' R WHERE A.LEDGER=''X_UKCORE'' AND A.FISCAL_YEAR =
    2016 AND A.ACCOUNTING_PERIOD IN ( 0, '||i.periods||')
    AND B .PROCESS_INSTANCE=0001234567 AND B .CHARTFIELD=''ACCOUNT'' AND
    A.ACCOUNT>=B .RANGE_FROM_10 AND A.ACCOUNT<=B .RANGE_TO_10 AND
    A.BUSINESS_UNIT=''12345'' AND A.CURRENCY_CD <> ''GBP''
    AND FROM_CUR = A.CURRENCY_CD
    AND TO_CUR = ''GBP''
    AND R.PROCESS_INSTANCE = 0001234567
    GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,
    DEPTID,FUND_CODE,GL_ADJUST_TYPE,OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT';
    h := SYS.SQLPROF_ATTR(
    q'[BEGIN_OUTLINE_DATA]',
    q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
    q'[ALL_ROWS]',
    q'[LEADING(@"SEL$1" "B"@"SEL$1" "A"@"SEL$1" "R"@"SEL$1")]',
    q'[USE_NL(@"SEL$1" "A"@"SEL$1")]',
    q'[SWAP_JOIN_INPUTS(@"SEL$1" "R"@"SEL$1")]',
    q'[END_OUTLINE_DATA]');
    :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
    dbms_output.put_line(i.id||':'||:signature);

    BEGIN
    DBMS_SQLTUNE.drop_SQL_PROFILE (name => 'FSPCCURR_'||i.id);
    EXCEPTION
    WHEN e_no_sql_profile THEN NULL;
    END;

    IF 1=1 THEN
    DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
    sql_text => sql_txt,
    profile => h,
    name => 'FSPCCURR_'||i.id,
    description => 'coe FSPCCURR '||i.id||' @ '||:signature||'',
    category => 'DEFAULT',
    validate => TRUE,
    replace => TRUE,
    force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL).
    FALSE:EXACT (similar to CURSOR_SHARING) */ );

    END IF;

    END LOOP;
    END;
    /

    column name format a30
    select name, status, sql_text
    from dba_sql_profiles
    where name like '%FSPCCURR%'
    order by 1
    /
    SPO OFF;

    When I implemented this particular example, it created 900 SQL profiles.  Each one has a different force matching signature.  Having this many SQL Profiles is not a problem for the database because they are looked up by the signature. 

    TS1_RT4_TBL4_12:1278395827722096644
    TS2_RT4_TBL4_12:4126874017730826984

    PL/SQL procedure successfully completed.

    However, managing that many profiles could become a cause for concern for the DBA.  Therefore, I think it is important that they have a sensible naming convention so that it is obvious to what they relate.
    NAME                           STATUS   SQL_TEXT                                                                        
    ------------------------------ -------- --------------------------------------------------------------------------------

    FSPCCURR_TS1_RT4_TBL4_12 ENABLED INSERT INTO PS_CURR_WRK_TBL004 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_IN

    FSPCCURR_TS2_RT4_TBL4_12 ENABLED INSERT INTO PS_CURR_WRK_TBL004 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_IN


    900 rows selected.
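    If you want to check which of these profiles (if any) a given statement text would map to, you can look it up by its force-matching signature.  A sketch, assuming the statement text is supplied in the bind variable :sql_text:
    SELECT name, status, force_matching
    FROM   dba_sql_profiles
    WHERE  signature = DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(:sql_text, force_match=>TRUE)
    /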

    PeopleSoft Financials Ledger Partitioning Recommendations

    Tue, 2020-11-24 07:54

    I have written previously about partitioning the PS_LEDGER table in Financials for the benefit of General Ledger (GL) reporting.  I have always recommended top-level range partitioning on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD with separate partitions for each accounting period.  It also leaves sub-partitioning available to use another column, usually LEDGER.

    However, recent research into partition change tracking (PCT) and materialized views has made me question that advice.  The decision is not as clear cut.

    Summary Ledgers or Materialized Views

    You can create summary ledger tables in PeopleSoft Financials that are incrementally maintained by batch processes, and then nVision reports can reference the summary ledgers instead.  If the summary ledgers are not up to date, then the reports will produce out of date results.

    Similarly, materialized views can be used in an Oracle database to create pre-generated reporting tables.  An application can reference a materialized view directly, or the database can dynamically rewrite submitted SQL queries to use the materialized view if they are sufficiently similar to the query that created the materialized view and if the optimizer judges that it is cheaper to query the materialized view.  By default, the database will check that the materialized view is up to date, that is no change has been made to the underlying tables since the last refresh commenced, before it can rewrite the SQL query.  So the query will always return the same data, but if the materialized view is out of date you don't get the performance improvement.

    You can optionally choose to configure the database to rewrite SQL queries to use stale materialized views by setting QUERY_REWRITE_INTEGRITY=stale_tolerated at either database or session level.  
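    For example, at session level (with the caveat that a query rewritten against a stale materialized view may not reflect the latest postings to the ledger):
    ALTER SESSION SET query_rewrite_integrity = stale_tolerated;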

    Materialized views can be created for the nVision queries that you wish to optimise, and no further code change is required because the database will rewrite the SQL.  You can see a typical example of this in my blog about PCT.

    Partition Change Tracking is a mechanism the Oracle database uses to 'track freshness to a finer grain than the entire materialized view'.  It can identify which partitions and subpartitions are fresh and can be used for query rewrite, and to refresh just the partitions that are stale or that contain stale sub-partitions.  

    Alternatives for Partitioning PS_LEDGER

    If you wish to create materialized views on the main ledger table, and rely upon query rewrite, and keep the materialized views up to date with respect to the ledger table, and only use them when they are up to date, then you probably want PCT to help with both rewrite and refresh.

    1. Multi-column composite partitioning 

    I usually like to range partition PS_LEDGER on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD to produce a separate range partition for each accounting period.   Partition pruning works very effectively with the SQL generated by nVision.  It also allows sub-partitioning on another column, usually LEDGER.  This is desirable when a system has multiple actuals ledgers, and especially since 9.2 where the budget data is also stored in PS_LEDGER rather than PS_LEDGER_BUDG.  

    However, amongst the documented restrictions of PCT is that partitioned tables must use either range, or list partitioning, or composite partitioning with range or list as the top-level partitioning strategy.  Also, the top-level partition key must consist of only a single column (see Database Data Warehousing Guide -> Advanced Materialized Views -> About Partition Change Tracking)

    If I want to use query rewrite to materialized views for queries on LEDGER table then I have a few choices.

    • If I stick with multi-column range partitioning, then I cannot use PCT.  I must either keep the materialized views fresh, or the queries remain on the ledger table.  Any update to any partition in the ledger table will render the entire materialized view stale and prevent query rewrite.  Many customers run a suite of nVision reportbooks overnight.  I could set QUERY_REWRITE_INTEGRITY=stale_tolerated at session-level for the report books processes using a trigger on the process scheduler request table (PSPRCSRQST) - see Setting Oracle Session Parameters for Specific Process Scheduler Processes, and the sketch after this list.  In this case, I would have to take responsibility for refreshing the materialized views prior to running, say, a suite of report processes.  This is effectively the same situation as using summary ledgers, but without code change to the reports.
      • I have created materialized views on summary ledger tables in order to provide compressed copies of the summary ledger.  Again, in this case, the materialized views had to be refreshed after the summary ledger maintenance process.
    • Or, I have to produce a simpler partitioning strategy for the ledger table that is still compatible with PCT.
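    The trigger approach mentioned in the first option above might look something like the following sketch.  The trigger name, the run-status value (7 is assumed to mean 'Processing') and the process type test are assumptions that would need to be validated against your own system; the point is simply that the parameter is set only in the sessions of the nVision report processes.
    CREATE OR REPLACE TRIGGER sysadm.gfc_nvision_stale_tolerated
    AFTER UPDATE OF runstatus ON sysadm.psprcsrqst
    FOR EACH ROW
    WHEN (new.runstatus = '7' AND new.prcstype LIKE 'nVision%')
    BEGIN
    --the process updates its own status as it starts, so the ALTER SESSION applies to the session that runs the report
    EXECUTE IMMEDIATE 'ALTER SESSION SET query_rewrite_integrity = stale_tolerated';
    EXCEPTION
    WHEN OTHERS THEN NULL; --never let an error in a trigger on a process scheduler table crash the scheduler
    END;
    /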

    2. Composite Range-Range Partitioning on FISCAL_YEAR and ACCOUNTING_PERIOD

    I could composite partition both the LEDGER table and the materialized views by FISCAL_YEAR and ACCOUNTING_PERIOD, but then I cannot further subpartition by other columns.  This would degrade queries on smaller ledgers that could not be rewritten to dedicated materialized views.

    CREATE TABLE ps_ledger
    (business_unit VARCHAR2(5) NOT NULL
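    -- ...the remaining ledger columns are omitted from this example...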

    ) PCTFREE 10 PCTUSED 80
    PARTITION BY RANGE (FISCAL_YEAR)
    SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
    SUBPARTITION TEMPLATE
    (SUBPARTITION ap_bf VALUES LESS THAN (1)
    ,SUBPARTITION ap_01 VALUES LESS THAN (2)
    ,SUBPARTITION ap_02 VALUES LESS THAN (3)
    ,SUBPARTITION ap_03 VALUES LESS THAN (4)
    ,SUBPARTITION ap_04 VALUES LESS THAN (5)
    ,SUBPARTITION ap_05 VALUES LESS THAN (6)
    ,SUBPARTITION ap_06 VALUES LESS THAN (7)
    ,SUBPARTITION ap_07 VALUES LESS THAN (8)
    ,SUBPARTITION ap_08 VALUES LESS THAN (9)
    ,SUBPARTITION ap_09 VALUES LESS THAN (10)
    ,SUBPARTITION ap_10 VALUES LESS THAN (11)
    ,SUBPARTITION ap_11 VALUES LESS THAN (12)
    ,SUBPARTITION ap_12 VALUES LESS THAN (13)
    ,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
    (PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
    ,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
    ,PARTITION ledger_2020 VALUES LESS THAN (2021)
    ,PARTITION ledger_2021 VALUES LESS THAN (2022)
    )
    ENABLE ROW MOVEMENT NOLOGGING
    /
    The materialized view will be similarly composite partitioned.  Note that I have created the materialized view for specific fiscal years and for a specific ledger.  I would create materialized views for each combination of ledger and each distinct set of analysis columns that are regularly reported upon.
    CREATE MATERIALIZED VIEW mv_ledger_2020
    PARTITION BY RANGE (FISCAL_YEAR)
    SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
    SUBPARTITION TEMPLATE
    (SUBPARTITION ap_bf VALUES LESS THAN (1)
    ,SUBPARTITION ap_01 VALUES LESS THAN (2)
    ,SUBPARTITION ap_02 VALUES LESS THAN (3)
    ,SUBPARTITION ap_03 VALUES LESS THAN (4)
    ,SUBPARTITION ap_04 VALUES LESS THAN (5)
    ,SUBPARTITION ap_05 VALUES LESS THAN (6)
    ,SUBPARTITION ap_06 VALUES LESS THAN (7)
    ,SUBPARTITION ap_07 VALUES LESS THAN (8)
    ,SUBPARTITION ap_08 VALUES LESS THAN (9)
    ,SUBPARTITION ap_09 VALUES LESS THAN (10)
    ,SUBPARTITION ap_10 VALUES LESS THAN (11)
    ,SUBPARTITION ap_11 VALUES LESS THAN (12)
    ,SUBPARTITION ap_12 VALUES LESS THAN (13)
    ,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
    (PARTITION ledger_2019 VALUES LESS THAN (2020)
    ,PARTITION ledger_2020 VALUES LESS THAN (2021)
    ) PCTFREE 0 COMPRESS PARALLEL
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE AS
    SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
    sum(posted_total_amt) posted_total_amt
    FROM ps_ledger
    WHERE fiscal_year >= 2019
    AND ledger = 'ACTUALS'
    AND currency_cd = 'GBP'
    GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
    /

    Note that I have equality criteria on LEDGER and CURRENCY_CD in the materialized view, but I have not included those columns in the select clause, so they are not in the view.  Oracle can still rewrite queries that specify the same criteria on PS_LEDGER to use this materialized view.
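    For example, a query of the following shape (a hand-written illustration rather than real nVision SQL; '40000' is just a hypothetical account value) can be rewritten to use mv_ledger_2020 because it repeats the LEDGER and CURRENCY_CD criteria and all of its other columns are present in the materialized view.
    SELECT business_unit, SUM(posted_total_amt)
    FROM ps_ledger
    WHERE ledger = 'ACTUALS'
    AND currency_cd = 'GBP'
    AND fiscal_year = 2020
    AND accounting_period BETWEEN 1 AND 8
    AND account = '40000'
    GROUP BY business_unit
    /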

    PCT will determine whether any unpruned partitions or subpartitions are stale and, if so, prevent query rewrite.  It is documented behaviour that the materialized view refresh will truncate and rebuild whole top-level partitions, in this case each fiscal year, where the partition or any of its constituent sub-partitions are stale.  So even if just one subpartition, for one accounting period, is stale, the whole fiscal year is refreshed.
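    A PCT refresh can be requested explicitly with DBMS_MVIEW.  This is only a sketch: method 'P' requests a PCT refresh, and a non-atomic refresh permits the stale top-level partitions to be truncated and repopulated rather than deleted and re-inserted.
    BEGIN
      DBMS_MVIEW.REFRESH(list           => 'MV_LEDGER_2020'
                        ,method         => 'P'      -- PCT refresh; 'C' would force a complete refresh
                        ,atomic_refresh => FALSE);  -- allow truncate and direct-path insert of stale partitions
    END;
    /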

    3. Composite Range-Range Partitioning on ACCOUNTING_PERIOD and FISCAL_YEAR

    I investigated making ACCOUNTING_PERIOD the top-level partitioning key, and sub-partitioning on FISCAL_YEAR.  

    • Partition pruning works correctly, so query performance will be unaffected.
    • PCT refresh processed all years for the single accounting period, rather than all accounting periods for the single year.  That is less work if you have fewer fiscal years than accounting periods.  Generally, this is the case.  I usually see systems that contain 3 to 6 fiscal years of data.

    However, it has a number of problems.

    • I can't specify storage options or compression attributes on sub-partitions in the create table DDL command, so I have to come along afterwards with PL/SQL to alter the sub-partitions (see the sketch after this list).
    • It is not possible to interval range sub-partition an object, so I can't automatically add partitions for future fiscal years on demand.  Instead, I am going to have to add new fiscal year subpartitions to each of the 14 range partitions.
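    The kind of PL/SQL I have in mind is sketched below.  The naming rule used to pick out the sub-partitions for closed fiscal years, the choice of basic compression, and the use of MOVE (which also compresses the existing rows) are all assumptions to be adapted to your own design.
    BEGIN
      FOR s IN (
        SELECT table_name, subpartition_name
        FROM   user_tab_subpartitions
        WHERE  table_name = 'PS_LEDGER'
        AND    subpartition_name LIKE '%2019%'  -- hypothetical rule: closed fiscal years only
        AND    compression = 'DISABLED'
      ) LOOP
        EXECUTE IMMEDIATE 'ALTER TABLE '||s.table_name
          ||' MOVE SUBPARTITION '||s.subpartition_name
          ||' PCTFREE 0 COMPRESS UPDATE INDEXES';
      END LOOP;
    END;
    /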

    On balance, I don't think I would choose to implement this.

    Conclusion

    There is no single clear recommendation for partitioning and PCT.  It will depend on the circumstances.

    • If I don't need to introduce materialized views on PS_LEDGER then I would usually stick with the multi-column composite partitioning with the top-level range partition on FISCAL_YEAR and ACCOUNTING_PERIOD.
      • PeopleSoft provides summary ledgers to improve the performance of the ledger queries and compressed materialized views can be built on these.
    • If you only have a single actuals ledger then composite range-range partition on FISCAL_YEAR and ACCOUNTING_PERIOD is attractive.  
      • I do not recommend interval partitioning on FISCAL_YEAR because this affects the scope of the refresh process.  It processes a second top-level range partition.
    • If the budget data is stored in the PS_LEDGER table rather than LEDGER_BUDG then consider building separate materialized views for each value of LEDGER.  
      • If you can manage to build materialized views to support rewrite of most of the queries on the smaller ledgers, then the lack of sub-partitioning by LEDGER is not going to be a problem unless the query doesn't get rewritten because the materialized views are stale.  Keeping the materialized views up to date is going to be a bigger challenge.

    Practical Application Performance Tuning: An nVision Case Study

    Mon, 2019-12-02 16:41
    I gave this presentation at the UKOUG Techfest 19 conference.  It is closely based on a previous presentation about PeopleSoft nVision performance tuning, and uses the experience of a PeopleSoft project as a case study, so I am also posting here on my PeopleSoft blog.
    This video was produced as a part of the preparation for this session.  The slide deck is also available on my website.

    Learning about and understanding the principles and mechanics of the Oracle database is fundamentally important for both DBAs and developers. It is one of the reasons we still have physical conferences.
    This presentation tells the story of a performance tuning project for the GL reporting on a Financials system on an engineered system. It required various techniques and features to be brought to bear. Having a theoretical understanding of how the database and various features work allowed us to make reasonable predictions about whether they would be effective in our environment. Some ideas were discounted, some were taken forward.
    We will look at instrumentation, ASH, statistics collection, partitioning, hybrid columnar compression, Bloom filtering, and SQL profiles. All of them played a part in the solution, some added further complications that had to be worked around, some had to be carefully integrated with the application, and some required some reconfiguration of the application in order to work properly.
    Ultimately, performance improvement is an experimental science, and it requires a similar rigorous thought process.

    nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

    Tue, 2019-11-12 13:12
    I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.
    Problem nVision SQL
    This is an example of the problematic SQL generated by nVision.  The problem is that all of the SQL looks like this.  There is never any group by clause, nor any grouping columns in the select clause in front of the SUM().
    SELECT SUM(A.POSTED_BASE_AMT) 
    FROM PS_LEDGER A, PSTREESELECT10 L2, PSTREESELECT10 L1
    WHERE A.LEDGER='ACTUAL' AND A.FISCAL_YEAR=2018 AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 8
    AND L2.SELECTOR_NUM=159077 AND A.ACCOUNT=L2.RANGE_FROM_10
    AND (A.BUSINESS_UNIT='10000')
    AND L1.SELECTOR_NUM=159075 AND A.DEPTID=L1.RANGE_FROM_10
    AND A.CURRENCY_CD='GBP' AND A.STATISTICS_CODE=' '
    Each query only returns a single row that populates a single cell in the report, so a different SQL statement is generated and executed for every cell in the report.  Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table are performed.  This consumes more CPU and more logical I/O.
    Normal nVision SQL
    This is how I would expect normal nVision SQL to look.  This example, although obfuscated, came from a real customer system.  Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query now populates a block of cells.
    SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
    FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
    WHERE A.LEDGER='S_UKMGT'
    AND A.FISCAL_YEAR=2018
    AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
    AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
    OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
    OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
    OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
    OR A.DEPTID='B9150' OR A.DEPTID=' ')
    AND L2.SELECTOR_NUM=10228
    AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
    AND L3.SELECTOR_NUM=10231
    AND A.ACCOUNT=L3.RANGE_FROM_10
    AND A.CHARTFIELD1='0012345'
    AND A.CURRENCY_CD='GBP'
    GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
    The Bug
    This Oracle note details an nVision bug:
    "UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
    And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
    Issue was introduced on build 8.55.19.
    Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
    UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
    Issue was introduced on builds 8.55.22 & 8.56.07.
    Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."
    In summary:
    • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
    • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.

    Which Version Number Corresponds to Which PeopleTools Object?

    Tue, 2019-10-15 10:15
    Recently somebody asked me "Our “CRM” values in PSLOCK and PSVERSION are growing tremendously and we don’t know why. We will be opening a case with Oracle but … do you know what that “CRM” value is related to? We are not using the CRM product in our PeopleSoft installation."
    There is some documentation on the Oracle Support website, but it is not exhaustive and may not be completely up to date.
    The page in the PTRef utility that describes the relationship of version numbers to PeopleTools tables is one of the few static pages in the tool.  I have now updated it with the information in the above Oracle support notes, but there are other version numbers.
    In the previous blog post, I showed how to increment version numbers before updating PeopleTools objects.  I knew RDM (the Record Definition Manager) is the OBJECTTYPENAME for PSRECDEFN because I worked that out by tracing Application Designer while it saved a record change.  That remains the only completely reliable way to determine the relationship.
    However, version number matching is also a useful technique.  Even when it does not provide a completely definitive answer, it dramatically cuts down the amount of work that is then necessary.
    I have written a little PL/SQL code, delivered with PTRef, that extracts the maximum version number for each PeopleTools table in PeopleSoft (other than the PS%DEL tables) and stores it in a working storage table (I used the PLAN_TABLE because it is always there on an Oracle database).  Then you can compare the version number on PSLOCK/PSVERSION with the maximum version on the PeopleTools object.
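    For illustration, this is the kind of matching query that the utility automates for every table, written out here by hand for a single table.  PSOPRVERDFN is just an example, and I am assuming that, like most PeopleTools definition tables, it has a VERSION column.
    -- maximum version recorded on one PeopleTools table
    SELECT 'PSOPRVERDFN' recname, MAX(version) max_version, COUNT(*) cnt
    FROM psoprverdfn;

    -- which object type(s) on the sequence table hold that version?
    SELECT objecttypename, version
    FROM psversion
    WHERE version = (SELECT MAX(version) FROM psoprverdfn);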
    If the version number is 1, you can’t do the match because the version number has never been incremented, at least since it was last reset by the VERSION application engine.
    If the version is only updated occasionally you may have some trouble with duplicate matches. In my example, 3 tables have a maximum version number of 80, while only one version number is 80.
    RECNAME                   MAX        CNT
    ------------------ ---------- ----------
    PSAEAPPLDEFN               80       3504
    PSMSGNODEDEFN              80         78
    PSOPRVERDFN                80       1468
    I would guess that OPRVM matches PSOPRVERDFN, and the above support notes confirm this, but otherwise, you would have to check that manually with trace.
    OBJECTTY    VERSION
    -------- ----------
    OPRVM            80
    Higher version numbers are easier to match because they are less likely to have duplicate matches.
    So to return to the original question, what is CRM?  In my sample system, version numbers CRM and SCATM are both 3.
    OBJECTTY    VERSION
    -------- ----------
    CRM               3
    SCATM             3
    However, only PSCONTDEFN has a maximum version of 3.
    RECNAME                   MAX        CNT
    ------------------ ---------- ----------
    PSCONTDEFN                  3       7567
    Again not a perfect match, but again Oracle Support Note 664848.1 confirms that CRM corresponds to PSCONTDEFN.  CRM stands for Content Registry Manager.
    So the question now becomes what is updating the content definitions, and hence increasing the version number?  It turned out to be an entity registry synchronisation process that was run daily.
    It is perfectly legitimate for many updated rows on a PeopleTools table to be given the same version number.  The version number does not need to be incremented again for each row being updated, but then the row-level locks on PSVERSION and PSLOCK created by their updates must be held until the update on the PeopleTools table is committed.  That can increase contention on the version number update.  The alternative is to commit after each update and then increment the version numbers again.  Many PeopleSoft processes do exactly that, and it can, in turn, lead to a massive increase in some version numbers.

    Understanding PeopleTools Object Version Numbers

    Mon, 2019-10-14 13:36
    I was recently asked a question about PeopleSoft version numbers, but before I address that directly, I think it would be useful to explain their general purpose.
    Caching
    The PeopleSoft data model and application are mostly stored in the database in PeopleTools tables.  These tables are queried as the application executes.  For example, when you open a component, the component and pages, including all the PeopleCode, the definition of any records used, and so on, have to be loaded into the component buffer.  Ultimately this information comes from the PeopleTools tables.  To save the overhead of repeatedly querying these tables, PeopleSoft caches this data locally in physical files in the application server and process scheduler domains.  The application servers also cache some of this information in memory to save visiting the local physical cache.  Application Designer also maintains a physical cache.
    Over time, as the application executes, the cache files build up.  Occasionally, it is necessary to delete the cache files, and then it becomes clear just how significant the overhead of the PeopleTools queries is, as a period of poor performance is seen while the application builds up fresh cache files.
    Physical cache files are created in directories in the application server and process scheduler Tuxedo domains.  By default, each process maintains its own private cache.  Separate directories of cache files are created for each type of PeopleSoft server process in each domain.  Pairs of cache files are created in each directory for each object type as needed.  There is also a CACHE.LOK file in each directory that is used to ensure that only one process is accessing that cache directory concurrently.
    It is possible to run with a shared physical cache, but then it is read-only and must be pre-generated.  It is very rare to see this implemented, because everyone expects to continuously deliver changes over time, and if you had a shared cache you would have to deliver an updated set of shared cache files to every domain every time you delivered a new PeopleTools object.
    The cache files come in pairs.  The name of the cache files is the Object Type Name.  This corresponds to the OBJECTTYPENAME on the PSLOCK and PSVERSION tables.  The .DAT file contains the data to be cached.  The .KEY file is an index for the .DAT file, and it also holds the version number of the cached object.

    -rw------- 1 psadm2 oracle 5228492 Jun 12 06:37 RDM.DAT
    -rw------- 1 psadm2 oracle   69120 Jun 12 06:37 RDM.KEY
    -rw------- 1 psadm2 oracle       0 Oct 26  2015 ROLM.DAT
    -rw------- 1 psadm2 oracle   24192 Oct 26  2015 ROLM.KEY
    -rw------- 1 psadm2 oracle       0 Oct 26  2015 RSM.DAT
    -rw------- 1 psadm2 oracle   24192 Oct 26  2015 RSM.KEY
    Version Numbers
    Version numbers track when a cached PeopleTools object has been changed, either by Application Designer, or a change in configuration, or the application.  The version numbers are sequences generated from two PeopleTools tables PSLOCK and PSVERSION that hold the highest version number for each type of object.  These two tables have the same structure.
    SQL> desc psversion
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    OBJECTTYPENAME                            NOT NULL VARCHAR2(8 CHAR)
    VERSION                                   NOT NULL NUMBER(38)
    There are now over 100 different version numbers, each with a specific object type name, and each tracking a specific PeopleTools object.  There is a global version number, with the object type name SYS, that is incremented whenever any other version number is incremented.
    I have no idea why two identical tables of version numbers were created.  I can see no reason for this, but it has been like this since the version numbers were changed (if I remember correctly) in PeopleTools 7.  In early versions of PeopleTools, not all version numbers were on both tables, but in at least PeopleTools 8.55 only one object type appears on PSVERSION and not PSLOCK.
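    A quick way to see how the contents of the two tables differ is to outer-join them.  This is just a sketch, to be run as the PeopleSoft owner ID.
    SELECT COALESCE(v.objecttypename, l.objecttypename) objecttypename
    ,      v.version psversion_version
    ,      l.version pslock_version
    FROM psversion v
    FULL OUTER JOIN pslock l ON l.objecttypename = v.objecttypename
    WHERE v.objecttypename IS NULL
    OR l.objecttypename IS NULL
    OR v.version != l.version;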
    When an object is changed, the object and global version numbers are incremented, and the incremented object version number is recorded on the object in the PeopleTools table.  The version number on the object is also stored in the physical cache files when the object is cached.  If the version on the database is higher than that in the cache file, then the PeopleSoft process knows it must query the latest version from the PeopleTools table and update the cache file.
    How to Update Version Numbers
    It is not generally recommended, nor strictly speaking supported, to update PeopleTools tables directly with SQL.  Apart from the risk of updating them incorrectly, or to invalid values, you also need to ensure that the changes are picked up by PeopleTools processes and that they do not simply continue to read the cached data.  However, occasionally, it is the pragmatic way of doing something.
    Here is an example from Chapter 5 of PeopleSoft for the Oracle DBA that shows how to maintain version numbers so the change is picked up by PeopleTools processes.  I want to mark alternate search key indexes as unique where there is a unique key on a record: because the unique key is a subset of their columns, they are guaranteed to be unique.  Then Application Designer will build the indexes as unique.
    UPDATE psversion SET version = version + 1
    WHERE objecttypename IN('SYS','RDM');

    UPDATE pslock SET version = version + 1
    WHERE objecttypename IN('SYS','RDM');

    UPDATE psrecdefn
    SET version = (
    SELECT version FROM psversion WHERE objecttypename = 'RDM')
    WHERE recname = '';

    UPDATE psindexdefn a
    SET a.uniqueflag = 1
    WHERE a.uniqueflag = 0
    AND a.indextype = 3
    AND EXISTS(
    SELECT 'x'
    FROM psindexdefn k
    WHERE k.recname = a.recname
    AND k.indexid = '_'
    AND k.indextype = 1
    AND k.uniqueflag = 1)
    AND a.recname = '';
    I am updating a PeopleTools object (PSINDEXDEFN) that doesn't have a version number, but its parent is PSRECDEFN that does have a version number.  I happen to know that object type RDM (the Record Definition Manager) generates the version number for PSRECDEFN.  I found that out by tracing Application Designer while it saved a record change.  That is the only completely reliable method to determine which sequence is used for which record.  However, I will discuss another less onerous matching method in a subsequent blog post.
    I must increment the RDM and SYS version numbers and write the new RDM version number to the updated rows on PSRECDEFN.  Next time a PeopleSoft process needs to read a record definition it will check the version numbers.  The increment of the SYS version number tells PeopleSoft that an object has changed, and then it will detect that the RDM version number has changed, so it has to reload and cache objects with version numbers greater than the last cached version number for that object.

    PeopleTools Table Reference Generator

    Sun, 2019-06-09 15:46
    Like many other PeopleSoft professionals, I spend a lot of time looking at the PeopleTools tables because they contain meta-data about the PeopleSoft application. Much of the application is stored in PeopleTools tables. Some provide information about the Data Model. Many of my utility scripts reference the PeopleTools tables, some of them update them too. Therefore, it is very helpful to be able to understand what is in these tables. In PeopleSoft for the Oracle DBA, I discussed some tables that are of regular interest. I included the tables that correspond to the database catalogue and that are used during the PeopleSoft login procedure. The tables that are maintained by the process scheduler are valuable because they contain information about who ran what process when, and how long they ran for.
    I am not the only person to have started to document the PeopleTools tables on their website or blog, but most people have picked a few tables that are of particular interest.  I wanted to produce a complete reference.  However, with over 4000 PeopleTools tables and views, it is simply not viable to do even a significant number of them manually.  So, I wrote some SQL and PL/SQL to dynamically generate a page for each PeopleTools table and view and put the generated pages on my website.  If you use Google to search for a PeopleTools table name you will probably find the reference page.
    I have now revisited that code and made a number of changes and improvements.
    • I have used a later version of PeopleTools to generate the reference on my website.  The list of PeopleTools tables is no longer defined in PeopleTools by object security, so I have used an independent and somewhat broader definition: Table or View records that are either owned by PPT or whose SQLTABLENAME is the same as the record name (see the sketch after this list).
    • There is much less reliance on static pages.  There are now only 3 such pages; everything else is generated.  Instead, additional data is loaded from static scripts into the PLAN_TABLE, which should always be present in an Oracle database and so doesn't have to be created.  It should be a global temporary table, so there is no problem with debris being left behind or interference with other processes.  That data is then combined with data in the PeopleTools tables during the generation process.
    • The utility has been rewritten as a PL/SQL package that should be created in the PeopleSoft Owner ID schema (usually SYSADM).
    • The generated HTML is simply tidier, and more consistent.
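    A sketch of that selection criterion as a query on PSRECDEFN follows.  The RECTYPE values for tables and views and the OBJECTOWNERID value are assumptions to be checked against your PeopleTools release; PTRef's actual definition may differ.
    SELECT r.recname, r.sqltablename, r.objectownerid
    FROM psrecdefn r
    WHERE r.rectype IN (0, 1)            -- 0 = SQL Table, 1 = SQL View (assumed values)
    AND (r.objectownerid = 'PPT'         -- owned by PeopleTools (assumed value)
      OR r.sqltablename = r.recname)     -- record name matches the SQL table name, i.e. no PS_ prefix
    ORDER BY r.recname;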
    The source is available on GitHub at https://github.com/davidkurtz/PTRef, so you can download and generate your own reference on your own current version of PeopleTools. An example of the generated output can be found on my website.
    The idea of the original PeopleTools tables reference was that people could contribute additional descriptions and information that were not in the PeopleTools tables.  That can still happen, and indeed should be easier, by making changes to the scripts that load the additional data and uploading new versions to GitHub.


    PS360 enhancement: Added report of DDL models

    Fri, 2019-06-07 09:08
    I have written several blogs and presentations recently about how and how not to collect statistics in PeopleSoft.
    • Managing Cost-Based Optimizer Statistics for PeopleSoft recommends
      • If you are going to continue to use DBMS_STATS in the DDL model (see the sketch after this list), then
        • Do not specify ESTIMATE_PERCENT because it disables the hash-based number-of-distinct-values calculation, forcing it to go back to the COUNT(DISTINCT ...) method that requires a sort, and may not produce accurate values because it only samples data.
        • Do not specify METHOD_OPT='FOR ALL INDEXED COLUMNS SIZE 1' because this will not collect histograms on indexed columns, and will not update column statistics on unindexed columns.
        • Do specify FORCE=>TRUE so that you can lock and delete statistics on temporary records.
      • However, the alternative is to use the GFCSTATS11 package to collect these statistics.  This package is controlled by a metadata table so you can define statistics collection behaviour for specific records.
    • How Not to Collect Optimizer Statistics in an Application Engine Program
      • This blog explains why you should not explicitly code DBMS_STATS calls into Application Engine programs.
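    Putting those recommendations together, a DBMS_STATS call in the %UpdateStats DDL model might be reduced to something like the sketch below.  The schema and table names are placeholders: in a real DDL model they are supplied by the model's bind variables.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SYSADM'          -- placeholder: the DDL model supplies the owner
       ,tabname => 'PS_EXAMPLE_TAO'  -- placeholder: the DDL model supplies the table name
        -- estimate_percent omitted: the default AUTO_SAMPLE_SIZE uses the hash-based NDV calculation
        -- method_opt omitted: the default FOR ALL COLUMNS SIZE AUTO can collect histograms where useful
       ,force   => TRUE);            -- statistics are gathered even where they have been locked
    END;
    /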
    This has prompted me to add a new report to the PS360 utility that simply reports the various DDL models for Oracle.  Thus during a health check, I can see how statistics are collected during batch processes.

    PS360 can be downloaded from https://github.com/davidkurtz/ps360.



