Posted by: repettas | May 18, 2008

Beta Beyond Compare

Beta Version of Beyond Compare

Scooter Software, home of Beyond Compare, has a beta version of their popular Beyond Compare application available. There are three flavors available for download – Normal Debug Build, Full Debug Build and Optimized Build. The build date for Windows is May 6, 2008 (the version that is available to the public for testing). If you decide to download the beta version you should go with the Normal Debug Build so you can assist in the testing effort: if the software crashes or shows flaky behavior you can send this information to them. They recommend that you use one of the debug builds. However, if you aren't interested in or willing to help them out you can download the Optimized Build.

The beta version now runs on Linux, which is awesome, and I'm looking forward to testing it out on my Apple to see if I can get it to work on OS X. The other good news is the support for SSH. I've included a few screenshots below to give you an idea of what the new user interface looks like, as well as the Authentication tab.

I've installed it and I will be testing out the SSH capability later this afternoon. If the SSH feature set works, there will no longer be a need to use Bitvise Tunnelier to establish a tunnel between your Windows machine and a remote Unix server. With Beyond Compare now able to communicate with Unix servers from within the application, there isn't any need to use another application to create the tunnel and map the remote directories to the established local end point.

Beyond Compare UI

The Authentication Tab Screenshot:

Beyond Compare UI - Authentication Tab

I'll post a review later today on my experiences using the built-in SSH capability.

I'm using Beyond Compare on Windows, either booting up Parallels Desktop or connecting to a Windows server in order to use this tool. No news on a Mac version, although Scooter Software is shipping versions for Red Hat Enterprise Linux 4 and 5; Fedora Core 4 – 9; Novell SUSE Linux Enterprise Desktop 10; openSUSE 10.3 and 11; and Ubuntu 6.06 – 8.04. Their site has a note that it isn't compatible with Red Hat Enterprise Linux 3 and isn't tested on any 64-bit Linux kernel.

I've sent an email to the development team asking about the possibility of a beta for OS X and I'll post a follow-up comment when I hear back. I hate having to boot Windows on my MacBook Pro just to run BC3; I've tried CrossOver and I just don't have the patience to run a Windows app in that mode. Parallels version 4 has boosted performance some for Vista, but I still think XP was faster in that type of environment than Vista.

I've been using KDiff3, which runs on OS X and isn't too bad, and I'm going to check out Switchback this evening.

http://sourceforge.net/projects/kdiff3 is the link to download the kdiff3 software which runs on
Windows, OS X, Linux, etc.

Bitvise Tunnelier – SSH2 Client

Bitvise Tunnelier is a free SSH and SFTP client for Windows. Taken straight from their web site, the feature list includes:

  • one of the most advanced graphical SFTP clients available
  • state-of-the-art terminal emulation with support for the bvterm, xterm and vt100 protocols
  • support for corporation-wide single sign-on using SSPI (GSSAPI) Kerberos 5 and NTLM user authentication, as well as Kerberos 5 host authentication
  • support for RSA and DSA public key authentication with comprehensive keypair management
  • powerful SSH port forwarding capabilities, including dynamic forwarding through an integrated SOCKS and HTTPS CONNECT proxy
  • powerful command-line parameters which make Tunnelier highly customizable and suitable for use in specific situations and controlled environments
  • an advanced scriptable command-line SFTP client (sftpc), a scriptable command-line remote execution client (sexec) and a command-line terminal emulation client (stermc)
  • an FTP-to-SFTP bridge allowing you to connect to an SFTP server using legacy FTP applications
  • WinSSHD remote administration features
  • single-click Remote Desktop forwarding

Whew, that's a lot of features. I can attest that this is a very powerful tool that opens Windows clients up to the Unix world.

In addition to the features above, it also includes graphical SFTP file transfers and terminal access via remote console, as well as single sign-on integration.

I was initially looking for a way to use Beyond Compare to compare my PeopleSoft Tools directories on Windows to the PeopleTools directories on my Unix server. Normally this would be no big deal, but our Unix servers do not permit legacy FTP access; they only support SFTP, and since Beyond Compare does not yet have that capability I had to search for a way to make it work.

What I found was the Bitvise Tunnelier software, and it allowed me to use Beyond Compare as described above to compare my Windows source directories to my Unix source directories. I accomplished this by using Bitvise Tunnelier to create a tunneled SSH connection to the Unix server and then pointing Beyond Compare at this tunnel.

I'm going to attempt to describe how to download, install and configure Bitvise Tunnelier to accomplish this. Even if you do not have a need to use Beyond Compare for this purpose, you may have another tool that currently falls short because it lacks SSH capability; by pairing it with Bitvise Tunnelier you may be able to add this feature, extending that software's lifecycle, and you may find other features, such as the SSH client and SFTP client, that far exceed your present tool's capabilities.

Bitvise Tunnelier – Web Site

http://www.bitvise.com – they offer several products; please note that Tunnelier is free for individual use only!

Download Area

http://www.bitvise.com/download-area

Please note that you need to read and agree to the terms of the software, which are available via the hyperlink on the download-area page, before proceeding with the download.

Once you have read and agreed to the terms, click on the hyperlink to download the software and then install it. I am not going to document the installation process; it's Windows software and as such it is very straightforward.

I’ve also placed a copy of the latest version (at the time I wrote this post, version 4.26) on the Y Drive in the location for the client install software.

Bitvise Tunnelier – SSH2 Client – Create Profiles

This software uses profiles to connect to remote hosts. In order to set up a tunnel you need to create a profile with the information required to connect to the remote host. In this example the remote host is an HP-UX 11i v2 Itanium blade.

The first image is of the Tunnelier’s Login tab. You’ll see in the left pane the options: Load Profile, Save Profile, Save Profile As and Close Profile.

Login Tab

I filled the information above into a profile, named it tdpfap02 and saved it in a directory that contains all of my SSH-related files. The bottom half of this tab contains a log box where the various software commands are logged along with their results.

The next screen shows the Options tab. On this tab you specify the reconnection option and whether or not to open a Terminal session as well as an SFTP session. If you just want to create a tunnel you can unselect Open Terminal and Open SFTP. I chose the option to automatically reconnect if the connection breaks after being successfully established; if you do not want this feature, pick Never reconnect automatically instead. The other options on this tab instruct the software whether or not to display the user authentication banner and whether to log out when session activity comes to an end.

Options Tab

The next tab is for the terminal emulation: you select the type of terminal environment you want to use, and the last section is for the Remote Desktop settings. You'll see I've selected 127.0.0.1 for Target Host, and the Target Port is 3389.

Terminal Tab

The next tab is for the SFTP settings. This tab contains options for Local and Upload Settings, Remote and Download Settings as well as features such as auto refresh of the folders, Upload Mode and Download Mode. You can also specify the initial local directory as well as the initial remote directory.

SFTP Tab

The next tab is for the Services options. This is the tab you are interested in when you want to set up and establish a tunnel between your Windows machine and the remote Unix server. In the SOCKS/HTTP Proxy Forwarding section, enable the feature, specify 127.0.0.1 as the interface to listen on (this is the loopback address of your workstation) and leave the Server Bind Interface set to 0.0.0.0. The right portion of the screen is for the FTP-to-SFTP Bridge settings: enable the feature, set the Listen Interface to 127.0.0.1 and the Listen Port to 21, and make sure the SFTP Newline is set to Automatic. This last setting will handle any cr/lf (line termination) conversions on the fly.

Services Tab

You do not need to do anything with the next tab, C2S Fwding.

C2S Fwding Tab

The same is true for the next tab, S2C Fwding.

S2C Fwding Tab

The last tab before the final About tab is the SSH tab. On this tab you specify the encryption options: whether or not to use compression (zlib), the DH key exchange, the MAC algorithms, the host key algorithms, and whether or not to use Keep-Alive and Start Re-Exchange. You can leave the defaults as they are; they will work for you 80% of the time if not more. I chose Keep-Alive to sustain connections: if a tunnel is inactive for a long period of time and you have not selected this option, you will most likely have to re-establish the tunnel because it will time out and become dead. Start Re-Exchange initiates the exchange of keys again in the event the connection has timed out and is attempting to re-establish itself.

SSH Tab

The last tab is the About tab and contains the license information as well as the URLs regarding licensing, support, etc.

About Tab

Now that you have entered all of the information, you will want to save it as a profile so it can be used again at another time. Personally, I try to name my profiles so they contain the host and username because it gives me a way to associate a profile with a specific host and user account.

Once you have the profile saved, click on the Login button and, as long as you have entered the information correctly, you will be connected to the remote host. If you selected the same options as I have on the preceding screenshots, once you've connected you'll have a terminal session, an SFTP explorer session and the tunnel will have been established.

SFTP Window

Now we're ready to bring up Beyond Compare:

Beyond Compare Unix

You have to use the format ftp://127.0.0.1/ for the remote server you established the tunnel with. The rest of the path information is relative to the Unix home directory of the user you used to authenticate to the remote Unix server. In this example, the remote user is psoft and we use /usr/local/psoft as the common base for all of our PeopleSoft environments.

Posted by: repettas | May 16, 2008

Oracle Unicode Character Sets

Oracle started supporting Unicode based character sets in Oracle 7. Below is a summary of the Unicode character sets supported in Oracle:

Oracle Unicode Supported Character Sets

AL24UTFFSS

AL24UTFFSS was the first Unicode character set supported by Oracle. It was introduced in Oracle 7.2. The AL24UTFFSS encoding scheme was based on the Unicode 1.1 standard, which is now obsolete. AL24UTFFSS has been desupported as of Oracle 9i. The migration path for existing AL24UTFFSS databases is to upgrade the database to 8.0 or 8.1, then upgrade the character set to UTF8 before upgrading the database further to 9i or 10g.

UTF8

UTF8 was the UTF-8 encoded character set introduced in Oracle 8 and 8i. It followed the Unicode 2.1 standard between Oracle 8.0 and 8.1.6, and was upgraded to Unicode version 3.0 for versions 8.1.7, 9i, 10g and 11g. To maintain compatibility with existing installations this character set will remain at Unicode 3.0 in future Oracle releases. Although specific supplementary characters were not assigned to Unicode until version 3.1, the allocation for these characters was already defined in 3.0, so if supplementary characters are inserted into a UTF8 database they will not corrupt the actual data inside the database. They will be treated as 2 separate undefined characters, occupying 6 bytes in storage. Oracle recommends that customers switch to AL32UTF8 for full supplementary character support.

UTFE

This is the UTF8 database character set for EBCDIC platforms. It has the same properties as UTF8 on ASCII-based platforms. The EBCDIC Unicode transformation format is documented in Unicode Technical Report #16, UTF-EBCDIC, which can be found at http://www.unicode.org/unicode/reports/tr16/

AL32UTF8

This is the UTF-8 encoded character set introduced in Oracle 9i. AL32UTF8 is the database character set that supports the latest version (5.0 in Oracle 11.1) of the Unicode standard. It also provides support for the newly defined supplementary characters. All supplementary characters are stored as 4 bytes. AL32UTF8 was introduced because when UTF8 was designed (in the time of Oracle 8) there was no concept of supplementary characters, so UTF8 has a maximum of 3 bytes per character. Changing the design of UTF8 would break backward compatibility, so a new character set was introduced. The introduction of surrogate pairs should mean that no significant architecture changes are needed in future versions of the Unicode standard, so currently the plan is to keep enhancing AL32UTF8 as necessary to support future versions of the Unicode standard. For example, in Oracle 10.1 this character set implemented the Unicode 3.2 standard, in Oracle 10.2 it was updated to support the Unicode 4.01 standard, and in Oracle 11.1 the Unicode 5.0 standard.

Please note that pre-Oracle 9 software can have some serious problems connecting to an AL32UTF8 database.

AL16UTF16

This is the first UTF-16 encoded character set in Oracle. It was introduced in Oracle 9i as the default national character set (NLS_NCHAR_CHARACTERSET). AL16UTF16 supports the latest version (5.0 in Oracle 11.1) of the Unicode standard. It also provides support for the newly defined supplementary characters. All supplementary characters are stored as 4 bytes. As with AL32UTF8, the plan is to keep enhancing AL16UTF16 as necessary to support future versions of the Unicode standard. AL16UTF16 cannot be used as a database character set (NLS_CHARACTERSET), only as the national character set (NLS_NCHAR_CHARACTERSET). The database character set is used to identify and to hold SQL, SQL metadata and PL/SQL source code. It must have either single-byte 7-bit ASCII or single-byte EBCDIC as a subset, whichever is native to the deployment platform. Therefore, it is not possible to use a fixed-width, multi-byte character set (such as AL16UTF16) as the database character set. Trying to create a database with AL16UTF16 as the database character set in 9i and up will give "ORA-12706: THIS CREATE DATABASE CHARACTER SET IS NOT ALLOWED". AL16UTF16 is always in big-endian byte order, regardless of the processor endianness.
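
To see which of the two charactersets a given database actually uses, you can query the same view used throughout this post:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');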

There are only a few circumstances where using the national characterset actually gives you an advantage. In 99% of the cases you should simply use a UTF8 or AL32UTF8 database.

The following URLs contain a complete list of hex values and character descriptions for every Unicode character:

Unicode Version 5.0: http://www.unicode.org/Public/5.0.0/ucd/UnicodeData.txt
Unicode Version 4.0: http://www.unicode.org/Public/4.0-Update1/UnicodeData-4.0.1.txt
Unicode Version 3.2: http://www.unicode.org/Public/3.2-Update/UnicodeData-3.2.0.txt
Unicode Version 3.1: http://www.unicode.org/Public/3.1-Update/UnicodeData-3.1.0.txt
Unicode Version 3.0: http://www.unicode.org/Public/3.0-Update/UnicodeData-3.0.0.txt
Unicode Versions 2.x: http://www.unicode.org/unicode/standard/versions/enumeratedVersions.html
Unicode Version 1.1: http://www.unicode.org/Public/1.1-Update/UnicodeData-1.1.5.txt

A description of the file format can be found at: http://www.unicode.org/Public/UNIDATA/UnicodeData.html

For a glossary of Unicode terms, see: http://www.unicode.org/glossary

At the above locations you can find the Unicode standard; all characters there are referenced by their UCS-2 codepoint.

Oracle currently has no plans to desupport UTF8; they simply encourage everyone to use AL32UTF8. All codepoints defined in UTF8 are also valid in AL32UTF8, so there is never an issue with going from UTF8 to AL32UTF8.

How does NLS affect Import/Export

Import and Export are client products, in the same way as SQL*Plus or Oracle Forms, and will therefore translate characters from the database character set to the one defined by NLS_LANG. The character set used for the export will be stored in the export file and, when the file is imported, the import will check the character set that was used. If it is different from the one defined by NLS_LANG at the import site, the characters will be translated to the import character set and then, if necessary, to the database character set.

How Should NLS_LANG Be Set When Using Export?

Oracle recommends setting the character set part of the NLS_LANG environment variable to the same character set as the character set of the database you are using.

select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

That way no conversion will take place and the export file will be created in the same character set as the original database and contain ALL data from the original database (even incorrectly stored data if that would be the case). Even if the plan is to import this data into a database with a different character set later the conversion can be postponed until the import.

Note that this has no relation to the Operating System. If you have a WE8MSWIN1252 database on a Unix Server (which is totally supported) then you should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before export.

During the interaction with the database (SQL*Plus for example) you need to configure your Unix client properly and that can not be 1252 as Unix does not have a 1252 character set.

How Should NLS_LANG Be Set When Using Import?

If the source and target database have the same character set, the character set part of the NLS_LANG should be set to that same character set on both the export and the import.

Even if the character sets of the exporting and importing databases are not the same the best (preferred) value to use for the character set part of NLS_LANG on both export and import is still the character set of the source database.

select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

But setting the NLS_LANG to the character set of the target database during import is also correct. That way conversion only takes place once, either on export or on import!

However, the preferred place to do the conversion is between the import executable and the target database.

Note that this has no relation with the Operating System. If your source database is a WE8MSWIN1252 database then you simply should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before import.

During the interaction with the database (SQL*Plus) you need to configure your Unix client properly, and that cannot be 1252 as Unix does not have a 1252 character set.

Example: You want to go from a WE8MSWIN1252 to a UTF8 database:

Note that this is only the exp/imp example; if you want to migrate to UTF8 you need to use another method.

  • Double check the NLS_CHARACTERSET on the SOURCE database: select * from nls_database_parameters where parameter='NLS_CHARACTERSET'; and export with the NLS_LANG set to AMERICAN_AMERICA.<NLS_CHARACTERSET>. In this case we want to create an export file containing WE8MSWIN1252 data. (This is also the setting you want to use if you take an export as a backup.)
  • On Unix this would be: export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; exp ….
  • On Windows this would be: C:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; C:\>exp …
  • Import with the NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252 (= source NLS_CHARACTERSET) into the new UTF8 database.
  • On Unix this would be: export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; imp ….
  • On Windows this would be: C:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252; C:\> imp …
  • The conversion to UTF8 is done while inserting the data in the UTF8 database.

Oracle recommends setting the NLS_LANG explicitly in the current shell on UNIX, or in the DOS box used for the exp or imp tool on Windows.

How is Import affected by the NLS_LANGUAGE and NLS_TERRITORY?

It is not. Normally you use the AMERICAN_AMERICA default, but if you imported with the NLS_LANG set to FRENCH_FRANCE, for example, you will not have a problem, even if the originating environment used GERMAN_GERMANY or the like.

What about the Message (possible ncharset conversion) during Import?

If you see messages similar to the above in the import log read further down.

How To Know in What Characterset a dump (export) file is created?

To determine this information simply type in: imp system/oracle@database show=yes file=test.dmp

The output looks similar to the following:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters and Data Mining Scoring Engine options

Export file created by EXPORT:V09.02.00 via conventional path

Warning: the objects were exported by SYS, not by you

import done in WE8ISO8859P15 character set and UTF8 NCHAR character set
-> this is the current NLS_LANG value set in the environment and the NCHAR characterset of the target database
import server uses AL32UTF8 character set (possible charset conversion)
-> this is only shown if the NLS_LANG during the import session is different from the target database characterset, so if you see 3 lines you might have problems
export client uses WE8ISO8859P1 character set (possible charset conversion)
-> this is the characterset used during the export session and the characterset used in the dump file

How does NLS affect Datapump (expdp/impdp)?

Data Pump does not use the NLS_LANG to do conversion between databases. Conversion between 2 database charactersets is done purely based on the NLS_CHARACTERSET (or NLS_NCHAR_CHARACTERSET for NCHAR, NVARCHAR2 and NCLOB datatypes) of the source and target databases.

However, if you specify a parameter file then the NLS_LANG is used. This is only important if you use non-English characters (e.g. for the QUERY parameter) in the parameter file.

If you use non-English characters in the parameter file then the NLS_LANG environment variables should be set (in the session where the Data Pump job is started) to the correct encoding of the parameter file.

Note: There is a chance of data corruption when going from an 8-bit characterset to UTF8 or another multibyte characterset on ALL 10g versions (including 10.1.0.5 and 10.2.0.3) and on 11.1.0.6. Impdp may provoke data corruption unless patch 5874989 is applied. This is caused by Bug 5874989. The "old" exp/imp works fine. This problem is fixed in the upcoming 10.2.0.4 and 11.1.0.7 patchsets.

All existing patches for this bug are found here: http://updates.oracle.com/download/5874989.html

For 10.1.0.5, 10.2.0.2, 10.2.0.3 and 11.1.0.6 you can ask for a backport for Bug 5874989 if there isn’t a patch yet for your version/platform.

The patch is technically only needed on the impdp side, but if you use expdp/impdp between different charactersets Oracle suggests you patch all of your systems.

What Causes ORA-01401 or ORA-12899 during import (imp and impdp)?

9i and lower give ORA-01401: inserted value too large for column; 10g and up give ORA-12899: value too large for column.

This is seen when exporting from a database with an 8-bit NLS_CHARACTERSET (like WE8ISO8859P1, WE8MSWIN1252, WE8DEC …) or a 16-bit NLS_CHARACTERSET (like JA16SJIS, ZHS16GBK, KO16MSWIN949) to a database with the NLS_CHARACTERSET set to AL32UTF8 or UTF8.

(AL32)UTF8 uses more BYTES to store a character than 8-bit and 16-bit charactersets do. This is explained in Note 119119.1 AL32UTF8 (Unicode) Database Character Set Implications.

CHAR and VARCHAR2 column sizes are defined by default in BYTES, not CHARACTERS.

The best solution is to precreate the tables using CHAR semantics (9i and up); see Note 144808.1 Examples and limits of BYTE and CHAR semantics usage.
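
As a minimal sketch of what precreating with CHAR semantics looks like (hypothetical table and column names, assuming an (AL32)UTF8 target database):

-- Defined in bytes: holds 30 bytes, which may be as few as 10 three-byte UTF8 characters
create table emp_byte (ename varchar2(30 byte));

-- Defined in characters: holds 30 characters, up to 90 bytes in UTF8
-- (120 bytes in AL32UTF8), so converted data still fits
create table emp_char (ename varchar2(30 char));

Precreating the target tables with CHAR semantics before running the import is what avoids the ORA-01401 / ORA-12899 errors on expanded data.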

Posted by: repettas | May 16, 2008

National Character Set Conversion in Oracle 9i and 10g

What is the National Character Set?

The National Character set (NLS_NCHAR_CHARACTERSET) is a character set which is defined in addition to the (normal) database character set and is used for data stored in NCHAR, NVARCHAR2 and NCLOB columns. You can find your current value for the NLS_NCHAR_CHARACTERSET with the following select statement:

SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';

You cannot have more than 2 charactersets defined in Oracle: the NLS_CHARACTERSET is used for CHAR, VARCHAR2 and CLOB columns; the NLS_NCHAR_CHARACTERSET is used for NCHAR, NVARCHAR2 and NCLOB columns.

NLS_NCHAR_CHARACTERSET is defined when the database is created and is specified with the CREATE DATABASE command. The NLS_NCHAR_CHARACTERSET defaults to AL16UTF16 if nothing is specified. From 9i onwards the NLS_NCHAR_CHARACTERSET can only have 2 values: UTF8 or AL16UTF16, which are the Unicode character sets in the Oracle database.
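
As a heavily abbreviated sketch (the database name is hypothetical, and a real CREATE DATABASE statement also needs DATAFILE, LOGFILE and related clauses), the two charactersets are specified like this:

create database mydb
  character set AL32UTF8
  national character set AL16UTF16;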

A lot of people think they need to use the NLS_NCHAR_CHARACTERSET to have Unicode support in Oracle, but this is not true. The NLS_NCHAR_CHARACTERSET (NCHAR, NVARCHAR2) in 9i is always Unicode, but you can perfectly well use the "normal" CHAR and VARCHAR2 columns for storing Unicode in a database that has an AL32UTF8 / UTF8 NLS_CHARACTERSET.

If you try to create an Oracle 9i or 10g database with any other national character set, the CREATE DATABASE command will fail with ORA-12714: invalid national character set specified.

Which Datatypes Use the National Character Set?

There are three datatypes which can store data in the national character set:

NCHAR – is a fixed-length national character set character string. The length of the column is ALWAYS defined in characters (it always uses CHAR semantics)

NVARCHAR2 – is a variable-length national character set character string. The length of the column is ALWAYS defined in characters (it always uses CHAR semantics)

NCLOB – stores national character set data of up to four gigabytes in size. Data is always stored in UCS2 or AL16UTF16, even if the NLS_NCHAR_CHARACTERSET is UTF8. If you use N-types, DO use the N'…' syntax when coding so that literals are denoted as being in the national character set by prepending the letter 'N', for example: insert into test values (N'this is a NLS_NCHAR_CHARACTERSET string');
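
A minimal end-to-end sketch of the N'…' syntax (hypothetical table name):

create table nls_test (id number, val nvarchar2(50));
insert into nls_test values (1, N'this is a NLS_NCHAR_CHARACTERSET string');
select val from nls_test where val = N'this is a NLS_NCHAR_CHARACTERSET string';

Without the N prefix the literal is first interpreted in the database characterset and only then converted to the national characterset, which can silently lose characters if the database characterset is not Unicode.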

How Do I Know if I Use N-Type Columns?

This select statement will list all tables containing a N-type column:

select distinct owner, table_name from dba_tab_columns where data_type in ('NCHAR','NVARCHAR2','NCLOB');

On a 9i database created without (!) the "sample" schema you will see these rows (or fewer) returned:

OWNER   TABLE_NAME
------- --------------------
SYS     ALL_REPRIORITY
SYS     DBA_FGA_AUDIT_TRAIL
SYS     DBA_REPRIORITY
SYS     DEFLOB
SYS     STREAMS$_DEF_PROC
SYS     USER_REPRIORITY
SYSTEM  DEF$_LOB
SYSTEM  DEF$_TEMP$LOB
SYSTEM  REPCAT$_PRIORITY

9 rows selected.
These SYS and SYSTEM tables may contain data if you are using:
  • Fine Grained Auditing -> DBA_FGA_AUDIT_TRAIL
  • Advanced Replication -> ALL_REPRIORITY, DBA_REPRIORITY, USER_REPRIORITY, DEF$_TEMP$LOB, REPCAT$_PRIORITY
  • Advanced Replication or Deferred Transactions Functionality -> DEFLOB
  • Oracle Streams -> STREAMS$_DEF_PROC

If you created the database with DBCA (database assistant) and included the sample schema then you will typically see:

OWNER   TABLE_NAME
------- --------------------
OE      BOMBAY_INVENTORY
OE      PRODUCTS
OE      PRODUCT_DESCRIPTIONS
OE      SYDNEY_INVENTORY
OE      TORONTO_INVENTORY
PM      PRINT_MEDIA
SYS     ALL_REPRIORITY
SYS     DBA_FGA_AUDIT_TRAIL
SYS     DBA_REPRIORITY
SYS     DEFLOB
SYS     STREAMS$_DEF_PROC
SYS     USER_REPRIORITY
SYSTEM  DEF$_LOB
SYSTEM  DEF$_TEMP$LOB
SYSTEM  REPCAT$_PRIORITY

15 rows selected.
The OE and PM tables contain just sample data and can be dropped if need be.

Should I Worry When I Upgrade from 8i or lower to 9i or 10g?

  • When upgrading from Version 7 – The National Character Set did not exist in Oracle Version 7, so you cannot have N-type columns. Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET declaration and the standard sys/system tables. So there is no need to worry about anything when upgrading from 7.
  • When upgrading from Version 8 and 8i – If you only have the SYS / SYSTEM tables listed above then you don't have USER data using N-type columns. Your database will just have the -default- AL16UTF16 NLS_NCHAR_CHARACTERSET declaration after the upgrade and the standard sys/system tables. So there is nothing to worry about…
  • If you have more tables than the SYS / SYSTEM ones listed above (and they are also not the "sample" tables) then there are 3 possible cases (again, the next two points are only relevant when you DO have N-type USER data):
    • Your current 8 / 8i NLS_NCHAR_CHARACTERSET is in this list: JA16SJISFIXED, JA16EUCFIXED, ZHT32TRISFIXED, KO16KSC5601FIXED, KO16DBCSFIXED, S16TSFIXED, ZHS16CGB231280FIXED, ZHS16GBKFIXED, ZHS16DBCSFIXED, ZHT16DBCSFIXED, ZHT16BIG5FIXED, ZHT32EUCFIXED – then the new NLS_NCHAR_CHARACTERSET will be AL16UTF16 and your data will be converted to AL16UTF16 during the upgrade.
    • Your current 8 / 8i NLS_NCHAR_CHARACTERSET is UTF8: Then the new NLS_NCHAR_CHARACTERSET will be UTF8 and your data will not be touched during the upgrade.
    • Your current 8 / 8i NLS_NCHAR_CHARACTERSET is NOT in the list of points above and IS NOT UTF8: Then you will need to export your data and drop it before upgrading.

The NLS_NCHAR_CHARACTERSET is NOT Changed to UTF8 or AL16UTF16 after Upgrading to 9i:

This may happen if you have not set the ORA_NLS33 environment variable correctly to the 9i Oracle Home during the upgrade. Strongly consider restoring your backup and performing the migration again, or open a case with Oracle.

Can You Change AL16UTF16 to UTF8? People Have Heard There Are Problems with AL16UTF16:

  • If you do not use N-types then there is NO problem at all with AL16UTF16 because you are simply not using it and Oracle strongly advises you to stay with the default AL16UTF16 NLS_NCHAR_CHARACTERSET.
  • If you do use N-types then there will be a problem with 8i clients and lower accessing the N-type columns (note that you will NOT have a problem selecting from the "normal" non-N-type columns). If this is a situation you find yourself in, Oracle recommends that you use UTF8 as NLS_NCHAR_CHARACTERSET, or create a second 9i database using UTF8 as NCHAR and use it as an "in-between" or staging database between the 8i and the 9i database. You can create views in this new database that select from the AL16UTF16 9i database; the data will then be converted from AL16UTF16 to UTF8 in the in-between database, and that can be read by Oracle 8i. This is one of the two reasons why you would use UTF8 as NLS_NCHAR_CHARACTERSET; if you are NOT using N-type columns with pre-9i clients then there is NO reason to go to UTF8.
  • If you want to change to UTF8 because you are using transportable tablespaces from an 8i database, check whether you are using N-types in the 8i database that are included in the tablespace you are transporting. Use the following select statement: select distinct owner, table_name from dba_tab_columns where data_type in ('NCHAR','NVARCHAR2','NCLOB'); If yes, then you have the second reason to use UTF8 for NLS_NCHAR_CHARACTERSET. If not, then leave it at AL16UTF16 and open a TAR with Oracle for the solution to any ORA-19736 error(s).
  • If you are in one of the 2 situations where it is really necessary to change from AL16UTF16 to UTF8, then the correct steps are:
    • install csscan
    • for 10g use the csscan 2.x version found in $ORACLE_HOME/bin, no need to install a new version (skip the next step)
    • For 9.2 and lower: Please DO install version 1.2 or higher from OTN for your OS version. Copy all of the scripts and executables found in the zip file you downloaded to your $ORACLE_HOME directory and overwrite (replace) the older versions. Note: Do NOT use the csscan of a 10g Installation on a 9i/8i database!
    • Execute csminst.sql using the following commands and executables:
cd $ORACLE_HOME/rdbms/admin
export ORACLE_SID=<Oracle SID>
sqlplus “/ as sysdba”
set termout on
set echo on
spool csminst.log
START csminst.sql
Check the csminst.log for errors. If you get the error "Character set migrate utility schema not compatible" then check to be sure you replaced all of the older csscan files and executables with the ones in the download zip file (check the file permissions in the target directory, make sure you are starting the executable from the right directory, check your $PATH settings, and make sure you are not starting csscan from an inappropriate Oracle Home and have not executed a csminst.sql from a different location or wrong version).
For 9i: export all of the user N-data, then drop/truncate all the user N-data (if you do not drop all N-data you will run into ORA-12717: Cannot issue ALTER DATABASE NATIONAL CHARACTERSET when NCLOB, NCHAR or NVARCHAR2 data exists). Then execute csscan to check if everything is OK:
csscan FULL=Y TONCHAR=UTF8 LOG=check CAPTURE=Y ARRAY=10000000 PROCESS=2
Always execute csscan as sysdba. Review the check.txt file very carefully to verify that ALL character type data in the data dictionary remains the same in the new character set and ALL character type application data remains the same in the new character set. If this is true you can then execute the ALTER DATABASE NATIONAL CHARACTER SET UTF8; command. Next, shut down the listener and any applications that connect locally to the database. There should be only ONE connection to the database during the WHOLE time, and that's the SQL*Plus session that you are using to perform the change.
  1. Make sure the parallel_server parameter in init.ora is set to false or is not set at all. If you are using RAC see Oracle's MetaLink note on changing the character set for a RAC database.
  2. Execute the following commands in a SQL*Plus session connected as “/ as sysdba”:
spool Nswitch.log
shutdown immediate;
startup mount;
alter system enable restricted session;
alter system set job_queue_processes=0;
alter system set aq_tm_processes=0;
alter database open;
alter database national character set UTF8;
shutdown immediate;
Restore the parallel_server parameter in init.ora, if necessary:
Import the user N-data again.
For 10g and up:
  • export any user N-data
  • drop/truncate any user N-data
  • truncate these 2 XDB tables if they each return 7 rows:
SQL> select LOCALNAME from XDB.XDB$QNAME_ID;
SQL> select NMSPCURI from XDB.XDB$NMSPC_ID;
SQL> truncate table xdb.xdb$qname_id;
SQL> truncate table xdb.xdb$nmspc_id;

If you have more than 7 rows, open an SR with Oracle. Otherwise, execute csscan to verify that everything is OK.

csscan FULL=Y TONCHAR=UTF8 LOG=check CAPTURE=Y ARRAY=10000000 PROCESS=2

Always execute csscan as "/ as sysdba".

Review the results in the check.txt file. You should see the following:

All character type data in the data dictionary remain the same in the new character set
All character type application data remain the same in the new character set
Next run csalter (using alter database on a 10g system for the national characterset is not really a problem, however…).
Shut down the listener and any application that connects locally to the database. There should be only ONE connection to the database during the WHOLE time, and that's the SQL*Plus session you are using to execute the commands. Once you've verified that your session is the only current session, connect "/ as sysdba" and execute the following set of commands:
-- Make sure that the parallel_server and CLUSTER_DATABASE parameters are set to FALSE or are not set
-- at all. If you are using RAC you will need to start the database in single instance mode by setting the
-- initialization parameter CLUSTER_DATABASE=FALSE
show parameter CLUSTER_DATABASE
show parameter PARALLEL_SERVER
-- check if you are using an spfile
show parameter spfile
-- if this displays the path to an SPFILE then you are using an spfile and in that case you need to note the current values:
show parameter job_queue_processes
show parameter aq_tm_processes
-- then do
shutdown immediate;
startup restrict;
spool Nswitch.log
@?/rdbms/admin/csalter.plb
-- if you are using an spfile then you also need to do this
alter system set job_queue_processes=<original value> scope=both;
alter system set aq_tm_processes=<original value> scope=both;
shutdown;
startup;

After the above you will need to update the XDB tables with the following inserts (these are located in the $ORACLE_HOME/rdbms/admin/catxdbtm.sql script):

/************ Insert reserved values into tables *********************/

insert into xdb.xdb$nmspc_id values
('http://www.w3.org/XML/1998/namespace',          HEXTORAW('01'));
insert into xdb.xdb$nmspc_id values
('http://www.w3.org/XML/2000/xmlns',              HEXTORAW('02'));
insert into xdb.xdb$nmspc_id values
('http://www.w3.org/2001/XMLSchema-instance',     HEXTORAW('03'));
insert into xdb.xdb$nmspc_id values
('http://www.w3.org/2001/XMLSchema',              HEXTORAW('04'));
insert into xdb.xdb$nmspc_id values
('http://xmlns.oracle.com/2004/csx',              HEXTORAW('05'));
insert into xdb.xdb$nmspc_id values
('http://xmlns.oracle.com/xdb',                   HEXTORAW('06'));
insert into xdb.xdb$nmspc_id values
('http://xmlns.oracle.com/xdb/nonamespace',       HEXTORAW('07'));

insert into xdb.xdb$qname_id values (HEXTORAW('01'), 'space', HEXTORAW('01'), HEXTORAW('10'));
insert into xdb.xdb$qname_id values (HEXTORAW('01'), 'lang', HEXTORAW('01'), HEXTORAW('11'));
insert into xdb.xdb$qname_id values (HEXTORAW('03'), 'type', HEXTORAW('01'), HEXTORAW('12'));
insert into xdb.xdb$qname_id values (HEXTORAW('03'), 'nil', HEXTORAW('01'), HEXTORAW('13'));
insert into xdb.xdb$qname_id values
(HEXTORAW('03'), 'schemaLocation', HEXTORAW('01'), HEXTORAW('14'));
insert into xdb.xdb$qname_id values
(HEXTORAW('03'), 'noNamespaceSchemaLocation', HEXTORAW('01'), HEXTORAW('15'));
insert into xdb.xdb$qname_id values (HEXTORAW('02'), 'xmlns', HEXTORAW('01'), HEXTORAW('16'));

commit;

Import any user N-data.

Important: Do NOT use the N_SWITCH.SQL script again AFTER migrating from Oracle 8i to 9i; using this script a second time will corrupt existing NCHAR data!

Is the AL32UTF8 Problem the Same as the AL16UTF16 One / Do I Need the Same Patches?

No, they may look similar but they are different problems. This refers to the problems connecting to AL32UTF8 databases with older client software.

But I Still Want <characterset> as NLS_NCHAR_CHARACTERSET, like I had in 8i:

This is simply not possible. From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values: UTF8 or AL16UTF16. Both UTF8 and AL16UTF16 are Unicode charactersets, so they can store whatever <characterset> you had as NLS_NCHAR_CHARACTERSET in 8(i). If you are not using N-types then keep the default AL16UTF16 (or use UTF8 if you really want); it doesn't matter if you don't use N-types. There is one condition in which this "limitation" can have an undesired effect: when you are importing an Oracle 8i transportable tablespace into Oracle 9i you can run into ORA-19736 (with AL16UTF16 as well as with UTF8). In that case open a TAR with Oracle, send them the output of select distinct owner, table_name from dba_tab_columns where data_type in ('NCHAR','NVARCHAR2','NCLOB'); refer to the MetaLink note and ask them to assign the TAR to the NLS/globalization team. That team can then assist you with a workaround. Do NOT try to change the national characterset in 8i to AL16UTF16 or update system tables as is sometimes suggested on Oracle DBA sites or blogs on the Internet, at least not if you don't want to jeopardize your data integrity and possibly destroy your Oracle instance.

Do You Need to set NLS_LANG to AL16UTF16 when Creating / Using the NLS_NCHAR_CHARACTERSET?

NLS_LANG is used to let Oracle know what characterset your client’s OS is using so that Oracle can perform (if needed) the conversion from the client’s characterset to the Server’s database characterset.  NLS_LANG is a CLIENT parameter and has no influence on the server (database side) whatsoever.

AL32UTF8 fails with NLS_NCHAR_CHARACTERSET with ORA-12714:

From 9i onwards the NLS_NCHAR_CHARACTERSET can have only 2 values: UTF8 or AL16UTF16. UTF8 is possible so that you can use it (when needed) for 8.x backwards compatibility. In all other conditions AL16UTF16 is the preferred and best value. AL16UTF16 has the same unicode revision as AL32UTF8 so there is no need for AL32UTF8 as NLS_NCHAR_CHARACTERSET.

You have messages like "(possible ncharset conversion)" during Import:

This is normal and is not an error condition. If you do not use N-types then this is a purely informative message. But even in the case that you use N-types like NCHAR and NCLOB, this is not a problem. The database will convert from the "old" NCHAR characterset to the new one automatically (and, unlike with the "normal" characterset, the NLS_LANG has no impact on this conversion during the exp/imp). AL16UTF16 and UTF8 (the only 2 possible values in 9i) are Unicode charactersets, so you can store any character… so no data loss is to be expected.

Can You use AL16UTF16 as NLS_CHARACTERSET?

No, AL16UTF16 can only be used as the NLS_NCHAR_CHARACTERSET in 9i and above. Trying to create a database with an AL16UTF16 NLS_CHARACTERSET will fail.

Do You Need to Change the NLS_NCHAR_CHARACTERSET in 8i to UTF8 BEFORE Upgrading to 9i/10g?

No, see previous statements.

Having a UTF8 NLS_CHARACTERSET Database, is There an Advantage to Using AL16UTF16 N-Types?

There might be 2 reasons. One possible advantage is storage (disk space): UTF8 uses 1 to 3 bytes per character, AL16UTF16 always uses 2 bytes. If you have a lot of non-Western data (Cyrillic, Chinese, Portuguese, etc.) UTF8 will in most cases use less disk space than AL16UTF16. This is NOT true for NCLOB and CLOB; they are both encoded in an internal fixed-width Unicode character set, so they will both use the same amount of disk space. The other possible advantage is extending the limits of CHAR semantics. For a single-byte character set encoding, the character and byte length are the same. However, multi-byte character set encodings do not correspond one-to-one to bytes, making sizing the column more difficult. Hence the reason why CHAR semantics was introduced. However, we still have some physical underlying byte-based limits, and development has chosen to allow full usage of CHARacters occupying the MAXIMUM data length that can be stored for a certain datatype in 9i and up.

The MAX column is the MAXIMUM number of CHARACTERS that can be stored occupying the MAXIMUM data length. Given that UTF8 and AL32UTF8 are VARYING-width charactersets, this means that a string of X chars can be X to X*3 (or X*4 for AL32UTF8) bytes.

The MIN column is the maximum size that you can define and that Oracle can store if all data is of the MINIMUM data length (1 byte for AL32UTF8 and UTF8) per character.

N-types (NVARCHAR2, NCHAR) are always defined in CHAR semantics, you cannot define them in BYTE.

All numbers are CHAR definitions.

Maximum Unicode Column Values

This means that if you try to store more than 666 characters that each occupy 3 bytes in a UTF8 CHAR column, you will still get an ORA-01401: inserted value too large for column error (or from 10g onwards: ORA-12899: value too large for column), even if you have defined the column as CHAR (2000 CHAR). Here it might be a good idea to define that column as NCHAR, which will raise the MAX to 1000 characters …
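
The "Maximum Unicode Column Values" image above summarizes these limits. For reference, the commonly cited numbers, assuming the maximum declared size for each datatype, are:

Datatype   Byte Limit  MIN (1-byte characters)  MAX in UTF8 (3 bytes)  MAX in AL32UTF8 (4 bytes)
CHAR       2000        2000                     666                    500
VARCHAR2   4000        4000                     1333                   1000

With AL16UTF16 as the national characterset every character is exactly 2 bytes, so NCHAR always holds 1000 characters and NVARCHAR2 always holds 2000 characters, regardless of content.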

Disadvantages of using N-types:

  • You might have some problems with older clients if using AL16UTF16
  • Be sure that you use (AL32)UTF8 as NLS_CHARACTERSET, otherwise you will run into one of the above issues.
  • Do not expect higher performance by using AL16UTF16; it might be faster on some systems, but that has more to do with I/O than with the database kernel.
  • Forms 6i/9i does not support NCHAR / NVARCHAR2
  • If you use N-types, DO use the N'…' syntax when coding so that literals are denoted as being in the national character set by prepending the letter 'N'

Oracle's recommendation is: if your environment (clients and servers) consists entirely of Oracle 9i or higher, use AL32UTF8 as NLS_CHARACTERSET; otherwise use UTF8.

For the time being do NOT use expdp/impdp (Data Pump) when going to UTF8 or another multibyte characterset on ALL 10g versions including 10.1.0.5 and 10.2.0.3. This also includes 11.1.0.6. It will provoke data corruption unless patch 5874989 is applied. This is caused by the impdp Bug 5874989. The older import/export executables work fine and should be used.

If you are using 10.1.0.5, 10.2.0.2, 10.2.0.3 or 11.1.0.6 you can request a backport for Bug 5874989 if there hasn’t been a patch issued for your platform.

Some Things to Be Aware Of:

On the database side you do not need "Unicode" support from the OS where the database is running, because the Oracle AL32UTF8 implementation does not depend on OS features.

Server Side Considerations

AL32UTF8 is a varying width characterset, which means that the code for a character can be 1, 2, 3, or 4 bytes long. This is a big difference with charactersets like WE8ISO8859P1 or WE8MSWIN1252.

US7ASCII characters (A-Z, a-z, 0-9 and ., /, ?, *, # etc.) are 1 byte in UTF8, so for most West European languages the impact is rather limited, as only "special" characters (the non-US7ASCII ones) will use more bytes than in an 8-bit characterset. But if you convert a Cyrillic or Arabic system to AL32UTF8 then the data will take considerably more bytes to store.

This also means that you have to make sure your columns are big enough to store the additional bytes. By default the column size is defined in BYTES and not in CHARACTERS. So a "create table <name> (<column name> VARCHAR2(2000));" means that the column can store 2000 bytes.

From 9i onwards however, it is possible to really define a column with the number of CHARACTERS you want to store. How this works and what the limits and current known problems are will be explained another day.
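
As a short preview (a minimal sketch; table and column names are hypothetical):

-- explicit CHAR semantics per column:
create table t1 (c varchar2(2000 char));

-- or set CHAR semantics as the session default for subsequent DDL:
alter session set nls_length_semantics = CHAR;
create table t2 (c varchar2(2000));

Note that the physical byte limits still apply: a VARCHAR2(2000 CHAR) column is still capped at 4000 bytes of actual data.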

Codepoints:

There is a common misconception that a character is always the same code; for example the £ is often referred to as the "code 163" character. This is not correct! The code itself means nothing if you do not know what characterset you are using. The difference may look small, but it's not. The pound sign, for example, is indeed 163 (A3 in hexadecimal) in the WE8ISO8859P1 and WE8MSWIN1252 charactersets, but in AL32UTF8 the pound sign is code 49827 (C2 A3 in hex).

So be careful if you use, for example, the ASCII or CHR(<code>) functions; the code for a character using CHR depends on the database characterset! If you do chr(163) in an AL32UTF8 database then this 163 code is an illegal character, as 163 simply does not exist; for the pound sign you should use chr(49827) in an AL32UTF8 system.

Instead of CHR() you might use UNISTR('\<code>'). UNISTR() (a new feature in 9i) always works on every characterset that can display the character.
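
A quick illustration against an (AL32)UTF8 database (the pound sign is Unicode code point U+00A3):

-- UNISTR addresses the character by its UCS-2 code point, independent of the database characterset:
select unistr('\00A3') from dual;

-- DUMP shows how it is actually stored: in AL32UTF8 this returns 2 bytes, C2 A3
select dump(unistr('\00A3'), 16) from dual;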

Objects and User Names are Max 30 Bytes not Characters:

Identifiers of database objects are max 30 bytes long. Names must be from 1 to 30 bytes long with these exceptions:

  • Names of databases are limited to 8 bytes.
  • Names of database links can be as long as 128 bytes.

Oracle strongly suggests never to use non-US7ASCII names for a database or a database link. The following SQL statement will return all objects having a non-US7ASCII name:

select object_name from dba_objects where object_name <> convert(object_name,'US7ASCII');

A username can be a maximum of 30 bytes long. The name can only contain characters from your database characterset and must follow the rules described in the section "Schema Object Naming Rules". Oracle recommends that the user name contain at least one single-byte character regardless of whether the database character set also contains multibyte characters. The following SQL statement will return all users having a non-US7ASCII name:

select username from dba_users where username <> convert(username, 'US7ASCII');

Using CHAR semantics is for the moment not supported in the SYS schema and that’s where the database object and user names are stored. If you have objects or users with non-US7ASCII names that take more than 30 bytes in UTF8 there is no alternative besides renaming the affected objects or user to use a name that will occupy no more than 30 bytes.

The password for users can contain only single-byte characters from your database character set regardless of whether the character set also contains multi-byte characters.

This means that in an (AL32)UTF8 database user passwords can only contain US7ASCII characters, as they are the only single-byte characters in UTF8. This may provoke a problem: if you migrate from (for example) a CL8MSWIN1251 database then your users may have used Cyrillic in their passwords, since in CL8MSWIN1251 Cyrillic is single-byte; in UTF8 it is not. Note that passwords are stored in a hashed way and will NOT be seen in csscan. You will need to reset those users' passwords to a US7ASCII string. This restriction has been lifted in 11g, where you can use multi-byte characters in a password string. Please note that passwords need to be updated in 11g before they use the new 11g hashing system.

Other Things to Watch Out For:

Make sure you use a correct NLS_LANG setting when exporting or importing to or from a UTF8 database. Using DBMS_LOB.LOADFROMFILE may have some implications when used in a UTF8 database. When using SQL*Loader make sure you define the correct characterset of the file you load. String functions work with characters, not bytes (LENGTH, LIKE, SUBSTR …). There are of course exceptions like LENGTHB, SUBSTRB and INSTRB which explicitly deal with bytes. Since you never know the exact length of a string in BYTES in a UTF8 environment, operations based on the BYTE length should be avoided, unless you have a compelling reason to use them.
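
A small sketch of the character vs. byte distinction in an (AL32)UTF8 database, reusing the pound sign from the earlier example:

-- one character, but two bytes of storage in (AL32)UTF8:
select length(unistr('\00A3'))  as char_length,
       lengthb(unistr('\00A3')) as byte_length
  from dual;
-- CHAR_LENGTH = 1, BYTE_LENGTH = 2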

If you do not use CHAR semantics then the column size will be tripled (a CHAR 20 BYTE column will show up as 60 bytes in the AL32UTF8 db). If you really need to use BYTE you can use _keep_remote_column_size=true at the MV side, but be aware that this will provoke ORA-01401 or ORA-12899, as the data WILL expand when it contains non-US7ASCII characters. Avoid using a mixture of BYTE and CHAR semantics in the same table; also note that _keep_remote_column_size=true is NOT compatible with using CHAR semantics.

The Client Side:

Most people think that the NLS_LANG should be UTF8 or AL32UTF8 because they are connecting to an AL32UTF8 database. This is not necessarily true; the NLS_LANG has in fact no relation to the database characterset. Its purpose is to let Oracle know what the client characterset is, so that Oracle can do the needed conversion.

Please make sure you understand the difference between a client that can simply connect to a Unicode database (which is any 8.0 and up client for a UTF8 database and any 9i and up client for an AL32UTF8 database) and a "real" Unicode client. A "real" Unicode client means a client that can display/insert all characters known by Unicode without the need to recompile or change the OS settings.

Configuring Your Client To Be a Unicode Client on UNIX:

To have a Unix Unicode client you need to configure your Unix environment first to use UTF8, then check that your telnet software is able to use Unicode, and then (as the last step) you can set the NLS_LANG environment variable to AMERICAN_AMERICA.AL32UTF8 and start SQL*Plus.

Configuring Your Client To Be a Unicode Client on Windows:

On Windows you cannot use sqlplusw.exe or sqlplus.exe as a Unicode / UTF8 client. Using sqlplusw.exe with NLS_LANG set to AL32UTF8 is totally incorrect. There are 2 Oracle-provided Unicode clients you can use on Windows: iSQL*Plus and Oracle SQL Developer. If you want to use / write a Unicode application on Windows then the application should be specifically written to use the Windows Unicode API; setting NLS_LANG to AMERICAN_AMERICA.AL32UTF8 is not enough. You should consult the application vendor or the vendor of the development environment to see how to properly code for Unicode.

The popular tool TOAD is NOT able to run against (AL32)UTF8 databases. Toad does not support databases with a Unicode character set such as UTF8 or AL16UTF16. The International version of Toad is the only version of Toad that does support (AL32)UTF8.

All Oracle 9i and up clients are compatible with a AL32UTF8 database, even if they use a non-AL32UTF8 NLS_LANG setting:

Note: You can use sqlplusw.exe on a Western European / US Windows client (using an NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252, which is the correct value for a Western European / US Windows system) to connect to an AL32UTF8 database. However, that SQL*Plus client will only be able to insert / view Western European characters. If another user on a correctly configured Chinese Windows system inserts data using SQL*Plus, this data will not be visible in the Western European SQL*Plus client. If you try to update Chinese data using the Western European client this will also fail.

Posted by: repettas | May 11, 2008

Register New Database in RMAN

To register a new incarnation or register a database for the first time in RMAN:

  • Connect to RMAN and list the incarnation(s) of the database (to identify existing incarnations or to check if the database has been registered with RMAN already):

rman target <username>/<password>@TNSname rcvcat <rman userid>/<rman password>

To list the incarnations(s):

RMAN> list incarnation;

If the database has already been registered you will see output similar to the following:

RMAN> list incarnation;

List of Database Incarnations
DB Key  Inc Key  DB Name  DB ID       STATUS   Reset SCN  Reset Time
------- -------- -------- ----------- -------- ---------- ----------
1       2        PSEHR83  2199643792  CURRENT  1          01-APR-08

Otherwise, you will not see anything, and you will be registering the database for the first time. If you do see a list of database incarnations you need to decide whether you want to register a new incarnation or continue to work with the existing one. To register a database, issue the register database command as follows:

RMAN> register database;

To display a list of the default configuration parameters, enter the command show all; – as shown below (the semicolon instructs RMAN to execute the text preceding the semicolon):

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/app/oracle/product/10.2.0/db_1/dbs/snapcf_PSEHR83.f'; # default

In the output above there are only two changes from the default parameters: I set the value of BACKUP OPTIMIZATION to ON and the value of CONTROLFILE AUTOBACKUP to ON by issuing the following two commands:

RMAN> configure backup optimization on;
RMAN> configure controlfile autobackup on;

Whenever you issue a command to change the value of one of the parameters RMAN will display the current value and the new value as follows:

RMAN> configure controlfile autobackup on;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>

In RMAN 10g R2, RMAN remembers the changed parameters from one RMAN session to another. There is no need to issue the configuration commands in each RMAN session. However, I issue a set of RMAN configuration commands in the RMAN scripts I create to ensure that the parameters are set to the values I need for that RMAN execution.

In our environments we generate RMAN backupsets, controlfile copies, archivelog copies, etc. to disk, and more specifically to ASM disks. The following configuration parameters are examples of the parameters we set in some of our RMAN scripts:

RMAN> configure backup optimization on;
RMAN> configure controlfile autobackup on;
RMAN> configure controlfile autobackup format for device type disk to '+ps89fn_disk3/ps89hrrf/autobackup/%F';
RMAN> configure device type disk backup type to backupset parallelism 2;

I chose a value of 2 for parallelism because this example is for RMAN running on an Itanium blade that only has two processors, and although my ASM storage consists of three logical volumes I only write RMAN backupsets and copies to two of the three ASM volumes.

The following is an example of an RMAN script that generates a level 0 incremental backupset:

[oracle@pdpfdb01:psprman:/opt/app/oracle/admin/scripts/psprman>more rman_full.rman
CONNECT target "<oracle userid>/<user password>@ps89fnp1"
CONNECT catalog <rman userid>/<rman password>
list backup;
configure channel 1 device type disk format '+ps89fn_disk1' maxpiecesize 1750M;
configure channel 2 device type disk format '+ps89fn_disk3' maxpiecesize 1750M;
backup incremental level 0 database tag="prod_bkup" noexclude;
crosscheck backupset;
crosscheck copy;
delete obsolete;

The Oracle DB user account and password are in quotes because our passwords are required to contain alphanumeric and special characters. If a user account or password contains numeric and/or special characters it must be passed to the Oracle program in quotes. The first CONNECT statement contains the user account and password for the Oracle database to be backed up and the second CONNECT statement contains the user account and password for the RMAN catalog database.

To invoke an RMAN script you can use the following example as a guide:

rman @<script name>.rman
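For example, to run the level 0 script shown above and capture the session output to a log file (the log path here is just an illustration):

rman @/opt/app/oracle/admin/scripts/psprman/rman_full.rman log=/tmp/rman_full.log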

I create very simple Korn shell scripts to invoke an RMAN script so I can run the RMAN task out of crontab. Here is a simple example of a Korn shell script that invokes an RMAN backup script named rman_backup.rman:

[oracle@pdpfdb01:psprman:/opt/app/oracle/admin/dba/scripts/monitor>more rman.ksh
#!/bin/ksh -xv

echo "Start: `/bin/date`"
echo " "
export ORACLE_HOME=/opt/app/oracle/product/10.2.0/db_1
export ORACLE_SID=psprman
export CURRENT_DIR=/opt/app/oracle/admin/dba/scripts/monitor
export PATH=$PATH:/usr/bin:/bin:${ORACLE_HOME}/bin:${CURRENT_DIR}

rman @/opt/app/oracle/admin/scripts/psprman/rman_backup.rman

echo " "
echo "End: `/bin/date`"

The actual RMAN scripts I use contain shell commands to send an email out with the execution log and notify me of the success and/or failure of the RMAN task.
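Here is a minimal sketch of that idea, assuming mailx is available; the recipient address and log path are made up:

LOG=/tmp/rman_backup.log

# RMAN returns a nonzero exit status on failure, so the exit code drives the subject line
if rman @/opt/app/oracle/admin/scripts/psprman/rman_backup.rman log=$LOG
then
  mailx -s "RMAN backup of psprman succeeded" dba@example.com < $LOG
else
  mailx -s "RMAN backup of psprman FAILED" dba@example.com < $LOG
fi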

Set up PAGENT to Forward Keys for Auto-login without User Prompt

Steps to set up RSA and DSA SSH Key Authorization to automatically connect to a Unix Server without being prompted for a password.

Download PuTTY, PSCP, PSFTP, PuTTYgen, and Pageant. These are all standalone executables; you do not need to unzip or extract any of the downloads. Put these files into a directory that is in your PATH environment variable so you do not have to qualify the executable with a path. I put these files right into C:\Windows\System32. If you choose a different directory you will have to update your PATH environment variable to include it.

PuTTY Key Generator

1) puttygen.exe is the executable that you run to generate the private/public key combinations.

2) I download and move the PuTTY executables into my C:\Windows\System32 directory. If you choose a different location you need to update your environment variables to include the directory you chose to store the executables in.

3) execute puttygen.exe

4) Go to the bottom section; to generate the first key pair, select SSH-2 RSA in the Parameters -> Type of key to generate: section and enter 1024 as the value for Number of bits in a generated key.

To generate the key click on the Generate button and move your mouse around the screen until the key has been generated.

5) The text that appears in the box below the description Public key for pasting into OpenSSH authorized_keys file: is the text that you will concatenate onto the end of the authorized_keys file on the server.

6) Put your mouse into the text box and right-click; using the mouse menu click Select All, then right-click and select Copy.

7) Use Notepad.exe to create a blank text file and paste the text from your buffer into it. Save the file in the directory location that you will use to store your public and private keys; I use the same directory to save all of my PuTTY related files and I put it on my Desktop so I can easily access it. Name this file something that is meaningful to you; I use the hostname of my workstation prefixed to pub. For example: imac.pub.

8) Next click on the Save public key button and enter the name of your RSA public key, example: username-rsa.pub

9) Click on Save private key and enter the name of your RSA private key, example: username-rsa.ppk

PuTTY Key Generator – Step-by-Step for DSA

1) Invoke the PuTTY Key Generator by typing in puttygen.exe and if you have updated your environment variable(s) Windows will locate the executable and invoke it.

2) Select Type of key to generate: SSH-2 DSA and enter 1024 for the Number of bits in a generated key

3) Click on the Generate button.

Step to Generate the Key Pair

1) Click on the Generate button and move the mouse around your screen to generate the key.

Save public key as:

1) Save your public key: Click on the Save public key button and enter the name for your public key file

Save your private key file.

1) Click on the Save private key button; if you did not enter a passphrase PuTTYgen will display a confirmation pop-up box, just click Yes.

1) Select the path and the name of the file you want to use for your private key file. Example: imac-dsa.ppk

Set up Pageant – PuTTY Key Agent

PuTTY Key Agent – this is the executable that you run to hold your private keys and handle key-based authentication (including key forwarding) between machines.

1) pageant.exe – start up pageant.exe

2) you can create a command macro to start this up in the future or do as I did and create a Windows Service to start up the program and load the keys at boot time.

1) Select your private key files, one for RSA and one for DSA, and load them both in at the same time or one at a time if you prefer.

Screenshot displaying the Pageant Key List with the keys loaded.

Create a Command Macro to start a PuTTY Session.

1) Use Notepad or your favorite text editor to create a macro file to start up the individual PuTTY (SSH) sessions; see the example after this list. You need to have a PuTTY session or profile defined for each server and userid combination you want to create a macro for. The command line syntax is as follows: start putty -load <profile name>.

2) The start command instructs the Operating System to launch the program and return immediately, so if you run the command macro from a DOS shell the shell won't wait until the session exits.

3) Give the file whatever name you want. For example I name mine as follows: <server>-<user>.cmd

4) The “cmd” extension instructs the Operating System to treat this file as an executable command macro.

5) I save the macros I create in the same directory with my keys so I can easily locate these files.
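For example, a macro file named pdpfdb01-oracle.cmd (the server and profile names are made up) would contain the single line:

start putty -load pdpfdb01-oracle

Double-clicking the file launches the saved PuTTY session, and with Pageant running and your keys loaded you are signed in without a password prompt.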

Example of signing into a server using Oracle without being prompted for a password.

Posted by: repettas | May 2, 2008

Flashback Database

When space pressure arises, Oracle selects objects for automatic purging from the recycle bin on a first-in, first-out basis; that is, the first objects dropped are the first selected for purging. Actual purging of objects is done only as needed to meet ongoing space pressure: the database purges the minimum possible number of objects to meet the immediate demand for space. This policy serves two purposes: 1) it minimizes overhead by not performing unnecessary purging, which is especially crucial during transaction processing, when Oracle has already determined it has a space pressure issue and should not spend any more resources than necessary, and 2) it maximizes the length of time that objects remain in the recycle bin, by leaving them there until the space is needed.

Dependent objects, such as indexes belonging to a table that has been selected for purging, are purged first; if that satisfies the space pressure demand, the table itself remains in the recycle bin. If space pressure is due to an individual user's quota on a tablespace becoming exhausted, the recycle bin purges objects in that tablespace which count against that user's space quota.

For tablespaces that are set to AUTOEXTEND, objects are purged from the recycle bin to reclaim space before the datafile is extended.

Recycle Bin Objects and Segments – The recycle bin operates at the object level, in terms of tables, indexes and so on. An object may have multiple segments associated with it, such as partitioned tables, partitioned indexes, lob segments, nested tables, and so on. Because the database reclaims only the segments needed to immediately satisfy the space pressure demand, it can happen that not all of the segments belonging to an object are reclaimed. When this happens, any segments of the object not reclaimed immediately are marked as temporary segments. These temporary segments are the first candidates to be reclaimed the next time space pressure arises.

In this case, the partially-reclaimed object can no longer be recovered from the recycle bin with the Flashback Drop command. For example, if one partition of a partitioned table is reclaimed, the table can no longer be the object of a Flashback Drop.

Performing Flashback Database to a Guaranteed Restore Point – You can list the available guaranteed restore points using the V$RESTORE_POINT view as follows:

select name, scn, time, database_incarnation#, guarantee_flashback_database
from v$restore_point
where guarantee_flashback_database = 'YES';
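For reference, a guaranteed restore point is created ahead of time with a command like the following (using the same name as the FLASHBACK example below):

SQL> create restore point before_tax_update guarantee flashback database;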

Having identified the restore point to use, mount the database and run the FLASHBACK DATABASE command, using the restore point. For example:

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'BEFORE_TAX_UPDATE';

When the command completes, you may open the database read-only and inspect the effects of the operation; if satisfied, open the database with the RESETLOGS option.
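A minimal sketch of that sequence in SQL*Plus (the database is left mounted after the flashback completes):

SQL> alter database open read only;
-- inspect the results, then make the flashback permanent:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open resetlogs;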

Performing Flashback Database to Undo an OPEN RESETLOGS

The basic procedure for using Flashback Database to reverse an unwanted OPEN RESETLOGS is similar to the previous example; however, rather than specifying a particular SCN or point-in-time mark for the FLASHBACK DATABASE command, you use FLASHBACK DATABASE TO BEFORE RESETLOGS, as in the following example.

Before performing the flashback, verify that the beginning of the flashback window is earlier than the time of the most recent OPEN RESETLOGS:

SQL> select resetlogs_change# from v$database;
SQL> select oldest_flashback_scn from v$flashback_database_log;

If V$DATABASE.RESETLOGS_CHANGE# is greater than the value of V$FLASHBACK_DATABASE_LOG.OLDEST_FLASHBACK_SCN, then you can use the Flashback Database to reverse the OPEN RESETLOGS by following the steps outlined below:

Shut down the database, mount it, and re-check the flashback window. If the resetlogs SCN is still within the flashback window, then use this form of the FLASHBACK DATABASE command:

RMAN> FLASHBACK DATABASE TO BEFORE RESETLOGS;

As with other uses of FLASHBACK DATABASE, if the target SCN is prior to the beginning of the flashback database window, an error is returned and the database is not modified. If the command completes successfully, then the database is left mounted and recovered to the last SCN before the OPEN RESETLOGS in the previous incarnation.

You can open the database read-only and perform queries to make sure that the data is in the proper state. To make the database available for updates again, use ALTER DATABASE OPEN RESETLOGS.

Flashback Database Across OPEN RESETLOGS With Standby Databases

Support for Flashback Database across OPEN RESETLOGS enables several applications of Flashback Database with standby databases. These include:

  • Flashback to undo logical standby switchovers, in which the database reverts to its role (primary or standby) at the target time for the Flashback operation.
  • Undo of a physical standby activation, so that you can temporarily activate a physical standby database, use it for testing or reporting purposes, then use Flashback Database to return it to its role as a physical standby
  • Ongoing use of a clone or standby database for testing, without requiring the use of storage snapshots

Flashback Database to the Right of Open Resetlogs:

In some cases, you may need to return the database to a point in time in its parent incarnation that is later than the SCN of the OPEN RESETLOGS at which the current incarnation path branched from the old incarnation. These points, which correspond to abandoned changes in the parent incarnation, can be described as being "to the right" of the last OPEN RESETLOGS with reference to an incarnation diagram. For example, the database might be in incarnation 3, and you might need to return to the abandoned SCN 1500 in incarnation 1. You can use the RMAN RESET DATABASE TO INCARNATION command to specify which incarnation the SCN refers to for the Flashback Database operation. The process works as follows:

  • Verify that the flashback logs contain enough information to flash back to that SCN:

SQL> select oldest_flashback_scn from v$flashback_database_log;

  • Determine the target incarnation number for the flashback, that is, the incarnation key for the parent incarnation:

SQL> select prior_incarnation# from v$database_incarnation where status = 'CURRENT';

  • In RMAN, shut down the database, then mount it:

RMAN> SHUTDOWN IMMEDIATE;

RMAN> STARTUP MOUNT;

  • Set the database incarnation to the parent incarnation:

RMAN> RESET DATABASE TO INCARNATION 1;

  • Run the FLASHBACK DATABASE command:

RMAN> FLASHBACK DATABASE TO SCN 1500;

Once the flashback is complete, you can verify the results, and if successful, open the database with RESETLOGS.

Performing Database Point-In-Time Recovery

Database point-in-time recovery (DBPITR) restores the database from backups taken prior to the target time for recovery, then uses incremental backups and redo to roll the database forward to the target time. Understanding DBPITR requires background information on database incarnations and how RMAN treats backups from times not in the current incarnation path. In particular, there are special considerations if you are returning your database to a point in time prior to the most recent OPEN RESETLOGS. This section covers the following topics: Understanding Parent, Ancestor and Sibling Database Incarnations; Incarnation History of a Database; and Database Incarnations and Orphaned Backups.
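To make the flow concrete, here is a minimal sketch of a DBPITR run as an RMAN script; the target time is made up, the database must be mounted, and a backup taken before the target time must be available:

run {
  set until time "to_date('2008-05-01 14:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
sql 'alter database open resetlogs';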

Understanding Parent, Ancestor and Sibling Database Incarnations

A new incarnation of a database is created each time the database is opened with the RESETLOGS option. Performing an OPEN RESETLOGS archives the current online redo logs, resets the log sequence number to 1, and gives the online redo logs a new time stamp and SCN. It also increments the incarnation number, which is used to uniquely tag and identify a stream of redo. Incarnations can stand in several relationships to each other, as described below:

  • The incarnation from which the current incarnation branched following an OPEN RESETLOGS operation is called the parent incarnation of the current incarnation.
  • The parent incarnation and all of its parent incarnations are the ancestor incarnations of the current incarnation.
  • Two incarnations that share a common ancestor are sibling incarnations if neither one is an ancestor of the other.

Incarnation History of a Database: Example

"Database Incarnation History with Multiple Resetlogs" shows a database that goes through several incarnations. Incarnation 1 of the database starts at SCN 1 and continues through SCN 65366080601 (see the listing below). For illustration purposes, say at SCN 2000 in incarnation 1 you perform a point-in-time recovery back to SCN 1000 and open the database with a RESETLOGS operation. This creates incarnation 2, which begins at SCN 1000 and continues to SCN 3000. At SCN 3000 in incarnation 2, you perform another point-in-time recovery, back to SCN 2000, and a RESETLOGS operation. This creates incarnation 3, starting at SCN 2000.

RMAN> list incarnation;

starting full resync of recovery catalog
full resync complete

List of Database Incarnations
DB Key  Inc Key  DB Name   DB ID      STATUS   Reset SCN    Reset Time
------- -------- --------  ---------  -------  -----------  ----------
1       2        PS89FNPD  509360695  PARENT   1            06-APR-07
1       3284     PS89FNPD  509360695  CURRENT  65366080601  23-OCT-07

The value in the Reset SCN column is the SCN at which the RESETLOGS was performed. The Inc Key column is the incarnation key. RMAN uses the incarnation key to identify the database incarnation in some commands, such as using RESET DATABASE TO INCARNATION to change the current incarnation in some complex recovery scenarios.

Sibling Incarnations, Ambiguous SCNs and RESET DATABASE INCARNATION:

When working with a database where flashback or point-in-time recovery operations have produced sibling incarnations, note that a given SCN value can refer to more than one point in time, depending upon which incarnation has been set as the current incarnation.

You can run a CHANGE … UNCATALOG command for backups that you deleted with operating system commands. Optionally, you can query the relevant recovery catalog view, for example, RC_DATAFILE_COPY or RC_CONTROLFILE_COPY, to confirm that a given record was removed.
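For example, to remove the record of a datafile copy that was deleted at the operating system level (the file name is made up):

RMAN> change datafilecopy '/u01/backups/users01.dbf' uncatalog;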

Flash Recovery Area Maintenance:

While the Flash Recovery Area is largely self-managing, there are some situations in which DBA intervention may be required. Resolving a Full Flash Recovery Area - you have a number of choices for resolving a full flash recovery area when there are no files marked for deletion:

  1. Make some disk space available and increase DB_RECOVERY_FILE_DEST_SIZE to reflect the new space.
  2. Move backups from the flash recovery area to a tertiary space such as tape. One convenient way to back up all of your flash recovery area files to tape at once is the BACKUP RECOVERY AREA command.
  3. After you transfer backups from the flash recovery area to tape, you can resolve the full recovery area condition by deleting files from the flash recovery area, using forms of the RMAN DELETE command.
  4. Delete unnecessary files from the flash recovery area using the RMAN DELETE command.

(Note that if you use the host operating system commands to delete files, then the database will not be aware of the resulting free space. You can run the RMAN CROSSCHECK command to have RMAN re-check the contents of the flash recovery area and identify expired files, and then use the DELETE EXPIRED command to remove missing files from the RMAN repository.)
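A minimal example of that sequence:

RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> delete expired backup;
RMAN> delete expired copy;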

Note: Flashback logs, by design, cannot be backed up outside the flash recovery area. Therefore, in a BACKUP RECOVERY AREA operation the flashback logs are deleted automatically to satisfy the need for space for other files in the flash recovery area. However, a guaranteed restore point can force the retention of flashback logs required to perform Flashback Database to the restore point SCN.

You may also need to consider changing your backup retention policy and, if using Data Guard, consider changing your archivelog deletion policy.

Changing the Flash Recovery Area to a New Location – if you need to move the flash recovery area to a new location, use the following procedure:

  1. Invoke SQL*Plus to change the DB_RECOVERY_FILE_DEST initialization parameter. For example:

alter system set db_recovery_file_dest='+disk1' scope=both sid='*';

After you change this parameter, all new flash recovery files will be created in the new location.

2. The permanent files (control files and online redo log files), flashback logs and transient files can be left in the old flash recovery area location. The database will delete the transient files from the old flash recovery area location as they become eligible for deletion.

If you need to actually move your current permanent files, transient files or flashback logs to the new flash recovery area, see the Oracle Database Backup and Recovery Advanced User’s Guide for detailed instructions. The process outlined there for moving database files into and out of an ASM disk group with RMAN will also work when moving files into and out of a flash recovery area location.

Flash Recovery Area Behavior when Instance Crashes During File Creation – As a rule, the flash recovery area is self-maintaining, but when an instance crashes during the creation of a file in the flash recovery area, Oracle may leave the file in the flash recovery area. When this occurs, you will see the following error in the alert log:

ORA-19816: WARNING: Files may exist in <location> that are not known to the database, where <location> is the location of the flash recovery area. In such a situation, use the RMAN command CATALOG RECOVERY AREA to re-catalog any such files so that they appear in the RMAN repository. If the file header of the file in question is corrupted, you will have to delete the file manually using an operating system-level utility.
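For example, to re-catalog the files (the NOPROMPT option suppresses the per-file confirmation prompts):

RMAN> catalog recovery area noprompt;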

You can query the V$RECOVERY_FILE_DEST view to find out the current location, disk quota, space in use, space reclaimable by deleting files and total number of files in the Flash Recovery Area.
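For example:

SQL> select name, space_limit, space_used, space_reclaimable, number_of_files
     from v$recovery_file_dest;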

Oracle does not delete eligible files from the Flash Recovery Area until the space must be reclaimed for some other purpose. The effect is that files recently moved to tape are often still available on disk for use in recovery; the recovery area thus serves as a kind of disk cache for tape. Once the FRA is full, Oracle automatically deletes eligible files to reclaim space as needed.

The following rules apply for files to become eligible for deletion from the FRA:

  • Files that are obsolete under the configured retention policy
  • Transient files that have been copied to tape
  • In a Data Guard environment, the archived redo log deletion policy governs when archived redo log files can be deleted from the Flash Recovery Area (see the example after this list)
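For example, to make archived logs eligible for deletion only once they have been applied on the standby:

RMAN> configure archivelog deletion policy to applied on standby;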

The database issues a warning alert when reclaimable space is less than 15% and a critical alert when reclaimable space is less than 3%. To warn the DBA of this condition, an entry is added to the alert log and to the DBA_OUTSTANDING_ALERTS table (used by Enterprise Manager). However, the database continues to consume space in the Flash Recovery Area until there is no reclaimable space left. When the Flash Recovery Area is completely full, the following errors are reported:

ORA-19809: Limit exceeded for recovery files
ORA-19804: Cannot reclaim <nnnn> bytes disk space from <nnnnnnnn> limit

where <nnnn> is the number of bytes required and <nnnnnnnn> is the disk quota for the Flash Recovery Area.

The following error would be reported in the alert.log:

ORA-19815: WARNING: db_recovery_file_dest_size of <size of FRA configured> bytes is 100.00% used, and has 0 remaining bytes available.

Issue the following query to see the message:

SQL> select object_type, message_type, message_level, reason, suggested_action from dba_outstanding_alerts;

The following actions can be taken to resolve the space issue:

  • Add disk space to the Flash Recovery Area
  • Back up your files to a tertiary device
  • Delete the files from the Flash Recovery Area using RMAN
  • Change the RMAN retention policy.

The database handles a Flash Recovery Area with insufficient reclaimable space just as it handles a disk full condition. Often, the result is a hang of the database.

Posted by: repettas | May 2, 2008

PeopleTools 8.49

This note is under construction:

Oracle Application Server 10.1.3.1 (OAS 10g R3)

Patches for OAS 10.1.3.1:

p5669155_101310_WINNT.zip
p5669155_101310_HPUX-IA64.zip
p5669155_101310_HPUX.zip

p5918901_101310_GENERIC.zip
p5640464_101310_GENERIC.zip

Order of Patch Installation:

1. Install p5669155_101310_.zip
2. Install p2617419_10102_GENERIC.zip
Note: This is an optional patch. On some Unix platforms (HP-UX PA-RISC), the OPatch utility that is required in order to apply patches is missing because it is not packaged with the OAS installation. On these platforms, this patch has to be installed prior to applying patches 3 and 4.
3. Install p5918901_101310_GENERIC.zip
4. Install p5640464_101310_GENERIC.zip

Installation Steps:

All the patches need to be extracted before applying them.

Note: On the Windows platform the extraction path CANNOT contain ANY spaces.

Steps to install patch p5669155_101310_.zip:

1. Unzip the patch to a temp directory or your patch repository (in our case we use an NFS volume to serve as the repository for all of our software).
2. Make the path where the patchset was extracted your current directory.
3. Execute the appropriate install or setup.exe file.
4. In the Destination section select the Oracle Home (OAS Home) that needs to be patched; this patch is installed via the Oracle Universal Installer that ships with the patchset.
5. Click Next and accept the defaults for everything, then click Install on the final page.

Steps to install p2617419_10102_GENERIC.zip

1. Unzip the patch into the ORACLE_HOME (OAS) location. If you've already installed OAS on Unix you should remember that before kicking off the runInstaller shell script you had to unset all of your Oracle-related environment variables and make sure there weren't any Oracle path references in the various library environment variables (which library variables are used depends on the flavor of your OS): LD_LIBRARY_PATH, PATH, SHLIB_PATH and CLASSLIB_PATH must not contain any Oracle-related directories. After the install, whenever you are working with OAS rather than PeopleSoft, set your ORACLE_HOME to the location of the Oracle Application Server home. An easy way to do this is to define an environment variable OAS_HOME that points to the path containing the Oracle Application Server home and then execute export ORACLE_HOME=$OAS_HOME before setting the library environment variables, assuming you use $ORACLE_HOME as the reference in those variables (i.e. SHLIB_PATH=/usr/lib:$ORACLE_HOME/lib:$ORACLE_HOME/lib32). A short sketch of this setup appears after these steps.
2. Set ORACLE_HOME to point to the location of the OAS 10.1.3.1 installation. Windows: set ORACLE_HOME=%OAS_HOME% (assuming OAS_HOME is defined as above)
3. Stop OAS Instance with the appropriate command below:
Windows: %ORACLE_HOME%\opmn\bin\opmnctl stopall
Unix: $ORACLE_HOME/opmn/bin/opmnctl stopall
4. Set your current directory to the location where the patch was extracted (ex. cd /stage/patches/p5640464_101310_GENERIC/5640464).
5. Ensure that the directory containing the location of the OPatch script (opatch) appears in your PATH environment variable.
Windows: set PATH=%ORACLE_HOME%\OPatch;%PATH%
Unix: export PATH=$ORACLE_HOME/OPatch:$PATH
6. Apply the patch by running the following command:
opatch apply

To list the applied patches:
opatch lsinventory
7. Start the OAS instance by running the following command:
Windows: %ORACLE_HOME%\opmn\bin\opmnctl startall
Unix: $ORACLE_HOME/opmn/bin/opmnctl startall
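As referenced in step 1 of the p2617419 instructions above, here is a minimal sketch of the environment setup for working with OAS on Unix; the OAS home path is made up, and SHLIB_PATH is the HP-UX library variable (use LD_LIBRARY_PATH on Linux or Solaris):

# point ORACLE_HOME at the OAS home rather than the database home
export OAS_HOME=/opt/app/oracle/product/10.1.3.1/oas_1
export ORACLE_HOME=$OAS_HOME
# rebuild the library and executable paths from the new ORACLE_HOME
export SHLIB_PATH=/usr/lib:$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/opmn/bin:$PATH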
