Thursday 23 July 2015

ORA-00392: log 1 of thread 1 is being cleared, operation not allowed, ORA-00312: online log 1 thread 1: '+DATA'

The above error occurred during a test restore of a RAC database (ASM-based) to a single-instance, filesystem-based database.

This was encountered at the point of opening the database after a successful restore of data files and necessary archive logs.

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-00392: log 1 of thread 1 is being cleared, operation not allowed
ORA-00312: online log 1 thread 1: '+DATA'

Solution:

Check the redo log create destination.

SQL> show parameter create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_bitmap_area_size              integer     8388608
create_stored_outlines               string
db_create_file_dest                  string      +DATA
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

Set the parameters to reflect the filesystem layout so that the redo logs are created in the appropriate destination.

SQL> alter system set db_create_file_dest='/filesystem_location/oradata';

System altered.

SQL> alter system set db_create_online_log_dest_1='/filesystem_location/onlinelog';

System altered.

SQL> alter system set db_create_online_log_dest_2='/filesystem_location/onlinelog';

System altered.

Check status of the affected redo group #1.


SQL> select status from v$log where GROUP#=1;
STATUS 
----------------
CLEARING_CURRENT

SQL> alter database clear unarchived logfile group 1;
Database altered.
 
 
SQL> select group#, status from v$log where GROUP#=1;

    GROUP# STATUS
---------- ----------------
         1 CURRENT

Open the database

SQL> alter database open resetlogs;

Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE
 
 

Tuesday 21 July 2015

'WARNING: Subscription for node down event still pending' in Listener Log

This is a single instance database 11.2.0.3 running on RHEL box.

This is related to the Oracle TNS listener's default subscription to the Oracle Notification Service (ONS). In a non-RAC environment it is recommended to disable this subscription.

The workaround is to modify the listener.ora file to include the following:

SUBSCRIBE_FOR_NODE_DOWN_EVENT_<listener_name>=OFF
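For illustration, a hypothetical listener.ora for a listener named LISTENER might look like this after the change (host and port are made up):

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost.example.com)(PORT = 1521))
    )
  )

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER = OFF
```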


Then reload or restart the listener.


$ lsnrctl reload  <listener_name>


or


$ lsnrctl stop <listener_name>


$ lsnrctl start <listener_name>

ORA-00060: deadlock detected while waiting for resource. Global Enqueue Services Deadlock detected. More info in file

Environment:

2-node RAC 11.2.0.3.

The following messages are constantly showing up in the alert log.

ORA-00060: deadlock detected while waiting for resource


Tue Jul 21 11:29:29 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:29:40 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:32:12 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:33:03 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:33:29 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:34:22 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.
Tue Jul 21 11:35:04 2015
Global Enqueue Services Deadlock detected. More info in file
 /apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc.


Checking the specified LMD trace file revealed the locks, SQL text and other information.

Note: LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

Tracefile "/apps/oracle/diag/rdbms/testdb/testdb2/trace/testdb2_lmd0_21823642.trc" shows the following:

Submitting asynchronized dump request [28]. summary=[ges process stack dump (kjdglblkrdm1)].
Global blockers dump end:-----------------------------------
Global Wait-For-Graph(WFG) at ddTS[0.a5c4] :
BLOCKED 0x7000001cbfccba8 5 wq 2 cvtops x1 TX 0x1470008.0x72e(ext 0x5,0x0)[2009-0094-000001E3] inst 2
BLOCKER 0x7000001c60cd8b8 5 wq 1 cvtops x28 TX 0x1470008.0x72e(ext 0x5,0x0)[2010-010B-00000008] inst 2
BLOCKED 0x7000001c7267ed8 5 wq 2 cvtops x1 TX 0x88000e.0x3578(ext 0x5,0x0)[2010-010B-00000008] inst 2
BLOCKER 0x7000001c7267d08 5 wq 1 cvtops x28 TX 0x88000e.0x3578(ext 0x5,0x0)[2009-0094-000001E3] inst 2

*** 2015-07-21 12:13:52.792



From the above, it is clear that the two sessions involved in TX 0x1470008.0x72e and TX 0x88000e.0x3578 form a deadlock; both sessions are on instance 2.

The trace file also contains the following:

 user session for deadlock lock 0x7000001b1ff2b68
  .........
  current SQL:
  delete from DATA where CLEANUP <= :1 and rownum <= :2 



Oracle states that "this is a typical application transaction TX enqueue lock, usually caused by SQL commit sequence and high concurrency. To avoid such deadlock, application code and logic need to be modified."

This information was communicated to the application team for further investigation.
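The LMD trace file remains the authoritative record, but a quick way to see which sessions are blocked across both instances while the deadlocks recur is a query along these lines (a sketch using standard 11gR2 GV$SESSION columns; adjust the column list as needed):

```
SELECT inst_id, sid, serial#, sql_id,
       blocking_instance, blocking_session, event
  FROM gv$session
 WHERE blocking_session IS NOT NULL;
```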
Thursday 16 July 2015

RMAN-03002: failure of restore command at 07/16/2015 20:08:00 RMAN-06004: ORACLE error from recovery catalog database: RMAN-20005: target database name is ambiguous

RMAN-03002: failure of restore command at 07/16/2015 20:08:00
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20005: target database name is ambiguous

The above error was encountered while trying to restore a control file to a different host.

Solution:

Set the DBID to that of the source database and attempt the restore again. Obtain the DBID from the source database.

syntax: RMAN> set DBID=2378162372
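The DBID can be read from the source database beforehand, for example:

```
SQL> select dbid, name from v$database;
```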

$ rman target / catalog rman/XXXXX@CATDB msglog=/tmp/controlfile_restore16jul2015.log

RMAN> set DBID=2378162372

RMAN> run
2> {
3> allocate channel CH01 type 'sbt_tape';
4> allocate channel CH02 type 'sbt_tape';
5> allocate channel CH03 type 'sbt_tape';
6> send 'NB_ORA_POLICY=hostname-XXX-HOT-WEEKLY,NB_ORA_SERV=nbumaster,NB_ORA_CLIENT=hostname,NB_ORA_SCHED=Default-Application-Backup';
7> restore controlfile from tag TAG20150716T191835;
8> release channel CH01;
9> release channel CH02;
10> release channel CH03;
11> }



Wednesday 15 July 2015

Upgrading the RMAN catalog schema to support 12c databases

Existing infrastructure: the recovery catalog database is 11.2.0.3. We have just deployed a 12.1.0.1.0 database, and it needs to be included in the backup infrastructure immediately.

An attempt was made to register the 12c database. This was met with errors relating to the version of the catalog schema.

Connecting from the 12c database:

[oracle@mytestbox]$ rman target / catalog rman/xxxxxxx@CATDB

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Jun 22 09:34:59 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: CDB12CR1 (DBID=1111111111)
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.02 in RCVCAT database is too old

RMAN> register database;

PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.02 in RCVCAT database is too old
PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.02 in RCVCAT database is too old
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of register command at 06/22/2015 09:35:19
RMAN-06429: RCVCAT database is not compatible with this version of RMAN


Solution:

We need to upgrade the RMAN catalog owner schema in the catalog database.

1. Check the current version of the schema on the catalog database.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jul 15 15:46:47 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from rman.rcver;

VERSION
------------
11.02.00.03









2. Connect to the 12c database and catalog database.

$ rman target / catalog rman/xxxxxx@CATDB

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 15 14:23:52 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SVWPRD1B (DBID=1111111111)
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old

RMAN>

RMAN>

RMAN>

RMAN> register database;

PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old
PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of register command at 07/15/2015 14:27:59
RMAN-06429: RCVCAT database is not compatible with this version of RMAN

RMAN> upgrade catalog;

recovery catalog owner is RMAN
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;

recovery catalog upgraded to version 12.01.00.01
DBMS_RCVMAN package upgraded to version 12.01.00.01
DBMS_RCVCAT package upgraded to version 12.01.00.01

RMAN>

RMAN>

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

3. Check version of the schema on the catalog database again.

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Jul 15 15:46:47 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from rman.rcver;

VERSION
------------
12.01.00.01







Tuesday 14 July 2015

TNS-12518: TNS:listener could not hand off client connection TNS-12536: TNS:operation would block TNS-12560: TNS:protocol adapter error TNS-00506: Operation would block IBM/AIX RISC System/6000 Error: 11: Resource temporarily unavailable

An application owner was getting TNS-12537 errors while trying to establish a connection to a database.

The following errors were found in the listener log of an AIX server hosting about 10 database instances.

TNS-12518: TNS:listener could not hand off client connection
TNS-12536: TNS:operation would block
TNS-12560: TNS:protocol adapter error
TNS-00506: Operation would block
IBM/AIX RISC System/6000 Error: 11: Resource temporarily unavailable

The oerr utility doesn't show much information:

oratest@db060b8:/ $ oerr ora 12537
12537, 00000, "TNS:connection closed"
// *Cause: "End of file" condition has been reached; partner has disconnected.
// *Action: None needed; this is an information message.



Further investigation revealed that the value of maxuproc was set too low for user "oratest" on the AIX server.

# lsattr -E -l sys0 | grep maxuproc
maxuproc        2048                                 Maximum number of PROCESSES allowed per user      True

For a server hosting this many database instances, each receiving several client connection requests, we had to raise this value (doubling it in this case).

# chdev -l sys0 -a maxuproc=4096

# lsattr -E -l sys0 | grep maxuproc
maxuproc        4096                                 Maximum number of PROCESSES allowed per user      True


Issue resolved!
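The headroom check behind this fix can be sketched in plain shell. The 90% warning threshold and the "double it" policy are assumptions for illustration, not fixed rules; on AIX the real inputs would come from `ps -u oratest | wc -l` and `lsattr -E -l sys0 -a maxuproc`:

```shell
# Decide whether maxuproc needs raising, given the current process count
# for the Oracle software owner and the configured per-user limit.
check_maxuproc() {
  current=$1    # processes currently owned by the Oracle software owner
  maxuproc=$2   # per-user process limit (lsattr value)
  # Warn once usage is within 10% of the limit; hand-off errors such as
  # TNS-12518 appear when the limit is actually hit.
  threshold=$(( maxuproc * 90 / 100 ))
  if [ "$current" -ge "$threshold" ]; then
    echo "raise maxuproc (e.g. chdev -l sys0 -a maxuproc=$(( maxuproc * 2 )))"
  else
    echo "headroom ok"
  fi
}

check_maxuproc 2010 2048   # near the limit: hand-off failures are imminent
check_maxuproc 500 2048
```

Once the owner's process count reaches maxuproc, fork() fails and the listener cannot hand off new connections, which is exactly the TNS-12518 symptom above.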


Monday 8 June 2015

EXP-00008, ORA-04063, ORA-06508, EXP-00083 PL/SQL: could not find program unit being called: "WMSYS.LTUTIL"

PROBLEM:
The following errors were thrown during export:

. exporting dimensions
. exporting post-schema procedural objects and actions
EXP-00008: ORACLE error 4063 encountered
ORA-04063: package body "WMSYS.LTUTIL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "WMSYS.LTUTIL"
ORA-06512: at "WMSYS.LT_EXPORT_PKG", line 1688
ORA-06512: at line 1
EXP-00083: The previous problem occurred when calling WMSYS.LT_EXPORT_PKG.schema_info_exp
. exporting statistics
Export terminated successfully with warnings.

CAUSES:

1. There are invalid objects in the Workspace Manager (WMSYS) or XDB schemas.
2. The execute privilege on the UTL_FILE package has been revoked from PUBLIC.
 
SOLUTION:

CAUSE 1: Connect as SYS and run $ORACLE_HOME/rdbms/admin/owminst.plb to allow the LT_EXPORT_PKG to be validated.

CAUSE 2:

1. Grant execute privilege on SYS.UTL_FILE package to WMSYS user.

SQL> grant execute on SYS.UTL_FILE to WMSYS;

2. Check for invalid objects in the WMSYS schema using the following query:

 SQL> select object_name,object_type,status from dba_objects where status='INVALID' and owner='WMSYS';

3. Run script $ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

Run the export again.

Oracle RAC 11gR2: Listener INTERMEDIATE status with "Not All Endpoints Registered"

Problem:

$GRID_HOME/bin/crsctl stat res -t on one RAC node displays the error on node1:

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  INTERMEDIATE node1        Not All Endpoints Registered
               ONLINE  ONLINE       node2


Cause:
The listener was started from $ORACLE_HOME instead of $GRID_HOME.

 Solution:

1. Stop currently running local listener from $ORACLE_HOME.

$ORACLE_HOME/bin/lsnrctl stop

2. Stop the listener resources (local and SCAN) currently running on node1.

$GRID_HOME/bin/crsctl stop res ora.LISTENER.lsnr -n node1
$GRID_HOME/bin/crsctl stop res ora.LISTENER_SCAN1.lsnr -n node1
$GRID_HOME/bin/crsctl stop res ora.LISTENER_SCAN2.lsnr -n node1

3. Start local listener from $GRID_HOME of node1

 $GRID_HOME/bin/crsctl start res ora.LISTENER.lsnr -n node1

4. Start the SCAN listeners from $GRID_HOME.

$GRID_HOME/bin/crsctl start res ora.LISTENER_SCAN1.lsnr -n node1
$GRID_HOME/bin/crsctl start res ora.LISTENER_SCAN2.lsnr -n node1

5. Check the CRS status now. It should be ONLINE:

$GRID_HOME/bin/crsctl stat res -t

--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2

Wednesday 27 May 2015

ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

This error was faced when a customer was trying to load data into an Oracle 12c database.

From Oracle Database 12c there is a new hard limit on the amount of PGA memory a single instance may use, controlled by a parameter called PGA_AGGREGATE_LIMIT. Read up on this in the Oracle Database 12c Reference.

oerr utility shows this:

$ oerr ora 04036
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
Cause: Private memory across the instance exceeded the limit specified in the PGA_AGGREGATE_LIMIT initialization parameter. The largest sessions using Program Global Area (PGA) memory were interrupted to get under the limit.
Action: Increase the PGA_AGGREGATE_LIMIT initialization parameter or reduce memory usage.

$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed May 27 12:38:28 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show parameter aggregate

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 6000M
pga_aggregate_target                 big integer 0

Resolution:

Set the parameter to 0 (pre-12c behaviour):

SQL> alter system set pga_aggregate_limit=0 scope=both;

System altered.

Another option is to increase the value, if there is enough physical memory available to the database.

This resolved the issue and the customer was able to continue loading.

Sunday 17 May 2015

online datafile move in Oracle Database 12c

This is a very useful feature that helps with space management in both ASM-based and filesystem-based databases.

The challenge: a tablespace needs to be relocated to another filesystem or diskgroup. In the past this involved some downtime, but with this new feature it can be done online. The syntax is available in the Oracle documentation and on several websites.

NB: This feature does not work for temp files.
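A sketch of the statement (the file names here are made up; the general form is ALTER DATABASE MOVE DATAFILE ... TO ...):

```
SQL> ALTER DATABASE MOVE DATAFILE '+DATA/mydb/datafile/users01.dbf'
  2  TO '/u02/oradata/mydb/users01.dbf';
```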

Saturday 16 May 2015

ORA-01114: IO error writing block to file name block

Environment: Oracle Database 12.1.0.1.0
OS: AIX 6.1

The following error was reported in the alert log of one of my databases when a user tried to run a select statement:

ORA-01114: IO error writing block to file (block # )
Fri May 15 10:22:00 2015
Errors in file /ORACLE_HOME/app/oracle/diag/rdbms/xxxxxx/XXXXXX/trace/XXXXXX_m001_43188288.trc:
ORA-01114: IO error writing block to file (block # )

"oerr ora 01114" shows the following:

01114, 00000, "IO error writing block to file %s (block # %s)"
// *Cause: The device on which the file resides is probably offline. If the
//         file is a temporary file, then it is also possible that the device
//         has run out of space. This could happen because disk space of
//         temporary files is not necessarily allocated at file creation time.
// *Action: Restore access to the device or remove unnecessary files to free
//          up space.

I looked at the filesystems and found one which was 100% used. Checking further revealed that the filesystem held data files and the database temp file. The data files were configured with AUTOEXTENSIBLE OFF, and checks on tablespace usage revealed that all tablespaces were below the 75% usage threshold. However, since the user was running a select statement that would involve sorting, it became obvious that this had to do with the temporary tablespace.

In my case another filesystem had enough free space, so I decided to migrate my default temporary tablespace to a new location. Oracle 12c has a new feature for moving data files online, but this beautiful feature does not work for temp files!

STEPS TO CREATE A NEW TEMPORARY TABLESPACE
==========================================

STEP 1. Create a new temporary tablespace (e.g. TEMP2).

SQL> CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u03/XXXXXX/oradata/XXXXXX/temp/temp01.dbf' size 30g autoextend off;

STEP 2. Change the default temporary tablespace (assume it was TEMP) to the newly created one (TEMP2).

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

STEP 3. Check that no user sessions are using the original temporary tablespace. If there are any, identify and kill the sessions ONLY if it will not impact production activities.

i. Get the session number from the V$SORT_USAGE view:

SQL> SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;

ii. Get the corresponding session ID from the V$SESSION view, using either SESSION_NUM or SESSION_ADDR from the above query:

SQL> SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SERIAL#=SESSION_NUM;
SQL> SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SADDR=SESSION_ADDR;

iii. Kill the identified session(s) with the IMMEDIATE option:

SQL> ALTER SYSTEM KILL SESSION 'SID,SERIAL#' IMMEDIATE;

STEP 4. Drop the original temporary tablespace (TEMP in this case).

SQL> DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

STEP 5. Recreate a new/larger temporary tablespace (named TEMP).

SQL> CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u03/XXXXXX/oradata/XXXXXX/temp/temp001.dbf' size 30g autoextend off;

STEP 6. Move the default temporary tablespace back to TEMP.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

STEP 7. Drop the TEMP2 temporary tablespace.

NB: Dropping the TEMP2 tablespace may hang because a session is still using it, so check for and kill any sessions currently using the tablespace, as was done in step 3 above.

SQL> SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;

USERNAME                       SESSION_NUM SESSION_ADDR
------------------------------ ----------- ----------------
SYS                                  17639 070001004FE710B0

SQL> SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SERIAL#=17639;

       SID    SERIAL# STATUS
---------- ---------- --------
       367      17639 ACTIVE

SQL> alter system kill session '367,17639' immediate;

System altered.

SQL> SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;

no rows selected

SQL> DROP TABLESPACE TEMP2 INCLUDING CONTENTS AND DATAFILES;

CONCLUSION: The steps above resolved the issue and gave joy to the customer!

Monday 27 April 2015

ORA-00230: operation disallowed: snapshot control file enqueue unavailable

This error was encountered during an archivelog backup of a database. More details of the error below:

Starting backup at 27-APR-15
channel CH01: starting full datafile backup set
channel CH01: specifying datafile(s) in backup set
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
waiting for snapshot control file enqueue
cannot make a snapshot control file
released channel: CH01
released channel: CH02
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on CH01 channel at 04/27/2015 04:00:14
ORA-00230: operation disallowed: snapshot control file enqueue unavailable

Cause: Another process was holding an enqueue on the snapshot control file when this backup attempt was made.

Two options are:

1. Wait for the session/process holding the enqueue to complete.
2. Locate and terminate the holder, if you can afford to.

For option 2, use the SQL below to locate the process:

SELECT s.sid, username, program, module, action, logon_time, l.*
FROM v$session s, v$enqueue_lock l
WHERE l.sid = s.sid
AND l.type = 'CF' AND l.id1 = 0 AND l.id2 = 2;

You can use the operating system "kill -9" command to terminate the identified process and then re-initiate the backup. On Windows, use the "orakill" utility to terminate the specific thread:

orakill sid thread_id

sid: instance ID
thread_id: SPID from above

reference: http://www.arcserve-knowledgebase.com

Wednesday 1 April 2015

ORA-01012: not logged on

The following happened when a user connected to an already running database:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 1 08:13:58 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected.
SQL> select name from v$database;
select name from v$database
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 0
Session ID: 0 Serial number: 0

Cause: This usually happens to a heavily loaded database.

Solution: The following options will help resolve the challenge.

1. Kill the Oracle background processes and restart the instance:

# ps -ef | grep ora_smon*
# kill -9 smon_process_id
$ sqlplus / as sysdba
SQL> startup

2. Restart the database server and bring up the database afterwards.

3. Stop the application server and stop the listener to prevent new incoming connections. This will release sessions and allow new connections to the database.

Additionally, it is advised that the "processes" parameter be increased on the database:

SQL> show parameter processes
SQL> alter system set processes=new_value scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter processes

Tuesday 31 March 2015

ORA-27300: OS system dependent operation:fork failed with status: 2

This error normally appears in the alert log in the following way:

ORA-27300: OS system dependent operation:fork failed with status: 2
ORA-27301: OS failure message: No such file or directory
ORA-27302: failure occurred at: skgpspawn5
Process J001 died, see its trace file
kkjcre1p: unable to spawn jobq slave process
Errors in file /oracle_home/oracle/myserver/diag/rdbms/myservertest/myserverTEST/trace/myserverTEST_cjq0_18153518.trc:

When you try to connect with sqlplus, you may get the following error:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 31 11:09:58 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

ERROR:
ORA-12536: TNS:operation would block

This is caused by the inability of the Oracle software owner to get the OS to spawn more processes. In other words, the user (the Oracle software owner) has reached the maximum number of processes it is allowed to spawn. A check on the server resources will usually show that utilization is nowhere near maximum.

Solution:

Check the current "maxuproc" for the user:

# lsattr -E -l sys0 | grep -i maxuproc
maxuproc        128     Maximum number of PROCESSES allowed per user      True

Increase this value to a higher one:

# chdev -l sys0 -a maxuproc=NewValue

Additionally, we can increase the number of processes at the database level:

SQL> show parameter processes
SQL> alter system set processes=NewValue scope=spfile;

Saturday 17 January 2015

Prerequisite check "CheckActiveFilesAndExecutables" failed while applying PSU 19121548 on Oracle 11.2.0.3 Home

PROBLEM:

[oracleuser@TESTSERVER 19121548]$ opatch apply
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /oracle_home/app/orahome
Central Inventory : /oracle_home/app/oracle/oraInventory
   from           : /oracle_home/app/orahome/oraInst.loc
OPatch version    : 11.2.0.3.6
OUI version       : 11.2.0.3.0
Log file location : /oracle_home/app/orahome/cfgtoollogs/opatch/opatch2015-01-17_23-26-27PM_1.log

Verifying environment and performing prerequisite checks...
Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:

Following executables are active :
/oracle_home/app/orahome/lib/libclntsh.so.11.1
UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
Log file location: /oracle_home/app/orahome/cfgtoollogs/opatch/opatch2015-01-17_23-26-27PM_1.log

CAUSE:

Some processes are using the executable "/oracle_home/app/orahome/lib/libclntsh.so.11.1".

SOLUTION:

Identify the processes using the executable and kill them.
[oracleuser@TESTSERVER 19121548]$ /sbin/fuser /oracle_home/app/orahome/lib/libclntsh.so.11.1
/oracle_home/app/orahome/lib/libclntsh.so.11.1: 416m 422m 813m 838m 873m 879m 1187m 1549m 1746m 1812m 2187m 2222m 2223m 2254m 2505m 2821m 3031m 3158m 3452m 3514m 3552m 3572m 3808m 4112m 4317m 4459m 4730m 4764m 4841m 4882m 5061m 5402m 5572m 5694m 6013m 6060m 6101m 6208m 6353m 6724m 6851m 6954m 7317m 7422m 7471m 7615m 7704m 8176m 8184m 8325m 8636m 8727m 8753m 8980m 9055m 9553m 9592m 9839m 9921m 10073m 10098m 10384m 10435m 10928m 10976m 10994m 11268m 11391m 11399m 11430m 11686m 11701m 12229m 12241m 12544m 12576m 12670m 12676m 12699m 12982m 12998m 13518m 13520m 13841m 13986m 13987m 13988m 14021m 14301m 14340m 14766m 14808m 15216m 15417m 15443m 15457m 15853m 15859m 16206m 16281m 16566m 16720m 16755m 16781m 17140m 17148m 17484m 17549m 17563m 17863m 18006m 18029m 18068m 18453m 18470m 18759m 18836m 18861m 19119m 19299m 19304m 19358m 19727m 19774m 20032m 20118m 20133m 20410m 20559m 20569m 20639m 21025m 21085m 21375m 21400m 21680m 21807m 21829m 22358m 22438m 22672m 22764m 22958m 23004m 23167m 23185m 23629m 23737m 23930m 23996m 24304m 24350m 24427m 24460m 24943m 25052m 25208m 25241m 25604m 25623m 25692m 25757m 26213m 26351m 26455m 26517m 26899m 26910m 26968m 27044m 27513m 27677m 27755m 27829m 28175m 28192m 28243m 28412m 28783m 29001m 29095m 29239m 29455m 29460m 29497m 29705m 30049m 30330m 30360m 30630m 30750m 30763m 30770m 30978m 31340m 31610m 31629m 31993m 32017m 32054m 32055m 32307m 32647m

[oracleuser@TESTSERVER 19121548]$ kill -9 422
[oracleuser@TESTSERVER 19121548]$ kill -9 813
[oracleuser@TESTSERVER 19121548]$ kill -9 838
[oracleuser@TESTSERVER 19121548]$ kill -9 873
[oracleuser@TESTSERVER 19121548]$ kill -9 879
[oracleuser@TESTSERVER 19121548]$ kill -9 1187
[oracleuser@TESTSERVER 19121548]$ kill -9 1549
[oracleuser@TESTSERVER 19121548]$ kill -9 1746

credit to http://yichen-oracledba.blogspot.com/