Monday 21 October 2013

ORA-1652: unable to extend temp segment by 128 in tablespace TEMP

Hi all,

I had been noticing this error in my alert log for a while. Then I got a call from a user complaining about a failing query. This is a scheduled query that runs daily in the early morning.

Solution to my case:

1. I looked through my alert log to see whether I could find any special incidents around the period when these errors were encountered. I didn't find any!

2. I looked at AWR and ADDM reports for the periods around this occurrence. Nothing serious stood out except an "Undersized PGA" finding, which I did not take seriously at the time.

3. I immediately proceeded to increase the size of my TEMP tablespace by resizing one of its tempfiles. This had no effect whatsoever: the error continued to show up in the alert log and the query kept failing.

4. I had them reschedule the query to a period with less activity on the database (the weekend). It still failed. This morning I came in and looked at my alert log, only to find several incidents of ORA-1652 all through the weekend! This was a period when activity on the DB was nowhere near its peak.

5. All the while, "Undersized PGA" kept showing up in my ADDM reports, so I decided to increase my pga_aggregate_target. We ran the query again and it worked!
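For reference, the two changes from steps 3 and 5 look roughly like this (a sketch only: the file path and sizes are illustrative, not the actual values from my system):

```sql
-- Step 3: resizing a tempfile (this alone did NOT fix the error for me):
SQL> alter database tempfile '/u01/oradata/ORCL/temp01.dbf' resize 8G;

-- Step 5: increasing the PGA target (this is what resolved ORA-1652 here):
SQL> alter system set pga_aggregate_target=4G scope=both;
SQL> show parameter pga_aggregate_target
```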

Lesson:

Pay attention to seemingly unimportant alerts, especially from your ADDM reports. I must add that all the while I had the "Undersized PGA" finding, it stated that this had less than 2% impact on my performance.


Tuesday 17 September 2013

OUI-10022: Fixing a corrupt oraInventory

I ran into this error while deploying OEM 12c agent to a remote AIX 6.1 server running Oracle DB 11.2.0.2.0.

The following error came up:

OUI-10022: The target area $ORACLE_BASE/oraInventory cannot be used because it is in an invalid state.

This implies that my global Oracle inventory is corrupt. First I checked to ensure that I had the appropriate permissions (755) on the oraInventory directory. These were in place, so permissions could not be the issue.

For AIX/UNIX, the oraInventory can be located by checking /etc/oraInst.loc. Having located the file, its contents looked like this:

inventory_loc=<location_for_oraInventory>
inst_group=<name_of_group_for_installation>
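As a small sketch of the lookup (the sample file below uses a hypothetical /u01/app path; on a real host you would read /etc/oraInst.loc directly):

```shell
# Build a sample oraInst.loc to illustrate the format, then extract the
# inventory location the way you would from the real /etc/oraInst.loc.
cat > /tmp/oraInst.loc <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF

inv_loc=$(grep '^inventory_loc=' /tmp/oraInst.loc | cut -d= -f2)
echo "Central inventory: ${inv_loc}"
```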

I navigated to the inventory location and renamed the existing oraInventory directory.

mv oraInventory oraInventory_orig

Next, I navigated to $ORACLE_HOME/oui/bin where I ran OUI as follows:

 ./runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME="oracle_home_directory" ORACLE_HOME_NAME="OraDB11g_home1"


Verify the newly created oraInventory using:

$ORACLE_HOME/OPatch/opatch lsinventory -detail

Wednesday 7 August 2013

ORA-01516: nonexistent log file, data file, or temporary file ...

I got this error while trying to resize (increase) one of my datafiles. I was baffled: the message was telling me the file was nonexistent, yet there I was, staring at the file in its directory!

I ensured the said file was online and available, and checked my syntax to see if I had missed anything or swapped an upper-case letter for a lower-case one (or vice versa), but I found nothing.

Big thanks to Vijay Kumar of http://vijaykumar-dbahub.blogspot.com. His post bailed me out!

The fix: I resized the datafile by referring to it by its file# rather than by name.

SQL> select file#, creation_time, name from v$datafile where name like '%datafile_name%';

SQL> alter database datafile <file#> resize 20000M;

Database altered.

Root cause analysis, still according to Vijay Kumar:

I created a backup controlfile to trace, opened it with vi, and alas! There was a space right between the .dbf extension of the datafile name and the closing ' symbol, i.e.

'/oradata4/ORA/datafiles/ORA_data/XXX_XXX_X_XXXX.dbf '

instead of

'/oradata4/ORA/datafiles/ORA_data/XXX_XXX_X_XXXX.dbf'


This was rectified by renaming the datafile.
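The post doesn't show the rename itself, so here is a hedged sketch of the usual approach (the file number 5 and the recovery step are illustrative; whether the OS-level file also needs moving depends on how the name acquired the stray space):

```sql
-- Sketch only: file number and paths are illustrative.
SQL> alter database datafile 5 offline;

-- Point the controlfile at the clean name (note the trailing space in the old one):
SQL> alter database rename file
  2  '/oradata4/ORA/datafiles/ORA_data/XXX_XXX_X_XXXX.dbf '
  3  to '/oradata4/ORA/datafiles/ORA_data/XXX_XXX_X_XXXX.dbf';

SQL> recover datafile 5;
SQL> alter database datafile 5 online;
```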


Tuesday 23 July 2013

Running SQL Access Advisor from SQL Plus

Following one of the recommendations from running ADDM on one of my production databases, I decided to run SQL Access Advisor rather than just create the couple of indexes suggested by Oracle.

My problem: EM is not available on that database server! I needed to find a way to run SQL Access Advisor without going through EM.

Thankfully, I have some other databases with EM running, and Oracle put a really nice feature in EM that lets you see the exact SQL statements being executed against your database. I quickly connected to one of my test databases and proceeded to Advisor Central > SQL Advisors > SQL Access Advisor.


PS: I decided to run a comprehensive analysis with SQL Access Advisor.

I followed the steps until I got the chance to click the SHOW SQL button. The output is below:

DECLARE

taskname varchar2(30) := 'desired_task_name';
task_desc varchar2(256) := 'SQL Access Advisor';
task_or_template varchar2(30) := 'SQLACCESS_EMTASK';
task_id number := 0;
num_found number;
sts_name varchar2(256) := 'desired_task_name_sts';
sts_cursor dbms_sqltune.sqlset_cursor;

BEGIN
/* Create Task */
dbms_advisor.create_task(DBMS_ADVISOR.SQLACCESS_ADVISOR,task_id,taskname,task_desc,task_or_template);
/* Reset Task */
dbms_advisor.reset_task(taskname);
/* Delete Previous STS Workload Task Link */
select count(*) into num_found from user_advisor_sqla_wk_map where task_name = taskname and workload_name = sts_name;
IF num_found > 0 THEN
dbms_advisor.delete_sqlwkld_ref(taskname,sts_name,1);
END IF;
/* Delete Previous STS */
select count(*) into num_found from user_advisor_sqlw_sum where workload_name = sts_name;
IF num_found > 0 THEN
dbms_sqltune.delete_sqlset(sts_name);
END IF;
/* Create STS */
dbms_sqltune.create_sqlset(sts_name, 'Obtain workload from cursor cache');
/* Select all statements in the cursor cache. */
OPEN sts_cursor FOR
SELECT VALUE(P)
FROM TABLE(dbms_sqltune.select_cursor_cache) P;
/* Load the statements into STS. */
dbms_sqltune.load_sqlset(sts_name, sts_cursor);
CLOSE sts_cursor;
/* Link STS Workload to Task */
dbms_advisor.add_sqlwkld_ref(taskname,sts_name,1);
/* Set STS Workload Parameters */
dbms_advisor.set_task_parameter(taskname,'VALID_ACTION_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_MODULE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'SQL_LIMIT',DBMS_ADVISOR.ADVISOR_UNLIMITED);
dbms_advisor.set_task_parameter(taskname,'VALID_USERNAME_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_TABLE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_TABLE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_ACTION_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_USERNAME_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_MODULE_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'VALID_SQLSTRING_LIST',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'INVALID_SQLSTRING_LIST','"@!"');
/* Set Task Parameters */
dbms_advisor.set_task_parameter(taskname,'ANALYSIS_SCOPE','ALL');
dbms_advisor.set_task_parameter(taskname,'RANKING_MEASURE','PRIORITY,OPTIMIZER_COST');
dbms_advisor.set_task_parameter(taskname,'DEF_PARTITION_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'TIME_LIMIT',10000);
dbms_advisor.set_task_parameter(taskname,'MODE','COMPREHENSIVE');
dbms_advisor.set_task_parameter(taskname,'STORAGE_CHANGE',DBMS_ADVISOR.ADVISOR_UNLIMITED);
dbms_advisor.set_task_parameter(taskname,'DML_VOLATILITY','TRUE');
dbms_advisor.set_task_parameter(taskname,'WORKLOAD_SCOPE','PARTIAL');
dbms_advisor.set_task_parameter(taskname,'DEF_INDEX_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_INDEX_OWNER',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVIEW_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVIEW_OWNER',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'DEF_MVLOG_TABLESPACE',DBMS_ADVISOR.ADVISOR_UNUSED);
dbms_advisor.set_task_parameter(taskname,'CREATION_COST','TRUE');
dbms_advisor.set_task_parameter(taskname,'JOURNALING','4');
dbms_advisor.set_task_parameter(taskname,'DAYS_TO_EXPIRE','30');
/* Execute Task */
dbms_advisor.execute_task(taskname);
END;
/




To read the output:
#################################################################
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_script('desired_task_name') AS script
FROM   dual;
SET PAGESIZE 24
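Once the task completes, the recommendations can also be queried directly from the advisor views (using the same task name as above):

```sql
SELECT rec_id, rank, benefit
FROM   user_advisor_recommendations
WHERE  task_name = 'desired_task_name'
ORDER  BY rank;
```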



ORA-04030: out of process memory when trying to allocate XXXXXX bytes...

I was running SQL Access Advisor when my alert log threw up this error. The first thing that came to mind, obviously, was: am I out of memory? The answer to that was a big NO! My initial investigation at the OS level showed that I had enough free memory available.
After pondering for a while, I searched the Oracle forums and found the problem: I was running the Oracle session, logged in as sysdba, from an OS user account that had low ulimit settings!

SOLUTION:

I got the sys admin to change my ULIMIT settings to the below:

# ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        4194304
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user)  unlimited
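As a quick sanity check (a minimal sketch, not tied to any particular OS), the limits the Oracle-owning OS user actually inherits can be inspected before starting a session:

```shell
# Print all limits for the current user; compare against the values above.
ulimit -a

# The data segment limit is the one most often behind ORA-04030:
data_limit=$(ulimit -d)
echo "data segment limit (kbytes): ${data_limit}"
```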


ORA-48913

Found this error in one of my production alert logs. It has to do with the sizing of my trace files, which in turn depends on the OS block size. Mine is 512 bytes, which I believe is typical of most UNIX systems.
ERROR MESSAGE:
Non critical error ORA-48913 caught while writing to trace file "/dboracle/product/dbhome_1/~

SOLUTION:

I increased the "max_dump_file_size" parameter using the SQL syntax below:

SQL> alter system set max_dump_file_size=51200 scope=both;

There is also the option of setting max_dump_file_size to UNLIMITED, which means the trace files can grow as large as the OS permits.
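For context, a short sketch of checking the current value first (note that when max_dump_file_size is set to a plain number with no K/M suffix, it is interpreted in OS blocks):

```sql
SQL> show parameter max_dump_file_size

SQL> alter system set max_dump_file_size=unlimited scope=both;
```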

This error has nothing to do with the sizing of your alert logs though.

I'll keep monitoring to see if the message shows up once again!