Thursday, December 30, 2010

Oracle Useful Commands.....Unlimited Edition ;)

Error:

ORA-19815: WARNING: db_recovery_file_dest_size of 65779059917 bytes is 85.06% used

The Oracle server raises this warning when free space in the db_recovery_file_dest area (the Flash Recovery Area) falls below 15%.

Please Check


SQL> SELECT object_type, message_type, message_level,reason, suggested_action FROM dba_outstanding_alerts;

db_recovery_file_dest_size of 65779059917 bytes is 85.06% used and has 9828093133 remaining bytes available.

Fix:

SQL> SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE .04 0 1
ONLINELOG .65 0 8
ARCHIVELOG 17.61 0 329
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 71.63 71.5 1180

This tells us the right course of action, instead of simply increasing db_recovery_file_dest_size on the FRA diskgroup.
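To see the same picture at the recovery-area level, a summary query like the following can be used (a minimal sketch; on 10g the view is V$RECOVERY_FILE_DEST, and 11g also exposes V$RECOVERY_AREA_USAGE):

```sql
-- Overall FRA capacity: limit, used and reclaimable bytes, file count
SELECT name, space_limit, space_used, space_reclaimable, number_of_files
  FROM v$recovery_file_dest;
```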

SQL> select NAME,STATE,TYPE ,TOTAL_MB,FREE_MB,USABLE_FILE_MB from v$asm_diskgroup;

NAME STATE TYPE TOTAL_MB FREE_MB USABLE_FILE_MB
------------------------------ ----------- ------ ---------- ---------- --------------
dbname_DATA1 CONNECTED EXTERN 243957 64945 64945
dbname_FRA1 CONNECTED EXTERN 69702 11218 11218
dbname_DATA1 MOUNTED EXTERN 522765 367338 367338
dbname_FRA1 MOUNTED EXTERN 34851 24176 24176

The main culprits here are the FLASHBACKLOG and ARCHIVELOG file types, occupying about 71% and 17% of the space respectively.
From the alert log, the flashback logs appear to be removed by the Oracle server itself on demand...

Therefore I backed up the archive logs with the delete input option; the space was released, and there were no further similar outstanding alerts on the ASM box.
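The fix itself was a standard RMAN archive log backup with delete input. A minimal sketch of what was run (the channel allocation and the format string here are assumptions; adjust them to your environment):

```sql
RMAN> run {
  allocate channel c1 type disk format '/backup/arch_%U.bkp';
  -- back up all archive logs and remove them from the FRA once backed up
  backup archivelog all delete input;
  release channel c1;
}
```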

SQL> SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;

FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE .04 0 1
ONLINELOG .65 0 8
ARCHIVELOG 3.48 0 65
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 70.39 70.27 1160

Useful SQL Server commands

SQLServer :

xp_fixeddrives - Returns the list of hard drives and free space

dbcc sqlperf (logspace) - Check log space usage

Shrink Database and log:

dbcc shrinkdatabase (tempdb)

sp_helpdb d_dbdba002

dbcc shrinkfile ('L001')

dbcc shrinkfile ('MSDBLog')

Check blocking on the server (SQL Server):

select spid,cmd,blocked from sysprocesses where blocked>0
sp_who2 "SPID"
select spid,blocked,cmd,memusage,physical_io from sysprocesses order by memusage desc
select spid,blocked,cmd,memusage,physical_io from sysprocesses order by physical_io desc

xp_cmdshell "tlist" - Displays the host processes with their PIDs

Restarting job from commandshell
xp_cmdshell "nc_restart servername jobname"

select paramname, paramvalue from d_dbdba002..task_params where paramname = 'DUMP_GENERATIONS'

sp_whodo n_cycle

select @@errorlog
select @@boottime

nc_task "DBAUS"

Sqltext capture

dbcc traceon(3604)
dbcc inputbuffer(121)

Db Uptime
select crdate from sysdatabases where dbid=2
Process count
select suser_name(suid),count(*) from sysprocesses group by suid

xp_readerrorlog

select loginname , createdate,updatedate,accdate from master..syslogins where loginname like '%username%'

--SQL Server backup compression issues

select databasename, backup_time, compression
from d_dbdba002..backups
where backup_time >= ''
order by backup_time
go

update d_dbdba002..backups
set compression = 'zip'
where backup_time = '2009-02-14 21:32:27.880' -- include the milliseconds
and compression != 'zip'
go

--Update the file name.
update d_dbdba002..backup_files
set filename = filename + '.gz'
where backup_time = '2009-02-14 21:32:27.880'
and filename not like '%.gz'

dbcc opentran

sp_monitor - statistics about sqlserver

sp_tempdbspace - returns the total size and the space used by the tempdb database

xp_ServiceControl 'START','SQLServerAgent' - to start or stop SQL Server Agent > SQL 2000 and above

sp_MStablespace - returns the number of rows in a table and the space the table and index use

select * from ::fn_virtualservernodes( ) -- returns the list of nodes on which the virtual server can run

select serverproperty('option')
MachineName - Returns the hostname on which the server is currently running
IsClustered - Returns 1 if the server is clustered, else 0.


select has_dbaccess('db name') -- Returns 1 if the user has access to the specified database, else 0.

select databaseproperty ('db name', 'option')

Returns 1 for true, 0 for false, or an option-specific value.
Useful options:
IsInLoad
IsOffline
IsAutoUpdateStatistics
IsInRecovery
IsSuspect
IsTruncLog
IsAutoShrink
Status


tsadmin -- to log in to the host and check active logged-in sessions


How to find out when was an index rebuilt last?

SELECT 'Index Name' = i.name,
'Statistics Date' = STATS_DATE(i.id, i.indid)
FROM sysobjects o, sysindexes i
WHERE o.name = 'object name' AND o.id = i.id
go

sp_helpdb
sp_helpdb tempdb
dump tran tempdb with truncate_only

Saturday, November 27, 2010

Stats pack reports..

excellent explanation..

http://www.akadia.com/services/ora_statspack_survival_guide.html


and how to get sqlplan of a sql statement..

SELECT p.plan_table_output
FROM v$session s,table(dbms_xplan.display_cursor(s.sql_id, s.sql_child_number)) p where s.sid = &1

Thursday, November 4, 2010

Database hung..not allowing new connections

[one more case study from Arup Nanda's blog.. amazing stuff]

Diagnosing Library Cache Latch Contention: A Real Case Study
The other day, our data warehouse database was effectively down. The database was up and running; but all connection attempts to the database would just hang. Since connections just failed, the DBA couldn’t even get in to see if the sessions connected were running normally or not. The ETL team reported that the processing slowed down to a crawl. The DBA, as instructed, went on to check the wait events; but since she couldn’t even log in, she couldn’t do that. Interestingly, CPU was around 70%, normal for the time of the day; and I/O was about 90%, again normal.

So, in simple terms, she was stuck, and she applied the age-old technique: when in doubt, reboot. She asked the SA to reboot the server. It took about 30 minutes; and after it came back up and the database started, everything seemed normal for about 10 minutes. And then the same problem struck – the database became unresponsive.

At that point, I was called for help. In this blog I am sort of trying to paint a picture of what I did in the next half hour or so to resolve the issue. I hope by giving this running commentary of the events, facts and figures, you will follow my thoughts as I went about diagnosing the issue.

Symptoms

(1) Database connections were hanging
(2) Connections on the server using SQL*Plus as sysdba were also hanging; so we couldn’t even examine the issue.
(3) The system couldn’t be rebooted every time; every reboot meant more work for the ETL folks to clean up the failed sessions.

Initial Analysis

Obviously something was holding some resource back in the database; but what? Unless we connected to the database, we couldn’t check. And since we couldn’t check, we couldn’t remediate. It was a classic case of Catch-22.

Action

This is one of those cases where a backdoor entry to the database instance comes really handy. Most people are not aware of the option in SQL*Plus called “prelim”. This option connects to the SGA; but does not open a session.

(1) So, the first thing I did is to open a sessionless SQL*Plus session using the Oracle user:

$ sqlplus -prelim
SQL>

Note, it didn’t say anything familiar like “Connected to Oracle Database 10.2.0.3”, etc. All it showed was the “SQL>” prompt. That was because it didn’t actually connect to the database.

(2) Then I used the “oradebug” utility to analyze the SGA

SQL> oradebug setmypid
SQL> oradebug hanganalyze 12

This produced a tracefile in the user_dump_dest directory. The file wasn’t difficult to find, since it was the last file created. Even if I didn’t find the file, I could have used the process ID to find the file. The file would have been named crmprd1_ora_13392.trc, assuming 13392 was the process ID.

(3) Let’s examine the file. Here are first few lines:

*** 2008-08-23 01:21:44.200
==============
HANG ANALYSIS:
==============
Found 163 objects waiting for
<0/226/17/0x1502dab8/16108/no>
Open chains found:
Chain 1 : :
<0/226/17/0x1502dab8/16108/no>
<0/146/1/0x1503e898/19923/latch:>

This tells me a lot. First it shows that SID 146 Serial# 1 is waiting for a library cache latch. The blocking session is SID 226 Serial# 17. The latter is not waiting for anything of a blocking nature. I also noted the OS process IDs of these sessions – 16108 and 19923.

(4) Next I checked for two more tracefiles with these OS PIDs in their names.

crmprd1_ora_16108.trc
crmprd1_ora_19923.trc

(5) I opened the first one, the one that is the blocker. Here are the first few lines:

*** 2008-08-23 01:08:18.840
*** SERVICE NAME:(SYS$USERS) 2008-08-23 01:08:18.781
*** SESSION ID:(226.17) 2008-08-23 01:08:18.781
LIBRARY OBJECT HANDLE: handle=c0000008dc703810 mtx=c0000008dc703940(8000) cdp=32737
name=UPDATE DW_ETL.FRRS_PROFILER SET CONSUMER_LINK = :"SYS_B_0", ADDRESS_LINK = :"SYS_B_1", ADDRESS_MATCH = :"SYS_B_2", PROC
ESSED=:"SYS_B_3" WHERE RNUM = :"SYS_B_4"
hash=a029fce7bb89655493e7e51a544592a4 timestamp=08-23-2008 00:10:23
namespace=CRSR flags=RON/KGHP/TIM/OBS/PN0/MED/KST/DBN/MTX/[504100d0]
kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=10 hpc=0058 hlc=0058
lwt=c0000008dc7038b8[c0000008dc7038b8,c0000008dc7038b8] ltm=c0000008dc7038c8[c0000008dc7038c8,c0000008dc7038c8]
pwt=c0000008dc703880[c0000008dc703880,c0000008dc703880] ptm=c0000008dc703890[c0000008dc703890,c0000008dc703890]
ref=c0000008dc7038e8[c0000008dc7038e8,c0000008dc7038e8] lnd=c0000008dc703900[c0000008dc703900,c0000008dc703900]
LOCK OWNERS:
lock user session count mode flags
---------------- ---------------- ---------------- ----- ---- ------------------------
c0000008d079f1b8 c0000006151744d8 c0000006151744d8 16 N [00]
c0000008d4e90c40 c0000006151bcb58 c0000006151bcb58 16 N [00]
c0000008d0812c40 c0000008151a0438 c0000008151a0438 16 N [00]

(6) This is a treasure trove of information for debugging. First it shows the SID and Serial# (226.17), which confirms the SID we identified earlier. It shows the exact SQL statement being used. Finally it shows all the locks. I didn’t particularly care about the specifics of locks; but it gave me enough information to prove that the SID 226 was causing a wait for a whole lot of other sessions.

(7) My investigation is not done; I need to find out the sessions waiting for this. So, I searched the file for a section called “PROCESS STATE”. Here is a snippet from the file:

PROCESS STATE
-------------
Process global information:
process: c00000081502dab8, call: c000000817167890, xact: 0000000000000000, curses: c00000081519ef88, usrses: c000000815
19ef88
----------------------------------------
SO: c00000081502dab8, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=370, calls cur/top: c000000817167890/c000000817167890, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 115 0 4
last post received-location: kslpsr
last process to post me: c000000615002038 1 6
last post sent: 0 0 24
last post sent-location: ksasnd
last process posted by me: c000000615002038 1 6
(latch info) wait_event=0 bits=20
holding (efd=4) c0000008d7b69598 Child library cache level=5 child#=10
Location from where latch is held: kglhdgc: child:: latch
Context saved from call: 13
state=busy, wlstate=free
waiters [orapid (seconds since: put on list, posted, alive check)]:
291 (197, 1219468295, 197)
279 (197, 1219468295, 197)
374 (197, 1219468295, 197)
267 (197, 1219468295, 197)
372 (197, 1219468295, 197)
... several lines sniped ...
307 (15, 1219468295, 15)
181 (6, 1219468295, 6)
waiter count=58
Process Group: DEFAULT, pseudo proc: c0000008e03150d8
O/S info: user: oracrmp, term: UNKNOWN, ospid: 16108
OSD pid info: Unix process pid: 16108, image: oracle@sdwhpdb1

(8) This told me everything I needed to know. There were 58 sessions waiting for library cache latch held by SID 226. I also knew the OS Process ID and the SQL statement of the blocking session.

(9) At that time we engaged the application owner to explain what was going on. As he explained it, he issues the update statement in a loop. And that’s not all; he executes it in 8 different threads. No wonder we had library cache latch contention. So, we had to track 8 sessions, not just one. We trudged on. All the sessions had dumped their information, so I searched the directory for other files with the same issue:

$ grep "UPDATE DW_ETL" *.trc

(10) And I found 9 more sessions (or, rather, processes). Here is a snippet from another file:

350 (167, 1219470122, 167)
197 (167, 1219470122, 167)
waiter count=185
Process Group: DEFAULT, pseudo proc: c0000008e03150d8
O/S info: user: oracrmp, term: UNKNOWN, ospid: 16114

This process had 185 waiters! Ouch!

(11) Now comes a decision point. I knew who was blocking and who was being blocked, although I didn’t yet know exactly which latch was being contended for. I could have dumped the library cache latches to get that information, but the application owner volunteered to terminate the sessions. The application, fortunately, was restartable. So, we decided to kill all of these errant sessions at the unix prompt.

$ kill -9

(12) After killing a few processes, the database started responding. After killing all of them, the database wait events came back to completely normal. Connections were established and applications started behaving normally.

After step 11, I could have used the library cache dump to examine the exact library element in the center of the contention; but that’s a topic for another blog.

Takeaways

(1) When something seems to be “hung”, don’t get hung up on that. A session almost always waits for something; rarely is it just “hung”. You should check what it is waiting for by selecting the EVENT column of V$SESSION (10g) or V$SESSION_WAIT (9i).
(2) When you can’t log on to the database to get the information, try using the oradebug command.
(3) To use oradebug, you should use SQL*Plus. Since you can’t log in, use "sqlplus -prelim" to get the SQL prompt.
(4) Use oradebug setmypid to start the oradebug session, and then use oradebug hanganalyze to create a dump file of all hang-related issues.
(5) Use oradebug help to see all oradebug commands
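For takeaway (1), a query along these lines shows what every active session is waiting for (a sketch for 10g and later, using standard V$SESSION columns):

```sql
-- What each non-idle session is currently waiting on, and who blocks it
SELECT sid, serial#, event, seconds_in_wait, blocking_session
  FROM v$session
 WHERE wait_class <> 'Idle';
```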

FAL, Fetch Archived Log server failure..

[ copied from Arup Nanda's Blog..really helpful]

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

Recently, we had a glitch on a Data Guard (physical standby database) infrastructure. This is not a critical database; so the monitoring was relatively lax. And that being done by an outsourcer does not help either. In any case, the laxness resulted in a failure remaining undetected for quite some time; it was eventually discovered only when the customer complained. This standby database is usually opened for read-only access from time to time. This time, however, the customer saw that the data was significantly out of sync with the primary and raised a red flag. Unfortunately, by then it had become a rather political issue.

Since the DBA in charge couldn’t resolve the problem, I was called in. In this post, I will describe the issue and how it was resolved. In summary, there are two parts of the problem:

(1) What happened
(2) How to fix it

What Happened

Let’s look at the first question – what caused the standby to lag behind. First, I looked for the current SCN numbers of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how much it is lagging in terms of wall clock time. To know that, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM

I ran the same query to know the timestamp associated with the SCN of the standby database as well (note, I ran it on the primary database, though; since it will fail in the standby in a mounted mode):

SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.
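The two scn_to_timestamp calls above can also be combined into a single query on the primary that returns the lag directly as an interval (a sketch using the SCNs from this example):

```sql
-- Apply lag between primary and standby SCNs, as a day-to-second interval
SELECT scn_to_timestamp(1447102) - scn_to_timestamp(1301571) AS apply_lag
  FROM dual;
```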

The next question is why it would be lagging so far behind. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log) process to fail. To get that answer, first I checked the alert log of the standby instance. I found these lines that showed the issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700
… …
Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.


Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15th at 17:16:15, the Managed Recovery Process encountered an error while receiving the log information from the primary. The error was ORA-12514 “TNS:listener does not currently know of service requested in connect descriptor”. This is usually the case when the TNS connect string is incorrectly specified. The primary is called DEL1 and there is a connect string called DEL1 in the standby server.

The connect string works well. Actually, right now there is no issue with the standby getting the archived logs; so the connect string is fine – now. The standby is receiving log information from the primary. There must have been some temporary hiccup causing that specific archived log not to travel to the standby. If that log was somehow skipped (it could have been an intermittent problem), it should have been picked up by the FAL process later on; but that never happened. Since sequence# 700 was not applied, none of the logs received later – 701, 702 and so on – were applied either. This has caused the standby to lag behind since that time.
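On a 10g standby, the gap itself can also be confirmed directly with the V$ARCHIVE_GAP view (a sketch; in this case it should have reported thread 1, sequences 700 to 700):

```sql
-- Current archive log gap, as seen by the standby
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;
```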

So, the fundamental question was why FAL did not fetch the archived log sequence# 700 from the primary. To get to that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.


These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory

The archived log simply was not available. The process could not see the file and couldn’t get it across to the standby site.

Upon further investigation I found that the DBA had actually removed the archived logs to make some room in the filesystem, without realizing that his action had removed the most current one, which was yet to be transmitted to the remote site. The mystery surrounding why the FAL did not get that log was finally cleared.

Solution

Now that I knew the cause, the focus was on the resolution. If the archived log sequence# 700 were available on the primary, I could have easily copied it over to the standby, registered the log file and let the managed recovery process pick it up. But unfortunately, the file was gone and I couldn’t just recreate it. Until that log file was applied, the recovery would not move forward. So, what were my options?

One option is of course to recreate the standby – possible, but not feasible considering the time required. The other option is to apply an incremental backup of the primary from that SCN number. That’s the key – the backup must be from a specific SCN number. I have described the process here since it is not very obvious. The following shows the step-by-step approach for resolving this problem. I have shown where the actions must be performed – [Standby] or [Primary].

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shutdown the standby database

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf
… …
piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created in primary.

9. $ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10.[Standby] Mount the standby database:

SQL> alter database mount standby database;

11.[Standby] RMAN does not know about these files yet; so you must let it know – by a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12.Recover these files:

RMAN> recover database;

Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...…
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf
… …

13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have come to the last of the archived logs. The expected archived log with sequence# 8009 has not been generated yet.

14.At this point exit RMAN and start managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15.Check the SCN’s in primary and standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
Now they are very close to each other. The standby has now caught up.

[FOR ASM ]

When using ASM with Oracle Managed Files, the file names will not match, so follow these steps.

RMAN > catalog start with '+DISKGROUP/db_name/datafile';

list datafilecopy all;

Now run this from your database in mount mode.

select 'switch datafile ' ||file#|| ' to copy;' from v$datafile;

This will spool out text. Run it in RMAN.

Then do the restore part.

Friday, August 20, 2010

Popular Linux flavors

Let me start from the popular versions.

Fedora:-


It’s a famous and very popular Linux distro from the leader of Open Source – and that leader is Red Hat. Red Hat is a registered trademark of Red Hat, Inc.

Fedora is released under open source licenses such as the GPL. On the commercial and licensed side, Red Hat plays the major role with its enterprise version, Red Hat Enterprise Linux (RHEL).

All the open source tools and packages are introduced into Fedora and tested well enough to use in production environments. Testing is conducted not only for the software packages but also for the popular hardware platforms (in the case of kernel updates and patches). Each open source distro release follows a three-month development cycle; based on the alpha testing, the beta version is released. They also have a separate open source development forum for volunteers and students who are interested in OSS (Open Source Software) development.

This is one of the best distros of all, supporting a large number of hardware platforms. You can find RedHat-related details at www.redhat.com and Fedora at www.fedoraproject.org.

Commercial Distro Versions:

RedHat Linux 5 (Stable)

RedHat Linux 6 (Beta)

Open Source Distro Versions:

Fedora 12 (Stable)

Fedora 13 (Beta)


Debian:

The latest version is 5.0 (Stable). This distro provides a large number of packages and development utilities. It is also well known for its ease of use.

Debian is also a free Operating System. You can download this version from www.debian.org.

SuSE Linux:

Novell, the developer of Novell NetWare, pioneered the network operating system (distributed operating system) and was among the first to introduce a server that could be deployed in a network. SuSE Linux is Novell's Linux distro. This is also one of the popular distros in the industry. This distro also has an open source version, which you can get from www.opensuse.org.

Commercial Distro Versions:

SLES 11.0 (Enterprise Server)

SLED 11.0 (Enterprise Desktop)

Open Source Distro Version:

Open SuSE: 11.2 (Open Source)

Mandriva:


One of the cutest Linux distros, it comes on a mini USB (thumb drive); just plug it in, boot your machine from USB, and your Linux is ready. Very simple and easy to use, but it only supports well-known hardware. The commercial distro Enterprise Linux 5 is the latest version. You can find the distro at www.mandriva.com.

Knoppix:

As for Knoppix, it’s a German product that comes with powerful monitoring tools deployed. But I haven’t seen this version used anywhere (to my knowledge). You can find the distro at www.knoppix.net.

Slackware:


Slackware is the toughest distro of all. It supports only limited hardware, but it contains all the CLI-based tools. There is no need to go to GUI mode; you can even play songs from the CLI itself. The current version is 13. You can find the distro at www.slackware.com.

Ubuntu:

Ubuntu is one of the competitors to Fedora, with more tools, better adaptability across platforms, and fast access. The current benchmark version is 6.06 LTS (Long Term Support), which comes for all popular hardware platforms. The latest distro is 9.0. You can find the distro at www.ubuntu.com.

In general, there are 300+ distros available on the Internet for free. Among those flavors, the ones mentioned above are the most popular. They also provide Live CDs, where you don’t need to install the OS; just boot from the CD and your working environment is ready to use. It won’t touch your hard disk, and your data is safe.

Friday, August 13, 2010

Business Continuity Volume : BCV

What is BCV
BCV, an EMC product, stands for Business Continuity Volume; using it, businesses today can run with little or no downtime at all for database backup/recovery operations.
In the BCV process, a BCV disk volume is mirrored with the production disk volume. As and when required, the BCV volume can be split from the mirror, transferred to any other machine, and mounted there for backup.
We use the BCV process for reporting database refreshes, database cold backups and DR database refreshes.

One of the DR Method : SRDF (Symmetrix Remote Data Facility)

SRDF (Symmetrix Remote Data Facility) is a family of EMC products that facilitates the data replication from one Symmetrix storage array to another through a Storage Area Network or IP network.

SRDF logically pairs a Logical Unit Number (LUN) or a group of LUNs from each array and replicates data from one to the other synchronously or asynchronously. An established pair of LUNs can be split, so that separate hosts can access the same data independently (maybe for backup), and then resynchronised.

In synchronous mode (SRDF/S), the primary array waits until the secondary array has acknowledged each write before the next write is accepted, ensuring that the replicated copy of the data is always as current as the primary. However, the latency due to propagation increases significantly with distance.

Asynchronous SRDF (SRDF/A) transfers changes to the secondary array in units called delta sets, which are transferred at defined intervals. Although the remote copy of the data will never be as current as the primary copy, this method can replicate data over considerable distances, with reduced bandwidth requirements and minimal impact on host performance.

Other forms of SRDF exist to integrate with clustered environments and to manage multiple SRDF pairs where replication of multiple LUNs must be consistent (such as with the data files and log files of a database application).
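The split/resynchronise cycle described above can be sketched with the symrdf CLI from EMC Solutions Enabler. This is an assumption-laden sketch: the RDF device group name is a placeholder and nothing is executed here.

```shell
# Hypothetical SRDF split/resync cycle; the device group is a placeholder.
RDF_DG=ora_rdf_dg
SPLIT_CMD="symrdf -g $RDF_DG split -noprompt"       # remote host gets independent access
RESYNC_CMD="symrdf -g $RDF_DG establish -noprompt"  # resynchronise the pair afterwards
# Print the two commands instead of running them.
printf '%s\n%s\n' "$SPLIT_CMD" "$RESYNC_CMD"
```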

Thursday, July 22, 2010

Sybase Prod commands..

Check Blocking session..

select spid," login " = convert(varchar(12),suser_name(suid)),cmd,
" database " = convert(varchar(12),db_name (dbid)),cpu,physical_io,memusage,blocked,hostname,time_blocked from
master..sysprocesses where blocked>0

select spid,uid,blocked,cmd,dbid,'dbname'=db_name(dbid) from master..sysprocesses where blocked > 0

Check whether the db is in log suspend or not

select spid,uid,blocked,cmd,dbid,'dbname'=db_name(dbid) from master..sysprocesses where cmd = 'LOG SUSPEND'

Check open transactions on the db..

select spid,loggedindatetime from master..sysprocesses where spid
in (select spid from master..syslogshold)

select * from master..syslogshold
sp_whodo "spid"
sp_whodo "n_cycle"

Check DB startup time

select "dbname" = convert(varchar(12),db_name(dbid)),crdate,"currentdate" =getdate()
from sysdatabases where dbid=2


Check Number of user connections on db

select suser_name(suid),count(*) from sysprocesses group by suid

Check all sessions on the db

select spid,blocked,cmd,memusage,physical_io from sysprocesses order by memusage desc
select spid,blocked,cmd,memusage,physical_io from sysprocesses order by physical_io desc
select spid,suser_name(suid),cpu,cmd,status from sysprocesses where status
like '%run%'
select spid,status,hostname,program_name,hostprocess,cmd, cpu,physical_io from sysprocesses where status like 'runn%'

Check server details

select * from sysservers

select * from syslisteners -- check port details

To get sqlplan

dbcc traceon(3604)
dbcc sqltext(379)
sp_showplan 379,null,null,null

Checking user process details

select spid,cmd,db_name(dbid),ipaddr,uid from sysprocesses where suser_name(suid) ='drdapp'

select count(suser_name(suid)),suser_name(suid) login from sysprocesses group by suser_name(suid) order by count(suser_name(suid)) DESC

select spid,cmd,status,suser_name(suid),loggedindatetime from sysprocesses where suser_name(suid)='drdapp' and loggedindatetime <'Oct 29 2009'

select spid,suser_name(suid) from sysprocesses where suser_name(suid)='drdapp' and loggedindatetime <'Oct 28 2009'

How to kill user sessions

kill spid

select distinct 'kill ' + convert(char,blocked) from master..sysprocesses where blocked > 0

select 'kill ', spid from sysprocesses where suser_name(suid)='drdapp' and loggedindatetime <'Oct 28 2009'

select 'kill ', spid from sysprocesses where suser_name(suid)='unxadmin';

kill 325 with statusonly
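The kill-generating queries above can be wrapped in a small shell sketch that composes the SQL ready to pipe into isql. The login and server names are placeholders, and the isql invocation is left commented out.

```shell
# Sketch: build the SQL that generates "kill <spid>" statements for an
# idle login. LOGIN and SERVER_NAME are placeholders.
LOGIN=drdapp
KILL_SQL="select 'kill ' + convert(varchar, spid) from master..sysprocesses where suser_name(suid) = '$LOGIN'"
# Usage (not run here): printf '%s\ngo\n' "$KILL_SQL" | isql -Usa -SSERVER_NAME
printf '%s\n' "$KILL_SQL"
```

Review the generated kill statements before feeding them back to the server.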

Useful Commands

sp_monitor
sp_monitorconfig "all" -- all config parameter values
sp_monitorconfig 'locks'
sp_monitorconfig 'procedure cache size'

sp_configure "user connections"
sp_configure 'number of user connections',600 -- to increase the number of connections
sp_configure "number of locks"
sp_configure "max_memory"
sp_configure 'auditing'

select @@errorlog
select @@boottime
select @@version
select @@servername
select getdate()


check Parameters on db

select paramname,paramvalue from d_dbdba002..task_params
select paramname,paramvalue from d_dbdba002..task_params where paramname='DUMP_GENERATIONS'
select * from d_dbdba002..task_params where paramname like '%DUMP%'

All jobs/Tasks listed on db
select * from d_dbdba002..tasks

--job configured or not
nc_task 'job_name'

To get backup details
select * from d_dbdba002..backup_files where backup_time like '%Nov%';

Checking database status on a Sybase cluster db

/opt/VRTSvcs/bin/hastatus -sum | grep 'dbName'

--Login details:

sp_displaylogin 'Username'
use db
sp_helprotect 'Username'
sp_helprotect Bond -- object name
sp_helpgroup 'groupname'
sp_helpuser 'Username'

How to reset a password

sp_password "mypwd",newpwd,username


How to dump a database, or check whether the backup server is working

dump database master to "/tmp/sud.d"

Check stable queue usage

admin disk_space

DB and segment details

select * from sysobjects where name='out'
sp_list U
sp_spaceused out -- 'out' is an object name
sp_help out

sp_helpdb
sp_helpdb "DBNAME"
sp_helpsegment "default"
"DBNAME"..sp_helpsegment "default"

-- To check whether a table contains any indexes:

"DBNAME"..sp_helpindex "OBJECT_NAME"

---StackTrace:
select * from syslocks where spid not in (select spid from sysprocesses)

-- DBSPACE VIOLATION:

login to server isql -Uusername -Sserver_name -w300

use dbname

sp_dbspace

sp_helpdb "DBNAME"

"DBNAME"..sp_helpsegment logsegment

sp_dev_alloc

If the client wants to alter, or the violation threshold is reached, alter the database on any data device:

alter database <dbname> on <device_name> = 2000

-- THRESHOLD LOG DUMP ERROR:

use db
sp_helpsegment logsegment

select * from master..syslogshold

select db_id("dbname") -- to get the db_id of the specified database


dbcc checkverify(DB_NAME)

use DB_NAME

select object_name(11111)

dbcc indexalloc(tablename,indexnumber)

RepAgent chks

sp_help_rep_agent
admin who_is_down

DBCC CHECKSTORAGE ALERTS..

If it is the DBA database, please run the commands below; if it is a user db, you have to investigate
dbcc checkstorage(d_dbdba002)
dbcc checkverify(d_dbdba002)

Crontab – Quick Reference

Setting up cron jobs in Unix and Solaris

cron is a Unix/Solaris utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon. These tasks are often termed cron jobs. Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times.

This document covers the following aspects of Unix cron jobs:
1. Crontab Restrictions
2. Crontab Commands
3. Crontab file – syntax
4. Crontab Example
5. Crontab Environment
6. Disable Email
7. Generate log file for crontab activity

1. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does not exist, you can use
crontab if your name does not appear in the file /usr/lib/cron/cron.deny.
If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root user can use crontab. The allow/deny files consist of one user name per line.

2. Crontab Commands

export EDITOR=vi ;to specify an editor to open the crontab file.

crontab -e Edit your crontab file, or create one if it doesn’t already exist.
crontab -l Display your crontab file.
crontab -r Remove your crontab file.
crontab -v Display the last time you edited your crontab file. (This option is only available on a few systems.)
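Beyond the interactive commands above, a crontab can also be updated non-interactively. A minimal sketch: list the current crontab, append a new entry, and reinstall it. The final `| crontab -` is left commented out so the script is safe to run anywhere.

```shell
# Append an entry to the current crontab without opening an editor.
ENTRY="30 18 * * * rm /home/someuser/tmp/*"
# "|| true" keeps going when no crontab exists yet.
NEW_TAB="$( { crontab -l 2>/dev/null || true; echo "$ENTRY"; } )"
# To actually install it:  printf '%s\n' "$NEW_TAB" | crontab -
printf '%s\n' "$NEW_TAB"
```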

3. Crontab file
Crontab syntax :
A crontab file has five fields for specifying day, date and time, followed by the command to be run at that interval.

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)


* in the value field above means all legal values as in braces for that column.
The value column can have a * or a list of elements separated by commas. An element is either a number in the ranges shown above or two numbers in the range separated by a hyphen (meaning an inclusive range).
Notes
A.) A repeat pattern like */2 for every 2 minutes or */10 for every 10 minutes is not supported by all operating systems. If you try to use it and crontab complains, it is probably not supported.

B.) The specification of days can be made in two fields: day of month and day of week. If both are specified in an entry, they are cumulative, meaning the command runs when either of them matches.

4. Crontab Example
A line in the crontab file like the one below removes the tmp files from /home/someuser/tmp each day at 6:30 PM.

30 18 * * * rm /home/someuser/tmp/*

Changing the parameter values as below will cause this command to run at the different times shown:

min   hour  day/month  month   day/week   Execution time
30    0     1          1,6,12  *          00:30 on the 1st of Jan, June & Dec
0     20    *          10      1-5        8:00 PM every weekday (Mon-Fri), only in Oct
0     0     1,10,15    *       *          midnight on the 1st, 10th & 15th of each month
5,10  0     10         *       1          00:05 and 00:10 every Monday, and on the 10th of every month

Note: If you inadvertently enter the crontab command with no arguments, do not attempt to get out with Control-D: that removes all entries in your crontab file. Instead, exit with Control-C.
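Before installing an entry like those above, a quick sanity check helps: a crontab line must have at least five schedule fields plus a command. A sketch, not a full parser:

```shell
# Check a crontab line has five schedule fields plus a command.
LINE="30 18 * * * rm /home/someuser/tmp/*"
set -f        # stop the * fields from globbing against the filesystem
set -- $LINE
if [ $# -ge 6 ]; then
  SCHEDULE="$1 $2 $3 $4 $5"
  echo "OK: $SCHEDULE"
else
  SCHEDULE=""
  echo "BAD: too few fields"
fi
set +f
```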

5. Crontab Environment
cron invokes the command from the user's HOME directory with the shell (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.
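A sketch of overriding those minimal defaults at the top of a crontab file, and of sourcing .profile in the entry itself. Note the assumption: variable assignments inside the crontab work in Vixie cron, but classic Solaris cron does not support them, in which case only the in-entry form applies.

```shell
# Compose a crontab with explicit environment overrides (Vixie-cron style)
# plus an entry that sources .profile itself (portable to Solaris cron).
TAB="SHELL=/usr/bin/sh
PATH=/usr/bin:/usr/sbin:/usr/local/bin
30 18 * * * . \$HOME/.profile; rm /home/someuser/tmp/*"
printf '%s\n' "$TAB"
```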

6. Disable Email
By default cron sends an email to the user account executing the cron job. If this is not needed, put the following at the end of the cron job line.

>/dev/null 2>&1

7. Generate log file
To collect the cron execution log in a file:

30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log

Sunday, May 16, 2010

I. RENAME OR MOVE DATAFILE(S) WITH THE DATABASE OPEN


Datafiles can be renamed or moved while the database is open. However, the
tablespace must be made READ-ONLY. This will allow users to select from the
tablespace, but prevents them from doing inserts, updates, and deletes. The
amount of time the tablespace is required to be read only will depend on how
large the datafile(s) are and how long it takes to copy the datafile(s) to the
new location.

Making the tablespace read only freezes the file header, preventing updates
from being made to the file header. Since this datafile is then at a read
only state, it is possible to copy the file while the database is open.

To do this you must follow these steps:

1. Determine how many datafiles are associated with the tablespace.

> SELECT FILE_NAME, STATUS FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = '<tablespace_name>';

TESTING1 >select file_name,status from dba_data_files where tablespace_name='TEST';

FILE_NAME
----------------------------------------------------------------------------------------------------
STATUS
---------
/data/oracle/TESTING1/u01/test.dbf
AVAILABLE


2. Make sure that all datafiles returned have the status AVAILABLE.


3. Make the tablespace read only.

 ALTER TABLESPACE <tablespace_name> READ ONLY;

TESTING1 >alter tablespace test read only;

Tablespace altered.


4. Make sure that the tablespace is defined as read only in the data
dictionary.

> SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME = '<tablespace_name>';

TABLESPACE_NAME STATUS
------------------------------ ---------
<tablespace_name> READ ONLY

TESTING1 >select tablespace_name,status from dba_tablespaces where tablespace_name='TEST';

TABLESPACE_NAME STATUS
------------------------------ ---------
TEST READ ONLY



5. Copy the datafile(s) to the new location using the operating system copy
command. Once the datafile(s) have been copied to the new location compare
the sizes of the datafiles. Make sure that the sizes match.

cp test.dbf /data/oracle/TESTING1/test/



6. Once the datafiles have been copied to the new location alter the
tablespace offline.

> ALTER TABLESPACE <tablespace_name> OFFLINE;
TESTING1 >alter tablespace test offline;

Tablespace altered.

* At this point the tablespace is not accessible to users.

7. Once the tablespace is offline you will need to rename the datafile(s) to
the new location. This updates the entry for the datafile(s) in the
controlfile.

> ALTER DATABASE RENAME FILE
'/FULL_PATH_OF_OLD_LOCATION/AND_DATAFILE_NAME.DBF'
TO
'/FULL_PATH_OF_NEW_LOCATION/AND_DATAFILE_NAME.DBF';

* You will need to do this for all datafiles associated with
this tablespace. You can use the ALTER TABLESPACE ... RENAME DATAFILE
syntax as well.

TESTING1 >alter database rename file '/data/oracle/TESTING1/u01/test.dbf' to '/data/oracle/TESTING1/test/test.dbf';

Database altered.

8. Once the alter database statement has been processed for the datafile(s)
you can bring the tablespace online.

 ALTER TABLESPACE YOUR_TABLESPACE_NAME ONLINE;

TESTING1 >alter tablespace test online;

Tablespace altered.

9. After you bring the tablespace back online you can make the tablespace
read/write again.

 ALTER TABLESPACE YOUR_TABLESPACE_NAME READ WRITE;

TESTING1 >alter tablespace test read write;

Tablespace altered.

10. You can check the status of the tablespace to make sure it is read/write.
You can also verify that the controlfile has been updated by doing the
following:

 ALTER DATABASE BACKUP CONTROLFILE TO TRACE;


This will produce a readable copy of the contents of your controlfile
which will be placed in your user_dump_dest directory.

Optionally, you can query V$DATAFILE, which gets information from the
controlfile as well.

11. Remove the datafile(s) from the old location at the O/S level.
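Steps 3 through 9 above can be collected into one sqlplus script. A sketch only: the tablespace name and datafile paths are the example values used in this section, and the script is printed rather than run here.

```shell
# Procedure I (open-database datafile move) as one sqlplus script.
# Tablespace "test" and both paths are the example values from above.
OLD=/data/oracle/TESTING1/u01/test.dbf
NEW=/data/oracle/TESTING1/test/test.dbf
SCRIPT="alter tablespace test read only;
host cp $OLD $NEW
alter tablespace test offline;
alter database rename file '$OLD' to '$NEW';
alter tablespace test online;
alter tablespace test read write;"
# Usage (not run here): printf '%s\n' "$SCRIPT" | sqlplus -s / as sysdba
printf '%s\n' "$SCRIPT"
```

Compare the copied file's size against the original before the offline step, exactly as step 5 advises.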


II. RENAME AND OR MOVE DATAFILE(S) WITH THE DATABASE SHUT DOWN
--------------------------------------------------------------

1. If the database is up, shut it down.

2. Copy the datafile to the new name/location at operating system level.

cp test.dbf /data/oracle/TESTING1/test/

3. Mount the database.

> STARTUP MOUNT

This command reads the control file but does not open the datafiles.

4. Rename the file inside Oracle.


> ALTER DATABASE RENAME FILE
'/FULL_PATH_OF_OLD_LOCATION/AND_DATAFILE_NAME.DBF'
TO
'/FULL_PATH_OF_NEW_LOCATION/AND_DATAFILE_NAME.DBF';

TESTING1 >alter database rename file '/data/oracle/TESTING1/test/test.dbf' to '/data/oracle/TESTING1/u01/test.dbf';

Database altered.


Do this for all the datafiles that were renamed or moved at the operating
system level.

5. Open the database.

> ALTER DATABASE OPEN;

TESTING1 >alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/data/oracle/TESTING1/u01/test.dbf'


TESTING1 >recover datafile 4;
Media recovery complete.
TESTING1 >alter database open;


6. Query v$dbfile to confirm that the changes made were correct.

> SELECT * FROM V$DBFILE;

TESTING1 >select * from v$dbfile;

FILE#
----------
NAME
----------------------------------------------------------------------------------------------------
3
/data/oracle/TESTING1/u01/sysaux.dbf

2
/data/oracle/TESTING1/u01/undotbs01.dbf

1
/data/oracle/TESTING1/u01/system.dbf

4
/data/oracle/TESTING1/u01/test.dbf

7. Remove the datafile(s) from the old location at the operating system level.



NOTE: To rename or relocate datafiles in the SYSTEM tablespace you must use
option II, 'Renaming or Moving a Datafile with the Database Shut Down',
because you cannot take the SYSTEM tablespace offline.

Metalink recovery scenario docs..


Note:103176.1 - How to Recover Having Lost Controlfiles and Online Redo Logs
Note:184327.1 - Common Causes and Solutions on ORA-1157 Error Found in B&R
Note.198640.1 - How to Recover from a Lost Datafile with Different Scenarios
Note:183327.1 - Common Causes and Solutions on ORA-376 Error Found in B&R
Note:183367.1 - Common Causes and Solutions on ORA-1113 Error Found in B&R
Note:117481.1 - Loss Of Online Redo Log And ORA-312 And ORA-313


online redolog missing..

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=186137.1

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230829.1

Recovery with missing System tablespace.

Recover database after disk loss [ID 230829.1]
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230829.1

How to Recover from a Lost Datafile with Different Scenarios [ID 198640.1]
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=198640.1

Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery [ID 184327.1]

https://support.oracle.com/CSP/main/article?cmd=show&id=184327.1&type=NOT

Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g/11g [ID 28814.1]

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=28814.1

Friday, May 14, 2010

Scenarios

1. Loss of Inactive/current/Active Online Redo Log group/file..

Redo logs
---------
In normal cases, we would not have backups of online redo log files. But the
inactive logfile changes could already have been checkpointed on the datafiles
and even archive log files may be available.

SQL> startup mount
Oracle Instance Started
Database mounted
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/ORACLE/ORADATA/H817/REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

** Verify if the lost redolog file is Current or not.
SQL> select * from v$log;
SQL> select * from v$logfile;

--> If the lost redo log is an Inactive logfile, you can clear the logfile:

SQL> alter database clear logfile '/ORACLE/ORADATA/H817/REDO01.LOG';

Alternatively, you can drop the logfile if you have at least two other
logfiles:
SQL> alter database drop logfile group 1;


--> If the logfile is the Current logfile, then do the following:
SQL> recover database until cancel;

Type Cancel when prompted

SQL>alter database open resetlogs;


The 'recover database until cancel' command can fail with the following
errors:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error
below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/ORACLE/ORADATA/H817/SYSTEM01.DBF'

In this case , restore an old backup of the database files and apply the
archive logs to perform incomplete recovery.
--> restore old backup
SQL> startup mount
SQL> recover database until cancel using backup controlfile;
SQL> alter database open resetlogs;
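The restore-and-recover sequence above amounts to a fixed block of sqlplus input. A sketch that composes it; CANCEL is what you type at the recovery prompt to stop applying redo.

```shell
# Incomplete recovery after restoring an old backup, as sqlplus input.
STEPS="startup mount
recover database until cancel using backup controlfile;
CANCEL
alter database open resetlogs;"
# Print only; feed to sqlplus interactively on a real system.
printf '%s\n' "$STEPS"
```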


If the database is in noarchivelog mode and the ORA-1547, ORA-1194 and ORA-1110 errors occur, then you would have to restore from an old backup and start the database.

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230829.1
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=186137.1

Losing an Active Online Redo Log Group
If the database is still running and the lost active log is not the current log, then issue the ALTER SYSTEM CHECKPOINT statement. If successful, your active log is rendered inactive, and you can follow the procedure in "Losing an Inactive Online Redo Log Group". If unsuccessful, or if your database has halted, perform one of these procedures, depending on the archiving mode.
Note that the current log is the one LGWR is currently writing to. If a LGWR I/O fails, then LGWR terminates and the instance crashes. In this case, you must restore a backup, perform incomplete recovery, and open the database with the RESETLOGS option.
To recover from loss of an active online redo log group in NOARCHIVELOG mode:
1. If the media failure is temporary, correct the problem so that Oracle can reuse the group when required.
2. Restore the database from a whole database backup using an operating system utility. For example, enter:
NT > !copy c:\backup\*.dbf c:\data\
3. Mount the database:
SVRMGR> startup mount;
4. Open the database using the RESETLOGS option:
SVRMGR> alter database open resetlogs;
5. Shut down the database normally:
SVRMGR> shutdown immediate;
6. Perform a closed whole backup
To recover from loss of an active online redo log group in ARCHIVELOG mode:
1. If the media failure is temporary, then correct the problem so that Oracle can reuse the group when required.
2. Perform incomplete media recovery. Use the procedure given in Loss of Current Redo Logs, recovering up through the log before the damaged log.

Loss of Current Redo Logs -- database is Closed.

If the database is closed, a media failure may have occurred or a background process may have terminated. Use the steps below to rectify this situation:

1. Attempting to open the database will immediately notify you that the current redo log group is missing.
SVRMGR> startup;
ORACLE instance started.
Total System Global Area 73418780 bytes
Fixed Size 75804 bytes
Variable Size 56487936 bytes
Database Buffers 16777216 bytes
Redo Buffers 77824 bytes
Database mounted.
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\INFOMAX\REDO03.LOG'
2. Since log group 3 is the current log group, it will not have been archived. The "clear logfile" command is of no use here:

SVRMGR> alter database clear unarchived logfile group 3;
ORA-01624: log 3 needed for crash recovery of thread 1
ORA-00312: online log 1 thread 1: 'C:\ORACLE\ORADATA\INFOMAX\REDO03.LOG'
3. Incomplete recovery is therefore required. First, you must note the current log sequence number:
SQL> select * from v$log;
GROUP# SEQUENCE# BYTES   MEMBERS ARC STATUS   FIRST_TIME
------ --------- ------- ------- --- -------- ----------
1      448       1048576 1       YES INACTIVE 09-MAR-01
2      449       1048576 1       YES INACTIVE 09-MAR-01
3      450       1048576 1       NO  CURRENT  09-MAR-01

4. Restore all datafiles from a previous backup:
UNIX> !cp /disk1/backup/*.dbf /disk2/data/
5. Recover using "Until Cancel"
SVRMGR> recover database until cancel;
6. NOTE: When it comes to applying redo log #450, type cancel.
7. Open the database using "resetlogs" options:
SVRMGR> alter database open resetlogs;
8. The database should now be operational, since any missing logfiles will be recreated.
NOTE: If the logfiles need to be recreated on another disk due to media failure, use the "alter database drop logfile group" and "alter database add log group" commands to manually create the log files. See Re-creating Redo Logs.
9. Since you just performed incomplete recovery, the database should now be backed up.

The following information is useful for implementation of recovery on redolog group

How to recover using the online redo log
========================================

PURPOSE
-------

To easily and quickly find out if the online redo log files can be used to recover a database.

AUDIENCE
--------

This document is addressed to DBAs that want to quickly find the best recovery solution in case of a database crash.

HOW TO
------

Many databases today are run without archive logging enabled, which reduces the available options for quickly recovering a database. Basically two options are available:

a) restore from a backup
b) recover the database using the online redo logs.

Option a) is straightforward and will not be covered here. The only important
thing to mention is that option a) WILL cause loss of data if there have
been updates/inserts to the database since the backup was taken.

Let us instead take a look at option b):

In case of a database crash or a database that will not startup
due to ORA-1110, ORA-1113 or both, we first need to identify which
files need to be recovered.

1) First we mount the database then issue the following query:

select * from v$recover_file;

This will give us a list of the files that need media recovery. It
will also give us CHANGE#, i.e. the SCN where the media recovery
must start. To get the name of the file use the FILE# column and
query against v$datafile like this:

select name from v$datafile
where file# = <file_number>;

or like this in 9i:

select substr(d.name,1,30) fname
, r.online_status
, r.error
, r.change#
, r.time
from v$datafile d, v$recover_file r
where d.file# = r.file#

2) Next we do:

archive log list

This will give us the current log sequence. We need the current
log sequence -1. This will give us the last redo log file that
was in use.

3) As the last step we do:
select recid
, stamp
, sequence#
, first_change#
, next_change#
from v$log_history
where recid = (current log sequence - 1)


This will show us the NEXT_CHANGE#, i.e. the highest SCN, in the redo
log file. It will also give us the FIRST_CHANGE# SCN in
this redo log file. We need these 2 SCN numbers to find out if we
can use the redo log file for recovery against the file(s) found
in 1). If the CHANGE# from 1) is between the FIRST_CHANGE# and the
NEXT_CHANGE# then we can use the redo log file for recovery.

If the CHANGE# from 1) is lower than FIRST_CHANGE# we need to
investigate an earlier online redo log file.
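The comparison in the last two paragraphs is simple arithmetic: the log can be used when FIRST_CHANGE# <= CHANGE# < NEXT_CHANGE#. A sketch with example SCNs (the real values come from v$recover_file and v$log_history as described in steps 1-3):

```shell
# Decide whether a given online redo log covers the recovery start SCN.
# The three SCNs below are examples only.
CHANGE=12599; FIRST_CHANGE=12000; NEXT_CHANGE=13000
if [ "$CHANGE" -ge "$FIRST_CHANGE" ] && [ "$CHANGE" -lt "$NEXT_CHANGE" ]; then
  VERDICT="log usable for recovery"
else
  VERDICT="check an earlier log"
fi
echo "$VERDICT"
```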

When we have identified which redo log files to use for the recovery we
perform the recovery using the redo log file(s) in the following way:
- mount the database
- recover database

The recover process will now display something similar to the following:

ORA-00279: change 12599 generated at 08/18/98 13:25:48 needed for thread 1
ORA-00289: suggestion : /oracle/OFA_base/app/oracle/admin/NE804DB1/arch/129.arc
ORA-00280: change 12599 for thread 1 is in sequence #129
Specify log: {=suggested | filename | AUTO | CANCEL}

As the database is not in ARCHIVELOG mode we will not have the 129.arc
file. Instead the name of the redo log file must be entered on the
command line. The filename must include the full path. After the redo
log file has been applied the recover process will return: Log applied.
At this stage 2 things can happen:

1) Media Recovery Completed
2) or additional redo log files must be applied

If 2) then just specify the next redo log file on the command line and
continue to do so until Media Recovery Completed is displayed. Then the
database can be opened with: alter database open;

If we do not find any online redo log files which covers the CHANGE#
from 1) we cannot do a recover of the database or datafile(s). This
means we are left with only 2 options of bringing the database back online:

1) restore from a valid backup taken before the crash. Doing so and
running the database in NOARCHIVELOG MODE will cause a loss of data.
This is unavoidable.

2) force the database open. This will override Oracle's internal
datafile synchronisation and consistency check. The result is an
inconsistent database. The database MUST now be exported and rebuilt,
as an inconsistent database is unreliable and hence not supported.
This last option should only be used in cooperation with Oracle Support.

ALTER SYSTEM SWITCH LOGFILE and CHECKPOINT:
Question: Will "alter system switch logfile" trigger checkpoint?
Answer: It will trigger a complete checkpoint only when the active redo log group is about to be overwritten in the circular write cycle. In other words, "alter system switch logfile" may trigger a complete checkpoint, but not always.

http://www.dbaglobe.com/2010/02/alter-system-switch-logfile-and.html

1) Alter system switch logfile.

Well, you can issue this even when your database is not in archive log mode. What it does is switch from one log file to the next even though the current one is not full.
If the database is in archive log mode, then after the log switch an archive of the logfile is created. This archives only inactive logs.

2) Alter system archive log all;

This will archive all your inactive redo log files but not your current redo log file.
In scenarios where you need to manually archive your existing redo log files, you use alter system archive log all;

3) Alter system archive log current;
This will archive all inactive redo log files and also your current redo logfile.

**Note: to fire alter system switch logfile or alter system archive log current the database has to be open,
while alter system archive log all can be issued even in the mount state.

http://forums.oracle.com/forums/thread.jspa?messageID=3296310

alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';

ALTER SYSTEM ARCHIVE LOG CURRENT and SCN

TESTING1 >select max(next_change#) from v$archived_log;

MAX(NEXT_CHANGE#)
-----------------
4095858

TESTING1 >select current_scn from v$database;

CURRENT_SCN
-----------
4097620

Current changes are written to archived log triggered by "alter system archive log current".

TESTING1 >alter system archive log current;

System altered.

TESTING1 >select max(next_change#) from v$archived_log;

MAX(NEXT_CHANGE#)
-----------------
4097634

TESTING1 >select current_scn from v$database;

CURRENT_SCN
-----------
4097639

http://www.dbaglobe.com/search/label/Oracle

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=186137.1

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230829.1

Ora Presentations

http://www.juliandyke.com/Presentations/Presentations.html

Wednesday, May 5, 2010

11g & 10g new background processes..

11g:

The following processes were added in 11g as new background processes.

1 dbrm DB Resource Manager
2 dia0 Diagnosability process
3 fbda Flashback Data Archiver process
4 vktm Virtual Timekeeper
5 w000 Space management worker process
6 smco Space Management Coordinator process

NOTE : The above six are mandatory processes.

In all, 11g added 56 new processes, which can be queried using

select name,description from V$bgprocess;
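At the OS level the mandatory new processes show up as ora_<name>_<SID> entries in `ps -ef`. A sketch of the filter; the sample lines below stand in for real `ps` output, and the instance name ORCL is an example.

```shell
# Filter a process listing for the new 11g background processes.
# SAMPLE substitutes for: ps -ef | awk '{print $NF}'
SAMPLE="ora_dbrm_ORCL
ora_lgwr_ORCL
ora_vktm_ORCL"
FOUND="$(printf '%s\n' "$SAMPLE" | grep -E 'ora_(dbrm|dia0|fbda|vktm|smco)_')"
printf '%s\n' "$FOUND"
```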

10g background processes:

MMAN

Memory Manager (MMAN) coordinates the sizing of different memory components within SGA. MMAN keeps a track of sizes of components and allocates/de-allocates memory based on their usage. This process is used only when you enable Automatic Shared Memory Management.


RVWR

Process responsible for writing flashback logs which stores pre-image of data-blocks.
These logs can be used to flash back database to past point in time. It proves very handy specially in situation of logical data corruptions like an accidental drop/truncate of a table or commit of an unwanted transaction.

CTWR

Change Tracking Writer (CTWR) Process introduced in 10g to track changes to block to assist fast RMAN incremental backups.

MMNL

The Memory Monitor Light (MMNL) process introduced in 10g is responsible for writing out statistics buffer to disk as needed. It works with Automatic Workload Repository (AWR)

MMON

The memory monitor (MMON) process, introduced in 10g, is responsible for gathering statistics for the Automatic Workload Repository. This helps Oracle do automatic problem detection and self-tuning.

M000

MMON background slave (m000) processes.

RBAL

RBAL (Re-balancer) is the ASM related process that performs rebalancing of disk resources controlled by ASM.

ARBx

These processes are managed by the RBAL process and are used to do the actual rebalancing of ASM controlled disk resources. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.

ASMB

The ASMB process is used to provide information to and from the Cluster Synchronization Services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.

SAN Vs NAS

Good Article on SAN Vs NAS

http://www.nas-san.com/differ.html

A SAN commonly uses Fibre Channel.

Any computer that connects to a LAN can act as NAS, using NFS or CIFS.

NAS uses TCP/IP and the NFS/CIFS/HTTP protocols.

SAN uses an encapsulated SCSI protocol.

The basic difference between SAN and NAS is the kind of I/O operation:
if it is block I/O, it is SAN; if it is file I/O, it is NAS.

Tuesday, April 27, 2010

How to connect to SQL Server from cmd

Syntax:
C:\> net use \\server_name /u:corporate_domain\your_sqldba_id
C:\> query session /server:server_name

C:\xx_dba>net use \\srvr_name /u:c.network\xx_dba

The command completed successfully.

C:\xx_dba>query session /server:srvr_name
SESSIONNAME USERNAME ID STATE TYPE DEVICE
console 0 Conn wdcon
rdp-tcp 65536 Listen rdpwd
rdp-tcp#26 xx_dba 2 Active rdpwd

(or you can use tsadmin)

Connect with osql from the command prompt on the local machine:

C:\Documents and Settings\xx_dba>osql -E -S SERVER_NAME
1> select spid,cmd,blocked from sysprocesses where blocked>0
2> go
spid   cmd              blocked
------ ---------------- -------

(0 rows affected)

Connect with osql from a remote machine or from your local box (the SQL client tools should be installed and on the PATH):

H:\>osql -U -P -S SERVER_NAME

Tuesday, April 20, 2010

What's an Oracle DBA and an Oracle Applications DBA?

Basically, to maintain Oracle Applications (11i or earlier versions) you need mainly three kinds of expertise, apart from others:
Oracle Apps Technical
Oracle Apps Functional
Oracle Apps DBA
An Oracle Apps DBA should be someone who has knowledge of Oracle Database administration along with skills such as:
Oracle Applications cloning (database clone included), with Rapid Clone, AutoConfig clone, etc.
Patching using the adpatch utility (patchsets, Family Packs, etc.)
Knowledge of MLS
Understanding application issues with respect to the database
Conceptual knowledge of Apache, JServ, Forms & Reports Server, the Concurrent Manager, etc.
Comfort with OA Framework, iProcurement, iPayment, Configurator, Workflow, etc.
Refer to these articles:
http://teachmeoracle.com/
http://onlineappsdba.com/index.php/2006/07/29/how-2-become-oracle-apps-dba/
Apps contents: http://download-uk.oracle.com/docs/cd/B25516_08/current/html/docset.html
Basic info: go through the first chapter in the link below
http://download-uk.oracle.com/docs/cd/B25516_08/current/acrobat/11iconcepts.pdf



Friday, April 16, 2010

Oracle 11gR2 Performance Tuning


Tools for tuning the db:

Oracle Database 11g Enterprise Edition
Oracle Enterprise Manager

Oracle Diagnostics Pack

- AWR, Automatic Workload Repository

- ADDM, Automatic Database Diagnostic Monitor

- ASH, Active Session History

Oracle Database Tuning Pack

--SQL Tuning Advisor
This feature enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations for how to tune statements, along with a rationale for each recommendation and its expected benefit. A recommendation relates to collection of statistics on objects, creation of new indexes, restructuring of the SQL statements, or creation of SQL profiles.
– SQL Access Advisor
This feature enables you to optimize data access paths of SQL queries by recommending the proper set of materialized views and view logs, indexes, and partitions for a given SQL workload.

Oracle Real Application testing

- Database Replay

- SQL Performance analyzer


Performance improvement is an iterative process: removing the first bottleneck may not improve performance immediately, because another bottleneck may have a greater performance impact. Accurately diagnosing the performance problem is therefore the first step.

Performance problems result in a lack of throughput (the amount of work that can be completed in a specified time) and unacceptable user or job response times.

Gathering Database Statistics Using the Automatic Workload Repository

By default, the database gathers statistics every hour and creates an AWR snapshot, which is a set of data for a specific time that is used for performance comparisons
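As a sketch, the snapshot interval and retention can be changed, or a snapshot taken on demand, with the DBMS_WORKLOAD_REPOSITORY package (the values shown are examples only, not recommendations):

```sql
-- Snapshot every 30 minutes, keep 15 days (both arguments are in minutes)
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 15*24*60);

-- Take a manual snapshot right now
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
```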

Initialization parameters are relevant for AWR:

STATISTICS_LEVEL, CONTROL_MANAGEMENT_PACK_ACCESS

statistics_level - TYPICAL or ALL -- for enabling AWR reports

control_management_pack_access - DIAGNOSTIC+TUNING -- for enabling ADDM
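A minimal sketch of setting both parameters (assuming the instance uses an SPFILE):

```sql
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE=BOTH;
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC+TUNING' SCOPE=BOTH;
```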

Setting the DBIO_EXPECTED Parameter
ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED, that describes the expected performance of the I/O subsystem. The value of DBIO_EXPECTED is the average time it takes to read a single database block, in microseconds. Oracle Database uses a default value of 10 milliseconds.

Set the value one time for all subsequent ADDM executions.For example, if the measured value is 8000 microseconds, then execute the following PL/SQL code as the SYS user:
EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);

The database statistics collected and processed by AWR include:

■ Time Model Statistics
■ Wait Event Statistics
■ Session and System Statistics
■ Active Session History Statistics
■ High-Load SQL Statistics


Common Performance Problems Found in Oracle Databases

CPU Bottlenecks

Undersized Memory Structures

I/O Capacity issues

Suboptimal use of Oracle Database by the application

Concurrency issues

Database configuration issues
Short-lived performance problems

Degradation of database performance over time

Inefficient or high-load SQL statements

Object contention

Unexpected performance regression after tuning SQL statements

Wednesday, April 14, 2010

Sybase Prod issues..

Tempdb full

–Check the error log if it is the data segment or the log segment that is full
–If it is the log segment, do the following sequentially:
Try to truncate the log
dump tran tempdb with truncate_only
dump tran tempdb with no_log
Check the spid causing the log to get full
select * from master..syslogshold where dbid = 2
Or use d_dbdba002, for the tables syslocks_copy, sysprocesses_copy, syslogshold_copy to check the process that has caused tempdb to get full
Check with the user about the spid identified and kill the spid after confirmation with the application contact
Try to abort the transaction using lct_admin
Select lct_admin("abort",spid)
spid in the syntax is that of the process that initiated the transaction. This also terminates any other suspended transactions in the log that belong to the specified process. Ensure that this is notified to the user and user approval is taken prior to executing this command
Retry executing the dump tran commands
Alter the tempdb on the device that has space

If there is no space on any of the existing devices then create a database device in the location : /app/sybase//tempdb
Check for the name of the last tempdb device created on this server
-Create the new device in the sequential order as per the name in the server
In worst case scenario where no space is available anywhere in the server, extend the tempdb on any device with free space available as the tempdb can be shrunk during a subsequent restart of the server by making changes to the sysusages table

Blocking process

–Execute the following query to check the process that is blocking:
select spid, suser_name(suid), db_name(dbid), blocked from sysprocesses where blocked > 0
Check the physical io for the process:
sp_whodo 'BlockingSpid'
select spid, suser_name(suid), db_name(dbid), physical_io, memusage from sysprocesses where spid = BlockingSpid
sp_lock 'BlockingSpid'

Open tran issue..


–Execute the following query to check the process that is open:
select db_name(dbid), spid, starttime, name from master..syslogshold where dbid = db_id('dbname') order by starttime
Check the physical io for the Open Transaction:
sp_whodo 'OpenTranSpid'
select spid, suser_name(suid), db_name(dbid), physical_io, memusage from sysprocesses where spid = OpenTranSpid
sp_lock 'OpenTranSpid'

LogSuspend issue..

Check the spid causing the log to get full
select db_name(dbid), spid, starttime, name from master..syslogshold where dbid = db_id('dbname')
Check with the user if the spid has been identified and kill the spid after confirmation with the application contact
If the spid belongs to the truncation point for a replicate database check if there is an issue with replication. If so, correct the problem with replication
Check if the transaction log can be dumped:
Execute the ncycle job for dumping the transaction
Execute dump tran dbname with truncate_only (This will invalidate the future tran dumps)
Execute dump tran dbname with no_log (This will invalidate the future trans dumps)
Check for an existing device where the database can be extended and extend the database
Alter the database on the device that has space
Ensure that the database does not have a data fragment on that device already

Try to abort the transaction using lct_admin
Select lct_admin("abort",spid)
Performance issues..


–Log into the server and check the following:
Check if there are any known issues with that server/region (outages etc.)
Execute sp_monitor and check for CPU utilization
Check if there is blocking
Check if there are issues with the tempdb
Check if there are any open transactions
Ask the application user, if he is aware of the process that is causing the problem
If yes, get the query plan of the process
Look out for table scans, deferred modifications. If you find any, suggest using indexes for the query
If the query is using the correct indexes, check from when has the issue been recurring. If it is a recent issue, check for the following:
Was update statistics successfully executed
Did the index rebuild job complete successfully

If the application user is not aware as to what process is causing the problem, execute the following query:
select spid,status,hostname,program_name,hostprocess,cmd, cpu,physical_io from sysprocesses where status like 'runn%'
Generally the spid showing cpu and physical_io as 0 is the prime candidate to be killed
How to use VCS Commands

Login to the host as sybase to start the servers...
1) STATUS
/opt/VRTSvcs/bin/hastatus -sum | grep sg_dbservername

2) SHUTDOWN
/opt/VRTSvcs/bin/hagrp -offline sg_dbservername -sys lns66h-6001e
3) START
/opt/VRTSvcs/bin/hagrp -online sg_dbservername -sys lns66h-6001e

About Quorum resource in Windows Cluster

Quorum resource
Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Quorum resource
In every cluster, a single resource is designated as the quorum resource. This resource maintains the configuration data necessary for recovery of the cluster. This data, in the form of recovery logs, contains details of all of the changes that have been applied to the cluster database. This provides node-independent storage for cluster configuration and state data.

Quorum resource and the cluster database
The cluster database is an integral part of the formation of a server cluster. When a node joins or forms a cluster, the Cluster service must update the node's private copy of the cluster database. When a node joins an existing cluster, the Cluster service can retrieve the data from the other active nodes. However, when a node forms a cluster, no other node is available. The Cluster service uses the quorum resource's recovery logs to update the node's cluster database. To ensure cluster unity, Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition use the quorum resource to ensure that only one set of active, communicating nodes is allowed to operate as a cluster. A node can form a cluster only if it can gain control of the quorum resource. A node can join a cluster or remain in an existing cluster only if it can communicate with the node that controls the quorum resource.
For more information on the cluster database, see
Cluster database.
For more information on cluster resources, see
Server Cluster Resources.
Scenarios
To illustrate the importance of the quorum resource, consider these situations in a simple two-node cluster:
Node 1 fails. Node 2 continues operating, writing changes to the cluster database. But before node 1 can be restarted, node 2 fails. When node 1 becomes active, it must update its private copy of the cluster database with the changes made by node 2. The Cluster service uses the quorum resource's recovery logs to perform the update.
The networks providing communication between nodes 1 and 2 fail. Each node assumes the other node has failed and attempts to continue operating as the cluster. If both could succeed, the result would be two separate clusters using the same cluster name and competing for the same resources. Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition use quorum resource ownership to prevent this problem and maintain cluster unity. In this scenario, the node that gains control of the quorum resource is allowed to form a cluster, and the other fails over its resources and becomes inactive.
The quorum resource can be any resource with the following attributes:
The resource must enable a single node to gain physical control of it and defend its control.
The resource must provide physical storage that can be accessed by any node in the cluster.
The resource must use the NTFS file system.

Types of quorum resources
There are two resources that come with the operating system that can act as a quorum resource for multinode clusters. They are:
The Physical Disk resource
The Majority Node Set resource
For single-node clusters, the Local Quorum resource acts as a quorum resource. However, other developers can create their own quorum resource types for any resources that meet the arbitration and storage requirements.

Thursday, April 8, 2010

SQL Server Basic Prod DBA commands..

Blocking in db server..
select spid,cmd,blocked from sysprocesses where blocked>0
select spid,blocked,cmd,memusage,physical_io from sysprocesses order by memusage desc
select spid,blocked,cmd,memusage,physical_io from sysprocesses order by physical_io desc
select spid, loginame, db_name (dbid) 'dbname',status, cmd,hostname from sysprocesses where db_name (dbid) = 'P_DBAPMCOR01' order by dbid

sp_who2 "spid"
dbcc traceon(3604)
dbcc inputbuffer(spid)
dbcc sqltext(spid)
sp_showplan spid,null,null,null
Space Usage check for tempdb or Userdb:
xp_fixeddrives
dbcc sqlperf(logspace)
dbcc shrinkdatabase(tempdb)
sp_helpdb d_dbdba002 - db name
dbcc shrinkfile('L001') - logfilename
other commands
sp_whodo n_cycle
xp_cmdshell "tlist"
xp_cmdshell "nc_restart servername jobname"
xp_readerrorlog
select @@errorlog
select @@boottime
select @@servername
select getdate()
select @@version
nc_task "jobname"
select crdate from sysdatabases where dbid=2
select suser_name(suid),count(*) from sysprocesses group by suid
select paramname, paramvalue from d_dbdba002..task_params where paramname = 'DUMP_GENERATIONS'
select loginname , createdate,updatedate,accdate from master..syslogins where loginname like '%sysfaonycapmsvc%'
Last Gather Stats
The STATS_DATE function takes two arguments, table_id and index_id. Following is the SQL followed by its output. You can run it from Query Analyzer or Management Studio.
SELECT t.table_name, i.name as Index_Name, STATS_DATE(i.id, i.indid) as Statistics_last_collected FROM information_Schema.tables t INNER JOIN sysindexes i ON object_id(t.table_name) = i.id WHERE t.table_type = 'BASE TABLE' AND i.Name is NOT NULL AND i.Name not like '_WA_Sys%'
GO
Connecting to SQL Server from the command prompt:
H:\>net use \\GPRI11D11013 /user:csfb\sys0amdbadm1
Enter the password for 'csfb\sys0amdbadm1' to connect to 'GPRI11D11013':
The command completed successfully.
H:\>osql -S GPRI11D11013 -E


Tuesday, April 6, 2010

Future of Database Management...

http://www.dba-oracle.com/art_dbazine_2020_p2.htm

By 2015, the automation of many of the Oracle DBA functions led to the Oracle professional accepting responsibility for a whole new set of duties:
Data modeling and Oracle database design
Data interface protocols
Managing data security
Managing development projects
Predicting future Oracle trends for hardware usage and user load

http://oracle4dbas.blogspot.com/

Refreshing.....

OraInventory
The first time you install Oracle Database software on a server, you are prompted to specify the location of the inventory directory, called oraInventory. This directory provides a centralized inventory of all Oracle software products installed on the server. You should use the same value for the Oracle inventory directory each time you perform an Oracle software installation on the server.
Below is some good information from Atul's blog:
What is oraInventory? oraInventory is a repository (directory) which stores/records Oracle software products and their ORACLE_HOME locations on a machine. This inventory is nowadays in XML format (called the XML inventory), whereas in the past it used to be in binary format (called the binary inventory). There are basically two kinds of inventory: the Global Inventory (also called the Central Inventory) and the Local Inventory (also called the Oracle Home inventory).

Global Inventory: the Global Inventory holds information about the Oracle products on a machine. These products can be various Oracle components like the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports Server, or Discoverer Server. The Global Inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (Solaris). If you want to see the list of Oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory (note: if you have multiple Global Inventories on the machine, check all oraInventory directories). You will see an entry like:
HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"

Local Inventory: the inventory inside each Oracle home is called the Local Inventory or ORACLE_HOME inventory. This inventory holds information for that ORACLE_HOME only.

Can I have multiple Global Inventories on a machine? A quite common question, and the answer is YES, you can have multiple Global Inventories, but if you are upgrading or applying a patch, change the inventory pointer oraInst.loc to the respective location. If you are using a single Global Inventory and you wish to uninstall any software, remove it from the Global Inventory as well.

What to do if my Global Inventory is corrupted? No need to worry: you can recreate the Global Inventory on the machine using the Universal Installer and attach the already installed Oracle home with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name" CLUSTER_NODES="{}"

Simply...oraInventory is where oracle stores the inventory details of all the patches and components it has installed.

1) oraInventory can be placed anywhere; its location can be determined from inventory_loc in oraInst.loc. OUI usually looks for oratab and oraInst.loc at:
/var/opt/oracle/oraInst.loc (typical)
/etc/oraInst.loc (AIX and Linux)
HKEY_LOCAL_MACHINE -> Software -> Oracle -> INST_LOC (Windows platforms)
2) Correct. "oraInstRoot.sh" will be created for you by OUI, and once you run it successfully you do not need it anymore.
3) Please note that there are two inventories with the newer releases of OUI (2.x and higher):
- The inventory in the ORACLE_HOME (Local Inventory)
- The central inventory directory outside the ORACLE_HOME (Global Inventory)
For more details about the two inventories, please check Note 360079.1 - Global and Local Inventory explained
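For reference, a typical oraInst.loc is just two lines (the path and group below are common examples, not fixed values):

```
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```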


If this is the first time you are installing any Oracle software on this computer, then the Create Inventory Directory window appears. You must specify a location for the inventory, which OUI uses to keep track of all Oracle software installed on the computer. This information is used while applying patches or upgrading an existing installation, and while deinstalling Oracle software. Note that this directory is different from the Oracle home directory. The recommended value for the inventory directory is Oracle_base/../oraInventory, or one level above the Oracle base directory, in the oraInventory subdirectory. If your Oracle base directory is /u01/app/oracle, then the Oracle inventory directory defaults to /u01/app/oraInventory.