Saturday, November 28, 2020

ORA-22275: invalid LOB locator specified: do we really have LOB columns in the table?



I came across a customer problem: a query returned "ORA-22275: invalid LOB locator specified" in the application logs. The query joins just two tables, and neither table has any LOB columns; every column is a NUMBER, CHAR, or VARCHAR2 data type.

The interesting part is that the same query executes successfully from the SQL*Plus command prompt, SQL Developer, and other tools, but not from applications connecting over JDBC. The query gave the same error when run from an OEM worksheet too.

 

SELECT
    ..
    cmtf.t_context,
    ..
FROM
    table1       cmt
    LEFT OUTER JOIN
    table1_fld   cmtf ON cmt.per_name = cmtf.per_name
WHERE
    cmt.per_name = 'MAX RANO KING'
ORDER BY
    cmtf.seq_num,
    cmtf.loc_name;


There may be other ways to investigate, but the method I followed was:

1. Check whether the tables have LOB columns: none of them do.

2. The t_context column in both tables is VARCHAR2(4000).
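Step 1 above can be verified with a data-dictionary query. This is a sketch; MYSCHEMA is a placeholder for the schema that owns the two tables:

```sql
-- List any LOB-typed columns in the schema; no rows means no LOB columns
SELECT table_name, column_name, data_type
FROM   all_tab_columns
WHERE  owner = 'MYSCHEMA'
AND    data_type IN ('CLOB', 'NCLOB', 'BLOB', 'BFILE');
```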

Can a VARCHAR2(4000) column really cause this error? The testing I did made me believe the answer is YES. Read on to see how.

I ran this query to see which NLS_LENGTH_SEMANTICS value each calling environment's session picked up:

SELECT db.parameter,
       db.value "DATABASE",
       i.value  "INSTANCE",
       s.value  "SESSION"
FROM   nls_database_parameters db,
       nls_instance_parameters i,
       nls_session_parameters  s
WHERE  db.parameter = i.parameter(+)
AND    db.parameter = s.parameter(+)
AND    db.parameter = 'NLS_LENGTH_SEMANTICS'
ORDER BY db.parameter;

From the SQL*Plus command prompt, SQL Developer, and the other GUI tools, NLS_LENGTH_SEMANTICS=BYTE and the query executed successfully, whereas in the OEM worksheet (and perhaps in the application session too) NLS_LENGTH_SEMANTICS was set to CHAR.
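One quick way to test the theory is to flip the semantics within a single session and rerun the query. This is a sketch, assuming the session is allowed to alter its own NLS settings; it is not something from the original investigation steps:

```sql
-- Match the failing environments: CHAR semantics
ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'CHAR';
-- rerun the problem query here; the environments with CHAR were the failing ones

-- Match the working environments: BYTE semantics
ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'BYTE';
-- rerun the problem query here; the environments with BYTE were the working ones
```

If the outcome changes with nothing else touched, that isolates the length semantics as the deciding factor.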

I don't see any other problem or any other difference between the environments.

Does it really matter, even if the data in that column never reaches the length specified at the column level?

Is it an inconsistency in storage semantics or NLS settings that we should address in the application? Is it because the session NLS parameters are not compatible with the database or instance NLS parameters? My testing makes me believe the answer is YES.

However, more to come on this as I find out more.

Any comments, inputs, or help are welcome. Thanks.




Configuring Local Undo and Configuring Shared Undo

Configuring Local Undo:

Shut down the instance and restart it in upgrade mode:

SQL> shutdown immediate;

SQL> startup upgrade;

SQL> show con_name

CON_NAME

-----------------------------

CDB$ROOT

Change the Undo mode to “Local Undo”:

SQL> alter database local undo on;

Restart the instance:

SQL> shutdown immediate;

SQL> startup;

Verify that the Local Undo is now used:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE

-------------------- ---------------

LOCAL_UNDO_ENABLED TRUE
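As a further check (a sketch, not part of the original steps): once local undo is enabled and the PDBs are reopened, each PDB is expected to get its own undo tablespace, which can be confirmed from the root:

```sql
-- With local undo, expect an UNDO tablespace per container
SELECT con_id, tablespace_name
FROM   cdb_tablespaces
WHERE  contents = 'UNDO'
ORDER BY con_id;
```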


Configuring Shared Undo

Shut down the instance and restart it in upgrade mode:


SQL> shutdown immediate;

SQL> startup upgrade;

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT


Change the Undo mode to “Shared Undo”:

SQL> alter database local undo off;

Restart the instance:

SQL> shutdown immediate;

SQL> startup;

Verify that the new Undo mode is now used:


SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME PROPERTY_VALUE

-------------------- ---------------

LOCAL_UNDO_ENABLED FALSE


Thursday, November 12, 2020

How to export and import schema stats from one database to another database in Oracle


This is not just about moving schema stats from one database to another; one of the problems I have come across is the delay when exporting or importing a whole schema with statistics included.

I found this approach useful for making my exports and imports faster; the solution is FYI only, so test it and see for yourself.

The steps for exporting and importing schema stats, in summary:

Create the STATS table.

Export the statistics to the STATS table.

Export the STATS table using export (exp) or Data Pump (expdp).

Transfer the dump file to the destination database.

Import the STATS table to the destination database.

In detail:

Please note: gather stats on the schema as you see fit prior to the export and import, just in case.
You may use code like this, with more options added as needed:



begin
    dbms_stats.gather_schema_stats('MYSCHEMA_DEV');
end;
/
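As an illustration of the "more options" mentioned above, gather_schema_stats accepts parameters such as estimate_percent, cascade, and degree. A sketch with commonly used values; adjust them to your environment:

```sql
begin
    dbms_stats.gather_schema_stats(
        ownname          => 'MYSCHEMA_DEV',
        estimate_percent => dbms_stats.auto_sample_size,  -- let Oracle choose the sample size
        cascade          => true,                         -- gather index statistics as well
        degree           => 4                             -- parallel degree; adjust as needed
    );
end;
/
```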

1. Create the STATS table.

begin
    dbms_stats.create_stat_table( ownname => 'MYSCHEMA_DEV', stattab => 'exp_stats');
end;
/


2. Export the statistics to the STATS table.
begin
    dbms_stats.export_schema_stats( ownname => 'MYSCHEMA_DEV', stattab => 'exp_stats');
end;
/


3. Export the STATS table using export (exp) or Data Pump (expdp).
This can be done along with a full schema export, or the stats table alone can be exported, using either:
1. the expdp command, or
2. the DBMS_DATAPUMP API procedures.

expdp SCHEMAS=MYSCHEMA_DEV DIRECTORY=dumps parallel=24 DUMPFILE=MYSCHEMA_DEV_%U.dmp LOGFILE=MYSCHEMA_DEV_exp.log COMPRESSION=ALL CLUSTER=N METRICS=Y LOGTIME=ALL EXCLUDE=STATISTICS

Note that EXCLUDE=STATISTICS skips the optimizer statistics metadata in the dump; the EXP_STATS table itself is still exported, since it is an ordinary table.
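For the DBMS_DATAPUMP alternative, a table-mode job restricted to the stats table could look like the sketch below. This assumes a directory object named DUMPS exists and the user has the required Data Pump privileges:

```sql
-- Export only the EXP_STATS table via the DBMS_DATAPUMP API
DECLARE
    h         NUMBER;
    job_state VARCHAR2(30);
BEGIN
    h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');

    DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'exp_stats.dmp', directory => 'DUMPS');
    DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'exp_stats.log', directory => 'DUMPS',
                           filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

    -- Restrict the job to the stats table in the source schema
    DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => q'[= 'MYSCHEMA_DEV']');
    DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR',   value => q'[= 'EXP_STATS']');

    DBMS_DATAPUMP.START_JOB(h);
    DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => job_state);
END;
/
```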

4. Transfer the dump file to the destination database.
Use sftp, WinSCP, or whatever you think fits.

5. Import the dump file into the destination database (using imp or impdp) to recreate the EXP_STATS table, then load the statistics from it into the data dictionary:

begin
    dbms_stats.import_schema_stats(ownname => 'MYSCHEMA_DEV', stattab => 'exp_stats');
end;
/
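After the import, a quick sanity check on the destination (a sketch; it requires access to DBA_TABLES):

```sql
-- NUM_ROWS and LAST_ANALYZED should now reflect the imported statistics
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'MYSCHEMA_DEV'
ORDER BY table_name;
```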

Hope this helps. Please feel free to comment. Thanks.