Tom Kyte

Subscribe to the Tom Kyte feed
These are the most recently asked questions on Ask Tom
Updated: 14 hours 4 min ago

Can we use the RETURNING clause along with the CURRENT OF clause in an UPDATE statement?

Thu, 2020-08-06 10:26
Hi Tom, I am using the update statement below in my procedure to return a few of the columns that are being updated. <code>UPDATE DUMMY_TABLE SET DUMMY_STATUS = 'ABC' WHERE CURRENT OF DUMMY_CURSOR RETURNING DUMMY_FIELD1, DUMMY_FIELD2 BULK COLLECT INTO TAB_FIELD1, TAB_FIELD2;</code> The above code works if I do not use the CURRENT OF clause, but it raises an error when written as above. I want to know whether I am making a syntax error here, or whether this is not possible at all. Note: I cannot remove the CURRENT OF clause from the update statement, and I still have to return the columns that are being updated. Thanks
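A minimal sketch of the single-row form, reusing the hypothetical table, cursor, and column names from the question: with WHERE CURRENT OF only one row is updated per fetch, so RETURNING ... INTO scalar variables (rather than BULK COLLECT INTO collections, which targets multi-row DML) is the shape to compare against.
<code>
DECLARE
    -- the cursor must be declared FOR UPDATE for WHERE CURRENT OF to be legal
    CURSOR dummy_cursor IS
        SELECT dummy_status FROM dummy_table FOR UPDATE OF dummy_status;
    l_field1 dummy_table.dummy_field1%TYPE;
    l_field2 dummy_table.dummy_field2%TYPE;
BEGIN
    FOR r IN dummy_cursor LOOP
        UPDATE dummy_table
           SET dummy_status = 'ABC'
         WHERE CURRENT OF dummy_cursor
        RETURNING dummy_field1, dummy_field2 INTO l_field1, l_field2;
    END LOOP;
END;
/
</code>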
Categories: DBA Blogs

ORA-01031: insufficient privileges, cannot log in as any user

Thu, 2020-08-06 10:26
I ran the command below and restarted the Docker container. <code>alter system set processes = 1 scope = spfile;</code> After this I am not able to log in to the DB at all. <code> root@30b2f9030f89:/u01/app/oracle/product/11.2.0/xe/bin# sqlplus /nolog SQL*Plus: Release 11.2.0.2.0 Production on Wed Aug 5 14:59:14 2020 Copyright (c) 1982, 2011, Oracle. All rights reserved. SQL> conn / as sysdba ERROR: ORA-01031: insufficient privileges </code> I tried to resolve it with the command below and tried to log in again, but I still get the same error: orapwd file=filename password=password entries=100 I also tried editing the /u01/app/oracle/product/11.2.0/xe/dbs/spfileXE.ora file, manually setting processes=100 and then back to processes=1; this didn't help. Can someone guide me here, please?
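A hedged recovery sketch, assuming sysdba access via OS authentication (the OS user being in the dba group) can be restored: rebuild the parameter file from an editable text pfile instead of patching the binary spfile by hand. The file path is illustrative.
<code>
SQL> CREATE PFILE='/tmp/initXE.ora' FROM SPFILE;
-- edit /tmp/initXE.ora in a text editor and set processes=100, then:
SQL> STARTUP FORCE PFILE='/tmp/initXE.ora';
SQL> CREATE SPFILE FROM PFILE='/tmp/initXE.ora';
</code>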
Categories: DBA Blogs

Loading CLOB data (more than 32k characters) into Oracle DB through APEX

Thu, 2020-08-06 10:26
Hi, I am currently developing an APEX web application which needs to capture CLOB data and save it to the DB. I am using a Rich Text Editor item to capture the content and noticed that only a maximum of 32k characters can be loaded through the page item. When I try to push more than 32k characters, no data is sent to the DB. Please suggest a way to capture more than 32k of data through an APEX page item. Oracle DB version: 12c, APEX version: 20.1. Kindly let me know if I am missing any details. Thanks, Murugananth
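A hedged sketch of the server-side half of a common workaround: send the value from JavaScript in chunks of under 32k (for example in the f01 array of an Ajax callback) and reassemble it into a CLOB with DBMS_LOB. The table, column, and page-item names below are hypothetical.
<code>
DECLARE
    l_clob CLOB;
BEGIN
    DBMS_LOB.createtemporary(l_clob, TRUE);
    FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
        IF apex_application.g_f01(i) IS NOT NULL THEN
            DBMS_LOB.writeappend(l_clob,
                                 LENGTH(apex_application.g_f01(i)),
                                 apex_application.g_f01(i));
        END IF;
    END LOOP;
    UPDATE my_documents            -- hypothetical table
       SET body = l_clob
     WHERE id = :P1_ID;            -- hypothetical page item
END;
</code>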
Categories: DBA Blogs

Are the WITH READ ONLY and WITH CHECK OPTION syntaxes used only for views and tables?

Thu, 2020-08-06 10:26
Hi, I am not sure whether the WITH READ ONLY and WITH CHECK OPTION syntaxes are used for views and tables only. I read the Oracle Help Center page at the following link: https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CREATE-VIEW.html#GUID-61D2D2B4-DACC-4C7C-89EB-7E50D9594D30. It says: WITH READ ONLY - Specify WITH READ ONLY to indicate that the table or view cannot be updated. WITH CHECK OPTION - Specify WITH CHECK OPTION to indicate that Oracle Database prohibits any changes to the table or view that would produce rows that are not included in the subquery. When used in the subquery of a DML statement, you can specify this clause in a subquery in the FROM clause but not in a subquery in the WHERE clause. I hope to receive your response. Thanks, Kris.
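A minimal illustration of the two clauses on views and on an inline DML subquery; the table and view names are hypothetical.
<code>
CREATE TABLE emp_demo (empno NUMBER PRIMARY KEY, deptno NUMBER);

-- no DML at all is allowed through this view
CREATE OR REPLACE VIEW emp_dept10_ro AS
    SELECT empno, deptno FROM emp_demo WHERE deptno = 10
    WITH READ ONLY;

-- DML is allowed, but only if the affected rows remain visible to the view
CREATE OR REPLACE VIEW emp_dept10_chk AS
    SELECT empno, deptno FROM emp_demo WHERE deptno = 10
    WITH CHECK OPTION;

-- the same clause can also appear on an inline subquery in a DML statement
INSERT INTO (SELECT empno, deptno FROM emp_demo WHERE deptno = 10 WITH CHECK OPTION)
VALUES (1, 10);
</code>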
Categories: DBA Blogs

Comparing 2 Nested Table Collections with 180 Fields Defined in Each Collection by Passing Field Names Dynamically

Wed, 2020-08-05 16:06
I have 2 tables of the same structure, with around 180 columns in each table and one column as the PK. Each table has around 200k records. I need to compare the tables column by column, and if any difference is found in any of the remaining 179 columns for a record, I need to track that column name along with both the old and new values from both tables. This can be achieved by a SQL statement with UNION, GROUP BY and LEAD functions, but since 179 fields need to be compared, the SQL code becomes very long. So I thought of using nested table collections to compare both tables: bulk collect the values into 2 different nested tables and iterate over them, with a first loop iterating using the collection count and a second loop using USER_TAB_COLS to iterate based on the number of columns in the tables. Is it possible to pass the field name to the nested loop dynamically? Below is the sample code for that. <code>SET SERVEROUTPUT ON; DECLARE TYPE TEST1_TYPE IS TABLE OF TEST1%ROWTYPE ; TEST1_TAB TEST1_TYPE; TEST2_TAB TEST1_TYPE; lcCol1 VARCHAR2(3000); lcCol2 VARCHAR2(3000); lQuery VARCHAR2(3000); CURSOR CUR_TAB_COL IS SELECT COLUMN_NAME ,DATA_TYPE FROM USER_TAB_COLS WHERE TABLE_NAME='TEST1' ORDER BY COLUMN_ID; TYPE COL_TYPE IS TABLE OF CUR_TAB_COL%ROWTYPE; COL_TAB COL_TYPE; BEGIN SELECT * BULK COLLECT INTO TEST1_TAB FROM TEST1 ORDER BY ID; SELECT * BULK COLLECT INTO TEST2_TAB FROM TEST2 ORDER BY ID; OPEN CUR_TAB_COL; FETCH CUR_TAB_COL BULK COLLECT INTO COL_TAB; CLOSE CUR_TAB_COL; FOR I IN 1..TEST2_TAB.count LOOP FOR j IN COL_TAB.FIRST..COL_TAB.LAST LOOP lQuery:='SELECT TEST1_TAB('||i||').'||COL_TAB(j).COLUMN_NAME||',FROM DUAL'; EXECUTE IMMEDIATE lQuery INTO lcCol1; lQuery:='SELECT TEST2_TAB('||i||').'||COL_TAB(j).COLUMN_NAME||',FROM DUAL'; EXECUTE IMMEDIATE lQuery INTO lcCol2; END LOOP; END LOOP; END; /</code>
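For context, a hedged sketch of an alternative that sidesteps the underlying limitation (a local PL/SQL collection element cannot be referenced by name inside EXECUTE IMMEDIATE): drive the column list from USER_TAB_COLS but compare the two tables directly in dynamic SQL. The table names TEST1/TEST2 and the PK column ID come from the question; everything else is illustrative.
<code>
SET SERVEROUTPUT ON
DECLARE
    l_diff_count PLS_INTEGER;
BEGIN
    FOR c IN (SELECT column_name
                FROM user_tab_cols
               WHERE table_name = 'TEST1'
                 AND column_name <> 'ID'
               ORDER BY column_id)
    LOOP
        -- DECODE treats two NULLs as equal, giving a NULL-safe comparison
        EXECUTE IMMEDIATE
            'SELECT COUNT(*) FROM test1 t1 JOIN test2 t2 ON t1.id = t2.id ' ||
            'WHERE DECODE(t1.' || c.column_name || ', t2.' || c.column_name || ', 0, 1) = 1'
            INTO l_diff_count;
        IF l_diff_count > 0 THEN
            DBMS_OUTPUT.put_line(c.column_name || ': ' || l_diff_count || ' differing rows');
        END IF;
    END LOOP;
END;
/
</code>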
Categories: DBA Blogs

JSON_OBJECT throws error in Stored Procedure

Wed, 2020-08-05 16:06
Dear Team, I am trying to use the JSON functions in a stored procedure, and TOAD throws the syntax error <b>Found 'value', an alias is not allowed here (reserved for XMLCOLATTVAL, XMLFOREST and XMLATTRIBUTES only)</b>. Below is the query that I use. However, when I run it outside the stored procedure, it works fine. <code>SELECT JSON_OBJECT ( 'empid' value '1' , 'name' value 'Tom' , 'address' value '23333' ) into emp_json FROM dual ;</code> Could you please help me identify the cause of the error? Thank you.
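A hedged workaround sketch: when the PL/SQL compiler (or a tool's parser) rejects the name VALUE expr syntax, wrapping the query in dynamic SQL lets the SQL engine parse it instead. The variable name is hypothetical.
<code>
DECLARE
    emp_json VARCHAR2(4000);
BEGIN
    EXECUTE IMMEDIATE
        q'[SELECT JSON_OBJECT('empid'   VALUE '1',
                              'name'    VALUE 'Tom',
                              'address' VALUE '23333')
           FROM dual]'
        INTO emp_json;
    DBMS_OUTPUT.put_line(emp_json);
END;
/
</code>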
Categories: DBA Blogs

Rebuilding Oracle Text Indexes

Wed, 2020-08-05 16:06
Dear Team, In our IFS Applications product we heavily use Oracle Text indexes. We do not update the indexes in real time; instead we do it at a separate time interval through <i>Ctx_Ddl.Sync_Index</i>, passing a CLOB document, and we synchronize the indexes with a default 30 minute interval. In the maintenance cycle, we optimize an index once a week using the <i>Ctx_Ddl.Optimize_Index</i> procedure with the FAST option. We also perform an ALTER INDEX REBUILD for the text index once a week. Recently we encountered some errors with this REBUILD operation at a few customers, so we are rethinking how we should go forward. When we went through the Oracle documentation, we saw some articles related to this, but they are somewhat confusing. https://docs.oracle.com/en/database/oracle/oracle-database/19/ccref/CTX_DDL-package.html#GUID-28E079B1-D5CA-4264-B1C0-A1C5CE174C55 says: "Using this procedure to optimize the index is recommended over using the <i>ALTER INDEX</i> statement." https://docs.oracle.com/en/database/oracle/oracle-database/19/ccapp/maintaining-oracle-text-indexes.html#GUID-10365262-8B48-40AA-B1F3-DE3268EA9B39 says: "You might rebuild an index when you want to index with a new preference". We have a few questions to clarify. 1. We are <b>not changing any preferences</b> of the text index after it was created at installation time, so do we need to rebuild the text index on a schedule? Is there any other benefit to doing so? 2. Do you believe <i>Ctx_Ddl.Optimize_Index</i> with the <b>REBUILD</b> option instead of the <b>FAST</b> option would be a good choice if we skip rebuilding the index with the <i>ALTER INDEX</i> statement? Or is it unnecessary in our situation? 3. Going forward, do you see any other risks, for example in performance, if we remove the <i>ALTER INDEX REBUILD</i> and continue with only <i>Ctx_Ddl.Optimize_Index</i> with the <b>FAST</b> option? Thanks & Best Regards, Navinth
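For reference, a hedged sketch of the two optimize calls being weighed; the index name is hypothetical and the optlevel values are the documented 'FAST' and 'REBUILD' strings.
<code>
BEGIN
    -- incremental defragmentation (the current weekly approach)
    ctx_ddl.optimize_index(idx_name => 'MY_TEXT_IDX', optlevel => 'FAST');

    -- heavier optimization that rebuilds the index data, discussed as an
    -- alternative to ALTER INDEX ... REBUILD when preferences never change
    ctx_ddl.optimize_index(idx_name => 'MY_TEXT_IDX', optlevel => 'REBUILD');
END;
/
</code>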
Categories: DBA Blogs

Best practice to delete rows with a CLOB column

Wed, 2020-08-05 16:06
Environment: Oracle 12.1.0.2 on Exadata. I have a table with 30 columns, one of which is a CLOB, that contains about 26 million rows. I have a purge job (a PL/SQL packaged procedure) that DELETEs rows from nine (9) other tables based on a list of IDs from the driver table that contains the CLOB. I save the list of IDs in a global temporary table and use that to delete the associated rows from the other tables, in the correct order to maintain all the FK relationships. I am running a test today that has identified about 98,000 IDs to be purged, but I have about 5 million that need to be purged to 'catch up', and going forward I'll have about 10,000 per day to purge. The DELETE on the other 9 tables runs very quickly. The table with the CLOB column is taking many times longer than the others, several hours as opposed to several minutes. I'm guessing there is a better and more efficient way to remove the rows from the table containing the CLOB and to regain the storage used by the deleted CLOBs. I'm currently issuing just a 'DELETE FROM <table> WHERE ID IN (SELECT ID FROM <gtt>)', where ID is the PK. I would appreciate any suggestions and an order of operational steps to accomplish this purge, both for the 'catch up' and for the daily run of 10,000. Thanks in advance for all the help!! Much appreciated! -gary
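A hedged sketch of one common shape for this purge: delete in committed batches driven by the GTT of IDs (assumed ON COMMIT PRESERVE ROWS), then reclaim the freed LOB space afterwards. The table and column names are hypothetical, and the space-reclaim statement depends on whether the LOB is BASICFILE or SECUREFILE.
<code>
DECLARE
    l_rows PLS_INTEGER;
BEGIN
    LOOP
        DELETE FROM driver_table t
         WHERE t.id IN (SELECT id FROM purge_ids_gtt)
           AND ROWNUM <= 10000;          -- batch size to tune
        l_rows := SQL%ROWCOUNT;
        COMMIT;
        EXIT WHEN l_rows = 0;
    END LOOP;
END;
/
-- afterwards, reclaim LOB space with whichever variant matches the storage:
-- ALTER TABLE driver_table MODIFY LOB (clob_col) (SHRINK SPACE);              -- BASICFILE
-- ALTER TABLE driver_table MOVE LOB (clob_col) STORE AS (TABLESPACE users);   -- reorg, then rebuild indexes
</code>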
Categories: DBA Blogs

How to copy CSV files from a shared path to an HP-UX server database directory

Wed, 2020-08-05 16:06
Hi Sir, I'm using an Oracle 9i database on an HP-UX system. The directory name is "XX_DATA" and the shared network path is "\\mkees01\public\Details.csv". Could you please help me by providing code for the scenario below: how do I copy the file Details.csv from the shared network path to the Oracle database directory? Regards, Sankar
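A hedged sketch, assuming the Windows share has first been mounted on the HP-UX server at the OS level and exposed to the database as a second directory object, and that the 9i release in use supports UTL_FILE.FCOPY (9iR2 onward). The mount point and directory names are illustrative.
<code>
-- OS/DBA setup outside the database: mount \\mkees01\public on the server,
-- e.g. at /mnt/public, then expose it:
CREATE OR REPLACE DIRECTORY shared_dir AS '/mnt/public';

BEGIN
    -- copy SHARED_DIR/Details.csv to XX_DATA/Details.csv
    UTL_FILE.fcopy('SHARED_DIR', 'Details.csv', 'XX_DATA', 'Details.csv');
END;
/
</code>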
Categories: DBA Blogs

How prefix_index and substring_index improve wildcard searches in Oracle Text

Tue, 2020-08-04 21:46
Hi Team, I'm doing some Oracle Text work related to prefix_index and substring_index (we use Oracle 11g). I'm interested in why: (1) prefix indexing improves performance for right-truncated wildcard searches such as TO%; (2) a substring index improves left-truncated and double-truncated wildcard queries such as %ing or %benz%. Could you help check whether my following understanding is right or wrong? Thanks! (Maybe the following understanding and questions are not very clear. I just want to know the search logic and process of queries like %abc, abc%, %abc% using prefix_index and substring_index in Oracle Text.) (1) I want to confirm the principles of extracting tokens in Oracle Text. After we extract tokens from the text (tokens are stored in the $I table), the keywords the user enters to query are extracted into tokens, too. If at least one keyword token is equal to a text token, then the user gets the matched text. For example, text1 is extracted into token1 and token2 (stored in table $I). Then the user enters some keywords to query. The keywords are extracted into token2 and token3; since token2 is stored in $I, the user gets a search result containing text1. However, if the keywords are extracted into token3 and token4, and token3 is a prefix of token2, the user still can't get a search result containing text1 because token2 != token3. Is my understanding right or wrong? Thanks. (2) When we use prefix_index (and set prefix_length_min=2, prefix_length_max=3), some prefix tokens (token_type = 6) will be stored in $I. When we use right-truncated wildcard searches, for example: (a) Search to%. The length of "to" is 2, so it will directly search the tokens (only token_type=6) in the $I table. If the token "to" is in $I and its token_type is 6, then the user gets search results matching to%. If the token "to" is not in $I, then the search result is empty. But we find that sometimes a prefix token (token_type = 6) is stored in the $I table (e.g. this token is "??"), yet when we search ??%, the search result is empty. Do you know the reason? Thanks. (b) Search toto%. The length of "toto" is 4 (prefix_length_max is 3), so it will directly search every token (token_type = 0) in the $I table. Is my understanding right or wrong? Thanks. (3) When we use substring_index, some substring tokens will be stored in $P. When we use left-truncated wildcard searches, for example: (a) Search %abc; it will directly search the $P table using the PAT_PART2 column. If PAT_PART2 has the token abc, then the user gets search results matching %abc. If PAT_PART2 doesn't have the token abc, then the search result is empty. (b) Search %a (the length of "a" is 1); since the length of all tokens in the PAT_PART2 column of the $P table is larger than 1, it will directly search every token (token_type = 0) in the $I table. Is my understanding right or wrong? Thanks. And I want to know why using substring_index improves double-truncated wildcard queries like %benz% (I understand substring_index can improve left-truncated wildcard queries, but I want to know the process and principle of querying %benz% in Oracle Text). Could you help explain it? Thanks!
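For reference, a hedged sketch of how the two options are enabled on a wordlist preference; the preference, index, table, and column names are hypothetical, and the attribute names are the documented BASIC_WORDLIST ones.
<code>
BEGIN
    ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
    ctx_ddl.set_attribute('my_wordlist', 'PREFIX_INDEX',      'TRUE');
    ctx_ddl.set_attribute('my_wordlist', 'PREFIX_MIN_LENGTH', '2');
    ctx_ddl.set_attribute('my_wordlist', 'PREFIX_MAX_LENGTH', '3');
    ctx_ddl.set_attribute('my_wordlist', 'SUBSTRING_INDEX',   'TRUE');
END;
/

CREATE INDEX docs_text_idx ON docs (text_col)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('wordlist my_wordlist');
</code>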
Categories: DBA Blogs

SQLLDR process is showing the password

Tue, 2020-08-04 03:26
Hi Tom, we have a risk of exposing our APPS schema password for host-based concurrent programs in Oracle Apps. The ps -ef command exposes the password unless it is encrypted. We have gone back and added the ENCRYPT option to the Options field of the concurrent program definitions. We have also used $FCP_LOGIN instead of $1 while invoking SQL*Plus or SQL*Loader. As part of these changes, we had a test case where we tried to observe the sub-process, either SQL*Plus or SQL*Loader itself, in the ps -ef output. We observed that SQL*Plus does not display the password in ps -ef; however, SQL*Loader does display it. Can you please let us know how to fix this, or any workarounds? Thanks in advance
Categories: DBA Blogs

Save compressed BLOB to OS file

Fri, 2020-07-31 08:06
Hello - We're doing the following in the Oracle DB: extract a comma-delimited file from a Siebel database using a SQL query; store the comma-delimited file in a BLOB; compress the BLOB data using UTL_COMPRESS LZ_COMPRESS; send the data to a REST API using HTTP POST. The REST service is erroring out because of the Oracle compression and can't read the Oracle-compressed data. By the way, it's fine when I zip the file manually using WinZip and post it using Postman. The REST API owner would like to know the Oracle UTL_COMPRESS compression specification. They are also asking us to send an OS file (the Oracle-compressed file). I need to know the following: 1) What specification is Oracle using for UTL_COMPRESS? 2) How can I create a file from the Oracle-compressed BLOB? Thanks in advance
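A hedged sketch for question 2: writing the compressed BLOB out to an OS file with UTL_FILE in binary mode. The directory object, table, and file names are hypothetical. Note that UTL_COMPRESS produces a raw Lempel-Ziv compressed stream rather than a .zip archive, which is likely why a manually zipped file behaves differently for the consumer.
<code>
DECLARE
    l_blob   BLOB;
    l_file   UTL_FILE.file_type;
    l_buffer RAW(32767);
    l_amount BINARY_INTEGER := 32767;
    l_pos    INTEGER := 1;
    l_len    INTEGER;
BEGIN
    SELECT payload INTO l_blob FROM staging_docs WHERE id = 1;   -- hypothetical source
    l_len  := DBMS_LOB.getlength(l_blob);
    l_file := UTL_FILE.fopen('EXPORT_DIR', 'payload.dat', 'wb', 32767);
    WHILE l_pos <= l_len LOOP
        DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
        UTL_FILE.put_raw(l_file, l_buffer, TRUE);
        l_pos := l_pos + l_amount;
    END LOOP;
    UTL_FILE.fclose(l_file);
END;
/
</code>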
Categories: DBA Blogs

Generating and Storing a Report in the Database

Fri, 2020-07-31 08:06
Hi Tom's team, I am happy with your quick response to my old question <b>https://asktom.oracle.com/pls/apex/asktom.search?tag=storing-and-accessing-reports-in-the-database-on-oracle-apex-of-version-20</b> and I am asking a new question for my present problem. These are the steps I took: a. I created a table from the file report_archive_tbl.sql in my database or directly in APEX. b. I created a process named store_blobquery_report that runs when the Print button is pressed; the Print button redirects to the URL I created for a report query. <code>declare
    l_report blob;
begin
    l_report := apex_util.get_print_document (
        p_application_id     => :APP_ID,      -- assign id of app or no need
        p_report_query_name  => 'multiquery', -- <b>create name for a report query</b>
        p_report_layout_name => 'multiquery', -- <b>create name for a report layout</b>
        p_report_layout_type => 'rtf',        -- <b>create a format for a report</b>
        p_document_format    => 'pdf'         -- <b>define format for a report</b>
    );
    insert into report_archive (
        filename, mimetype, report, created_date, created_by
    ) values (
        'Multi Query Search Results ('||to_char(sysdate,'DDMonYYYY')||')',
        'application/pdf',
        l_report,
        sysdate,
        :USER
    );
end;</code> <b>c. When I click Print, one row should be inserted into the database for each report that is run.</b> At that step, when I pressed the Print button, it only showed that the report downloaded, but the report_archive table had no data. Thank you in advance ^_^.
Categories: DBA Blogs

Auditing all actions in a session not created by a specific MACHINE

Thu, 2020-07-30 13:46
Hi Tom, thanks for the previous solutions. Please guide me or share some helpful links to achieve the requirement below. 1--> We want to audit all actions of a session not created by the application. 2--> Will a cursor having 1000 records generate 1000 rows in auditing? Note: We have a 12c R2 SE Oracle Database, so no fine-grained auditing. I need to store only information like (action, object_name, schema_name, host, OS user, IP address, time, date). "ALL actions" includes everything (alter, create, drop, update, delete, etc.), like "audit all by schema whenever successful". Actual scenario: an application user is used by many developers; we want to audit only the actions of developer sessions, not the application. How can we achieve this with minimum impact on performance?
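A hedged sketch using a unified audit policy (available in 12.2, including Standard Edition) with a machine-based condition, which is one way to capture only non-application sessions; the policy name, schema name, and machine pattern are hypothetical.
<code>
CREATE AUDIT POLICY dev_session_actions
    ACTIONS ALL
    WHEN 'SYS_CONTEXT(''USERENV'',''MACHINE'') NOT LIKE ''APPSERVER%'''
    EVALUATE PER SESSION;

AUDIT POLICY dev_session_actions BY app_schema WHENEVER SUCCESSFUL;
</code>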
Categories: DBA Blogs

Extract XML node values with an EXISTSNODE filter condition

Thu, 2020-07-30 13:46
Hi Chris, I appreciate your support. I have a requirement to return specific node values if a particular node exists. For example, for the XML below, my output should be 2 records, because the (Jobsdata/jobList/jobData/job/document) node exists 2 times and the returned nodes are (job/document/fileID and job/directoryid). O/p --> 100,D100 200,D200 Please find the XML table in the Live SQL link. My output should be as given below: <code>id  fileID  directoryid
1   100     D100
1   200     D200</code> Filter criteria - I don't want to display the directoryid D300 because there is no <document> tag. Hence, fileID and directoryid should be displayed only where the <document> tag is available. Thanks, Asit
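A hedged sketch with XMLTABLE, keeping only <job> elements that actually have a <document> child; the table and column names, and the exact XPaths, are inferred from the description rather than taken from the Live SQL link.
<code>
SELECT x.fileid, x.directoryid
FROM   xml_tab t,
       XMLTABLE('/Jobsdata/jobList/jobData/job[document]'
                PASSING t.xml_col
                COLUMNS fileid      VARCHAR2(20) PATH 'document/fileID',
                        directoryid VARCHAR2(20) PATH 'directoryid') x;
</code>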
Categories: DBA Blogs

Determine The Column Font

Thu, 2020-07-30 13:46
Greetings, please tell me how to change the column font type and size in an APEX classic report. In legacy Reports Builder 6 it is easy: just select the column and choose the font and size you want. How can this be accomplished in APEX? Looking forward to hearing from you.
Categories: DBA Blogs

Query performance difference in RAC instances

Thu, 2020-07-30 13:46
Hi, the Oracle DB version I am working on is Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production. It's a RAC with 2 instances. This is about a query which runs quickly (under 15s) under instance #1 but takes between 10 and 11 minutes when it runs under instance #2. The query is utilizing indexes as expected. The user provided me the query and I got the sql_ids of its historical runs via DBA_HIST_SQLTEXT. That gave me 5 sql_ids of the same query run between 20-Jul and 28-Jul. <code>select sql_id from dba_hist_sqltext where sql_text like '%CLAIMTYP%claimtype%');</code> <b>Observation 1:</b> Via the following query, I discovered that the slower runs are on instance #2 and the faster ones on #1. <code>select distinct h.instance_number,trunc(h.sample_time) run_date,h.sql_exec_start, h.sql_id , h.session_id, h.sql_child_number, h.sql_full_plan_hash_value, to_char(trunc(elapsed_time_total/1000000/60/60/24),'9') || ':' ||to_char(trunc(MOD(elapsed_time_total/1000000/60/60, 24)),'09') ||':' || to_char(trunc(MOD(elapsed_time_total/1000000,3600)/60),'09') || ':' ||to_char(MOD(MOD(elapsed_time_total/1000000, 3600),60),'09') as "Time Taken [dd:hh:mi:ss]" from dba_hist_active_sess_history h,DBA_HIST_SQLSTAT s where s.sql_id=h.sql_id and h.instance_number=s.instance_number and h.sql_id in ('73b4smcjjk38s','bt8sw7vj42sp3',........);</code> <b>Observation 2:</b> In dba_hist_active_sess_history, under the column DELTA_INTERCONNECT_IO_BYTES, I found several non-null values for the instance #2 runs, which per the Oracle documentation means "Number of I/O bytes sent over the I/O interconnect over the last DELTA_TIME microseconds". The value of that column is NULL in all other cases where the same query ran on instance #1. Also, there are many rows in ASH for the slower-run sql_ids, but the faster runs have only a couple of rows (each row indicating a 10s interval). Do the above observations indicate some issue with the server of instance #2? Thanks.
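A hedged next-step sketch: compare the captured plan(s) and the wait-event profile of the two instances for one of the captured sql_ids (the sql_id below is one quoted in the question; adjust as needed). A skew toward cluster or interconnect waits on instance #2 would point at cache-fusion traffic rather than a plan change.
<code>
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('73b4smcjjk38s'));

SELECT instance_number, event, COUNT(*) AS samples
FROM   dba_hist_active_sess_history
WHERE  sql_id = '73b4smcjjk38s'
GROUP  BY instance_number, event
ORDER  BY instance_number, samples DESC;
</code>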
Categories: DBA Blogs

Encrypt a file using a PGP public key

Thu, 2020-07-30 13:46
Hi, I don't know whether this is the right place or not, but I need to complete this task using PL/SQL. I need to write PL/SQL code to create a file and encrypt the file using a PGP public key. I don't know how to achieve the second part of this task (file encryption): can I use the dbms_crypto package for file encryption and decryption, or do I need some other route? Our client has already provided the PGP public key; we don't need to create a PGP key, we need to use that key to encrypt/decrypt the file. I hope my requirement is clear enough. Thanks, Zahid
Categories: DBA Blogs

Need to create an index on a column for partial data

Thu, 2020-07-30 13:46
Hi Tom, below is my table <code>create table invoice_fact( invoice_id number(7), INV_CHK_ID_DATE varchar2(1000), ACCOUNTING_DATE date, INV_AMOUNT number(12,4) );</code> and below is the data <code>
INVOICE_ID  ACCOUNTING_DATE  INV_AMOUNT  INV_CHK_ID_DATE
0000001     12-Mar-16        10000.77    0000001,19-Mar-2016,10000.77
0000002     13-Mar-16        10070.74    0000002,21-Mar-2016,10070.74
0000003     14-Mar-16        10124.7     0000003,16-Mar-2016,10124.7
0000004     15-Mar-16        10136.56    0000004,17-Mar-2016,10136.56
0000005     16-Mar-16        10190.35    0000005,23-Mar-2016,10190.35
0000006     17-Mar-16        10200.94    0000006,20-Mar-2016,10200.94
0000007     18-Mar-16        10255.44    0000007,20-Mar-2016,10255.44
0000008     19-Mar-16        10341.86    0000008,20-Mar-2016,10341.86
</code> and my query is <code>select * from invoice_fact where to_date(substr(INV_CHK_ID_DATE,9,instr(INV_CHK_ID_DATE,',',1,2)-9),'DD-MON-RRRR') between '01-MAR-2016' and '01-MAR-2017'; </code> I have a date embedded in the inv_chk_id_date column; I have to extract that date and apply a filter on it in the WHERE clause. The table has more than 20 million records, so with a filter like the above the query takes a very long time to execute. Is there any way to speed up the above query?
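A hedged sketch of one approach: a function-based index on exactly the expression used in the predicate, queried with real DATE literals instead of strings. This assumes every row's embedded date substring parses cleanly with the 'DD-MON-RRRR' mask and that the expression is accepted as an index expression on the version and NLS setup in use; the index name is illustrative.
<code>
CREATE INDEX invoice_fact_chk_date_fbi ON invoice_fact (
    TO_DATE(SUBSTR(inv_chk_id_date, 9,
                   INSTR(inv_chk_id_date, ',', 1, 2) - 9), 'DD-MON-RRRR')
);

SELECT *
FROM   invoice_fact
WHERE  TO_DATE(SUBSTR(inv_chk_id_date, 9,
                      INSTR(inv_chk_id_date, ',', 1, 2) - 9), 'DD-MON-RRRR')
       BETWEEN DATE '2016-03-01' AND DATE '2017-03-01';
</code>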
Categories: DBA Blogs

Data Pump crashed, now can't create new jobs

Thu, 2020-07-30 13:46
Hi Tom, a scheduled task that worked for months crashed suddenly. It's an EXPDP job transferring data to an external SSD (USB 3.0). The disk still runs. The errors listed after a new test export are like: (EXPDP SYSTEM/XXXX@....) ORA-31626: no job exists ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ...... I read some hints that resulted in deleting all Data Pump jobs (which was too much, I think).
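A hedged cleanup sketch for this situation: list the Data Pump jobs the database still thinks exist, identify the orphaned master tables left behind by the crashed job, and drop only those so that new jobs can be created again. The job name in the final statement is the one from the error message; verify before dropping anything.
<code>
SELECT owner_name, job_name, state, attached_sessions
FROM   dba_datapump_jobs;

SELECT o.owner, o.object_name, o.object_type
FROM   dba_objects o
JOIN   dba_datapump_jobs j
  ON   o.owner = j.owner_name AND o.object_name = j.job_name
WHERE  j.state = 'NOT RUNNING';

-- only after confirming the job is a defunct leftover:
-- DROP TABLE system.sys_export_schema_01 PURGE;
</code>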
Categories: DBA Blogs
