Ittichai Chammavanijakul


Connect Power BI to GCP BigQuery using Simba Drivers

Fri, 2018-09-21 21:56

Power BI can connect to GCP BigQuery through its provided connector. However, some users have reported refresh failures with it. Even though the error message suggests that the quota for API requests per user per minute may have been exceeded, the error reportedly occurs even when only a small dataset is being fetched.

In my case, simply disabling parallel table loading (Options and settings > Options > Data Load) made the issue go away. However, some reported it did not help.

An alternative option is to use a supported ODBC or JDBC driver from Simba Technologies Inc., which partners with Google.

Setup

  • Download the latest 64-bit ODBC driver from here.
  • Install it on the local desktop where Power BI Desktop is installed. The same driver must also be installed on the Power BI Gateway server if the published report needs to be refreshed on the Power BI Service.

Configuration

  • From Control Panel > Administrative Tools > ODBC Data Source Administrator > System DSN, click Configure on the Google BigQuery entry.
  • Follow the driver's configuration screens; a sketch of the key settings is shown below.
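On Windows these settings live in the DSN dialog; on Linux the equivalent odbc.ini entry would look roughly like this (key names recalled from Simba's documentation, so verify against the driver's install guide):

[Google BigQuery]
Driver         = Simba ODBC Driver for Google BigQuery
# Your GCP project ID
Catalog        = my-gcp-project
# 1 = user (Google account) authentication
OAuthMechanism = 1
RefreshToken   = <token generated during the driver's sign-in flow>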

When connecting in Power BI, choose Get Data > ODBC.
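Once the DSN is in place, the connection can also be expressed directly in Power Query M. A minimal sketch, assuming the DSN is named "Google BigQuery" and using placeholder dataset and table names:

let
    // Query through the Simba DSN configured above
    Source = Odbc.Query("dsn=Google BigQuery", "SELECT COUNT(*) FROM my_dataset.my_table")
in
    Source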


Validate Performance Improvement Using Query Folding Feature in Power BI

Tue, 2017-07-04 19:27

I’ve been using Power BI for a couple of months now, not as a developer, but as a system architect. I don’t deal with dashboard and report development on a daily basis; however, as an end user I use Power BI extensively to monitor Azure and Power BI usage, including auditing and billing. I’d like to learn this tool down to its nuts and bolts. The intention of this blog series is to document and share what I learn in this journey.

The first area I’d like to explore is performance, as it is the topic most talked about within our developer circle and the Power BI community. One recommendation that comes up often when searching for Power BI performance improvements is to utilize Query Folding, which pushes query logic and filters down to the database. Let the database do what it does best: extract and process the data before sending it back to the Power BI Desktop client. Done correctly, this reduces the workload Power BI has to handle on the client side.

There are already a couple of well-written articles about Query Folding and its benefits, so there is no need for me to repeat them.

Part of this exercise is to learn how to measure the expected performance improvement when using Query Folding.

Setup:

I’m connecting to a 1.3-million-row Oracle database table.

SQL> SELECT COUNT(*) FROM TABLE_ADDRESS_T;

  COUNT(*)
----------
   1312127

 

I created a series of transformations, some supporting Query Folding and some not. The first two in the sample below – Filtered Rows and Uppercased Text – support Query Folding. We can confirm this by right-clicking the last of them and seeing that View Native Query is available.

Or we can confirm it by viewing the query itself to see that the native SQL query has been rewritten to include those transformations.
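For illustration, the folded query sent to Oracle might look roughly like this (the schema and column names here are hypothetical):

SELECT "COUNTRY",
       UPPER("CITY") AS "CITY"
FROM "JOHN"."TABLE_ADDRESS_T"
WHERE "COUNTRY" = 'US'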

The next transformation, Split Column by Delimiter, does not support Query Folding. Note that View Native Query is now disabled.

The plan is to first run the transformations ordered so that most of them utilize Query Folding, and take a measurement. Then move the unsupported step to the top so that the rest cannot be folded and the transformations are processed on the client.

Running only this small set of transformations probably wouldn’t show much difference in runtime, so I added more transformations by manually modifying the M query as seen below – in this case, just adding alternating upper- and lowercase transformations.
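A sketch of what the modified query might look like (the source, schema, and column names are placeholders, not the actual ones used):

let
    Source = Oracle.Database("ORCL"),
    AddressTable = Source{[Schema = "JOHN", Item = "TABLE_ADDRESS_T"]}[Data],
    // These steps fold: they are rewritten into the native query (WHERE / UPPER)
    FilteredRows = Table.SelectRows(AddressTable, each [COUNTRY] = "US"),
    UppercasedText = Table.TransformColumns(FilteredRows, {{"CITY", Text.Upper, type text}}),
    // Extra alternating case transformations added just to inflate the workload
    LowercasedText = Table.TransformColumns(UppercasedText, {{"CITY", Text.Lower, type text}}),
    UppercasedText2 = Table.TransformColumns(LowercasedText, {{"CITY", Text.Upper, type text}}),
    // Split Column by Delimiter does not fold; moving it to the top forces client-side processing
    SplitColumn = Table.SplitColumn(UppercasedText2, "ZIP", Splitter.SplitTextByDelimiter("-"), {"ZIP.1", "ZIP.2"})
in
    SplitColumn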

To measure the total run time, I’m using Rui Romano’s Power BI Desktop Trace Logs Analyser, which reads and visualizes Power BI Desktop’s diagnostic trace files.

Before each run, the cache is cleared and tracing is enabled. Once each run completes, tracing is disabled.

 

Outputs:

A. 1st Run data refresh utilizing Query Folding

B. 1st Run data refresh without utilizing Query Folding

 

C. 2nd Run data refresh utilizing Query Folding

D. 2nd Run data refresh without utilizing Query Folding

The outcome clearly confirms the expectation. I think the difference would be even greater if more transformations, especially complex ones, were added.

In a future post, I’d like to explore what these Action Details actually are, though I’m not confident I’ll find much. It seems this information may not be shared with customers, according to the discussion here.

 


Rename all exported files to their original names after exporting from Oracle database using Oracle SQL Developer’s Shopping Cart

Sun, 2016-03-13 15:08

If you’re searching for “export Oracle BLOB”, the article by Jeff Smith titled “Exporting Multiple BLOBs with Oracle SQL Developer” is usually at the top of the results. SQL Developer features the Shopping Cart, which exports BLOBs out of the database without scripts. I won’t go into detail, as Jeff already explains well in his post what it is and how to use it. One main issue with this approach is that you often want the actual file names instead of the exported names. This can be overcome easily with a post-run script. I wrote this simple script in Python, as it suits name manipulation well. (I’m not a Python expert, but it is one of the easiest programming languages to learn.)

The script simply reads the FND_LOBS_DATA_TABLE.ldr file, which contains the mapping between each original filename and its exported filename (in the format FND_LOBS_DATA_TABLExxxxx.ldr).

# Sample data
 1889399|"CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}

# 1st = File ID (Media ID)
# 2nd = Original File Name
# 4th = Exported File Name
# The remaining information is not relevant.

The script splits the information, which is stored in a single line, into multiple records on the string {EOL}. It then splits each record into columns by position; the fields we’re interested in are in the 1st, 2nd, and 4th positions. Finally, it asks the operating system to rename each file.

The content of the script rename.py as follows:


from sys import argv
import os

# Script to rename exported BLOB files from the Oracle SQL Developer tool
#
# Pre-requisite: Python 3.x https://www.python.org/downloads/
#
# Execution:
# (1) Copy the script to the folder containing the mapping file -
#     "FND_LOBS_DATA_TABLE.ldr" - and all exported files.
# (2) Execute the script as follows:
#      C:\> cd deploy
#      C:\> rename.py FND_LOBS_DATA_TABLE.ldr

# Take parameters
script, filename = argv

# Open the mapping file in read-only mode
file = open(filename, 'r', encoding="utf8")

# Everything is stored in one line (see the sample data above).
# 1st field = File ID (Media ID)
# 2nd field = Actual/Original File Name
# 3rd field = File Type
# 4th field = Exported File Name
# The remaining fields are not relevant.

# First, split the single line into records on the string {EOL}
splitted_line = file.read().split('{EOL}')
file.close()

# For each record, split into fields separated by |
for s in splitted_line:
    splitted_word = s.split('|')

    # The last record contains only [''], so exit the loop there.
    if len(splitted_word) == 1:
        break

    # The original file name is the 2nd field (list position #1).
    # Strip out double quotes and leading & trailing spaces, if any.
    orig_name = splitted_word[1].strip('"').strip()

    # The exported file name is the 4th field (list position #3).
    exported_name = splitted_word[3].strip()

    # Prefix each file with its unique FILE_ID (1st field) to avoid
    # collisions when two or more files share the same name.
    file_id = splitted_word[0].strip()

    # Rename the file - adjust the new name format to your needs.
    os.rename(exported_name, file_id + '_' + orig_name)

After unzipping deploy.zip, the default export file from SQL Developer, copy rename.py into the unzipped folder.

C:\> cd deploy
C:\> dir
02/23/2016 07:57 PM 2,347 rename.py
02/23/2016 07:57 PM 34,553 export.sql
02/23/2016 07:52 PM 1,817 FND_LOBS.sql
02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
02/23/2016 07:57 PM 78,178 FND_LOBS_DATA_TABLE10fa4165-0153-1000-8001-0a2a783f1605.ldr
02/23/2016 07:57 PM 27,498 FND_LOBS_DATA_TABLE10fa4339-0153-1000-8002-0a2a783f1605.ldr
02/23/2016 07:57 PM 17,363 FND_LOBS_DATA_TABLE10fa43c5-0153-1000-8003-0a2a783f1605.ldr
02/23/2016 07:57 PM 173,568 FND_LOBS_DATA_TABLE10ff189d-0153-1000-8219-0a2a783f1605.ldr
:
:

C:\> rename.py FND_LOBS_DATA_TABLE.ldr

C:\> dir
02/23/2016 07:57 PM 2,347 rename.py
02/23/2016 07:57 PM 34,553 export.sql
02/23/2016 07:52 PM 1,817 FND_LOBS.sql
02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
02/23/2016 07:57 PM 78,178 689427_DATACOM SOUTH ISLAND LTD.htm
02/23/2016 07:57 PM 27,498 698623_lincraft.htm
02/23/2016 07:57 PM 17,363 772140_275131.htm
02/23/2016 07:57 PM 173,568 3685533_RE 新办公室地址.MSG
:
:


JDBC connection samples in Oracle Enterprise Data Quality (OEDQ) to Oracle Service Name and MS SQL SSL

Fri, 2015-11-06 16:46

This post is just a quick note to myself on how to configure JDBC connections in Oracle Enterprise Data Quality (OEDQ) to connect to different databases.

First, let’s talk about connecting to an Oracle database. By default, OEDQ’s Data Store can connect to an Oracle database only by SID, not by Service Name.

 

Fortunately, the JDBC connection option is available, taking advantage of a connection string that supports a Service Name and multiple hosts if needed.


 

Sample:

Driver class name: weblogic.jdbc.oracle.OracleDriver
JDBC URL:  jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = host3)(PORT = 1521))(CONNECT_DATA=(SERVICE_NAME = srvname)))

 

Another sample connects to a Microsoft SQL Server that requires SSL. By default, if you use Server > Database > Microsoft SQL Server (2000-2008), you will get this error:

[FMWGEN][SQLServer JDBC Driver] The SQL Server login requires an SSL connection. (Code 2,011)


Again, let’s use the JDBC connection instead.

Driver class name: com.microsoft.sqlserver.jdbc.SQLServerDriver
JDBC URL:  jdbc:weblogic:sqlserver://host:1433;databaseName=DBNAME;EncryptionMethod=SSL;ValidateServerCertificate=false



Install APEX’s Sample Packaged Application by Importing its SQL Script

Wed, 2013-10-02 15:11

If for some reason you cannot install an APEX sample packaged application via the Application Builder > Packaged Applications interface, you have the option of installing it by importing its SQL script (fxxxx.sql).

In my case, installing via the Packaged Applications interface failed with an error.

While working with the DBA and Oracle Support to find the root cause of the issue, I discovered that the installation scripts (like export files) of the packaged applications ship with the APEX installation files under apex/core/packaged_apps.

ICHAMMA1:packaged_apps$ pwd
/Users/ichamma1/Downloads/apex/core/packaged_apps

ICHAMMA1:packaged_apps$ grep "prompt  APPLICATION" *.sql
f7000.sql:prompt  APPLICATION 7000 - Online Marketing Campaign Calendar
f7010.sql:prompt  APPLICATION 7010 - Decision Manager***
f7020.sql:prompt  APPLICATION 7020 - Asset Manager*
f7050.sql:prompt  APPLICATION 7050 - Opportunity Tracker ***
f7060.sql:prompt  APPLICATION 7060 - Bug Tracking***
f7090.sql:prompt  APPLICATION 7090 - Group Calendar ***
f7100.sql:prompt  APPLICATION 7100 - Artwork Catalog***
f7120.sql:prompt  APPLICATION 7120 - Expertise Tracker***
f7130.sql:prompt  APPLICATION 7130 - Community Requests ***
f7140.sql:prompt  APPLICATION 7140 - Incident Tracking***
f7150.sql:prompt  APPLICATION 7150 - Systems Catalog***
f7170.sql:prompt  APPLICATION 7170 - Customer Tracker***
f7190.sql:prompt  APPLICATION 7190 - Issue Tracker***
f7220.sql:prompt  APPLICATION 7220 - P-Track***
f7230.sql:prompt  APPLICATION 7230 - Data Model Repository Viewer*
f7240.sql:prompt  APPLICATION 7240 - Checklist Manager***
f7250.sql:prompt  APPLICATION 7250 - Data Reporter***
f7270.sql:prompt  APPLICATION 7270 - APEX Application Archive***
f7280.sql:prompt  APPLICATION 7280 - Survey Builder ***
f7290.sql:prompt  APPLICATION 7290 - Meeting Minutes***
f7300.sql:prompt  APPLICATION 7300 - Use Case Status***
f7600.sql:prompt  APPLICATION 7600 - Sample Access Control*
f7610.sql:prompt  APPLICATION 7610 - Sample Build Options*
f7650.sql:prompt  APPLICATION 7650 - Go Live Check List***
f7800.sql:prompt  APPLICATION 7800 - Brookstrut Sample Application ***
f7810.sql:prompt  APPLICATION 7810 - Sample Reporting***
f7820.sql:prompt  APPLICATION 7820 - Sample Calendar***
f7830.sql:prompt  APPLICATION 7830 - Sample Charts***
f7840.sql:prompt  APPLICATION 7840 - Sample Dynamic Actions***
f7850.sql:prompt  APPLICATION 7850 - Sample Data Loading***
f7860.sql:prompt  APPLICATION 7860 - Sample Master Detail***
f7870.sql:prompt  APPLICATION 7870 - Sample Forms and Grid Layout***
f7880.sql:prompt  APPLICATION 7880 - Sample Search***
f7890.sql:prompt  APPLICATION 7890 - Feedback ***
f7900.sql:prompt  APPLICATION 7900 - Sample Dialog***
f7910.sql:prompt  APPLICATION 7910 - Sample Trees***
f7920.sql:prompt  APPLICATION 7920 - Sample Lists***
f7930.sql:prompt  APPLICATION 7930 - Sample Wizards***
f7940.sql:prompt  APPLICATION 7940 - Sample Collections***
f7950.sql:prompt  APPLICATION 7950 - Sample Time Zones*
f7960.sql:prompt  APPLICATION 7960 - Sample File Upload and Download***
f7980.sql:prompt  APPLICATION 7980 - Sample RESTful Services***
f8950.sql:prompt  APPLICATION 8950 - Sample Database Application

Installation is then as simple as importing the script file (Application Builder > Import) or running it from SQL*Plus (with the proper security setup; see the sketch below).
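A sketch of the SQL*Plus route, assuming the APEX_APPLICATION_INSTALL API is available in your release; the workspace, schema, and application ID below are placeholders:

-- Set the target workspace, application ID, and parsing schema before running the script.
begin
   apex_application_install.set_workspace_id(
      apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE'));
   apex_application_install.set_application_id(7890);
   apex_application_install.generate_offset;
   apex_application_install.set_schema('MY_SCHEMA');
end;
/
@f7890.sql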


Oracle Database Gateways 11g R2 (11.2) Installation and Configuration for heterogeneous connection from Oracle to Microsoft SQL database

Mon, 2013-04-15 17:33
Installation

Install the Oracle Database Gateways 11g R2 (11.2). See the screen snapshots of the installation here.

Configuration
  • During the installation, the following default initialization parameter file is created:
[ORACLE_DG_HOME]\dg4msql\admin\initdg4msql.ora
dg4msql = Database gateway for Microsoft SQL Server. 
If you choose a different database option, use the appropriate path name.
  • Copy initdg4msql.ora to a new file init[MSSQL].ora.
[MSSQL] can be any meaningful name that is easy to refer to, for example, mssqlsale. 
The filename will be in the format of init[MSSQL].ora.
copy initdg4msql.ora initmssqlsale.ora
  • Edit the newly created file initmssqlsale.ora to set the MS SQL Server host and database name.
# This is a customized agent init file that contains the HS parameters
# that are needed for the Database Gateway for Microsoft SQL Server
#
# HS init parameters
#
HS_FDS_CONNECT_INFO=[SERVERNAME]//DATABASENAME
HS_FDS_TRACE_LEVEL=OFF
HS_FDS_RECOVERY_ACCOUNT=RECOVER
HS_FDS_RECOVERY_PWD=RECOVER
  • Modify the listener.ora file. This can be an existing listener or a new one. Add a SID_DESC entry in the appropriate place in the listener.ora file.
         (SID_DESC=
            (SID_NAME=[MSSQL])
            (ORACLE_HOME=[ORACLE_DG_HOME])
            (PROGRAM=[DRIVER])
         )
[MSSQL] =  Name of the new configuration file excluding the init and .ora.
For example, if the file name is initmssqlsale.ora, the [MSSQL] will be only mssqlsale.
[ORACLE_DG_HOME] = Oracle Database Gateway Home. This is NOT listener home.
[DRIVER] = dg4msql for Microsoft SQL Server

Sample:

 
LISTENER =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = DGHOST)(PORT = 1521))
      )
    )

SID_LIST_LISTENER =
   (SID_LIST=
     (SID_DESC=
            (SID_NAME=mssqlsale)
            (ORACLE_HOME=D:\product\11.2.0\tg_1)
            (PROGRAM=dg4msql)
      )
    )
  • Restart or reload the listener.
  • Validate using lsnrctl status.
C:\>lsnrctl status
LSNRCTL for 64-bit Windows: Version 11.2.0.1.0 - Production
:
:
Services Summary...
Service "mssqlsale" has 1 instance(s).
 Instance "mssqlsale", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
  • On the Oracle database from which we’d like to connect to this MS SQL database, create a new database link. The database link points to the host where the gateway is installed.
create database link dblink_mssqlsale connect to "username" identified by "password"
using '
     (DESCRIPTION=
        (ADDRESS=(PROTOCOL=tcp)(HOST=DGHOST)(PORT=1521))
        (CONNECT_DATA=(SID=mssqlsale))
        (HS=OK)
     )';
DGHOST = Host where the Oracle Database Gateway is installed.
mssqlsale = SID name mentioned in the listener.ora.
  • Test the connection by querying a table.
select count(*) from information_schema.tables@dblink_mssqlsale;
 COUNT(*)
----------
 26
Add more MS SQL databases
  • Repeat the above steps starting with copying the sample file into a new init file, and updating it with an appropriate host and database name.
copy initdg4msql.ora initmssqlhr.ora
  • Update the listener.ora.
  (SID_LIST=
     (SID_DESC=
            (SID_NAME=mssqlsale)
            (ORACLE_HOME=D:\product\11.2.0\tg_1)
            (PROGRAM=dg4msql)
     )
     (SID_DESC=
            (SID_NAME=mssqlhr)
            (ORACLE_HOME=D:\product\11.2.0\tg_1)
            (PROGRAM=dg4msql)
     )
  )
  • Restart or reload listener. Check using lsnrctl status.

How long did Oracle materialized view refresh run?

Mon, 2013-01-21 09:38

The LAST_REFRESH_DATE column of DBA_MVIEWS, or the LAST_REFRESH column of DBA_MVIEW_REFRESH_TIMES, indicates the refresh start time. But what if we’d like to find out how long the refresh of the materialized view really took? We can query DBA_MVIEW_ANALYSIS.

For Complete Refresh, the refresh duration will be in the FULLREFRESHTIM column of the DBA_MVIEW_ANALYSIS. For Fast Refresh duration, it will be in the INCREFRESHTIM column.

Both values are in seconds.

SELECT mview_name, last_refresh_date, fullrefreshtim, increfreshtim
FROM dba_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME               LAST_REFRESH_DATE      FULLREFRESHTIM INCREFRESHTIM
------------------------ ---------------------- -------------- -------------
MV_CHANGE_HISTORY        07-JAN-13 04.36.58 PM               0            36
MV_ITEM_HISTORY          07-JAN-13 04.36.58 PM               0             9

This shows that the recent refreshes of MV_CHANGE_HISTORY and MV_ITEM_HISTORY were fast refreshes taking 36 and 9 seconds, respectively.

Putting it all in one query to calculate and display the end time:

SELECT 
   mview_name,
   last_refresh_date "START_TIME",
   CASE
      WHEN fullrefreshtim <> 0 THEN
         LAST_REFRESH_DATE + fullrefreshtim/60/60/24
      WHEN increfreshtim <> 0 THEN
         LAST_REFRESH_DATE + increfreshtim/60/60/24
      ELSE
         LAST_REFRESH_DATE
   END "END_TIME",
   fullrefreshtim,
   increfreshtim
FROM all_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME              START_TIME             END_TIME               FULLREFRESHTIM INCREFRESHTIM
----------------------- ---------------------- ---------------------- -------------- -------------
MV_CHANGE_HISTORY       07-JAN-13 04.36.58 PM  07-JAN-13 04.37.34 PM               0            36
MV_ITEM_HISTORY         07-JAN-13 04.36.58 PM  07-JAN-13 04.37.07 PM               0             9

Reference: How To Calculate MVIEW Refresh Duration? What Does DBA_MVIEWS.LAST_REFRESH_DATE and DBA_MVIEW_REFRESH_TIMES.LAST_REFRESH Indicate? [ID 1513554.1]


Create Google Tasks by sending email to Google GMail Address

Fri, 2012-08-10 07:59

I use Google Tasks for a quick to-do list. It has a clean interface and is easy to use. On a desktop or laptop, it is built into Google Mail for quick access. On smartphones, many to-do apps, including Tasks N Todos, sync with Google Tasks.

The neat thing is that in Gmail you can add messages to the task list very easily by selecting them and then using More Actions > Add to Tasks.

But if you want to add email messages from other accounts, such as work or Yahoo Mail, there doesn’t seem to be a straightforward way to do so.

I found this blog post on automated email-to-task with Google Apps Script by DJ Adams. The Apps Script can parse email carrying a specific filter label and create a task automatically. Let’s give it a try.

The overall process is as follows:

  • Two new Gmail labels need to be created – newtask and newtaskdone. When a new email arrives, the filter labels it newtask. Once the script processes the email, it is re-labeled newtaskdone so it won’t be processed again.
  • To make sure that only the intended emails – not all – are processed, one of Gmail’s hidden features is used: the filter looks for +task@gmail.com (such as ittichai+task@gmail.com) in the TO address before applying the label. Read this on how to use “+” (plus) or “.” (dot) in your Gmail address.
  • The Apps Script runs from a Google Spreadsheet. The original post uses only the email’s subject for the task’s title, but I modified the code a bit to include the email’s body as the task’s notes as well (a sketch follows below).
  • One important step is to integrate the script with the Google API so that it is allowed to use the Google Tasks API service and content.
  • Schedule it to run at whatever interval you need; I run it every 30 minutes. Note that there is a courtesy limit of 5,000 requests per day, but this should be more than enough for normal use.


  • Now simply forward any email to +task@gmail.com to add it to the task list. It should show up in Google Tasks within your specified interval.

All step-by-step instructions can be found at my wiki site.
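For reference, my modified handler looked roughly like this. It is a sketch rather than DJ Adams’s original code; the label names match the setup above, and it assumes the Tasks advanced service has been enabled for the script:

function processNewTaskEmails() {
  var newtask = GmailApp.getUserLabelByName('newtask');
  var done = GmailApp.getUserLabelByName('newtaskdone');
  newtask.getThreads().forEach(function (thread) {
    var msg = thread.getMessages()[0];
    // The original script used only the subject; this also carries the body as the task notes.
    Tasks.Tasks.insert({ title: msg.getSubject(), notes: msg.getPlainBody() }, '@default');
    // Re-label the thread so it is not processed again on the next run.
    thread.removeLabel(newtask).addLabel(done);
  });
}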


Resizing the filesystem using Logical Volume Manager within Oracle Linux

Fri, 2012-07-06 20:53

Oftentimes we run into the situation where a file system is full (or almost full) and needs more space. This expansion task is a lot easier when using the Logical Volume Manager (LVM) in Oracle Linux.

  • Review the current size.
[root@ol6 ~]# df -H
Filesystem                 Size   Used  Avail Use% Mounted on
/dev/mapper/vg_ol6-lv_root 27G    22G   3.8G  86% /
tmpfs                      1.3G   209M   1.1G  17% /dev/shm
/dev/sda1                  508M    97M   385M  21% /boot
Downloads                  750G   172G   578G  23% /media/sf_Downloads

The plan is to add 30G to the root file system.

  • Create a partition on the newly-added disk.
[root@ol6 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x0c04311f.
Changes will remain in memory only, until you decide to write them.

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3916, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916): 
Using default value 3916

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
  •  Create a physical volume on top of it.
[root@ol6 ~]# pvcreate /dev/sdf1
  Writing physical volume data to disk "/dev/sdf1"
  Physical volume "/dev/sdf1" successfully created
  • Review the current volume. Note that currently there are no free extents (indicated by the zero value of “Free PE / Size”).
[root@ol6 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_ol6
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               29.51 GiB
  PE Size               4.00 MiB
  Total PE              7554
  Alloc PE / Size       7554 / 29.51 GiB
  Free PE / Size        0 / 0
  VG UUID               2e2VHd-Mb3D-Uz0G-4Yec-tbfe-f3cI-7cvpby
  • Extend this volume with a new disk.
[root@ol6 ~]# vgextend vg_ol6 /dev/sdf1
  Volume group "vg_ol6" successfully extended
  • Check the volume again. The “Free PE / Size” is now 30G.
[root@ol6 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_ol6
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               59.50 GiB
  PE Size               4.00 MiB
  Total PE              15233
  Alloc PE / Size       7554 / 29.51 GiB
  Free PE / Size        7679 / 30.00 GiB
  VG UUID               2e2VHd-Mb3D-Uz0G-4Yec-tbfe-f3cI-7cvpby
  • Now let’s review the logical volume.
[root@ol6 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_ol6/lv_root
  LV Name                lv_root
  VG Name                vg_ol6
  LV UUID                rd2d4X-vqE8-xENi-clCz-Oa0T-0R6X-RFCBDq
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                25.10 GiB
  Current LE             6426
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg_ol6/lv_swap
  LV Name                lv_swap
  VG Name                vg_ol6
  LV UUID                xM3Blz-wvpG-IUfF-WhWc-EHoI-I0xG-oeV1IR
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                4.41 GiB
  Current LE             1128
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

We want to add an additional 30G to the existing /dev/vg_ol6/lv_root, bringing the total size to 55.10 GiB.

  • We can extend the logical volume to the needed size.
[root@ol6 ~]# lvextend -L 55.10G /dev/vg_ol6/lv_root
  Rounding size to boundary between physical extents: 55.10 GiB
  Extending logical volume lv_root to 55.10 GiB
  Insufficient free space: 5120 extents needed, but only 5119 available

You may have to adjust the size if the initially specified size is slightly too large.

[root@ol6 ~]# lvextend -L 55G /dev/vg_ol6/lv_root
  Extending logical volume lv_root to 55.00 GiB
  Logical volume lv_root successfully resized
  • Now finally you can extend the file system.
[root@ol6 ~]# resize2fs /dev/vg_ol6/lv_root 55G
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg_ol6/lv_root is mounted on /; on-line resizing required
old desc_blocks = 3, new_desc_blocks = 4
Performing an on-line resize of /dev/vg_ol6/lv_root to 14417920 (4k) blocks.
The filesystem on /dev/vg_ol6/lv_root is now 14417920 blocks long.
  • The file system is resized while the system is still on-line.
[root@ol6 ~]# df -H
Filesystem                 Size   Used  Avail Use% Mounted on
/dev/mapper/vg_ol6-lv_root  55G    22G    33G  39% /
tmpfs                      1.3G   209M   1.1G  17% /dev/shm
/dev/sda1                  508M    97M   385M  21% /boot
Downloads                  750G   172G   578G  23% /media/sf_Downloads
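As an aside, lvextend can do both the arithmetic and the filesystem resize for you. A sketch, assuming your LVM build supports these flags:

# Take all remaining free extents instead of computing an exact size
lvextend -l +100%FREE /dev/vg_ol6/lv_root

# Or grow by 30G and resize the filesystem in one step (-r invokes resize2fs)
lvextend -r -L +30G /dev/vg_ol6/lv_root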



Customize/Personalize Oracle APEX Workspace Environment

Wed, 2011-11-09 16:06

When you have multiple Oracle APEX environments, e.g., development, test, UAT, and production, for multiple groups, one source of confusion is that developers sometimes lose track of which environment they’re working in. Most technically proficient developers can easily tell from the URL they’re currently using; however, less technical users (e.g., business users) often cannot.

A quick and simple solution is to create a visual distinction on the APEX workspace pages, especially in the logo area. Instead of the plain vanilla logo images, use more distinct and informative ones.

This is the default APEX workspace logon page.


This is a sample of the personalized workspace logon page with a company logo and it is showing that this is the development environment.


This is the main workspace page after logon showing the default logo image.


You can change it to be like this.


With just this minor change, our users feel more comfortable knowing that they’re working in the right environment.

How to change it?

With a quick investigation, you can easily find the paths of those image files as follows:

Workspace logon page:

Image path: /i/apex/builder/apex-logo-white.gif
Image size: 300px x 30px
Transparent background

Image path: /i/apex/builder/apex-db-apps.png
Image size: 240px x 200px
Transparent background

Workspace main page after logon:
Image path: /i/htmldb/apex_logo.gif
Image size: 300px x 30px
White background

  • Use Photoshop or any image editing tools to recreate those image files.
  • Copy and replace those image files on the web server, making backups of the originals before replacing them (see the example below).
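For example, on a Linux web server the replacement might look like this; the physical directory behind the /i/ alias is an assumption and varies by installation:

# Assumption: /u01/app/oracle/apex/images is the directory mapped to /i/
cd /u01/app/oracle/apex/images
# Back up the originals first
cp apex/builder/apex-logo-white.gif apex/builder/apex-logo-white.gif.bak
cp htmldb/apex_logo.gif htmldb/apex_logo.gif.bak
# Drop in the customized images (same sizes as noted above)
cp /tmp/dev-apex-logo-white.gif apex/builder/apex-logo-white.gif
cp /tmp/dev-apex_logo.gif htmldb/apex_logo.gif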

Note that this approach will NOT work if you’re using a shared web server for multiple APEX database environments, because all environments share the same image files.

Update: Using the ApexLib script in the login message (under Manage Instance), as mentioned by Peter in the comment section, is another customization option. I like this approach because not only can you customize the logo, you can also change the description text on the workspace logon page. In addition, since the change is made in the APEX instance itself, it can definitely be used on a shared web server.


Book Review: Android Application Testing Guide (Diego Torres Milano) by Packt Publishing

Mon, 2011-10-24 10:39

When I was offered this book to review, I was excited. The topic is probably one of the least popular in the Android development circle, judging by the number of books written on it; most books on the market cover only actual development, because creating something gives readers instant gratification. It is unfortunate (but true) that application testing may be the thing developers think about least, and if they do, they often don’t do it systematically. (I’m guilty of this as well.) I wanted to expand my horizons by learning from the pros. That’s why I was so excited for the chance to review this book.

The “Android Application Testing Guide” is a very practical book introducing the available frameworks and the most widely used tools and techniques to improve the quality of an application by engaging in quality control and testing throughout the development cycle (not just starting to test at the end). This agile development concept, called Test Driven Development (TDD), relies on repeatable short cycles to catch and take care of potential issues (i.e., bugs) as early as possible.

The first chapter explains what is involved in the different stages of testing in Android development, including unit tests, integration tests, functional or acceptance tests, system tests, and performance tests. It introduces the Android testing framework, which extends JUnit to provide a complete framework suitable for end-to-end testing strategies.

Chapter 2 starts on actual testing using JUnit, the default framework for an Android test project, supported by Eclipse, the most widely used IDE for Android development. The chapter jumps right into step-by-step instructions for creating the Android test project, which is separate from the development project being tested. The test project has an independent structure and a set of its own components. Keeping a separate project is the best practice because, from the production build’s standpoint, testing code is not included in the actual build and thus will not be in the APK.

Chapter 3 dives into the individual building blocks in the Android SDK tools. This covers assertions, the TouchUtils class (to simulate touch events), mock objects (to isolate the tests), the TestCase class, and Instrumentation. There is an extensive explanation of each component, accompanied by code samples.

Chapter 4 covers the concept of Test Driven Development. Again, it is the strategy of performing tests throughout the development process, not at the end as in the traditional approach. This even includes writing test cases (and test code) first, right after studying the requirements, and then writing the actual code to satisfy those test cases (to get “pass” results). The author claims that creating test cases this early ensures that tests will actually be performed, whereas if they are left until the end, they are likely to be forgotten or ignored. I agree with the concept, but in reality it may not work for all types of projects, and the author confirms this a few paragraphs later, advising readers to use their judgment and expertise in applying the approach where suitable. The latter part of the chapter shows how to apply TDD to a step-by-step sample application and tests, which I found makes the case very compelling.

Chapter 5 introduces the Android Virtual Device (AVD), which is the next best thing to a real device. The obvious advantage of using AVDs is the ability to create different Android configurations to run tests against. The headless emulator (no window displayed) is also mentioned here, allowing automated tests via the command line. I really enjoyed the many samples of what you can do from the command line, especially simulating different device conditions such as network bandwidth throttling or different locales. Later, the chapter covers the Monkey application (you can read more about the Monkey Theorem here), which allows random event generation. It also demos a server-client setup and test scripting with monkeyrunner.

Chapter 6 covers Behavior Driven Development, which, according to the author, is the evolution of Test Driven Development combined with the need for acceptance testing. In summary, the approach emphasizes not only including business or end users in testing, but also writing test cases in non-technical terms that business or end users can understand. Personally, I think the concept is more abstract than practical. However, the introduction of the FitNesse and Slim tools makes the point and shows its usefulness, especially the “wiki”-style test cases and test result documentation.

Chapter 7 contains practical, real-world samples of the disciplines and techniques you can use in different situations. This includes testing activities and applications, databases and Content Providers, UIs, exceptions, parsers, and memory leaks. A couple of tools are introduced here: EasyMock, which provides mock objects for interfaces, and Hamcrest, a library of matcher objects for comparisons.

Chapter 8 continues the introduction of agile techniques with continuous integration. As with continuous testing, it is suggested that integration be done frequently, early in the process, and in small steps. The most common practice, as stated by the author, is to trigger the build process after every commit to the source code repository. The chapter goes into detail on how to set up an automated build process using Ant, a version control repository using Git, continuous integration with Hudson, and test result analysis using nbandroid-utils.

Chapter 9 discusses one of the most critical considerations in designing an Android application: performance. As we all know, on a device with limited computing power and battery life, balancing performance against effective use of resources is something developers should keep in mind when designing an Android application. The chapter again introduces a series of tools and techniques for performance testing, including traditional LogCat logging, extending Instrumentation, Traceview, dmtracedump, and Caliper microbenchmarks.

Chapter 10, on an alternative testing tactic, is not for most Android developers, as it involves building Android from source code. Again, more tools are introduced. Even though it is very technical, it is still a good read.

Summary:

I have to say that, as the novice Android developer I am, I learned a lot from reading the Android Application Testing Guide. The book covers best-in-class testing techniques commonly practiced in agile development, including Test Driven Development, Behavior Driven Development, and Continuous Integration. These generally oppose the traditional approach of doing testing and integration at the end of the development cycle. However, as the author states in the book, there are no silver bullets in software development; the suggested testing approaches might not apply to certain needs or projects. But I can see that for a large, complex project, applying these methodologies and techniques will likely increase productivity and quality.

Numerous tools are introduced in this book, and this may be one of the bigger hurdles for those who are less advanced in Android development. Since there is no common repository for these tools, extended knowledge of a variety of open-source tools, and the ability to incorporate them into the development process, are required. I wish there were a consolidated repository for all the testing tools, or better yet, that all the required testing functionality were built into the SDK. Until that happens, the Android Application Testing Guide is probably one of the few reference books with a complete collection of Android testing topics.

Disclosure: I received the Android Application Testing Guide e-book from Packt Publishing at no charge. I was not compensated for this book review. The content expressed in this post is my personal opinion.

 


Oracle APEX (Application Express) Built-in LDAP Test Tool Stand-alone Application

Wed, 2011-09-21 18:56

Oracle APEX provides multiple ways to authenticate; the most commonly used is LDAP authentication. Configuring it is very straightforward in most cases: just provide the LDAP host, port, whether or not to use SSL, and finally the DN string. See here and here for posts about APEX LDAP configuration.

Sometimes you want to test the LDAP configuration before deploying it. You can use a 3rd-party tool to validate the configuration, create an APEX application to test the logon, or use the LDAP test tool built into APEX.

The built-in APEX LDAP test tool can be launched while the authentication scheme is being configured.


It will pop up a new window.


By accident, I found out that you can even launch the APEX LDAP Test Tool as a stand-alone application, using Application ID 4000 and Page Number 3890.

For example,

http://localhost:8888/apex/f?p=4000:3890

It will obviously prompt you to log on to the workspace first; the LDAP test tool then shows up just like a regular application, not a pop-up window.

