Sunil S. Ranka's Weblog

Superior Data Analytics is the antidote to Business Failure

Posts Tagged ‘BigData’

Map Reduce: File compression and Processing cost

Posted by sranka on August 19, 2015

Recently, while working with a customer, we ran into an interesting situation concerning file compression and processing time. For a system like Hadoop, file compression has always been a good way to save on space, especially since Hadoop replicates the data multiple times.

All Hadoop compression algorithms exhibit a space/time trade-off: faster compression and decompression speeds usually come at the expense of space savings. For more details about how compression is used, see https://documentation.altiscale.com/when-and-why-to-use-compression. There are many file compression formats, but below we mention only some of the methods commonly used in Hadoop.

The type of compression plays an important role: the true power of MapReduce is realized only when the input can be split, and not all compression formats are splittable, which can result in an unexpected number of map tasks.

In the case of a splittable format, the number of mappers corresponds to the number of block-sized chunks in which the file is stored, whereas in the case of a non-splittable format a single map task processes all the blocks. Although the table below shows that LZO is not splittable, it is in fact possible to index LZO files so that performance can be greatly improved. At Altiscale, our experience has shown that indexing LZO files makes them splittable and yields a real performance gain. For more information on how to do it, see https://documentation.altiscale.com/compressing-and-indexing-your-data-with-lzo.
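As a quick illustration of the indexing step (the hadoop-lzo jar location and the HDFS path below are assumptions; the Altiscale documentation linked above has the exact procedure), the hadoop-lzo library ships an indexer that can be run as a MapReduce job:

# build an .index file next to the .lzo file so the file can be split (paths are illustrative)
hadoop jar /opt/hadoop-lzo/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer /user/sranka/input.txt.lzo

Once the index exists, jobs that use the LZO input formats can create one split per block instead of one split per file.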

Format   Codec                                        Extension  Splittable  Hadoop  HDInsight
DEFLATE  org.apache.hadoop.io.compress.DefaultCodec   .deflate   N           Y       Y
Gzip     org.apache.hadoop.io.compress.GzipCodec      .gz        N           Y       Y
Bzip2    org.apache.hadoop.io.compress.BZip2Codec     .bz2       Y           Y       Y
LZO      com.hadoop.compression.lzo.LzopCodec         .lzo       N           Y       N
LZ4      org.apache.hadoop.io.compress.Lz4Codec       .lz4       N           Y       N
Snappy   org.apache.hadoop.io.compress.SnappyCodec    .snappy    N           Y       N
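For reference, this is roughly how one of these codecs gets wired into a job from the command line. A minimal sketch, assuming the standard Hadoop 2 property names and the bundled examples jar (the jar name and HDFS paths are illustrative):

# compress intermediate map output and the final job output with chosen codecs
hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -D mapreduce.map.output.compress=true \
  -D mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec \
  /user/sranka/input /user/sranka/output_compressed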

To measure the performance and processing cost of each compression method, we ran a simple wordcount example on Altiscale's in-house production cluster, following the steps in the link below. Although there are other, more detailed methodologies for measuring performance, we chose this example simply to demonstrate the benefit and performance trade-off of a splittable solution:

https://documentation.altiscale.com/wordcount-example
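As a sketch of how the compressed inputs for tests 2-4 can be prepared (the file names mirror the table below; the HDFS path is illustrative):

gzip -c  input.txt > input.txt.gz      # default compression level
gzip -1c input.txt > input1.txt.gz     # optimize for speed
gzip -9c input.txt > input9.txt.gz     # optimize for space
hdfs dfs -put input*.txt.gz /user/sranka/wordcount_input/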
The following table presents the results.

#  File Name      Compression Option  Size (GB)  Comments                                   # Of Maps  # Of Reducers  Processing Time
1  input.txt      n/a                 5.9        No compression                             24         1              1 min 16 sec
2  input.txt.gz   default             1.46       Normal gzip compression                    1          1              11 min 19 sec
3  input1.txt.gz  -1                  1.42       gzip with -1 option (optimize for speed)   1          1              11 min 41 sec
4  input9.txt.gz  -9                  1.14       gzip with -9 option (optimize for space)   1          1              11 min 21 sec

The following shows how you can use the Resource Manager Web UI to find the relevant job statistics, including the processing time and the number of mappers and reducers that were used.

[Screenshot: Resource Manager Web UI job statistics for the gzip input (input-gz)]
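If you prefer the command line, the same numbers can be pulled from the job client; a minimal sketch, where the application and job IDs are placeholders:

yarn application -list -appStates FINISHED      # find the application ID of the run
mapred job -status job_1439999999999_0001       # elapsed time plus map and reduce task counters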

Conclusion

In our test #1 scenario, the uncompressed file size was 5.9 GB when stored in HDFS. With an HDFS block size of 256 MB, the file was stored as ~24 blocks, and a MapReduce job using this file as input created 24 input splits, each processed independently by a separate map task, taking only 1 min 16 sec in total.
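If you want to confirm how a particular file is laid out, hdfs fsck reports the block count directly (the path below is illustrative):

hdfs fsck /user/sranka/input.txt -files -blocks   # prints the total number of blocks for the file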

In the rest of the test scenarios, because of gzip, the file could not be split, resulting in a single input split and an average processing time of approximately 11 minutes. Even the gzip -1 option, meant to optimize for speed, and the -9 option, meant to optimize for space, did not help much.

Gzip compression is an important part of the Hadoop ecosystem; it saves space at the cost of processing time. If the data processing is time sensitive, then a splittable compression format, or even uncompressed files, is recommended.


Posted in Uncategorized | Tagged: , , , , , , | 1 Comment »

Accessing HDFS files on local File system using mountableHDFS – FUSE

Posted by sranka on April 9, 2015

Hi All

Recently we had a requirement to merge files after a Map and Reduce job. Since the files needed to be handed to an outbound team outside the Hadoop development team, having them on the local file system was ideal. The customer's IT team worked with Cloudera and gave us a mount point using a utility called "mountableHDFS", aka FUSE (Filesystem in Userspace).

mountableHDFS allows HDFS to be mounted (on most flavors of Unix) as a standard file system using the mount command. Once mounted, the user can operate on an instance of HDFS using standard Unix utilities such as 'ls', 'cd', 'cp', 'mkdir', 'find', and 'grep'.
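A minimal sketch of what the mount typically looks like on a CDH node, assuming the hadoop-hdfs-fuse package is installed and with a made-up NameNode host, port, and mount point (the links below cover the real setup steps):

sudo mkdir -p /mnt/hdfs
sudo hadoop-fuse-dfs dfs://namenode.example.com:8020 /mnt/hdfs   # mount HDFS under /mnt/hdfs
ls /mnt/hdfs/user/sranka                                         # ordinary Unix tools now see HDFS paths
cp /mnt/hdfs/user/sranka/output/part-r-00000 /tmp/               # e.g. hand a merged output file to the outbound team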

For more details on mountableHDFS:

https://wiki.apache.org/hadoop/MountableHDFS

For how to configure it on Cloudera:

http://www.cloudera.com/content/cloudera/en/documentation/core/v5-2-x/topics/cdh_ig_hdfs_mountable.html

 

Special thanks to Aditi Hedge for bringing this to my attention.

Hope This Helps,

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Uncategorized | Tagged: , , , , | Leave a Comment »

Permissions for both HDFS and local fileSystem paths

Posted by sranka on July 18, 2014

Hi All,

Permission issues are one of the key errors while setting up a Hadoop cluster. While debugging one such error, I found the table below on http://hadoop.apache.org/. It's a good scorecard to keep handy.

 

Permissions for both HDFS and local fileSystem paths

The following table lists various paths on HDFS and local filesystems (on all nodes) and recommended permissions:

Filesystem  Path                                        User:Group     Permissions
local       dfs.namenode.name.dir                       hdfs:hadoop    drwx------
local       dfs.datanode.data.dir                       hdfs:hadoop    drwx------
local       $HADOOP_LOG_DIR                             hdfs:hadoop    drwxrwxr-x
local       $YARN_LOG_DIR                               yarn:hadoop    drwxrwxr-x
local       yarn.nodemanager.local-dirs                 yarn:hadoop    drwxr-xr-x
local       yarn.nodemanager.log-dirs                   yarn:hadoop    drwxr-xr-x
local       container-executor                          root:hadoop    --Sr-s---
local       conf/container-executor.cfg                 root:hadoop    r--------
hdfs        /                                           hdfs:hadoop    drwxr-xr-x
hdfs        /tmp                                        hdfs:hadoop    drwxrwxrwxt
hdfs        /user                                       hdfs:hadoop    drwxr-xr-x
hdfs        yarn.nodemanager.remote-app-log-dir         yarn:hadoop    drwxrwxrwxt
hdfs        mapreduce.jobhistory.intermediate-done-dir  mapred:hadoop  drwxrwxrwxt
hdfs        mapreduce.jobhistory.done-dir               mapred:hadoop  drwxr-x---
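As a hedged illustration of how the local-filesystem rows translate into commands, with made-up directory locations (substitute whatever dfs.namenode.name.dir, dfs.datanode.data.dir, and your log directories actually point to):

# NameNode and DataNode directories: hdfs:hadoop, drwx------
sudo chown -R hdfs:hadoop /data/1/dfs/nn /data/1/dfs/dn
sudo chmod 700 /data/1/dfs/nn /data/1/dfs/dn

# HDFS and YARN log directories: drwxrwxr-x
sudo chown -R hdfs:hadoop /var/log/hadoop-hdfs
sudo chown -R yarn:hadoop /var/log/hadoop-yarn
sudo chmod 775 /var/log/hadoop-hdfs /var/log/hadoop-yarn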

Hope this helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

This table was taken directly from http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SecureMode.html

 

Posted in 11g, Big Data | Tagged: , , , , , , , | 1 Comment »

HBase : Correlation between RegionServer and Region

Posted by sranka on March 21, 2014

Hi All

While looking into an HBase performance issue, one of the suggestions was to have more regions for a larger table. There was some confusion around "Region" vs. "RegionServer". While doing some digging, I found the simple explanation below.

The basic unit of scalability and load balancing in HBase is called a region. Regions are essentially contiguous ranges of rows stored together. They are dynamically split by the system when they become too large. Alternatively, they may also be merged to reduce their number and required storage files.*

The HBase regions are equivalent to range partitions as used in database sharding. They can be spread across many physical servers, thus distributing the load and therefore providing scalability.

Initially there is only one region for a table, and as you start adding data to it, the system is monitoring it to ensure that you do not exceed a configured maximum size. If you exceed the limit, the region is split into two at the middle key—the row key in the middle of the region—creating two roughly equal halves.

Each region is served by exactly one region server, and each of these servers can serve many regions at any time. The logical view of a table is actually a set of regions hosted by many region servers.

The default split policy for HBase 0.94 and trunk is IncreasingToUpperBoundRegionSplitPolicy, which does more aggressive splitting based on the number of regions hosted in the same region server. The split policy uses the max store file size based on Min (R^2 * “hbase.hregion.memstore.flush.size”, “hbase.hregion.max.filesize”), where R is the number of regions of the same table hosted on the same regionserver. So for example, with the default memstore flush size of 128MB and the default max store size of 10GB, the first region on the region server will be split just after the first flush at 128MB. As number of regions hosted in the region server increases, it will use increasing split sizes: 512MB, 1152MB, 2GB, 3.2GB, 4.6GB, 6.2GB, etc. After reaching 9 regions, the split size will go beyond the configured “hbase.hregion.max.filesize”, at which point, 10GB split size will be used from then on. For both of these algorithms, regardless of when splitting occurs, the split point used is the rowkey that corresponds to the mid point in the “block index” for the largest store file in the largest store.

The above text has been taken from Chapter 1 (Introduction), section "Building Blocks", of the book "HBase: The Definitive Guide" and from the HortonWorks blog.
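A quick back-of-the-envelope check of those split sizes, using the defaults mentioned above (128 MB memstore flush size, 10 GB max file size); this is plain arithmetic, not an HBase API call:

flush_mb=128; max_mb=$((10 * 1024))
for R in 1 2 3 4 5 6 7 8 9; do
  size=$(( R * R * flush_mb ))                 # Min(R^2 * flush size, max file size)
  [ "$size" -gt "$max_mb" ] && size=$max_mb
  echo "regions=$R -> split at ${size} MB"
done
# prints 128, 512, 1152, 2048, 3200, 4608, 6272, 8192, 10240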

Hope This Helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

HDFS Free Space Command

Posted by sranka on March 17, 2014

Hi All

With increasing data volume, space in HDFS can be a continuing challenge. While running into a space-related issue, the following commands came in very handy, hence I thought of sharing them with the extended virtual community.

At times it gets challenging to know how much space a directory or a file is actually using. Having a command that gives you sizes in a human-readable format is always useful. The command below shows how to get human-readable file sizes on HDFS:

hdfs dfs -du -h /

241.3 G  /app
9.8 G    /benchmarks
309.6 G  /hbase
0        /system
59.6 G   /tmp
20.0 G   /user
[sranka@devHadoopSrvr06 ~]$
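A related trick that I find useful: drop the -h flag so the sizes stay in plain bytes, then sort to see the largest directories first (a sketch; adjust the path as needed):

hdfs dfs -du / | sort -nr | head -n 10   # ten biggest directories under /, largest first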

 

hadoop dfsadmin -report

Running the command produces the result below; it covers all the nodes in the cluster and gives a detailed break-up of space available and space used.


Configured Capacity: 13965170479105 (12.70 TB)
Present Capacity: 4208469598208 (3.83 TB)
DFS Remaining: 2120881930240 (1.93 TB)
DFS Used: 2087587667968 (1.90 TB)
DFS Used%: 49.60%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 5 (5 total, 0 dead)

Live datanodes:
Name: 160.33.148.202:50010 (devHadoopSrvr08.ps.am.mycompany.com)
Hostname: devHadoopSrvr08.ps.am.mycompany.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 2793034095821 (2.54 TB)
DFS Used: 381953257472 (355.72 GB)
Non DFS Used: 1986904386765 (1.81 TB)
DFS Remaining: 424176451584 (395.05 GB)
DFS Used%: 13.68%
DFS Remaining%: 15.19%
Last contact: Mon Mar 17 12:43:05 PDT 2014

Name: 160.33.148.204:50010 (devHadoopSrvr10.ps.am.mycompany.com)
Hostname: devHadoopSrvr10.ps.am.mycompany.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 2793034095821 (2.54 TB)
DFS Used: 402465816576 (374.83 GB)
Non DFS Used: 1966391827661 (1.79 TB)
DFS Remaining: 424176451584 (395.05 GB)
DFS Used%: 14.41%
DFS Remaining%: 15.19%
Last contact: Mon Mar 17 12:43:05 PDT 2014

Name: 160.33.148.203:50010 (devHadoopSrvr09.ps.am.mycompany.com)
Hostname: devHadoopSrvr09.ps.am.mycompany.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 2793034095821 (2.54 TB)
DFS Used: 391020421120 (364.17 GB)
Non DFS Used: 1977837223117 (1.80 TB)
DFS Remaining: 424176451584 (395.05 GB)
DFS Used%: 14.00%
DFS Remaining%: 15.19%
Last contact: Mon Mar 17 12:43:06 PDT 2014

Name: 160.33.148.201:50010 (devHadoopSrvr07.ps.am.mycompany.com)
Hostname: devHadoopSrvr07.ps.am.mycompany.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 2793034095821 (2.54 TB)
DFS Used: 389182472192 (362.45 GB)
Non DFS Used: 1979675172045 (1.80 TB)
DFS Remaining: 424176451584 (395.05 GB)
DFS Used%: 13.93%
DFS Remaining%: 15.19%
Last contact: Mon Mar 17 12:43:04 PDT 2014

Name: 160.33.148.59:50010 (devHadoopSrvr06.ps.am.mycompany.com)
Hostname: devHadoopSrvr06.ps.am.mycompany.com
Rack: /default
Decommission Status : Normal
Configured Capacity: 2793034095821 (2.54 TB)
DFS Used: 522965700608 (487.05 GB)
Non DFS Used: 1845892140237 (1.68 TB)
DFS Remaining: 424176254976 (395.04 GB)
DFS Used%: 18.72%
DFS Remaining%: 15.19%
Last contact: Mon Mar 17 12:43:05 PDT 2014

Hope This Helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Uncategorized | Tagged: , , , , , , | Leave a Comment »

How To retrieve/backup Views In Endeca

Posted by sranka on January 3, 2014

Hi All,

For the last few weeks I have been engaged with a customer, helping them with the remediation of an Endeca project. During remediation, we faced a typical challenge where all the graphs and EQLs were erroring out. After doing some research, I found out that it is a known issue. I spent a good amount of time on this issue, hence thought of sharing this trivial but useful information.

Issue :

Endeca views get lost during development, causing all the dependent graphs and EQLs to error out.

Root Cause :

Unknown (could be a potential product issue)

Solution :

After adding all the view definitions, run the Export View Definition graph provided in the Endeca examples; see the link below for details.

https://wikis.oracle.com/display/endecainformationdiscovery/EID+3.0+Export+View+Configuration

After running the export graph, the view definitions get stored in an XML file in the view-manager directory under config-in; see the picture below:

[Screenshot: view-manager directory under config-in]

Take a backup of the file and store it in a secure location, or put the file under version control. In case of a view definition loss, go to the Integration Server URL and click the following:

SandBox –> Project Name –>Config-in –> View-manager –> viewDefinition.xml –> fileEditor

[Screenshot: viewDefinition.xml opened in the Integration Server file editor]

Copy and paste the content and hit the UpdateFile button. Once you have clicked UpdateFile, run the Import View Definition graph; see the link below.

https://wikis.oracle.com/display/endecainformationdiscovery/EID+3.0+Import+View+Configuration

Hope this helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Uncategorized | Tagged: , , , , , , , , , , | Leave a Comment »

How To Find Size Of Table In Hive / HDFS

Posted by sranka on November 19, 2013

Hi All

With volume being the constant challenge in Big Data, as an administrator you have to keep a tab on data growth and, at the same time, make sure there is no surge in the growth of unwanted objects or folders. Typically you would want to track data growth at the GB level, hence below is a script you can use to translate your current folder sizes to GB. Anything below a GB is shown as 0. This is a simple script; you can modify it to track MB-level details as well by just changing the power of 1024, as shown in the variant after the script.

sudo -u hdfs hadoop fs -du /app/hadoop/hive/warehouse/ | awk '/^[0-9]+/ { print int($1/(1024**3)) " [GB]\t" $2 }'
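For example, a hypothetical MB-level variant of the same script only changes the power of 1024 (the warehouse path is the same illustrative one as above):

sudo -u hdfs hadoop fs -du /app/hadoop/hive/warehouse/ | awk '/^[0-9]+/ { print int($1/(1024**2)) " [MB]\t" $2 }'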

Hope This Helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Big Data | Tagged: , , , , , , , , , | Leave a Comment »

Behind The Scene Of MapReduce Job

Posted by sranka on October 28, 2013

Hi All

Recently I have been spending most of my time on Big Data projects using CDH 4.x. Understanding the key components of the Hadoop infrastructure is very necessary, but MapReduce (MR) is the most important one for processing and aggregating data. To get the best performance, one needs to know the details of a MapReduce job. After reading several white papers and a few books, in my opinion the paragraphs below summarize MapReduce THE BEST!!!!

All About Map Reduce 

The execution of a MapReduce job is broken down into map tasks and reduce tasks. Subsequently, map task execution is divided into the phases: Read (reading map inputs), Map (map function processing), Collect (serializing to buffer and partitioning), Spill (sorting, combining, compressing, and writing map outputs to local disk), and Merge (merging sorted spill files). Reduce task execution is divided into the phases: Shuffle (transferring map outputs to reduce tasks, with decompression if needed), Merge (merging sorted map outputs), Reduce (reduce function processing), and Write (writing reduce outputs to the distributed file system). Each phase represents an important part of the job's overall execution in Hadoop.

In the MapReduce model, computation is divided into a map function and a reduce function. The map function takes a key/value pair and produces one or more intermediate key/value pairs. The reduce function then takes these intermediate key/value pairs and merges all values corresponding to a single key. The map function can run independently on each key/value pair, exposing enormous amounts of parallelism. Similarly, the reduce function can run independently on each intermediate key, also exposing significant parallelism. In Hadoop, a centralized JobTracker service is responsible for splitting the input data into pieces for processing by independent map and reduce tasks, scheduling each task on a cluster node for execution, and recovering from failures by re-running tasks. On each node, a TaskTracker service runs MapReduce tasks and periodically contacts the JobTracker to report task completions and request new tasks. By default, when a new task is received, a new JVM instance will be spawned to execute it.

The above text is taken from:

The Hadoop Distributed Filesystem: Balancing Portability and Performance, by Jeffrey Shafer, Scott Rixner, and Alan L. Cox, Rice University

Technical Report: Hadoop Performance Models, by Herodotos Herodotou
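To make the map/reduce split described above concrete, here is a minimal word-count sketch using Hadoop Streaming, with plain shell commands standing in for the map and reduce functions (the streaming jar location and HDFS paths are assumptions):

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -input  /user/sranka/input.txt \
  -output /user/sranka/wc_out \
  -mapper  'tr -s " " "\n"' \
  -reducer 'uniq -c'
# the mapper emits one word per line (the intermediate keys); the framework sorts and
# groups them, and the reducer counts each run of identical keys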

 

Hope This helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Big Data, sunil s ranka | Tagged: , , , , , , , , | Leave a Comment »

Big Data : A Perfect Data Storm

Posted by sranka on July 25, 2013

Hi All,

Lately I have been spending a lot of time on Big Data: its applications, architecture, and processes. As part of that process, here are my thoughts on Big Data in print media. I hope you enjoy reading.

http://issuu.com/itnext/docs/it_next-vol-04-issue-06-july-2013/43?e=1503387/3929020

Hope this helps.

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

Posted in Uncategorized | Tagged: , , , , , , , | Leave a Comment »

Oracle Apache Hadoop Hive ODBC Driver For OBIEE

Posted by sranka on July 3, 2013

Hi All,

The last few months have been a crazy ride; wrapping up new opportunities and getting up to speed on Hadoop took all the free time away. During my reading on Hadoop, I always wondered why we don't have an out-of-the-box ODBC driver. I tried the out-of-the-box ODBC driver from HortonWorks and it worked up to a certain extent, but running against Cloudera was a challenge. Finally, to a sigh of relief, with 11.1.1.7 Oracle has introduced the Oracle Apache Hadoop Hive ODBC Driver for OBIEE.

So far, extracting data using Hadoop has never been a challenge, but using the data to make sense of it has been a constant one. With this integration, there is some relief for the existing investment people have made in OBIEE. Hive's SQL dialect, called HiveQL, does not support the full SQL-92 specification; please refer to the 11.1.1.7 Metadata Repository Builder's Guide for the unsupported features.

For details on where to start, please refer to the 11.1.1.7 Metadata Repository Builder's Guide. For importing the metadata from Hive you will have to download the ODBC driver from the Oracle Support site; for details please refer to Doc ID 1520733.1.

In the coming weeks I will be working more on Big Data approaches and solutions.

Hope this helps

Sunil S Ranka

“Superior BI is the antidote to Business Failure”

 

 

Posted in 11g, Me, OBIEE | Tagged: , , , , , , , , , , | Leave a Comment »

 