Question

I want to schedule an HBase MapReduce job with Oozie, and I am facing the following problem.

How/where do I specify these properties in the Oozie workflow?

1. the table name for the mapper/reducer
2. the Scan object for the mapper


    Scan scan = new Scan(new Get());
    scan.setMaxVersions();
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(VALUE));
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(DATE));

    Job job = new Job(conf, JOB_NAME + "_" + TABLE_USER);
    // These two properties:
    TableMapReduceUtil.initTableMapperJob(TABLE_USER, scan,
            Mapper.class, Text.class, Text.class, job);
    TableMapReduceUtil.initTableReducerJob(DETAILS_TABLE,
            Reducer.class, job);

Alternatively, please let me know the best way to schedule an HBase MapReduce job with Oozie.

Thanks :) :)


Solution

The best way (in my opinion) to schedule an HBase MapReduce job is to schedule it as a Java action that runs your driver class. It works well, and there is no need to write code to convert your Scan into a string, etc. So I am scheduling my jobs as Java actions until I find a better option. A sketch of what such a driver class might look like follows the workflow below.

<workflow-app xmlns="uri:oozie:workflow:0.1" name="java-main-wf">
<start to="java-node"/>
<action name="java-node">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
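        <!-- Point main-class at your own driver; DemoJavaMain is the stock Oozie example -->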
        <main-class>org.apache.oozie.example.DemoJavaMain</main-class>
        <arg>Hello</arg>
        <arg>Oozie!</arg>
        <arg>This</arg>
        <arg>is</arg>
        <arg>Demo</arg>
        <arg>Oozie!</arg>
    </java>
    <ok to="end"/>
    <error to="fail"/>
</action>
<kill name="fail">
    <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
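
For reference, here is a minimal sketch of what the driver class named in <main-class> could look like, reusing the mapper/reducer class names from the <map-reduce> example further down. The driver class name, table names, and column names are hypothetical placeholders standing in for the code in the question, not code from the original answer:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    import com.hbase.mapper.MyTableMapper;
    import com.hbase.reducer.MyTableReducer;

    public class MyHBaseJobDriver {  // hypothetical class name
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath shipped with the action
            Configuration conf = HBaseConfiguration.create();

            // The Scan is built in plain Java -- no string serialization needed
            Scan scan = new Scan();
            scan.setMaxVersions();
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("date"));

            Job job = new Job(conf, "hbase-mr-job");
            job.setJarByClass(MyHBaseJobDriver.class);

            // Table name and Scan object are supplied here, as in the question
            TableMapReduceUtil.initTableMapperJob("TABLE_USER", scan,
                    MyTableMapper.class, Text.class, Text.class, job);
            TableMapReduceUtil.initTableReducerJob("DETAILS_TABLE",
                    MyTableReducer.class, job);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

An Oozie java action succeeds when main() returns normally or calls System.exit(0), so a failed job routes the workflow to the fail node.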

OTHER TIPS

You can also schedule the job using the <map-reduce> action, but it is not as easy as scheduling it as a Java action. It requires considerable effort, but it can be considered an alternative approach.

     <action name='jobSample'>
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <!-- This is required for new api usage -->
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <!-- HBASE CONFIGURATIONS -->
                <property>
                    <name>hbase.mapreduce.inputtable</name>
                    <value>TABLE_USER</value>
                </property>
                <property>
                    <name>hbase.mapreduce.scan</name>
                    <value>${wf:actionData('get-scanner')['scan']}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.property.clientPort</name>
                    <value>${hbaseZookeeperClientPort}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.quorum</name>
                    <value>${hbaseZookeeperQuorum}</value>
                </property>
                <!-- MAPPER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.inputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableInputFormat</value>
                </property>
                <property>
                    <name>mapred.mapoutput.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.mapoutput.value.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.map.class</name>
                    <value>com.hbase.mapper.MyTableMapper</value>
                </property>
                <!-- REDUCER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.reduce.class</name>
                    <value>com.hbase.reducer.MyTableReducer</value>
                </property>
                <property>
                    <name>hbase.mapred.outputtable</name>
                    <value>DETAILS_TABLE</value>
                </property>
                <property>
                    <name>mapreduce.outputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableOutputFormat</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>${mapperCount}</value>
                </property>
                <property>
                    <name>mapred.reduce.tasks</name>
                    <value>${reducerCount}</value>
                </property>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />

To find out the exact property names and values your job needs, dump the job's configuration parameters from a working run and copy them over. Also, the hbase.mapreduce.scan property is a serialized form of the Scan information (a Base64-encoded string), so column selections like the following cannot be written directly into the workflow XML:

    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(VALUE));
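
The workflow above pulls that value from ${wf:actionData('get-scanner')['scan']}, which implies a preceding java action (named get-scanner) that serializes the Scan and hands the string back to Oozie through capture-output. Below is a sketch of how such a class could look; the class name and columns are assumptions, and note that TableMapReduceUtil.convertScanToString is public only in newer HBase releases (in some older ones it is package-private, in which case you would have to Base64-encode the Scan yourself):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.util.Properties;

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetScanner {  // hypothetical main class for the get-scanner action
        public static void main(String[] args) throws Exception {
            // Build the same Scan the MapReduce job expects
            Scan scan = new Scan();
            scan.setMaxVersions();
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));

            // Base64-encoded serialization, the format hbase.mapreduce.scan expects
            String serialized = TableMapReduceUtil.convertScanToString(scan);

            // With <capture-output/> in the action definition, Oozie reads this
            // properties file and exposes it as wf:actionData('get-scanner')['scan']
            File out = new File(System.getProperty("oozie.action.output.properties"));
            Properties props = new Properties();
            props.setProperty("scan", serialized);
            OutputStream os = new FileOutputStream(out);
            try {
                props.store(os, null);
            } finally {
                os.close();
            }
        }
    }

The get-scanner java action must declare <capture-output/> and run before the map-reduce action; otherwise wf:actionData resolves to nothing.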