
    HadoopReader

    Short Description
    Ports
    Metadata
    HadoopReader Attributes
    Details
    Examples
    See also

    Short Description

    HadoopReader reads Hadoop sequence files.

    Component: HadoopReader
    Data source: Hadoop Sequence File
    Input ports: 0–1
    Output ports: 1
    Each to all outputs: no
    Different to different outputs: no
    Transformation: no
    Transf. req.: no
    Java: no
    CTL: no
    Auto-propagated metadata: no

    Ports

    Port type | Number | Required | Description                                                 | Metadata
    Input     | 0      | no       | For Input Port Reading. Only the source mode is supported. | Any
    Output    | 0      | yes      | For read data records.                                      | Any

    Metadata

    HadoopReader does not propagate metadata.

    HadoopReader has no metadata template.

    HadoopReader Attributes

    Basic

    Hadoop connection
    Hadoop connection with Hadoop libraries containing the Hadoop sequence file parser implementation. If a Hadoop connection ID is specified in an hdfs:// URL in the File URL attribute, the value of this attribute is ignored.
    Possible values: Hadoop connection ID

    File URL
    Req: yes
    A URL of a file on HDFS or on the local file system.
    URLs without a protocol (i.e. an absolute or relative path) or with the file:// protocol refer to files on the local file system.
    If the file to be read is located on HDFS, use a URL in the form hdfs://ConnID/path/to/myfile, where ConnID is the ID of a Hadoop connection (the Hadoop connection attribute is then ignored) and /path/to/myfile is the absolute path on the corresponding HDFS to the file named myfile.

    Key field
    Req: yes
    The name of the output edge record field in which the key of each key-value pair is stored.

    Value field
    Req: yes
    The name of the output edge record field in which the value of each key-value pair is stored.
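
    For illustration, the two supported forms of the File URL value might look like this (the connection ID and the HDFS path are examples only):

        hdfs://MyHadoopConnection/data/products.dat    (file on HDFS, read through the Hadoop connection MyHadoopConnection)
        ${DATA_IN}/products.dat                        (file on the local file system, no protocol)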

    Details

    HadoopReader reads data from Hadoop sequence files (org.apache.hadoop.io.SequenceFile). These files contain key-value pairs and are used as input/output file formats in MapReduce jobs. The component can read a single file as well as a collection of files located either on HDFS or on the local file system.
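
    To show what such a file contains, here is a minimal sketch (not part of HadoopReader; it assumes the plain Hadoop 2.x Java API and an example local path) that iterates over the key-value pairs of a sequence file and prints them. HadoopReader performs an equivalent read and places each key and value into the output record fields named by the Key field and Value field attributes.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.SequenceFile;
        import org.apache.hadoop.io.Writable;
        import org.apache.hadoop.util.ReflectionUtils;

        public class SequenceFileDump {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Example path only; an hdfs:// URI works the same way.
                Path path = new Path("file:///tmp/products.dat");
                try (SequenceFile.Reader reader =
                        new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
                    // The key and value classes are recorded in the file header.
                    Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
                    Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
                    while (reader.next(key, value)) {
                        // Compressed data is decompressed transparently by the reader.
                        System.out.println(key + "\t" + value);
                    }
                }
            }
        }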

    If you read local sequence files, there is no need to connect to a Hadoop cluster. However, you still need a valid Hadoop connection (with the correct version of the Hadoop libraries).

    The exact version of the file format supported by HadoopReader depends on the Hadoop libraries you supply in the Hadoop connection referenced from the File URL attribute (or in the Hadoop connection attribute). In general, sequence files created by one version of Hadoop may not be readable by a different version.

    Hadoop sequence files may contain compressed data. HadoopReader detects this automatically and decompresses the data. Note that the supported compression codecs depend on the libraries you supply in the Hadoop connection.

    For technical details about Hadoop sequence files, see Apache Hadoop Wiki.

    Examples

    Reading data from local sequence files

    Read records from the Hadoop sequence file products.dat. The file uses ProductID as the key and ProductName as the value.

    Solution

    Create a valid Hadoop connection or use an existing one. See Hadoop connection.

    Set the Hadoop connection, File URL, Key field and Value field attributes.

    Attribute         | Value
    Hadoop connection | MyHadoopConnection
    File URL          | ${DATA_IN}/products.dat
    Key field         | ProductID
    Value field       | ProductName
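
    If you need a sample input file for this example, a products.dat sequence file can be created with the plain Hadoop API. The sketch below is only an illustration: the product records are made up, Text is assumed for both the key and value classes, and the output path is an example.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.SequenceFile;
        import org.apache.hadoop.io.Text;

        public class CreateProductsFile {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Written to the local file system; point this at your ${DATA_IN} directory.
                Path path = new Path("file:///tmp/products.dat");
                try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                        SequenceFile.Writer.file(path),
                        SequenceFile.Writer.keyClass(Text.class),
                        SequenceFile.Writer.valueClass(Text.class))) {
                    // Each append() call writes one ProductID -> ProductName pair.
                    writer.append(new Text("P-001"), new Text("Blue widget"));
                    writer.append(new Text("P-002"), new Text("Red gadget"));
                }
            }
        }

    Reading this file with the attribute values above produces one output record per pair, with the key stored in ProductID and the value in ProductName.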