public class GroupReadSupport extends ReadSupport<Group>
Nested classes inherited from class ReadSupport: ReadSupport.ReadContext
Fields inherited from class ReadSupport: PARQUET_READ_SCHEMA
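The inherited PARQUET_READ_SCHEMA field holds the configuration key under which a caller can request a projection (read) schema, which GroupReadSupport's init will honor. A minimal sketch; the message type and field name below are hypothetical, not taken from this page:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.hadoop.api.ReadSupport;

public class ProjectionExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Request only a subset of the file's columns using the standard
        // Parquet message-type syntax. The schema string here is a
        // hypothetical example.
        conf.set(ReadSupport.PARQUET_READ_SCHEMA,
                 "message projected { required binary name (UTF8); }");
        // GroupReadSupport.init(...) reads this key and returns a
        // ReadContext carrying the projected schema.
    }
}
```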
| Constructor and Description |
| --- |
| GroupReadSupport() |
| Modifier and Type | Method and Description |
| --- | --- |
| ReadSupport.ReadContext | init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema) Called in InputFormat.getSplits(org.apache.hadoop.mapreduce.JobContext) in the front end. |
| RecordMaterializer<Group> | prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext) Called in RecordReader.initialize(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) in the back end; the returned RecordMaterializer will materialize the records and add them to the destination. |
Methods inherited from class ReadSupport: getSchemaForRead, getSchemaForRead, init
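In typical use, GroupReadSupport is not called directly; it is passed to a reader, which invokes init and prepareForRead internally. A minimal sketch using ParquetReader (the file path is hypothetical and assumes a Parquet file exists there):

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class ReadGroupsExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to an existing Parquet file.
        Path file = new Path("/tmp/example.parquet");
        // The reader calls GroupReadSupport.init(...) to resolve the read
        // schema and prepareForRead(...) to build the record materializer.
        try (ParquetReader<Group> reader =
                 ParquetReader.builder(new GroupReadSupport(), file).build()) {
            Group g;
            while ((g = reader.read()) != null) {
                System.out.println(g);
            }
        }
    }
}
```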
init

public ReadSupport.ReadContext init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema)

Description copied from class: ReadSupport
Called in InputFormat.getSplits(org.apache.hadoop.mapreduce.JobContext) in the front end.

Overrides:
init in class ReadSupport<Group>
Parameters:
configuration - the job configuration
keyValueMetaData - the app-specific metadata from the file
fileSchema - the schema of the file

prepareForRead

public RecordMaterializer<Group> prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext)

Description copied from class: ReadSupport
Called in RecordReader.initialize(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) in the back end. The returned RecordMaterializer will materialize the records and add them to the destination.

Overrides:
prepareForRead in class ReadSupport<Group>
Parameters:
configuration - the job configuration
keyValueMetaData - the app-specific metadata from the file
fileSchema - the schema of the file
readContext - returned by the init method

Copyright © 2015. All rights reserved.
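The front-end/back-end split described above can be made concrete by subclassing. A hypothetical sketch (the class name and logging are illustrative, not part of this API) that reuses GroupReadSupport's behavior while observing both phases:

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.api.ReadSupport;
import org.apache.parquet.hadoop.example.GroupReadSupport;
import org.apache.parquet.io.api.RecordMaterializer;
import org.apache.parquet.schema.MessageType;

// Hypothetical subclass: keep GroupReadSupport's record materialization
// but log the schema negotiated in the front end.
public class LoggingGroupReadSupport extends GroupReadSupport {
    @Override
    public ReadSupport.ReadContext init(Configuration configuration,
                                        Map<String, String> keyValueMetaData,
                                        MessageType fileSchema) {
        // Front end: called during split planning to fix the read schema.
        ReadSupport.ReadContext ctx =
            super.init(configuration, keyValueMetaData, fileSchema);
        System.out.println("requested schema: " + ctx.getRequestedSchema());
        return ctx;
    }

    @Override
    public RecordMaterializer<Group> prepareForRead(Configuration configuration,
                                                    Map<String, String> keyValueMetaData,
                                                    MessageType fileSchema,
                                                    ReadSupport.ReadContext readContext) {
        // Back end: build the materializer that assembles Group records.
        return super.prepareForRead(configuration, keyValueMetaData,
                                    fileSchema, readContext);
    }
}
```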