Package | Description |
---|---|
parquet.avro | Provides classes to store Avro data in Parquet files. |
parquet.column.impl | |
parquet.example | |
parquet.example.data.simple | |
parquet.example.data.simple.convert | |
parquet.hadoop | Provides classes to use Parquet files in Hadoop MapReduce jobs. |
parquet.hadoop.api | APIs to integrate various type systems with Parquet. |
parquet.hadoop.example | |
parquet.hadoop.metadata | |
parquet.io | |
parquet.schema | |
parquet.tools.command | |
parquet.tools.read | |
parquet.tools.util | |
Methods in parquet.avro that return MessageType:

Modifier and Type | Method and Description |
---|---|
MessageType | AvroSchemaConverter.convert(org.apache.avro.Schema avroSchema) |
Methods in parquet.avro with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
org.apache.avro.Schema | AvroSchemaConverter.convert(MessageType parquetSchema) |
ReadSupport.ReadContext | AvroReadSupport.init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema) |
RecordMaterializer<T> | AvroReadSupport.prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext) |

Constructors in parquet.avro with parameters of type MessageType:

Constructor and Description |
---|
AvroWriteSupport(MessageType schema, org.apache.avro.Schema avroSchema) |
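AvroSchemaConverter translates schemas in both directions. A minimal sketch, assuming parquet-avro and avro are on the classpath; the User record is an invented example:

```java
import org.apache.avro.Schema;
import parquet.avro.AvroSchemaConverter;
import parquet.schema.MessageType;

public class SchemaRoundTrip {
    public static void main(String[] args) {
        // An invented Avro record schema with two fields.
        Schema avro = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"long\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}]}");

        AvroSchemaConverter converter = new AvroSchemaConverter();
        MessageType parquetSchema = converter.convert(avro);    // Avro -> Parquet
        Schema roundTripped = converter.convert(parquetSchema); // Parquet -> Avro

        System.out.println(parquetSchema);
    }
}
```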
Constructors in parquet.column.impl with parameters of type MessageType:

Constructor and Description |
---|
ColumnReadStoreImpl(PageReadStore pageReadStore, GroupConverter recordConverter, MessageType schema) |
Fields in parquet.example declared as MessageType:

Modifier and Type | Field and Description |
---|---|
static MessageType | Paper.schema |
static MessageType | Paper.schema2 |
static MessageType | Paper.schema3 |
Constructors in parquet.example with parameters of type MessageType:

Constructor and Description |
---|
DummyRecordConverter(MessageType schema) |
Constructors in parquet.example.data.simple with parameters of type MessageType:

Constructor and Description |
---|
SimpleGroupFactory(MessageType schema) |
Constructors in parquet.example.data.simple.convert with parameters of type MessageType:

Constructor and Description |
---|
GroupRecordConverter(MessageType schema) |
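SimpleGroupFactory builds in-memory Group records against a schema. A minimal sketch, assuming parquet-column is on the classpath; the User schema is an invented example:

```java
import parquet.example.data.Group;
import parquet.example.data.simple.SimpleGroupFactory;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

public class GroupExample {
    public static void main(String[] args) {
        // Parse an invented schema from its textual representation.
        MessageType schema = MessageTypeParser.parseMessageType(
            "message User { required int64 id; optional binary name (UTF8); }");

        // The factory produces Group instances that conform to the schema.
        SimpleGroupFactory factory = new SimpleGroupFactory(schema);
        Group user = factory.newGroup()
            .append("id", 1L)
            .append("name", "alice");

        System.out.println(user);
    }
}
```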
Constructors in parquet.hadoop with parameters of type MessageType:

Constructor and Description |
---|
ParquetFileWriter(org.apache.hadoop.conf.Configuration configuration, MessageType schema, org.apache.hadoop.fs.Path file) |
ParquetRecordWriter(ParquetFileWriter w, WriteSupport<T> writeSupport, MessageType schema, Map<String,String> extraMetaData, int blockSize, int pageSize, CodecFactory.BytesCompressor compressor, int dictionaryPageSize, boolean enableDictionary, boolean validating, ParquetProperties.WriterVersion writerVersion) |
Methods in parquet.hadoop.api that return MessageType:

Modifier and Type | Method and Description |
---|---|
MessageType | InitContext.getFileSchema() The union of all the schemas when reading multiple files. |
MessageType | ReadSupport.ReadContext.getRequestedSchema() |
MessageType | WriteSupport.WriteContext.getSchema() |
static MessageType | ReadSupport.getSchemaForRead(MessageType fileMessageType, MessageType projectedMessageType) |
static MessageType | ReadSupport.getSchemaForRead(MessageType fileMessageType, String partialReadSchemaString) Attempts to validate and construct a MessageType from a read projection schema. |
Methods in parquet.hadoop.api with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
static MessageType | ReadSupport.getSchemaForRead(MessageType fileMessageType, MessageType projectedMessageType) |
static MessageType | ReadSupport.getSchemaForRead(MessageType fileMessageType, String partialReadSchemaString) Attempts to validate and construct a MessageType from a read projection schema. |
ReadSupport.ReadContext | ReadSupport.init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema) Deprecated. Override ReadSupport.init(InitContext) instead. |
abstract RecordMaterializer<T> | ReadSupport.prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext) Called in RecordReader.initialize(org.apache.hadoop.mapreduce.InputSplit, org.apache.hadoop.mapreduce.TaskAttemptContext) in the back end; the returned RecordMaterializer will materialize the records and add them to the destination. |

Constructors in parquet.hadoop.api with parameters of type MessageType:

Constructor and Description |
---|
InitContext(org.apache.hadoop.conf.Configuration configuration, Map<String,Set<String>> keyValueMetadata, MessageType fileSchema) |
ReadContext(MessageType requestedSchema) |
ReadContext(MessageType requestedSchema, Map<String,String> readSupportMetadata) |
WriteContext(MessageType schema, Map<String,String> extraMetaData) |
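A ReadSupport implementation ties these pieces together: init chooses the requested schema and prepareForRead supplies the RecordMaterializer. A minimal sketch, assuming parquet-hadoop, parquet-column, and hadoop-client are on the classpath; it reuses the example GroupRecordConverter for materialization. This mirrors what GroupReadSupport does, but the class itself is illustrative, not library code:

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

import parquet.example.data.Group;
import parquet.example.data.simple.convert.GroupRecordConverter;
import parquet.hadoop.api.InitContext;
import parquet.hadoop.api.ReadSupport;
import parquet.io.api.RecordMaterializer;
import parquet.schema.MessageType;

// Hypothetical ReadSupport that reads every column as an example Group.
public class WholeFileReadSupport extends ReadSupport<Group> {

    @Override
    public ReadContext init(InitContext context) {
        // Request the full file schema (no column projection).
        return new ReadContext(context.getFileSchema());
    }

    @Override
    public RecordMaterializer<Group> prepareForRead(Configuration configuration,
            Map<String, String> keyValueMetaData,
            MessageType fileSchema,
            ReadContext readContext) {
        // Materialize records against the schema chosen in init.
        return new GroupRecordConverter(readContext.getRequestedSchema());
    }
}
```

To project a subset of columns instead, init would return a ReadContext built from ReadSupport.getSchemaForRead(fileSchema, partialReadSchemaString).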
Methods in parquet.hadoop.example that return MessageType:

Modifier and Type | Method and Description |
---|---|
static MessageType | GroupWriteSupport.getSchema(org.apache.hadoop.conf.Configuration configuration) |
static MessageType | ExampleOutputFormat.getSchema(org.apache.hadoop.mapreduce.Job job) Retrieves the schema from the job configuration. |

Methods in parquet.hadoop.example with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
ReadSupport.ReadContext | GroupReadSupport.init(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema) |
RecordMaterializer<Group> | GroupReadSupport.prepareForRead(org.apache.hadoop.conf.Configuration configuration, Map<String,String> keyValueMetaData, MessageType fileSchema, ReadSupport.ReadContext readContext) |
static void | ExampleOutputFormat.setSchema(org.apache.hadoop.mapreduce.Job job, MessageType schema) Sets the schema to be written in the job configuration. |
static void | GroupWriteSupport.setSchema(MessageType schema, org.apache.hadoop.conf.Configuration configuration) |
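The setSchema/getSchema pair stores the write schema in the Hadoop configuration so that GroupWriteSupport can find it at task time. A minimal sketch, assuming parquet-hadoop and hadoop-client are on the classpath; the Event schema is an invented example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

import parquet.hadoop.example.ExampleOutputFormat;
import parquet.hadoop.example.GroupWriteSupport;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

public class JobSchemaSetup {
    public static void main(String[] args) throws Exception {
        MessageType schema = MessageTypeParser.parseMessageType(
            "message Event { required int64 ts; required binary payload; }");

        Job job = Job.getInstance(new Configuration());
        // Store the schema in the job configuration for the output format.
        ExampleOutputFormat.setSchema(job, schema);

        // At task time, GroupWriteSupport reads it back from the same conf.
        MessageType fromConf = GroupWriteSupport.getSchema(job.getConfiguration());
        System.out.println(fromConf);
    }
}
```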
Methods in parquet.hadoop.metadata that return MessageType:

Modifier and Type | Method and Description |
---|---|
MessageType | FileMetaData.getSchema() |
MessageType | GlobalMetaData.getSchema() |

Constructors in parquet.hadoop.metadata with parameters of type MessageType:

Constructor and Description |
---|
FileMetaData(MessageType schema, Map<String,String> keyValueMetaData, String createdBy) |
GlobalMetaData(MessageType schema, Map<String,Set<String>> keyValueMetaData, Set<String> createdBy) |
Methods in parquet.io that return MessageType:

Modifier and Type | Method and Description |
---|---|
MessageType | MessageColumnIO.getType() |

Methods in parquet.io with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
MessageColumnIO | ColumnIOFactory.getColumnIO(MessageType schema) |
MessageColumnIO | ColumnIOFactory.getColumnIO(MessageType requestedSchema, MessageType fileSchema) |
void | ColumnIOFactory.ColumnIOCreatorVisitor.visit(MessageType messageType) |

Constructors in parquet.io with parameters of type MessageType:

Constructor and Description |
---|
ColumnIOCreatorVisitor(boolean validating, MessageType requestedSchema) |
ValidatingRecordConsumer(RecordConsumer delegate, MessageType schema) |
Methods in parquet.schema that return MessageType:

Modifier and Type | Method and Description |
---|---|
MessageType | Types.MessageTypeBuilder.named(String name) Builds and returns the MessageType configured by this builder. |
static MessageType | MessageTypeParser.parseMessageType(String input) |
MessageType | MessageType.union(MessageType toMerge) |
Methods in parquet.schema with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
T | TypeConverter.convertMessageType(MessageType messageType, List<T> children) |
MessageType | MessageType.union(MessageType toMerge) |
void | TypeVisitor.visit(MessageType messageType) |
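Types.buildMessage, MessageTypeParser.parseMessageType, and MessageType.union are the usual ways to obtain a schema. A minimal sketch, assuming parquet-column is on the classpath; the User schema is an invented example:

```java
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;
import parquet.schema.OriginalType;
import parquet.schema.PrimitiveType.PrimitiveTypeName;
import parquet.schema.Types;

public class SchemaBuilding {
    public static void main(String[] args) {
        // Build a schema programmatically with the fluent builder;
        // the final named(...) call produces the MessageType.
        MessageType built = Types.buildMessage()
            .required(PrimitiveTypeName.INT64).named("id")
            .optional(PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named("name")
            .named("User");

        // Parse the equivalent schema from its textual representation.
        MessageType parsed = MessageTypeParser.parseMessageType(
            "message User { required int64 id; optional binary name (UTF8); }");

        // union merges two compatible schemas into one.
        MessageType merged = built.union(parsed);
        System.out.println(merged);
    }
}
```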
Methods in parquet.tools.command with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
static void | DumpCommand.dump(PrettyPrintWriter out, ParquetMetadata meta, MessageType schema, org.apache.hadoop.fs.Path inpath, boolean showmd, boolean showdt, Set<String> showColumns) |
Methods in parquet.tools.read with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
RecordMaterializer<SimpleRecord> | SimpleReadSupport.prepareForRead(org.apache.hadoop.conf.Configuration conf, Map<String,String> metaData, MessageType schema, ReadSupport.ReadContext context) |

Constructors in parquet.tools.read with parameters of type MessageType:

Constructor and Description |
---|
SimpleRecordMaterializer(MessageType schema) |
Methods in parquet.tools.util with parameters of type MessageType:

Modifier and Type | Method and Description |
---|---|
static void | MetadataUtils.showDetails(PrettyPrintWriter out, MessageType type) |
Copyright © 2015. All rights reserved.