The schema file and data file both reside on HDFS.
Hunk is able to read the data file and show the raw data, but it doesn't associate it with the schema. That means I would have to manually extract each field.
Is there a way to point Hunk to a schema so that it can understand the raw data better?
Hunk supports the following schema options: Hive schemas; structured files (Parquet, JSON, Avro, ORC, RC, Seq, TSV, CSV, etc.); and many different types of log files (just use one of the known sourcetypes).
If you can elaborate a bit, I can give it a shot.
In my case the schema looks like the following, and it is in a separate file on HDFS:
column1name partitionkey - - - - - long - "ends '\054'"
column2name partitionkey - - - - - integer - "ends '\054'"
The data is in another directory containing multiple bzip2 files.
What type of schema are you referring to? Are the data files plain text (compressed) files? Can you post a sample data record?
Each data file has multiple lines, each with 114 fields/columns.
The definition of each column/field is in the schema file, which has 114 lines, each defining one column/field:
column1name partitionkey - - - - - long - "ends '\054'"
column2name partitionkey - - - - - integer - "ends '\054'"
So, it seems like the file is a headerless CSV file, correct? (The octal escape '\054' in the schema is the ASCII code for a comma, so the fields are comma-delimited.) If so, you can use delimiter-based KV extraction and do something like this:
etc/apps/search/local/props.conf:

[source::/path/to/your/file]
REPORT-kv = my-delim-kv

etc/apps/search/local/transforms.conf:

[my-delim-kv]
FIELDS = <comma delimited list of field names>
DELIM = ,
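Filled in for your case, the transform might look like the following. The field names here are placeholders for illustration; the real FIELDS value would list all 114 column names from your schema file, in order:

etc/apps/search/local/transforms.conf:

[my-delim-kv]
FIELDS = column1name,column2name,column3name
DELIM = ,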
You can look at this answer for a similar issue.
I tried out the transforms.conf configuration and it seems to work properly. I guess I will have to manually create similar configurations for all the required files/schemas. Not the best approach, but at least it works with manual configuration.
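To cut down on that manual work, a small script could generate each transforms.conf stanza from its schema file. This is a rough sketch, assuming the schema file has been copied off HDFS first (e.g. with hdfs dfs -get) and that each line is whitespace-delimited with the column name as the first token, as in the samples above; the script name and stanza argument are made up for illustration:

#!/usr/bin/env python
# gen_transform.py: rough sketch to emit a transforms.conf stanza
# from a schema file. Assumes each non-empty schema line is
# whitespace-delimited with the column name as the first token.
import sys

def fields_from_schema(schema_path):
    # Collect the first token of every non-empty line as a field name.
    names = []
    with open(schema_path) as f:
        for line in f:
            line = line.strip()
            if line:
                names.append(line.split()[0])
    return names

def write_transform(stanza, names, out):
    # Emit a delimiter-based KV stanza matching the example above.
    out.write("[%s]\n" % stanza)
    out.write("FIELDS = %s\n" % ",".join(names))
    out.write("DELIM = ,\n")

if __name__ == "__main__":
    # Usage: python gen_transform.py schema.txt my-delim-kv >> transforms.conf
    schema_path, stanza = sys.argv[1], sys.argv[2]
    write_transform(stanza, fields_from_schema(schema_path), sys.stdout)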