<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>How to omit columns from CSV-style event input? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352307#M64583</link>
    <description>&lt;P&gt;I intend to import a CSV-style file into Splunk. The file has about 30 columns and about 120 million lines, and is about 150 GB in size. I only require a subset of its columns.&lt;/P&gt;

&lt;P&gt;The file's contents shall be imported as events, not as a CSV lookup file.&lt;/P&gt;

&lt;P&gt;For the sake of simplicity, assume the structure below:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;src,dest_ip,dest_port,dest_user,dest_zone
10.50.60.80,192.0.2.92,443,emily,Internet
10.50.60.53,203.0.113.12,389,brian,Intranet
10.33.118.40,198.51.100.65,80,john,Internet
...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Is there any way to exclude the "dest_user" column from the import?&lt;/P&gt;

&lt;P&gt;(Running the data through a sed/awk/perl script beforehand is certainly possible, but given the size of the file that would be computationally expensive. And since Splunk already extracts the field headers, excluding columns at import time appears to be the cleaner and more efficient approach. Furthermore, I will likely have to deal with similar files, with different field sets or column orders, in the future.)&lt;/P&gt;</description>
    <pubDate>Thu, 01 Feb 2018 00:11:33 GMT</pubDate>
    <dc:creator>ziq</dc:creator>
    <dc:date>2018-02-01T00:11:33Z</dc:date>
    <item>
      <title>How to omit columns from CSV-style event input?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352307#M64583</link>
      <description>&lt;P&gt;I intend to import a CSV-style file into Splunk. The file has about 30 columns and about 120 million lines, and is about 150 GB in size. I only require a subset of its columns.&lt;/P&gt;

&lt;P&gt;The file's contents shall be imported as events, not as a CSV lookup file.&lt;/P&gt;

&lt;P&gt;For the sake of simplicity, assume the structure below:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;src,dest_ip,dest_port,dest_user,dest_zone
10.50.60.80,192.0.2.92,443,emily,Internet
10.50.60.53,203.0.113.12,389,brian,Intranet
10.33.118.40,198.51.100.65,80,john,Internet
...
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Is there any way to exclude the "dest_user" column from the import?&lt;/P&gt;

&lt;P&gt;(Running the data through a sed/awk/perl script beforehand is certainly possible, but given the size of the file that would be computationally expensive. And since Splunk already extracts the field headers, excluding columns at import time appears to be the cleaner and more efficient approach. Furthermore, I will likely have to deal with similar files, with different field sets or column orders, in the future.)&lt;/P&gt;</description>
      <pubDate>Thu, 01 Feb 2018 00:11:33 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352307#M64583</guid>
      <dc:creator>ziq</dc:creator>
      <dc:date>2018-02-01T00:11:33Z</dc:date>
    </item>
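One index-time sketch of what the question asks for, assuming the file is indexed as regular line-oriented data (SEDCMD does not apply when structured parsing via INDEXED_EXTRACTIONS runs on a universal forwarder) and with "csv_firewall" as a hypothetical sourcetype name: a SEDCMD entry in props.conf rewrites _raw before indexing, so the fourth comma-separated field can be stripped positionally:

```ini
# props.conf -- "csv_firewall" is a hypothetical sourcetype name
[csv_firewall]
# Strip the 4th comma-separated field (dest_user), including its trailing
# comma, from _raw at index time; \1 keeps the first three fields intact.
SEDCMD-drop_dest_user = s/^(([^,]*,){3})[^,]*,/\1/
```

Note that this match is positional, so the pattern would need adjusting for files with a different column order.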
    <item>
      <title>Re: How to omit columns from CSV-style event input?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352308#M64584</link>
      <description>&lt;P&gt;Hi ziq,&lt;/P&gt;

&lt;P&gt;You can use a script to create a new CSV containing only the fields you need, and then index that file into Splunk.&lt;/P&gt;</description>
      <pubDate>Thu, 01 Feb 2018 05:22:11 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352308#M64584</guid>
      <dc:creator>p_gurav</dc:creator>
      <dc:date>2018-02-01T05:22:11Z</dc:date>
    </item>
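The script route suggested above can be sketched as a single awk pass that drops a column by header name rather than by position, so the same command also handles files with different column orders. The here-document stands in for the real input file, and the file name trimmed.csv is a placeholder:

```shell
# Drop the column named in "drop" (here dest_user), wherever it appears.
awk -F, -v OFS=, -v drop=dest_user '
NR == 1 { for (i = 1; i <= NF; i++) if ($i == drop) skip = i }
{
  # Re-join every field except the one at index "skip".
  out = ""
  for (i = 1; i <= NF; i++)
    if (i != skip) out = out (out == "" ? "" : OFS) $i
  print out
}' <<'EOF' > trimmed.csv
src,dest_ip,dest_port,dest_user,dest_zone
10.50.60.80,192.0.2.92,443,emily,Internet
10.50.60.53,203.0.113.12,389,brian,Intranet
EOF
```

This still costs one extra pass over the 150 GB file, but the column name is a parameter, so no per-file script changes are needed.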
    <item>
      <title>Re: How to omit columns from CSV-style event input?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352309#M64585</link>
      <description>&lt;P&gt;I'm aware of the script option and mentioned it in my question. But running the data through a script beforehand would be at least twice as computationally expensive. I also anticipate similar files with different field sets in the future, so I would need to touch or modify the script for each of those files.&lt;/P&gt;

&lt;BLOCKQUOTE&gt;
&lt;P&gt;(Running the data through a sed/awk/perl script beforehand is certainly possible, but given the size of the file this would be computationally expensive. And as Splunk already extracts the field headers, it appears to me that excluding columns from import would be the cleaner and more efficient approach. Furthermore, it is likely that I will have to deal with similar files (that have different fieldsets or column orders) in the future.)&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;</description>
      <pubDate>Fri, 02 Feb 2018 13:32:38 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/How-to-omit-columns-from-CSV-style-event-input/m-p/352309#M64585</guid>
      <dc:creator>ziq</dc:creator>
      <dc:date>2018-02-02T13:32:38Z</dc:date>
    </item>
  </channel>
</rss>

