<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: What are the best practices for uploading CSV files in merging fields from more than two sources? in Getting Data In</title>
    <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378935#M68512</link>
    <description>&lt;P&gt;Hi @MuS&lt;BR /&gt;
Please see below:&lt;/P&gt;

&lt;P&gt;Data, let's say, is as below (for each file: the fields, the operation performed, and the output fields):&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File1.csv :&lt;/STRONG&gt; &lt;BR /&gt;
Fields:  Field1, Field2 Field3 Field3&lt;BR /&gt;&lt;BR /&gt;
Operation: Perform analysis on field2 wrt field1 to get results as compliant or non compliant&lt;BR /&gt;
O/p Fields: &lt;EM&gt;FIELD1, AnalysisField1&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File2.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 FA FB FC FD FE FG&lt;BR /&gt;&lt;BR /&gt;
Operation: -Combine Fields from File1 and File2&lt;BR /&gt;&lt;BR /&gt;
O/p Fields: &lt;EM&gt;Field1, Analysis Field1, FA FB FC FD FE FG&lt;/EM&gt;    &lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File3.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 fa fb fc fd fe fg fh&lt;BR /&gt;&lt;BR /&gt;
Operation 1: -Analyse based on fa fb fc to get results as compliant or non compliant&lt;BR /&gt;&lt;BR /&gt;
O/P Fields: Field1, Analysis Field3, fd fe fg fh&lt;BR /&gt;&lt;BR /&gt;
Operation 2: -Combine fields from File1.csv/File2.csv and File3.csv&lt;BR /&gt;&lt;BR /&gt;
                       -Analyse the data to generate a combined compliance/non-compliance field&lt;BR /&gt;
O/P Fields:  &lt;EM&gt;Field 1, JointAnalysisField1UField3, FA FB FC FD FE FG fd fe fg fh&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File4.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 , f1 f2 f3 f4 f5 f6&lt;BR /&gt;&lt;BR /&gt;
Operation 1: -Analyse File4.csv based on fields f1 and f2 to get a compliance/non-compliance result field&lt;BR /&gt;&lt;BR /&gt;
O/P Fields:   Field 1, Analysis Field4,  f3 f4 f5 f6&lt;/P&gt;

&lt;P&gt;O/P Fields: -Combine the analysed fields “JointAnalysisField1UField3” &amp;amp; “Analysis Field4” to generate the final analysis result of COMPLIANCE/NONCOMPLIANCE&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Search Query&lt;/STRONG&gt;&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index= mine source=“File4.csv”

| table&amp;nbsp; Field1 , f1 f2 f3 f4 f5 f6

| eval “ Analysis Field4”=case(like(f1,”Y”), "Compliant", like(f2,”Y”), "NonCompliant")

    |join  Field 1, Analysis Field4,  f3 f4 f5 f6 type=outer 
        [ 
        search (source=File1.csv or source=File2.csv) index=mine

        |fields+Field1, Field2 Field3 Field3 FA FB FC FD FE FG

        | eval AnalysisField1= case(like(field2,”Y”), "Compliant", like(field2,”N”), "NonCompliant")

        | join Field 1 type=outer 
            [
            search source=File3.csv index=mine

            |eval Analysis Field3= case (like(fa,”Y”) AND (like(fb,”Y”) OR like(fb,”N”) ), "Compliant", like(fa,”N”) AND  (like(fb,”Y”) OR like(fb,”N”) ),"NonCompliant”)
            ]

        | fileds Field1, Analysis Field1, FA FB FC FD FE FG Analysis Field3, fd fe fg fh

        | eval  JointAnalysisField1UField3=case (like(“Analysis Field1”,”Compliant”) AND (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ), 
        "Compliant”, like(“Analysis Field1”,”NonCompliant”) AND  (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ),"NonCompliant”) 
        ]

| fields  Field1, JointAnalysisField1UField3, FA FB FC FD FE FG  fd fe fg fh Analysis Field4,  f3 f4 f5 f6

| eval finalanalysis= if ( match( JointAnalysisField1UField3, “Compliant”) AND match (“Analysis Field1”,”Compliant”), “COMPLIANT”, 
                if(match( JointAnalysisField1UField3, “NonCompliant”) AND match (“Analysis Field1”,”NonCompliant”), “NONCOMPLIANT”, “UNDEFINED))
&lt;/CODE&gt;&lt;/PRE&gt;


&lt;P&gt;&lt;STRONG&gt;Note: The evals represent analysis steps; for brevity, I have reduced long analysis statements to a very simple form.&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;What I observed: &lt;BR /&gt;
1. The join should go from the log file with the higher number of events to the log file with fewer events.&lt;/P&gt;

&lt;P&gt;My question: &lt;BR /&gt;
1. Can I improve the search query to make better dashboards?&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Also this is just a POC, I am expected to increase the number of parameters in the report by including more logs from various other sources. Can I continue to use this approach of joining output from various search queries?&lt;/LI&gt;
&lt;/OL&gt;</description>
    <pubDate>Mon, 30 Jul 2018 15:31:20 GMT</pubDate>
    <dc:creator>vikfnu</dc:creator>
    <dc:date>2018-07-30T15:31:20Z</dc:date>
    <item>
      <title>What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378931#M68508</link>
      <description>&lt;P&gt;Hi&lt;BR /&gt;
I have a question:&lt;BR /&gt;
I have 3-4 CSV files from different IT reports that I need to analyze and prepare a combined dashboard.&lt;BR /&gt;
Now my question is :&lt;BR /&gt;
1. Should I do pre-analysis on some files before uploading to Splunk,&lt;BR /&gt;
2. or should I upload all files and then use Splunk search commands to do the analysis?&lt;/P&gt;

&lt;P&gt;Pre-analysis means the following:&lt;BR /&gt;
For the AV update file with computer name, virus definition installed, and date of scan - pre-analysis involves categorizing systems as compliant or non-compliant based on the org virus-scan policy.&lt;BR /&gt;
For the patch-installed report with the computer name, patch name, patch required, and patch-installed status - pre-analysis involves categorizing each system for each patch as compliant or non-compliant based on org policy.&lt;BR /&gt;
For the software-installation report with the computer name, software name, version, and year - pre-analysis involves categorizing each software for each system as compliant or non-compliant based on org policy.&lt;/P&gt;

&lt;P&gt;Further, I need to merge this pre-analysis and categorize systems as high compliant, medium compliant, and warning based on the above 3 categorizations.&lt;/P&gt;

&lt;P&gt;Problems I am facing are as below:&lt;BR /&gt;
1. I upload all CSVs directly into Splunk and then do the necessary categorization on the individual CSV files, but when I try to merge the output fields (ComputerName, Categoristion_ from each file) in Splunk, I am unable to generate any data.&lt;BR /&gt;
2. However, if I perform the pre-analysis of 2 files, let's say AV and Patch, in Python and merge them in Python, and then upload the resulting CSV along with the 3rd CSV, I am able to generate the report for the final categorization.&lt;/P&gt;
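&lt;P&gt;To make the Python route concrete, the pre-merge I describe is along these lines (a sketch only - the file contents and field names here are made-up placeholders, not my real reports):&lt;/P&gt;

```python
import csv
import io

# Hypothetical stand-ins for two of the reports; real files and fields differ.
av_csv = "ComputerName,VirusDefn\nPC1,2018-07-01\nPC2,2018-05-01\n"
patch_csv = "ComputerName,PatchInstalled\nPC1,Y\nPC2,N\n"

def rows(text):
    # Parse a CSV string into a list of {header: value} dicts.
    return list(csv.DictReader(io.StringIO(text)))

# Merge the two reports on ComputerName, the shared key: every row's fields
# are folded into one combined record per computer.
merged = {}
for row in rows(av_csv) + rows(patch_csv):
    merged.setdefault(row["ComputerName"], {}).update(row)

print(merged["PC1"])  # fields from both reports, keyed by ComputerName
```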

&lt;P&gt;However, I am expected to add more parameters going forward to generate more complete categorization for dashboards.&lt;/P&gt;

&lt;P&gt;Please, anyone, help me; I can share my code as well if it's required.&lt;/P&gt;
      <pubDate>Sun, 29 Jul 2018 18:07:13 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378931#M68508</guid>
      <dc:creator>vikfnu</dc:creator>
      <dc:date>2018-07-29T18:07:13Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378932#M68509</link>
      <description>&lt;P&gt;Hi vikfnu,&lt;/P&gt;

&lt;P&gt;still not much information in here, and also no hint of the search you're using ..... so I'm asking the magic glass ball and trying to answer what I reckon you might want to do ....&lt;/P&gt;

&lt;P&gt;One thing first: upload the data as is into Splunk, no need to do any &lt;EM&gt;pre-analysis&lt;/EM&gt; of any kind.&lt;/P&gt;

&lt;P&gt;Upload your csv files (let's call them &lt;CODE&gt;1.csv&lt;/CODE&gt; to &lt;CODE&gt;4.csv&lt;/CODE&gt;) into an index, let's say it is called &lt;CODE&gt;csv&lt;/CODE&gt;, and create a search that will return all the events; don't care about grouping or anything else for now - just get the events:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=csv source=1.csv OR source=2.csv OR source=3.csv OR source=4.csv
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;this will just return all the events. &lt;/P&gt;

&lt;P&gt;Now, we need to find some field that can be used to correlate the events; you mentioned &lt;CODE&gt;ComputerName&lt;/CODE&gt; and &lt;CODE&gt;Categoristion_&lt;/CODE&gt; - I assume the latter has something after the &lt;CODE&gt;_&lt;/CODE&gt;, so we ignore this one for now.&lt;BR /&gt;
By adding a &lt;CODE&gt;case()&lt;/CODE&gt; statement to the SPL we can get all variations of &lt;CODE&gt;ComputerName&lt;/CODE&gt; from the events and use it:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| eval ComputerName = case(isnotnull(ComputerName), ComputerName, isnotnull(computername), computername, isnotnull(computerName), computerName, 1=1, "unknown")
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;this list needs to be extended so it matches all possible variations of any computer name field available in the events, and the last &lt;CODE&gt;1=1&lt;/CODE&gt; is a catch-all and will show you unhandled cases &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;
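&lt;P&gt;Outside Splunk, the coalescing idea behind that &lt;CODE&gt;case()&lt;/CODE&gt; statement can be sketched like this (made-up variants and events, just to show the shape):&lt;/P&gt;

```python
# Sketch of what the case() statement does: take the first non-null spelling
# of the computer-name field; the final fallback mirrors the 1=1 catch-all.
VARIANTS = ["ComputerName", "computername", "computerName"]  # extend as needed

def coalesce_name(event):
    for key in VARIANTS:
        if event.get(key) is not None:
            return event[key]
    return "unknown"  # unhandled case, like the 1=1 branch in case()

print(coalesce_name({"computername": "PC7"}))  # -> PC7
print(coalesce_name({"host": "PC9"}))          # -> unknown
```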

&lt;P&gt;After that step you can correlate the events using a &lt;CODE&gt;stats&lt;/CODE&gt; on &lt;CODE&gt;ComputerName&lt;/CODE&gt; and &lt;CODE&gt;_time&lt;/CODE&gt;:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| stats count values(*) AS * by ComputerName _time
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Once this is done, you can add more SPL to munch the data in every possible way until you get your expected result. So here is everything from above as one SPL statement:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt; index=csv source=1.csv OR source=2.csv OR source=3.csv OR source=4.csv
| eval ComputerName = case(isnotnull(ComputerName), ComputerName, isnotnull(computername), computername, isnotnull(computerName), computerName, 1=1, "unknown")
| stats count values(*) AS * by ComputerName _time
&lt;/CODE&gt;&lt;/PRE&gt;
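&lt;P&gt;If it helps to see what the stats step does, here is a rough Python analogue of &lt;CODE&gt;stats values(*) AS * by ComputerName&lt;/CODE&gt; (ignoring &lt;CODE&gt;_time&lt;/CODE&gt; and &lt;CODE&gt;count&lt;/CODE&gt; for simplicity; the events are made up):&lt;/P&gt;

```python
from collections import defaultdict

# Events as they might look after the ComputerName eval; each dict comes
# from a different source csv but shares the ComputerName key.
events = [
    {"ComputerName": "PC1", "VirusDefn": "2018-07-01"},
    {"ComputerName": "PC1", "PatchInstalled": "Y"},
    {"ComputerName": "PC2", "PatchInstalled": "N"},
]

# Rough analogue of `stats values(*) AS * by ComputerName`: gather every
# value seen for each field into one result row per ComputerName.
grouped = defaultdict(dict)
for ev in events:
    row = grouped[ev["ComputerName"]]
    for field, value in ev.items():
        row.setdefault(field, set()).add(value)

print(grouped["PC1"])  # fields from both source files, correlated without join
```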

&lt;P&gt;If this is not what you are after, then I need to bring my magic glass ball into service, or you need to provide more details on the events (anonymise any sensitive data!) and the SPL you used.&lt;/P&gt;

&lt;P&gt;Hope this helps ...&lt;/P&gt;

&lt;P&gt;cheers, MuS&lt;/P&gt;</description>
      <pubDate>Sun, 29 Jul 2018 20:15:04 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378932#M68509</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2018-07-29T20:15:04Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378933#M68510</link>
      <description>&lt;P&gt;Hi MuS&lt;BR /&gt;
The situation has been sorted. But I am detailing out the case for your ref below. Please do tell me if my final approach is good or can be improved further.&lt;BR /&gt;
I have 4 files. AVstatus.csv, SystemInfo.csv, PatchStatus.csv, SWStatus.csv&lt;BR /&gt;
Fields as below:&lt;BR /&gt;
AV Status.csv: computer name, virus defn, lastscandate, &lt;BR /&gt;
Systeminfo.csv: computer name, user, div, location, system model, serialnumber&lt;BR /&gt;
PatchStatus.csv: Computer name, patchname, Updateclassification, installed, required, dateposted&lt;BR /&gt;
SWstatus.csv: computername, SW, version, year,&lt;/P&gt;

&lt;P&gt;I needed to perform analysis on each file individually and calculate categorisation for AV, Patch and SW.&lt;/P&gt;

&lt;P&gt;using the available info on splunk websites and other sources, I was able to perform the analysis on each individual file and get final fields.&lt;/P&gt;

&lt;P&gt;But I was unable to merge the same.&lt;/P&gt;

&lt;P&gt;After going through &lt;BR /&gt;
below answers:&lt;BR /&gt;
&lt;A href="https://answers.splunk.com/answers/521078/joining-fields-from-three-different-indexes-and-so.html"&gt;https://answers.splunk.com/answers/521078/joining-fields-from-three-different-indexes-and-so.html&lt;/A&gt;&lt;BR /&gt;
&lt;A href="https://answers.splunk.com/answers/434939/joining-multiple-fields-of-two-searches-together-o.html"&gt;https://answers.splunk.com/answers/434939/joining-multiple-fields-of-two-searches-together-o.html&lt;/A&gt; &lt;BR /&gt;
&lt;A href="https://answers.splunk.com/answers/129424/how-to-compare-fields-over-multiple-sourcetypes-without-join-append-or-use-of-subsearches.html"&gt;https://answers.splunk.com/answers/129424/how-to-compare-fields-over-multiple-sourcetypes-without-join-append-or-use-of-subsearches.html&lt;/A&gt;&lt;BR /&gt;
and&lt;BR /&gt;
&lt;A href="https://answers.splunk.com/answers/129424/how-to-compare-fields-over-multiple-sourcetypes-without-join-append-or-use-of-subsearches.html"&gt;https://answers.splunk.com/answers/129424/how-to-compare-fields-over-multiple-sourcetypes-without-join-append-or-use-of-subsearches.html&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;I tried doing the merge using join, append and appendcols, but nothing was working.&lt;BR /&gt;
Finally I figured out that to join two searches using ComputerName as the primary key, I should put the file with the smaller number of rows in the subsearch. Continuing thus, I have been able to get the final result.&lt;/P&gt;

&lt;P&gt;The search is finally as below:&lt;/P&gt;

&lt;P&gt;index= mine source="SWstatus.csv"&lt;BR /&gt;
| rename  as field&lt;BR /&gt;
|table &lt;BR /&gt;
| eval SWcompliance1=case(like(sw,"Y"), "Compliant", like(sw,"n"), "NonCompliant")&lt;BR /&gt;
|join computername swcompliance type=outer [search (source=2.csv or source=3.csv) index mine&lt;BR /&gt;
|fields+computername, div,location,user,model,virusdefn, lastscand&lt;BR /&gt;
|&lt;/P&gt;</description>
      <pubDate>Sun, 29 Jul 2018 23:23:12 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378933#M68510</guid>
      <dc:creator>vikfnu</dc:creator>
      <dc:date>2018-07-29T23:23:12Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378934#M68511</link>
      <description>&lt;P&gt;Can you please post the search again, it looks like some parts are missing. Also please use either the &lt;CODE&gt;101 010&lt;/CODE&gt; icon to mark it to be code or select the search and press &lt;CODE&gt;Ctrl + K&lt;/CODE&gt; to mark it as code.&lt;/P&gt;

&lt;P&gt;cheers, MuS&lt;/P&gt;</description>
      <pubDate>Sun, 29 Jul 2018 23:47:48 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378934#M68511</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2018-07-29T23:47:48Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378935#M68512</link>
      <description>&lt;P&gt;Hi @MuS&lt;BR /&gt;
Please see below:&lt;/P&gt;

&lt;P&gt;Data, let's say, is as below (for each file: the fields, the operation performed, and the output fields):&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File1.csv :&lt;/STRONG&gt; &lt;BR /&gt;
Fields:  Field1, Field2 Field3 Field3&lt;BR /&gt;&lt;BR /&gt;
Operation: Perform analysis on field2 wrt field1 to get results as compliant or non compliant&lt;BR /&gt;
O/p Fields: &lt;EM&gt;FIELD1, AnalysisField1&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File2.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 FA FB FC FD FE FG&lt;BR /&gt;&lt;BR /&gt;
Operation: -Combine Fields from File1 and File2&lt;BR /&gt;&lt;BR /&gt;
O/p Fields: &lt;EM&gt;Field1, Analysis Field1, FA FB FC FD FE FG&lt;/EM&gt;    &lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File3.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 fa fb fc fd fe fg fh&lt;BR /&gt;&lt;BR /&gt;
Operation 1: -Analyse based on fa fb fc to get results as compliant or non compliant&lt;BR /&gt;&lt;BR /&gt;
O/P Fields: Field1, Analysis Field3, fd fe fg fh&lt;BR /&gt;&lt;BR /&gt;
Operation 2: -Combine fields from File1.csv/File2.csv and File3.csv&lt;BR /&gt;&lt;BR /&gt;
                       -Analyse the data to generate a combined compliance/non-compliance field&lt;BR /&gt;
O/P Fields:  &lt;EM&gt;Field 1, JointAnalysisField1UField3, FA FB FC FD FE FG fd fe fg fh&lt;/EM&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;File4.csv&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;
Fields: Field1 , f1 f2 f3 f4 f5 f6&lt;BR /&gt;&lt;BR /&gt;
Operation 1: -Analyse File4.csv based on fields f1 and f2 to get a compliance/non-compliance result field&lt;BR /&gt;&lt;BR /&gt;
O/P Fields:   Field 1, Analysis Field4,  f3 f4 f5 f6&lt;/P&gt;

&lt;P&gt;O/P Fields: -Combine the analysed fields “JointAnalysisField1UField3” &amp;amp; “Analysis Field4” to generate the final analysis result of COMPLIANCE/NONCOMPLIANCE&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;Search Query&lt;/STRONG&gt;&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index= mine source=“File4.csv”

| table&amp;nbsp; Field1 , f1 f2 f3 f4 f5 f6

| eval “ Analysis Field4”=case(like(f1,”Y”), "Compliant", like(f2,”Y”), "NonCompliant")

    |join  Field 1, Analysis Field4,  f3 f4 f5 f6 type=outer 
        [ 
        search (source=File1.csv or source=File2.csv) index=mine

        |fields+Field1, Field2 Field3 Field3 FA FB FC FD FE FG

        | eval AnalysisField1= case(like(field2,”Y”), "Compliant", like(field2,”N”), "NonCompliant")

        | join Field 1 type=outer 
            [
            search source=File3.csv index=mine

            |eval Analysis Field3= case (like(fa,”Y”) AND (like(fb,”Y”) OR like(fb,”N”) ), "Compliant", like(fa,”N”) AND  (like(fb,”Y”) OR like(fb,”N”) ),"NonCompliant”)
            ]

        | fileds Field1, Analysis Field1, FA FB FC FD FE FG Analysis Field3, fd fe fg fh

        | eval  JointAnalysisField1UField3=case (like(“Analysis Field1”,”Compliant”) AND (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ), 
        "Compliant”, like(“Analysis Field1”,”NonCompliant”) AND  (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ),"NonCompliant”) 
        ]

| fields  Field1, JointAnalysisField1UField3, FA FB FC FD FE FG  fd fe fg fh Analysis Field4,  f3 f4 f5 f6

| eval finalanalysis= if ( match( JointAnalysisField1UField3, “Compliant”) AND match (“Analysis Field1”,”Compliant”), “COMPLIANT”, 
                if(match( JointAnalysisField1UField3, “NonCompliant”) AND match (“Analysis Field1”,”NonCompliant”), “NONCOMPLIANT”, “UNDEFINED))
&lt;/CODE&gt;&lt;/PRE&gt;


&lt;P&gt;&lt;STRONG&gt;Note: The evals represent analysis steps; for brevity, I have reduced long analysis statements to a very simple form.&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;What I observed: &lt;BR /&gt;
1. The join should go from the log file with the higher number of events to the log file with fewer events.&lt;/P&gt;

&lt;P&gt;My question: &lt;BR /&gt;
1. Can I improve the search query to make better dashboards?&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;Also this is just a POC, I am expected to increase the number of parameters in the report by including more logs from various other sources. Can I continue to use this approach of joining output from various search queries?&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Mon, 30 Jul 2018 15:31:20 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378935#M68512</guid>
      <dc:creator>vikfnu</dc:creator>
      <dc:date>2018-07-30T15:31:20Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378936#M68513</link>
      <description>&lt;P&gt;Data lets say:&lt;BR /&gt;
Files                   Fields              ExpectedOutput.                                                 OUTPUT FIELDS&lt;BR /&gt;&lt;BR /&gt;
File1.csv”        Field1, Field2 Field3 Field3    -Perform analysis on field2 wrt filed1 to                         FIELD1, AnalysisField1&lt;BR /&gt;
                                get results as compliant or non compliant&lt;/P&gt;

&lt;P&gt;File2.csv       Field1 FA FB FC FD FE FG    -Combine Fields from File1 and File2                         Field1, Analysis Field1, FA FB FC FD FE FG&lt;BR /&gt;&lt;BR /&gt;
Filre3.csv      Field1 fa fb fc fd fe fg fh     -Analyse based on fa fb fc to                                       Field1, Analysis Field3, fd fe fg fh&lt;BR /&gt;
                                get results as compliant or non compliant                         &lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;                            -Combine Fields from Fiel1.csv /File2.csv                   Field 1, JointAnalysisField1UField3, FA FB FC FD FE FG fd fe fg fh
                            and File 3.csv
                            -Analyse the data to generated a combined 
                            compliance /non compliance field                    
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;File4.csv       Field1 , f1 f2 f3 f4 f5 f6      -Analyse File4.csv based on field f1 f2 and get            Field 1, Analysis Field4,  f3 f4 f5 f6&lt;BR /&gt;
                                compliance / non compliant result field&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;                            -Combine the Analysed fields “JointAnalysisField1UField3”
                            &amp;amp; “ Analysis Field4” to generate the final analysis result of 
                            COMPLIANCE/NONCOMPLIANCE
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Search Query&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index= mine source=“File4.csv”

| table Field1 , f1 f2 f3 f4 f5 f6

| eval “ Analysis Field4”=case(like(f1,”Y”), "Compliant", like(f2,”Y”), "NonCompliant")
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;PRE&gt;&lt;CODE&gt;|join  Field 1, Analysis Field4,  f3 f4 f5 f6 type=outer 
    [ 
    search (source=File1.csv or source=File2.csv) index=mine

    |fields+Field1, Field2 Field3 Field3 FA FB FC FD FE FG

    | eval AnalysisField1= case(like(field2,”Y”), "Compliant", like(field2,”N”), "NonCompliant")

    | join Field 1 type=outer 
        [
        search source=File3.csv index=mine

        |eval Analysis Field3= case (like(fa,”Y”) AND (like(fb,”Y”) OR like(fb,”N”) ), "Compliant", like(fa,”N”) AND  (like(fb,”Y”) OR like(fb,”N”) ),"NonCompliant”)
        ]

    | fileds Field1, Analysis Field1, FA FB FC FD FE FG Analysis Field3, fd fe fg fh

    | eval  JointAnalysisField1UField3=case (like(“Analysis Field1”,”Compliant”) AND (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ), 
    "Compliant”, like(“Analysis Field1”,”NonCompliant”) AND  (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ),"NonCompliant”) 
    ]
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;PRE&gt;&lt;CODE&gt;| fields  Field1, JointAnalysisField1UField3, FA FB FC FD FE FG  fd fe fg fh Analysis Field4,  f3 f4 f5 f6

| eval finalanalysis= if ( match( JointAnalysisField1UField3, “Compliant”) AND match (“Analysis Field1”,”Compliant”), “COMPLIANT”, 
                if(match( JointAnalysisField1UField3, “NonCompliant”) AND match (“Analysis Field1”,”NonCompliant”), “NONCOMPLIANT”, “UNDEFINED))
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;Note: The evals represent analysis steps; for brevity, I have reduced long analysis statements to a very simple form.&lt;/P&gt;

&lt;P&gt;What I observed: &lt;BR /&gt;
1. The join should go from the log file with the higher number of events to the log file with fewer events.&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;&lt;P&gt;My question: can I improve the search query to make better dashboards?&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Also, this is just a POC; I am expected to increase the number of parameters in the report by including more logs from various other sources.&lt;BR /&gt;
Can I continue to use this approach?&lt;/P&gt;&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Tue, 31 Jul 2018 00:19:50 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378936#M68513</guid>
      <dc:creator>vikfnu</dc:creator>
      <dc:date>2018-07-31T00:19:50Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378937#M68514</link>
      <description>&lt;P&gt;Hi MuS &lt;BR /&gt;
I have been trying to write the comment and reply but I get a message that it will be moderated and then updated, but even after 24 hours its not appearing&lt;/P&gt;

&lt;P&gt;Data, let's say, is as below (for each file: the fields, the operation performed, and the output fields):&lt;/P&gt;

&lt;P&gt;File1.csv: Field1, Field2 Field3 Field3 - Perform analysis on Field2 wrt Field1 to get results as compliant or non-compliant. Output fields: FIELD1, AnalysisField1&lt;/P&gt;

&lt;P&gt;File2.csv: Field1 FA FB FC FD FE FG - Combine fields from File1 and File2. Output fields: Field1, Analysis Field1, FA FB FC FD FE FG&lt;/P&gt;

&lt;P&gt;File3.csv: Field1 fa fb fc fd fe fg fh - Analyse based on fa fb fc to get results as compliant or non-compliant (output fields: Field1, Analysis Field3, fd fe fg fh); then combine fields from File1.csv/File2.csv and File3.csv and analyse the data to generate a combined compliance/non-compliance field. Output fields: Field 1, JointAnalysisField1UField3, FA FB FC FD FE FG fd fe fg fh&lt;/P&gt;

&lt;P&gt;File4.csv: Field1, f1 f2 f3 f4 f5 f6 - Analyse File4.csv based on fields f1 and f2 to get a compliance/non-compliance result field (output fields: Field 1, Analysis Field4, f3 f4 f5 f6); then combine the analysed fields “JointAnalysisField1UField3” &amp;amp; “Analysis Field4” to generate the final analysis result of COMPLIANCE/NONCOMPLIANCE.&lt;/P&gt;

&lt;P&gt;Search Query&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index= mine source=“File4.csv”

| table&amp;nbsp; Field1 , f1 f2 f3 f4 f5 f6

| eval “ Analysis Field4”=case(like(f1,”Y”), "Compliant", like(f2,”Y”), "NonCompliant")

    |join  Field 1, Analysis Field4,  f3 f4 f5 f6 type=outer 
        [ 
        search (source=File1.csv or source=File2.csv) index=mine

        |fields+Field1, Field2 Field3 Field3 FA FB FC FD FE FG

        | eval AnalysisField1= case(like(field2,”Y”), "Compliant", like(field2,”N”), "NonCompliant")

        | join Field 1 type=outer 
            [
            search source=File3.csv index=mine

            |eval Analysis Field3= case (like(fa,”Y”) AND (like(fb,”Y”) OR like(fb,”N”) ), "Compliant", like(fa,”N”) AND  (like(fb,”Y”) OR like(fb,”N”) ),"NonCompliant”)
            ]

        | fileds Field1, Analysis Field1, FA FB FC FD FE FG Analysis Field3, fd fe fg fh

        | eval  JointAnalysisField1UField3=case (like(“Analysis Field1”,”Compliant”) AND (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ), 
        "Compliant”, like(“Analysis Field1”,”NonCompliant”) AND  (like(“Analysis Field3”,”Compliant”) OR like(“Analysis Field3”,”NonCompliant”) ),"NonCompliant”) 
        ]

| fields  Field1, JointAnalysisField1UField3, FA FB FC FD FE FG  fd fe fg fh Analysis Field4,  f3 f4 f5 f6

| eval finalanalysis= if ( match( JointAnalysisField1UField3, “Compliant”) AND match (“Analysis Field1”,”Compliant”), “COMPLIANT”, 
                if(match( JointAnalysisField1UField3, “NonCompliant”) AND match (“Analysis Field1”,”NonCompliant”), “NONCOMPLIANT”, “UNDEFINED))
&lt;/CODE&gt;&lt;/PRE&gt;


&lt;P&gt;Note: The evals represent analysis steps; for brevity, I have reduced long analysis statements to a very simple form.&lt;/P&gt;

&lt;P&gt;What I observed: &lt;BR /&gt;
1. The join should go from the log file with the higher number of events to the log file with fewer events.&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;&lt;P&gt;My question: can I improve the search query to make better dashboards?&lt;/P&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;P&gt;Also, this is just a POC; I am expected to increase the number of parameters in the report by including more logs from various other sources.&lt;BR /&gt;
Can I continue to use this approach?&lt;/P&gt;&lt;/LI&gt;
&lt;/OL&gt;</description>
      <pubDate>Tue, 31 Jul 2018 14:49:28 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378937#M68514</guid>
      <dc:creator>vikfnu</dc:creator>
      <dc:date>2018-07-31T14:49:28Z</dc:date>
    </item>
    <item>
      <title>Re: What are the best practices for uploading CSV files in merging fields from more than two sources?</title>
      <link>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378938#M68515</link>
      <description>&lt;P&gt;Hi vikfnu,&lt;/P&gt;

&lt;OL&gt;
&lt;LI&gt;yes, you can improve this search by not using &lt;CODE&gt;join&lt;/CODE&gt;!&lt;/LI&gt;
&lt;LI&gt;Nope; as long as you use &lt;CODE&gt;join&lt;/CODE&gt; this will break sooner rather than later, because of all the obvious and hidden limits of the subsearch: &lt;A href="http://docs.splunk.com/Documentation/Splunk/7.1.2/Search/Aboutsubsearches#Subsearch_performance_considerations"&gt;http://docs.splunk.com/Documentation/Splunk/7.1.2/Search/Aboutsubsearches#Subsearch_performance_considerations&lt;/A&gt;&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;But back to your SPL, try this instead:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=mine source=File4.csv OR source=File1.csv OR source=File2.csv OR source=File3.csv
| eval "Analysis Field4" = case(like(f1,"Y"), "Compliant", like(f2,"Y"), "NonCompliant")
| eval AnalysisField1 = case(like(field2,"Y"), "Compliant", like(field2,"N"), "NonCompliant")
| eval "Analysis Field3" = case(like(fa,"Y") AND (like(fb,"Y") OR like(fb,"N")), "Compliant", like(fa,"N") AND (like(fb,"Y") OR like(fb,"N")), "NonCompliant")
| eval JointAnalysisField1UField3 = case('AnalysisField1'="Compliant" AND ('Analysis Field3'="Compliant" OR 'Analysis Field3'="NonCompliant"), "Compliant", 'AnalysisField1'="NonCompliant" AND ('Analysis Field3'="Compliant" OR 'Analysis Field3'="NonCompliant"), "NonCompliant")
| eval finalanalysis = if('JointAnalysisField1UField3'="Compliant" AND 'Analysis Field4'="Compliant", "COMPLIANT", if('JointAnalysisField1UField3'="NonCompliant" AND 'Analysis Field4'="NonCompliant", "NONCOMPLIANT", "UNDEFINED"))
| stats values(*) AS * by Field1
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This will do the same as your multiple &lt;CODE&gt;join&lt;/CODE&gt; search: it gets all the events from all 4 csv files, performs the 'analytics' on them, and shows the results by &lt;CODE&gt;Field1&lt;/CODE&gt;.&lt;/P&gt;
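&lt;P&gt;The final verdict is then just an ordinary conditional over the combined row. Sketched in Python (field names follow the sketch above; note I pair the joint field with &lt;CODE&gt;Analysis Field4&lt;/CODE&gt;, which the stated goal suggests is the intent):&lt;/P&gt;

```python
# The finalanalysis step as a plain conditional; field names mirror the
# thread's (hypothetical) sketch, with "Analysis Field4" as the partner field.
def final_analysis(row):
    joint = row.get("JointAnalysisField1UField3")
    a4 = row.get("Analysis Field4")
    if joint == "Compliant" and a4 == "Compliant":
        return "COMPLIANT"
    if joint == "NonCompliant" and a4 == "NonCompliant":
        return "NONCOMPLIANT"
    return "UNDEFINED"  # anything mixed or missing

print(final_analysis({"JointAnalysisField1UField3": "Compliant",
                      "Analysis Field4": "Compliant"}))  # -> COMPLIANT
```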

&lt;P&gt;To add more sources, sourcetypes, and/or indexes, simply expand the first base search by adding what you need.&lt;/P&gt;

&lt;P&gt;One thing I noticed: &lt;/P&gt;

&lt;UL&gt;
&lt;LI&gt;You use multiple variations of field names - upper and lower case, added spaces in front, and so on. Be consistent with field names.&lt;/LI&gt;
&lt;LI&gt;You use &lt;CODE&gt;”&lt;/CODE&gt; instead of &lt;CODE&gt;"&lt;/CODE&gt; - this is a big difference in SPL, so be sure to use the right &lt;CODE&gt;"&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;If a field value is the literal &lt;CODE&gt;"Y"&lt;/CODE&gt;, use it like this: &lt;CODE&gt;FA="Y"&lt;/CODE&gt; instead of &lt;CODE&gt;like(FA,"Y")&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;Also, please read all the answer posts you listed above again - you found the right advice, yet you still use &lt;CODE&gt;join&lt;/CODE&gt; &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;cheers, MuS  &lt;/P&gt;</description>
      <pubDate>Tue, 31 Jul 2018 20:24:17 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Getting-Data-In/What-are-the-best-practices-for-uploading-CSV-files-in-merging/m-p/378938#M68515</guid>
      <dc:creator>MuS</dc:creator>
      <dc:date>2018-07-31T20:24:17Z</dc:date>
    </item>
  </channel>
</rss>

