<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Help to write the query in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Help-to-write-the-query/m-p/568251#M198018</link>
    <description>Splunk Community thread: help refining a search that summarizes data from the dc_nfast index into a test index via collect, so that it reads from and writes back to the same index.</description>
    <pubDate>Thu, 23 Sep 2021 13:55:37 GMT</pubDate>
    <dc:creator>mm12</dc:creator>
    <dc:date>2021-09-23T13:55:37Z</dc:date>
    <item>
      <title>Help to write the query</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Help-to-write-the-query/m-p/568251#M198018</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I have the query below. It gets data from the dc_nfast index, puts it into the test index, and the dashboard uses this test index; we are not using the dc_nfast index anywhere else. I want to refine the query so that it reads from and writes back into the same index. Can anyone advise on this?&lt;/P&gt;&lt;P&gt;`comment("Get latest NFAST data")`&lt;/P&gt;&lt;P&gt;index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION NOT "Nfast Version" IN ("NA","[no results]") [&lt;BR /&gt;| tstats latest(_time) as latest earliest(_time) as earliest count where index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION host=nfast* earliest=-30d by _time , source , host span=2m&lt;BR /&gt;| stats values(count) as count values(earliest) as earliest values(latest) as latest by _time , source , host&lt;BR /&gt;| eventstats median(count) as median p5(count) as p5 p95(count) as p95 by source , host&lt;BR /&gt;| eval range=(p95-p5)&lt;BR /&gt;| eval lower_count=(median-range*3) , upper_count=(median+range*3)&lt;BR /&gt;| where count &amp;gt;=lower_count AND count&amp;lt;=upper_count&lt;BR /&gt;| eval earliest = earliest - 1 , latest = latest + 1&lt;BR /&gt;| dedup source , host sortby - latest&lt;BR /&gt;| return 10 earliest latest host source ]&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;`comment("Get lifecyclestate, country and ipaddress from master asset lists")`&lt;BR /&gt;| lookup dc_app_dcmaster_master_asset_ul.csv hostname OUTPUT master_ipaddress as ul_master_ipaddress master_lifeCycleState as ul_master_lifeCycleState master_country as ul_master_country&lt;BR /&gt;| lookup dc_app_dcmaster_master_asset_win.csv hostname OUTPUT master_ipaddress as win_master_ipaddress master_lifeCycleState as win_master_lifeCycleState master_country as win_master_country&lt;BR /&gt;| lookup dc_app_unicorn_lookup.csv hostName as hostname OUTPUT locationCode&lt;BR /&gt;| eval ipaddress=coalesce(ul_master_ipaddress, win_master_ipaddress) , lifeCycleState=coalesce(ul_master_lifeCycleState,win_master_lifeCycleState) , country=coalesce(ul_master_country,win_master_country)&lt;BR /&gt;| fillnull value="UNKNOWN" ipaddress lifeCycleState country locationCode&lt;/P&gt;&lt;P&gt;`comment("Join on serial number data from HSMC CSV import")`&lt;/P&gt;&lt;P&gt;| rename "Serial Number" as HSMSerialNumber&lt;BR /&gt;| makemv HSMSerialNumber&lt;BR /&gt;`comment("Count the number of serial numbers per host")`&lt;BR /&gt;| eval HSM_count=mvcount(HSMSerialNumber)&lt;BR /&gt;| fillnull value=0 HSM_count&lt;BR /&gt;| mvexpand HSMSerialNumber&lt;BR /&gt;| join type=left HSMSerialNumber [ search `get_latest_source_in_range(test , /opt/splunk/etc/apps/dc_ta_nfast_inputs/git/master/HSMSplunkFeed/SPLUNK_HSM_Import_Listing.csv)` ]&lt;BR /&gt;| rename Country as HSM_country "HSM *" as HSM_* "HSM * *" as HSM_*_&lt;BR /&gt;| foreach HSM* [ fillnull value="No_HSM_data" &amp;lt;&amp;lt;FIELD&amp;gt;&amp;gt; ]&lt;/P&gt;&lt;P&gt;`comment("Add EIM service data")`&lt;BR /&gt;| lookup app_eim_lookup_eim_basic_extract.csv hostname OUTPUT PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL&lt;BR /&gt;| fillnull value="UNKNOWN" IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7&lt;BR /&gt;| eval servicedata=mvzip(IT_SERVICE ,SERVICE_OWNER ,"##")&lt;BR /&gt;| eval servicedata=mvzip(servicedata,SERVICE_OWNER_EMAIL ,"##")&lt;BR /&gt;| eval servicedata=mvzip(servicedata,PLADA_CRITICALITY ,"##")&lt;BR /&gt;| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG6 ,"##")&lt;BR /&gt;| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG7 ,"##")&lt;BR /&gt;| eval servicedata=mvdedup(servicedata)&lt;BR /&gt;| fields - PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL&lt;/P&gt;&lt;P&gt;`comment("Now expand each servicedata into multiple lines where there are multiple values")`&lt;BR /&gt;| mvexpand servicedata&lt;/P&gt;&lt;P&gt;`comment("Recreate the service values")`&lt;BR /&gt;| rex field=servicedata "(?&amp;lt;IT_SERVICE&amp;gt;.*?)##(?&amp;lt;SERVICE_OWNER&amp;gt;.*?)##(?&amp;lt;SERVICE_OWNER_EMAIL&amp;gt;.*?)##(?&amp;lt;PLADA_CRITICALITY&amp;gt;.*?)##(?&amp;lt;SERVICE_IT_ORG6&amp;gt;.*?)##(?&amp;lt;SERVICE_IT_ORG7&amp;gt;.*)"&lt;/P&gt;&lt;P&gt;`comment("Run addinfo command to capture info_search_time field")`&lt;BR /&gt;| addinfo&lt;/P&gt;&lt;P&gt;`comment("Write out to index")`&lt;BR /&gt;| eval _time=now()&lt;BR /&gt;| table hostname Combined Nfast* ipaddress lifeCycleState HSM_count country locationCode IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 HSMSerialNumber HSM*&lt;BR /&gt;| collect index=test source=stash&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Thu, 23 Sep 2021 13:55:37 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Help-to-write-the-query/m-p/568251#M198018</guid>
      <dc:creator>mm12</dc:creator>
      <dc:date>2021-09-23T13:55:37Z</dc:date>
    </item>
    <item>
      <title>Re: Help to write the query</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Help-to-write-the-query/m-p/568281#M198029</link>
      <description>&lt;P&gt;If your end goal is to gather data for powering a dashboard, then using a data model seems a better solution. You can then accelerate it for a given time range if you're looking for a performance increase.&lt;/P&gt;&lt;P&gt;In your existing search you might want to look at your use of table toward the end, as it doesn't transform results, and it looks like collect is going to pull in _raw.&lt;/P&gt;&lt;P&gt;Also, summary indexing as you're doing with collect counts against your license, so be careful. Data model (or report) acceleration does not cause a hit because you are not indexing new data. Just FYI...&lt;/P&gt;</description>
      <pubDate>Thu, 23 Sep 2021 16:23:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Help-to-write-the-query/m-p/568281#M198029</guid>
      <dc:creator>codebuilder</dc:creator>
      <dc:date>2021-09-23T16:23:21Z</dc:date>
    </item>
  </channel>
</rss>

