Use streamstats. Here's an example - use the last 3 lines with your data

| makeresults format=csv data="ID,message,state
101,executed,started
101,null,in progress
101,none,completed
102,activity printed,started
102,null,in progress
102,activity printed,completed"
| eval needs_fill=if(message="executed" AND state="started", 1, 0)
| streamstats max(needs_fill) as needs_fill by ID
| eval message=if(needs_fill=1 AND state="completed", "executed", message)
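For readers less familiar with streamstats, here is a hypothetical Python sketch of the same logic: carry a per-ID flag forward through the rows (the equivalent of streamstats max(needs_fill) by ID) and rewrite the message when the completed row is reached. The row data mirrors the makeresults example above.

```python
# Sketch of the streamstats fill-down logic above (assumed sample data).
rows = [
    {"ID": 101, "message": "executed", "state": "started"},
    {"ID": 101, "message": None, "state": "in progress"},
    {"ID": 101, "message": None, "state": "completed"},
    {"ID": 102, "message": "activity printed", "state": "started"},
    {"ID": 102, "message": None, "state": "in progress"},
    {"ID": 102, "message": "activity printed", "state": "completed"},
]

# Running max of the flag per ID, like `streamstats max(needs_fill) by ID`.
needs_fill = {}
for row in rows:
    flag = 1 if (row["message"] == "executed" and row["state"] == "started") else 0
    needs_fill[row["ID"]] = max(needs_fill.get(row["ID"], 0), flag)
    # Like the final eval: overwrite message on the completed row.
    if needs_fill[row["ID"]] == 1 and row["state"] == "completed":
        row["message"] = "executed"
```

Only ID 101 ever raises the flag, so only its completed row is rewritten; ID 102 is left untouched.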
@bowesmana thanks for your quick response - the value of the message field differs per ID, as shown below. current data: expected output:
and a final way, again using streamstats, but with a more optimised way of collecting the results - if you have a lot of data, it's worth benchmarking each of these solutions as they may have different performance characteristics

| makeresults...
| streamstats count as seq global=f by provider errorname
| streamstats global=f list(eval(if(seq<4, Count, null()))) as Count list(eval(if(seq<4, errorid, null()))) as errorid by provider errorname
| where seq=3
Please edit your post and use the code block feature when posting code, otherwise it's unreadable  
Hello, sorry I wasn't clear. I modified my questions a bit below.

I was referring to the Splunk UI, as in the menu: Lookups >> Lookup definitions >> Add new

My previous two questions specifically asked about the relationship between the Splunk UI and transforms.conf (not collections.conf):
1) Can I create a KVStore lookup definition in the Splunk UI without creating the transforms.conf file directly via the command line? [Yes/No]
2) Will creating a KVStore lookup definition in the Splunk UI automatically update the transforms.conf file? [Yes/No]

The reason I ask is that I only have the ability to create lookup definitions through the Splunk UI Lookup menu (not the lookup editor), and I was wondering if that would create transforms.conf.

I appreciate your suggestions; here are my responses to them (although they didn't answer my two questions):
1) maybe - but I don't have a way to test
2) PC is restrictive
3) not possible

Thank you
Yes, local=221 for all events
<form version="1.1" theme="dark">
  <label>DMT Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) repoter.dataloadingintiated
| stats count by local
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) task.dataloadedfromfiles NOT "error" NOT "end_point" NOT "failed_data"
  | stats count as FilesofDMA]
| append [search index=dam-idx (host_ip=12.234.201.22 OR host_ip=10.457.891.34 OR host_ip=10.234.34.18 OR host_ip=10.123.363.23) "app.mefwebdata - jobintiated"
  | eval host = case(match(host_ip, "12.234"), "HOP"+substr(host, 120,24), match(host_ip, "10.123"), "HOM"+substr(host, 120,24))
  | eval host = host + " - " + host_ip
  | stats count by host
  | fields - count
  | appendpipe [stats count | eval Error="Job didn't run today" | where count==0 | table Error]]
| stats values(host) as "Host Data Details", values(Error) as Error, values(local) as "Files created localley on AMP", values(FilesofDMA) as "File sent to DMA"</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentageRow">false</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host_ip">
          <colorPalette type="map">{"12.234.201.22":#53A051,"10.457.891.34":#53A051,"10.234.34.18":#53A051,"10.123.363.23":#53A051}</colorPalette>
        </format>
        <format type="color" field="local">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="FilesofDMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Files created localley on AMP">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="File sent to DMA">
          <colorPalette type="list">[#DC4E41,#53A051]</colorPalette>
          <scale type="threshold">8</scale>
        </format>
        <format type="color" field="Error">
          <colorPalette type="map">{"Job didn't run today":#DC4E41}</colorPalette>
        </format>
        <format type="color" field="Host Data Details">
          <colorPalette type="map">{"HOM-jjderf - 10.123.34.18":#53A051,"HOM-iytgh - 10.123.363.23":#53A051,"HOP-wghjy - 12.234.201.22":#53A051,"HOP-tyhgt - 12.234.891.34":#53A051}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
Can you post your XML for the entire <panel> inside a code formatting block <>  
and is local=221 for all events?
Or you can use streamstats,

| makeresults...
| streamstats count as seq window=3 global=f list(*) as * by provider errorname
| where seq=3
| fields - seq

| streamstats count as seq window=3 global=f list(*) as * by provider errorname
| where seq=1
| fields - seq
One way on a small table (less than 100 items per provider) is to use stats list and then keep the first 3, here's an example

| makeresults format=csv data="provider,errorid,errorname,Count
Digital it,401,apf,200.0000
Data St,200,apf,500.0000
dtst,0,apf,18.0000
Digital it,100,apf,55.0000
dtst,501,apf,16.0000
Digital it,0,apf,20.0000
Data St,200,apf,300.0000
dtst,201,apf,12.0000
Data St,404,apf,20.0000
Digital it,201,apf,10.0000
Data St,501,apf,10.0000
dtst,201,apf,9.0000
Data St,401,apf,8.0000
dtst,500,apf,3.0000
Data St,555,apf,5.0000
dtst,200,apf,2.0000"
``` list() will retain order, but has a max of 100 items ```
| stats list(*) as * by provider errorname
``` This just retains the first 3 ```
| foreach Count errorid [ eval <<FIELD>>=mvindex(<<FIELD>>, 0, 2) ]

but if you have more complex data, it may not be suitable.
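To make the mechanics concrete, here is a hypothetical Python sketch of the same approach: group rows by (provider, errorname) preserving arrival order (the equivalent of stats list(*)), then keep only the first 3 entries per group (the equivalent of mvindex(<<FIELD>>, 0, 2)). The sample rows are a subset of the makeresults data above.

```python
# Sketch of `stats list(*) by provider errorname | foreach ... mvindex(..., 0, 2)`.
from collections import defaultdict

rows = [
    ("Digital it", "apf", 401, 200.0),
    ("Data St",    "apf", 200, 500.0),
    ("dtst",       "apf", 0,   18.0),
    ("Digital it", "apf", 100, 55.0),
    ("Digital it", "apf", 0,   20.0),
    ("Digital it", "apf", 201, 10.0),
]

# Like list(*): collect values per group, preserving input order.
groups = defaultdict(list)
for provider, errorname, errorid, count in rows:
    groups[(provider, errorname)].append((errorid, count))

# Like mvindex(<<FIELD>>, 0, 2): keep only the first 3 items per group.
top3 = {key: vals[:3] for key, vals in groups.items()}
```

Note this keeps the first 3 in arrival order, not the 3 largest - sort before grouping if you want top-by-Count.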
This is a classic dashboard, and no base searches are used.
If there is really no delimiter, you can't, but in your case, there is a delimiter, which I am assuming in your example is the line feed at the end of each row. You can either do this by putting a line feed as the split delimiter

| makeresults
| eval field1="example1@splunk.com
example@splunk.com
sample@splunk.com
scheduler"
| eval x=split(field1, "
")
| eval field1_items=mvcount(field1), fieldx_items=mvcount(x)

or you can use replace+split to change the line feed into something easier to split with, e.g.

| eval x=split(replace(field1, "\n", "#!#"), "#!#")
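The same two ideas can be sketched in plain Python, assuming the field value really does contain line feeds between entries: split directly on the newline, or first replace the newline with an unambiguous marker and split on that.

```python
# Sketch of the split / replace+split approaches above (assumed sample value).
field1 = "example1@splunk.com\nexample@splunk.com\nsample@splunk.com\nscheduler"

# Direct split on the line feed, like split(field1, "\n").
x = field1.split("\n")

# Or swap the line feed for an easier marker first, like
# split(replace(field1, "\n", "#!#"), "#!#").
x2 = field1.replace("\n", "#!#").split("#!#")
```

Both produce the same 4-element multivalue result; the replace variant is mainly useful when the raw delimiter is awkward to type into the query.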
Your search is rather odd - firstly you are doing ... | stats count by local and at the end you are doing | stats ... values(local) as ... which doesn't make a lot of sense, unless local is always 221 in your example. Is this Dashboard Studio or classic, and are you using any base searches here?
Hi All, I want to separate a field which contains multiple values within it but doesn't have a delimiter. Example:

| makeresults
| eval field1="example1@splunk.com example@splunk.com sample@splunk.com scheduler"

I have tried to use | eval split = split(field1, " "), but nothing works. Kindly help me out with how to separate this single string field into an MV field. Thanks in advance
Hi @gcusello @deepakc Thanks for all the inputs. Checking in again today after the weekend, the TCP filtering is working fine! The TCP events from the firewall stopped 1-2 hours after disabling the TCP input, and I suspect this might be due to TCP backlogs? I am not sure how Splunk handles TCP backlogs, but it seems that TCP backlogs will not be processed by the event filtering syntax. Maybe TCP backlogs are "past" the filtering stages and are slowly ingested?
There is no difference in the query - the same query is in the dashboard panel and the same is used in search too.
Panel displaying in dashboard:

When we open the panel in search it shows as below (this is the correct data):

Host Data Details | Error | Files created localley on AMP | File sent to DMA
HOM-jjderf - 10.123.34.18, HOM-iytgh - 10.123.363.23, HOP-wghjy - 12.234.201.22, HOP-tyhgt - 12.234.891.34 | | 221 | 86
Query:

| mstats sum(error.count) as Count where index=metrics_data by provider errorid errorname
| search errorname=apf

Results:

provider | errorid | errorname | Count
Digital it | 401 | apf | 200.0000
Data St | 200 | apf | 500.0000
dtst | 0 | apf | 18.0000
Digital it | 100 | apf | 55.0000
dtst | 501 | apf | 16.0000
Digital it | 0 | apf | 20.0000
Data St | 200 | apf | 300.0000
dtst | 201 | apf | 12.0000
Data St | 404 | apf | 20.0000
Digital it | 201 | apf | 10.0000
Data St | 501 | apf | 10.0000
dtst | 201 | apf | 9.0000
Data St | 401 | apf | 8.0000
dtst | 500 | apf | 3.0000
Data St | 555 | apf | 5.0000
dtst | 200 | apf | 2.0000

expected results:

provider | errorname | errorid | Count
Digital it | apf | 401, 100, 0 | 200.0000, 55.0000, 20.0000
Data St | apf | 200, 200, 404 | 500.0000, 300.0000, 20.0000
dtst | apf | 0, 501, 201 | 18.0000, 16.0000, 12.0000