All Posts


Format your chart as stacked:

| rename myApp as up
| eval down=1-up
I am glad it is working
Actually, I think transaction should work in this case.  @bowesmana is correct that your command is missing host as a parameter.  But more than that, it is also missing the keeporphans option.  Also, the determinant is not eventcount but closed_txn.

| transaction host maxspan=5m keeporphans=true startswith="%ROUTING-LDP-5-NSR_SYNC_START" endswith="%ROUTING-LDP-5-NBR_CHANGE"
| where closed_txn != 1
| stats count by host

Apply the above to this mock dataset:

_raw                             _time                host
1 %ROUTING-LDP-5-NBR_CHANGE      2025-01-11 19:02:45  host1
2 %ROUTING-LDP-5-NBR_CHANGE      2025-01-11 19:02:39  host2
3 %ROUTING-LDP-5-NBR_CHANGE      2025-01-11 19:02:33  host3
5 %ROUTING-LDP-5-NBR_CHANGE      2025-01-11 19:02:21  host0
6 %ROUTING-LDP-5-NSR_SYNC_START  2025-01-11 19:02:15  host1
7 %ROUTING-LDP-5-NSR_SYNC_START  2025-01-11 19:02:09  host2
8 %ROUTING-LDP-5-NSR_SYNC_START  2025-01-11 19:02:03  host3
9 %ROUTING-LDP-5-NSR_SYNC_START  2025-01-11 19:01:57  host4
10 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:01:51  host0
11 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:01:45  host1
13 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:01:33  host3
14 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:01:27  host4
15 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:01:21  host0
16 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:01:15  host1
17 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:01:09  host2
18 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:01:03  host3
19 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:00:57  host4
20 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:00:51  host0
21 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:00:45  host1
22 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:00:39  host2
23 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:00:33  host3
25 %ROUTING-LDP-5-NBR_CHANGE     2025-01-11 19:00:21  host0
26 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:00:15  host1
27 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:00:09  host2
28 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 19:00:03  host3
29 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 18:59:57  host4
30 %ROUTING-LDP-5-NSR_SYNC_START 2025-01-11 18:59:51  host0

You get:

host   count
host2  1
host4  2

Here is an emulation that produces the above mock data:

| makeresults count=30
| streamstats count as _count
| eval _time = _time - _count * 6
| eval host = "host" . _count % 5
| eval _raw = _count . " " . mvindex(mvappend("%ROUTING-LDP-5-NSR_SYNC_START", "%ROUTING-LDP-5-NBR_CHANGE"), -ceil(_count / 5) % 2)
| search NOT (_count IN (4, 12, 24) %ROUTING-LDP-5-NBR_CHANGE)
``` the above emulates index = test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE") ```

Play with it and compare with real data.
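For intuition, the orphan-counting that transaction with keeporphans performs on this dataset can be sketched in plain Python. This is a simplified model of start/end pairing per host, not Splunk's actual transaction implementation; the event list is rebuilt from the same emulation rules as the mock data above.

```python
from collections import defaultdict

START = "%ROUTING-LDP-5-NSR_SYNC_START"
END = "%ROUTING-LDP-5-NBR_CHANGE"

# Rebuild the mock data in ascending time order: row n has host n % 5,
# rows 6-10, 16-20, 26-30 are starts; rows 4, 12, 24 are missing.
events = [
    ("host%d" % (n % 5), START if ((n + 4) // 5) % 2 == 0 else END)
    for n in range(30, 0, -1)
    if n not in (4, 12, 24)
]

def count_open_transactions(events):
    """Count starts per host that are never closed by a matching end."""
    pending = {}                    # host -> start awaiting its end
    orphans = defaultdict(int)
    for host, kind in events:
        if kind == START:
            if pending.get(host):   # previous start never closed: orphan
                orphans[host] += 1
            pending[host] = True
        else:
            pending[host] = False   # end closes the open transaction
    for host, is_open in pending.items():
        if is_open:                 # still open at end of data: orphan
            orphans[host] += 1
    return dict(orphans)
```

The unclosed starts come out as host2: 1 and host4: 2, matching the `where closed_txn != 1` result above.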
I'm trying to create a simple status page visualization that mimics the style I've seen from Atlassian Statuspage.  You can see it on the status pages for Discord and Wiz. Currently, I have a timechart where status=1 means the app is up and status=0 means it's down.  When the app is down, there is simply no bar on the graph.  How do I "force" a value so the bar appears, but then color each bar based on the status value?  I think I'm missing something really simple and hoping someone can point me in the right direction.

Current SPL:

index=main app="myApp"
| eval status=if(isnull(status), "0", status)
| timechart span=1m max(status) by app

Current XML:

<dashboard version="1.1" theme="light">
  <label>Application Status</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=main app="myApp" | eval status=if(isnull(status), "0", status) | timechart span=1m max(status) by app</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisLabelsY.majorUnit">1</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.maximumNumber">1</option>
        <option name="charting.axisY.minimumNumber">0</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.columnSpacing">0</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.seriesColors">[0x459240]</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">none</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</dashboard>
Sorry I missed that part.  I found this old post: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-curl-command-used-on-the-deployer-to-apply-shcluster/td-p/202735#answer-321559 I don't have a suitable test environment on hand right now, but maybe this is still valid?
index IN (cart purchased) cart_id=* OR pur_id=*
| eval common_id=coalesce(cart_id, pur_id)
| eventstats dc(index) as common_count by common_id
| where index="cart"
| stats count as carts count(eval(common_count > 1)) as purchases
| eval pct=(purchases*100)/carts
| table carts purchases pct
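The coalesce/eventstats logic can be traced in plain Python on a hypothetical three-cart, one-purchase dataset (the IDs and values here are made up for illustration):

```python
from collections import defaultdict

# Hypothetical events: three carts, one of which was paid.
events = (
    [{"index": "cart", "cart_id": c} for c in ("123", "456", "789")]
    + [{"index": "purchased", "pur_id": "123"}]
)

# eval common_id=coalesce(cart_id, pur_id)
for e in events:
    e["common_id"] = e.get("cart_id") or e.get("pur_id")

# eventstats dc(index) as common_count by common_id:
# how many distinct indexes share each common_id
indexes_by_id = defaultdict(set)
for e in events:
    indexes_by_id[e["common_id"]].add(e["index"])

# where index="cart" | stats count, count(eval(common_count > 1))
carts = [e for e in events if e["index"] == "cart"]
purchases = sum(1 for e in carts if len(indexes_by_id[e["common_id"]]) > 1)
pct = purchases * 100 / len(carts)
```

A cart counts as purchased when its common_id appears in both indexes, so one paid cart out of three gives roughly 33.3 percent.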
@isoutamo Look into the opening post; they have no CLI access on the servers. I assume it's either infrastructure managed by a third party or they have very strict duty separation policies in place.
Perhaps this will help.  It counts the number of unique cart and purchase IDs, then does the math to find the percentage of paid carts.

index IN (cart purchased) cart_id=* OR pur_id=*
| stats dc(cart_id) as carts, dc(pur_id) as purchases
| eval pct=(purchases*100)/carts
| table carts purchases pct
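In plain terms, this is just two distinct counts and a ratio; assuming every pur_id originated as a cart_id, a minimal Python sketch with made-up IDs:

```python
cart_ids = {"123", "456", "789"}   # dc(cart_id) over the cart index
pur_ids = {"123"}                  # dc(pur_id) over the purchased index

# eval pct=(purchases*100)/carts
pct = len(pur_ids) * 100 / len(cart_ids)
```

One purchase against three carts gives roughly 33.3 percent.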
Honestly, it looks as if you were trying to have a Zabbix console just done with other tools. It doesn't make much sense.
Hi, I have two indexes, "cart" and "purchased". In the "cart" index there is a field "cart_id" and in "purchased" there is a field "pur_id".  If payment is successful for a cart, the cart_id value is stored as pur_id in the "purchased" index.

cart          purchased
cart_id 123   payment received: pur_id 123
cart_id 456   no payment: no record for 456

Now I want to display the percentage of carts for which payment is done. I wonder if anyone can help here.  Thank you so much!
Hi @rohithvr19, real-time monitoring isn't possible. You can have near-real-time monitoring by scheduling a very frequent update of the data (e.g. every 5 or 10 minutes); otherwise, you need a different solution. As I said, the performance of a query run on a button press is very, very low, and the only solution is a frequent update (e.g. every 5 minutes). Ciao. Giuseppe
Thank you, @gcusello and @PickleRick, for your responses. I have tried using the Zabbix add-on for Splunk, but unfortunately, it is not working for my use case. My requirement is to display real-time audit logs from Zabbix in a Splunk dashboard, but only upon user request, such as via a button click or similar functionality. Could you suggest a standard and efficient approach to accomplish this task?
Does this work if roles are updated by installing an app which contains those definitions in conf files, or only if they are edited with the GUI?
First, you should create a new question instead of adding your questions to a question that was closed a long time ago. Both of those work equivalently from a technical point of view. But from a human/readability point of view, at least I prefer the way where the multisite attribute is set in the closest place. Especially when you are looking at those conf files, it's easier to see whether that cluster is the multi-site or single-site version. Of course, you should use the "splunk btool server list" command and check what it shows.
Hi, as others already said, you could use the DS to push apps to the deployer and then it pushes those to the SHC members, but we don't encourage you to do it. The DS's main function is to manage UFs and just those. You could also use it to manage HFs and individual servers, but there are some things which you must know, or otherwise there could be side effects. What is the issue you are trying to solve with the DS -> Deployer -> SHC solution? Maybe there is a better way to solve it? r. Ismo
Strictly theoretically speaking, it would probably be possible to do what you want using a classic dashboard, a lot of custom JS, and possibly a custom search command. The thing is, it's so unusual and custom that chances are no one has ever tried something like that, and you'd have to write everything from scratch yourself. But as @gcusello already pointed out, it's completely opposite to the normal Splunk data workflow. What's your use case?
Hi @rohithvr19, this is the opposite of the normal way Splunk runs: Splunk isn't a client of external platforms to use when needed. The usual way is: schedule the ingestion of logs from the external source (e.g. Zabbix) and save the extraction in an index, then run a search in a dashboard and display the logs. It's the same approach used with DB Connect: you can run SQL queries, but the correct approach is to schedule queries and run on indexed results. Why? Because your approach is very, very slow and results aren't saved in any archive, so you have to run the API script every time, and it consumes a large amount of resources. Use the Splunk Add-on for Zabbix ( https://splunkbase.splunk.com/app/5272 ) to extract logs and then create your own dashboards. Ciao. Giuseppe
Is it possible to create a button in a Splunk dashboard that, when clicked, runs a script to export logs from Zabbix and display them on the dashboard? The dashboard should only be visible after the button is clicked. Has anyone implemented something like this before? Please help, as I’m really stuck on this!
How can one understand or interpret it?
Please share events in raw text, not the search app's format. Regardless, you should not need any regex to deal with this data because Splunk has already extracted everything.  Secondly, you do not need to consider logs{}.action because your requirement only concerns status "Open" and "Escalated".  What actions have been taken is irrelevant to the filter. In other words, given status and id like the following:

_time               id     status
2025-01-10 23:24:57 xxx10  Escalated
2025-01-10 23:17:57 xxx10  Other
2025-01-10 23:10:57 xxx10  Open
2025-01-10 23:03:57 xxx10  Other
2025-01-10 22:56:57 xxx10  Open
2025-01-10 23:30:57 xxx11  Closed
2025-01-10 23:23:57 xxx11  Closed
2025-01-10 23:16:57 xxx11  Open
2025-01-10 23:09:57 xxx11  Escalated
2025-01-10 23:02:57 xxx11  Other
2025-01-10 22:55:57 xxx11  Open
2025-01-10 23:29:57 xxx12  Assigned
2025-01-10 23:22:57 xxx12  Open
2025-01-10 23:15:57 xxx12  Closed
2025-01-10 23:08:57 xxx12  Closed
2025-01-10 23:01:57 xxx12  Open
2025-01-10 22:54:57 xxx12  Escalated
2025-01-10 23:28:57 xxx13  Open
2025-01-10 23:21:57 xxx13  Open
2025-01-10 23:14:57 xxx13  Assigned
2025-01-10 23:07:57 xxx13  Open
2025-01-10 23:00:57 xxx13  Closed
2025-01-10 22:53:57 xxx13  Closed
2025-01-10 23:27:57 xxx14  Assigned
2025-01-10 23:20:57 xxx14  Escalated
2025-01-10 23:13:57 xxx14  Open
2025-01-10 23:06:57 xxx14  Open
2025-01-10 22:59:57 xxx14  Assigned
2025-01-10 22:52:57 xxx14  Open
2025-01-10 23:26:57 xxx15  Open
2025-01-10 23:19:57 xxx15  Open
2025-01-10 23:12:57 xxx15  Assigned
2025-01-10 23:05:57 xxx15  Escalated
2025-01-10 22:58:57 xxx15  Open
2025-01-10 22:51:57 xxx15  Open
2025-01-10 23:25:57 xxx16  Open
2025-01-10 23:18:57 xxx16  Other
2025-01-10 23:11:57 xxx16  Open
2025-01-10 23:04:57 xxx16  Open
2025-01-10 22:57:57 xxx16  Assigned

You only want to count events for ids xxx10 (last status Escalated), xxx13 (Open), xxx15 (Open), and xxx16 (Open).  Using eventstats is perhaps the easiest.

| eventstats latest(status) as final_status by id
| search final_status IN (Open, Escalated)
| stats count by id final_status

Here, final_status is thrown in just to confirm that final_status only contains Open or Escalated.  The above mock data will result in:

id     final_status  count
xxx10  Escalated     5
xxx13  Open          6
xxx15  Open          6
xxx16  Open          5

Here is the emulation that generates the mock data.  Play with it and compare with real data.

| makeresults count=40
| streamstats count as _count
| eval _time = _time - _count * 60
| eval id = "xxx" . (10 + _count % 7)
| eval status = mvindex(mvappend("Open", "Assigned", "Other", "Escalated", "Closed"), -(_count * (_count % 3)) % 5)
``` data emulation above ```

Hope this helps.
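The latest(status) filter can be mimicked in plain Python on a hypothetical subset of the ids above (an illustration of the logic only, not Splunk internals):

```python
from collections import Counter

# (time, id, status); same-format time strings sort lexicographically
events = [
    ("23:24", "xxx10", "Escalated"),
    ("23:10", "xxx10", "Open"),
    ("23:30", "xxx11", "Closed"),
    ("23:16", "xxx11", "Open"),
    ("23:28", "xxx13", "Open"),
    ("23:07", "xxx13", "Open"),
]

# eventstats latest(status) as final_status by id:
# walk events in ascending time so the last write wins per id
final_status = {}
for _, event_id, status in sorted(events):
    final_status[event_id] = status

# search final_status IN (Open, Escalated) | stats count by id
counts = Counter(
    event_id for _, event_id, _ in events
    if final_status[event_id] in ("Open", "Escalated")
)
```

Here xxx11 is dropped entirely because its most recent status is Closed, while all events of xxx10 and xxx13 are kept and counted.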