All Posts


Hello @isoutamo , We are trying to create a role (using authorise.conf) in a DS app under etc/deployment-apps, which is then pushed to the deployer under shcluster/apps. From there, how do I push it to the search head cluster members? There are 3 SHs. We don't have access to the backend, so I need to achieve this from Splunk Web. When I check the Roles section on a SH, the created role is not showing, but on the deployer under shcluster/apps authorise.conf is updated when I push it from the DS. Please help me with this.
Hi, I have json data structured as follows:

{
  "payload": {
    "status": "ok"    # or "degraded"
  }
}

I'm trying to use the stats command to count the "ok" and "degraded" events separately. I am using the following query:

index=whatever
| eval is_ok=if(payload.status=="ok", 1, 0)
| stats count as total, count(is_ok) as ok_count

I have tried passing it through spath, using "=" in the if condition, and several other approaches. What always happens is that both counts contain all events, despite there being different numbers of them. Please help!
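For reference, a minimal sketch of one way such a count is often written, assuming the field is auto-extracted as payload.status (in eval, a field name containing a dot needs single quotes, and count() counts every non-null value, so sum() or count(eval(...)) is used for the conditional counts):

index=whatever
| eval is_ok=if('payload.status'=="ok", 1, 0)    ``` single quotes around the dotted field name ```
| stats count as total, sum(is_ok) as ok_count, count(eval('payload.status'=="degraded")) as degraded_count

This is only an illustrative variant under those assumptions, not a confirmed answer from the thread.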
Is it possible to execute a script through a button click and display the script's output on a Splunk dashboard? Has anyone implemented something similar before? Any guidance would be greatly appreciated, as I am currently stuck on this. Thank you!
Hello everyone! I would like to ask about the Splunk Heavy Forwarder Splunk-side config: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ With those settings it will send the metadata in the format of key::value. Is it possible to reconfigure it to send metadata key-value pairs with some other key-value separator instead of "::"? If yes, how exactly?
Format your chart as stacked:

| rename myApp as up
| eval down=1-up
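Read together with the search from the question further down, a minimal sketch of how this suggestion fits into the full pipeline (index, app name and 1-minute span are taken from that question; stacking itself is enabled through the chart's charting.chart.stackMode option, with two colours in charting.seriesColors, e.g. green and red):

index=main app="myApp"
| eval status=if(isnull(status), "0", status)
| timechart span=1m max(status) by app
| rename myApp as up
| eval down=1-up

With up and down stacked, every time bucket shows a full-height bar whose colour reflects the status. If a bucket contains no events at all, max(status) is null, so a | fillnull value=0 up after the timechart may also be needed. This is a sketch of the suggestion above, not a tested dashboard.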
I am glad it is working
Actually, I think transaction should work in this case.  @bowesmana is correct that your command is missing host as a parameter.  But more than that, it is also missing the keeporphans option.  Also, the determinant is not eventcount but closed_txn.

| transaction host maxspan=5m keeporphans=true startswith="%ROUTING-LDP-5-NSR_SYNC_START" endswith="%ROUTING-LDP-5-NBR_CHANGE"
| where closed_txn != 1
| stats count by host

Apply the above to this mock dataset:

_raw                               _time                 host
1 %ROUTING-LDP-5-NBR_CHANGE        2025-01-11 19:02:45   host1
2 %ROUTING-LDP-5-NBR_CHANGE        2025-01-11 19:02:39   host2
3 %ROUTING-LDP-5-NBR_CHANGE        2025-01-11 19:02:33   host3
5 %ROUTING-LDP-5-NBR_CHANGE        2025-01-11 19:02:21   host0
6 %ROUTING-LDP-5-NSR_SYNC_START    2025-01-11 19:02:15   host1
7 %ROUTING-LDP-5-NSR_SYNC_START    2025-01-11 19:02:09   host2
8 %ROUTING-LDP-5-NSR_SYNC_START    2025-01-11 19:02:03   host3
9 %ROUTING-LDP-5-NSR_SYNC_START    2025-01-11 19:01:57   host4
10 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:01:51   host0
11 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:01:45   host1
13 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:01:33   host3
14 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:01:27   host4
15 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:01:21   host0
16 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:01:15   host1
17 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:01:09   host2
18 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:01:03   host3
19 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:00:57   host4
20 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:00:51   host0
21 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:00:45   host1
22 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:00:39   host2
23 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:00:33   host3
25 %ROUTING-LDP-5-NBR_CHANGE       2025-01-11 19:00:21   host0
26 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:00:15   host1
27 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:00:09   host2
28 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 19:00:03   host3
29 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 18:59:57   host4
30 %ROUTING-LDP-5-NSR_SYNC_START   2025-01-11 18:59:51   host0

You get:

host    count
host2   1
host4   2

Here is an emulation that produces the above mock data:

| makeresults count=30
| streamstats count as _count
| eval _time = _time - _count * 6
| eval host = "host" . _count % 5
| eval _raw = _count . " " . mvindex(mvappend("%ROUTING-LDP-5-NSR_SYNC_START", "%ROUTING-LDP-5-NBR_CHANGE"), -ceil(_count / 5) %2)
| search NOT (_count IN (4, 12, 24) %ROUTING-LDP-5-NBR_CHANGE)
``` the above emulates index = test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE") ```

Play with it and compare with real data.
I'm trying to create a simple status page visualization that mimics the style I've seen from Atlassian Statuspage. You can see it on the status pages for Discord and Wiz. Currently, I have a timechart: if status=1 then it's up, and if status=0 then it's down.  When the app is down, there is simply no bar on the graph.  How do I "force" a value so a bar appears, and then color each bar based on the status value?  I think I'm missing something really simple and hoping someone can point me in the right direction.

Current SPL:

index=main app="myApp"
| eval status=if(isnull(status), "0", status)
| timechart span=1m max(status) by app

Current XML:

<dashboard version="1.1" theme="light">
  <label>Application Status</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=main app="myApp" | eval status=if(isnull(status), "0", status) | timechart span=1m max(status) by app</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisLabelsY.majorUnit">1</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.maximumNumber">1</option>
        <option name="charting.axisY.minimumNumber">0</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.columnSpacing">0</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.seriesColors">[0x459240]</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">none</option>
        <option name="charting.lineWidth">2</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</dashboard>
Sorry I missed that part.  I found this old post: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-curl-command-used-on-the-deployer-to-apply-shcluster/td-p/202735#answer-321559 I don't have a suitable test environment on hand right now, but maybe this is still valid?
index IN (cart purchased) cart_id=* OR pur_id=*
| eval common_id=coalesce(cart_id, pur_id)
| eventstats dc(index) as common_count by common_id
| where index="cart"
| stats count as carts count(eval(common_count > 1)) as purchases
| eval pct=(purchases*100)/carts
| table carts purchases pct
@isoutamo Look into the opening post; they have no CLI access on the servers. I assume it's either an infrastructure managed by a third party or they have very strict duty-separation policies in place.
Perhaps this will help.  It counts the number of unique cart and purchase IDs then does the math to find the percentage of paid carts.

index IN (cart purchased) cart_id=* OR pur_id=*
| stats dc(cart_id) as carts, dc(pur_id) as purchases
| eval pct=(purchases*100)/carts
| table carts purchases pct
Honestly, it looks as if you are trying to rebuild a Zabbix console with other tools. It doesn't make much sense.
Hi, I have two indexes - "cart" and "purchased". In the "cart" index there is a field "cart_id" and in "purchased" there is a field "pur_id".  If payment is successful for a cart, the cart_id value is stored as a pur_id in the "purchased" index.

cart            purchased
cart_id 123     payment received: pur_id 123
cart_id 456     no payment: no record for 456

Now I want to display the percentage of carts for which payment is done. I wonder if anyone can help here.   Thank you so much
Hi @rohithvr19 , real-time monitoring isn't possible; you can have near real-time monitoring by scheduling a very frequent update of the data (e.g. every 5 or 10 minutes), otherwise you need a different solution. As I said, the performance of a query run on a button press would be very, very poor, and the only solution is a frequent update (e.g. every 5 minutes). Ciao. Giuseppe
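As an illustration of that pattern, here is a minimal Simple XML sketch of a panel whose search re-runs every 5 minutes (the index, sourcetype and field names are placeholders, not taken from the thread):

<dashboard version="1.1">
  <label>Zabbix Audit (near real time)</label>
  <row>
    <panel>
      <table>
        <search refresh="5m" refreshType="delay">
          <query>index=zabbix sourcetype=zabbix:audit | table _time user action details</query>
          <earliest>-15m@m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

This is only a sketch of the frequent-refresh approach described above, and it assumes the Zabbix audit data is already being indexed in Splunk.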
Thank you, @gcusello and @PickleRick, for your responses. I have tried using the Zabbix add-on for Splunk, but unfortunately, it is not working for my use case. My requirement is to display real-time audit logs from Zabbix in a Splunk dashboard, but only upon user request, such as via a button click or similar functionality. Could you suggest a standard and efficient approach to accomplish this task?
Is this working if roles are updated by installing an app which contains those definitions in conf files, or only if they are edited with the GUI?
First, you should create a new question instead of adding your questions to one that was closed a long time ago. Both of those work equivalently from a technical point of view. But from a human/readability point of view, I at least prefer the approach where the multisite attribute is set in the closest place. Especially when you are looking at those conf files, it's easier to see whether that cluster is the multi-site or single-site version. Of course, you should use the "splunk btool server list" command and check what it shows.
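As a concrete illustration (a generic btool invocation, not tied to this thread's environment), checking where the multisite attribute is effectively set on a node could look like:

$SPLUNK_HOME/bin/splunk btool server list clustering --debug | grep -i multisite

The --debug flag prefixes each effective setting with the .conf file it comes from, which makes it easy to see which app or local configuration is actually supplying the value.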
Hi, as others have already said, you could use the DS to push apps to the deployer, which then pushes them to the SHC members, but we don't encourage you to do it. The DS's main function is to manage UFs, and just those. You can use it to also manage HFs and individual servers, but there are some things you must know, or otherwise there could be side effects. What is the issue you are trying to solve with the DS -> Deployer -> SHC approach? Maybe there is a better way to solve it? r. Ismo
Strictly theoretically speaking, it would probably be possible to do what you want using a classic dashboard, a lot of custom JS, and possibly a custom search command. The thing is, it's so unusual and custom that chances are no one has ever tried something like that, and you'd have to write everything from scratch yourself. But as @gcusello already pointed out - it's completely opposite to the normal Splunk data workflow. What's your use case?