All Posts


Honestly, it looks as if you were trying to rebuild a Zabbix console with other tools. It doesn't make much sense.
Hi, I have two indexes, "cart" and "purchased". In the "cart" index there is a field "cart_id", and in "purchased" there is a field "pur_id". If payment is successful for a cart, then the cart_id value is stored as pur_id in the "purchased" index:

cart          payment status     purchased
cart_id 123   payment received   pur_id 123
cart_id 456   no payment         no record for 456

Now I want to display the percentage of carts for which payment is done. I wonder if anyone can help here. Thank you so much!
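A minimal SPL sketch of one way to get that percentage, assuming each cart_id appears at most once per index (the helper field id is hypothetical):

index=cart OR index=purchased
| eval id=coalesce(cart_id, pur_id)
| stats dc(index) as idx_count by id
| stats count as total_carts, count(eval(idx_count=2)) as paid_carts
| eval pct_paid=round(100 * paid_carts / total_carts, 2)

The idea: an id seen in both indexes is a paid cart, while one seen only in "cart" is unpaid.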
Hi @rohithvr19, real-time monitoring isn't possible; you can have near-real-time monitoring by scheduling a very frequent update of the data (e.g. every 5 or 10 minutes). Otherwise, you need a different solution. As I said, the performance of a query triggered by pressing a button is very, very poor, and the only solution is a frequent update (e.g. every 5 minutes). Ciao. Giuseppe
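A minimal sketch of such a frequent update, assuming a search scheduled every 5 minutes that copies fresh Zabbix events into a summary index (both index names are assumptions):

index=zabbix earliest=-5m@m latest=@m
| collect index=zabbix_summary

The dashboard then searches index=zabbix_summary instead of calling the Zabbix API on demand.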
Thank you, @gcusello and @PickleRick, for your responses. I have tried using the Zabbix add-on for Splunk, but unfortunately, it is not working for my use case. My requirement is to display real-time audit logs from Zabbix in a Splunk dashboard, but only upon user request, such as via a button click or similar functionality. Could you suggest a standard and efficient approach to accomplish this task?
Does this work if roles are updated by installing an app which contains those definitions in conf files, or only if they are edited via the GUI?
First, you should create a new question instead of adding your questions to a question that was closed a long time ago. Both of those work equivalently from a technical point of view. But from a human readability point of view, at least, I prefer the way where the multisite attribute is set in the closest place. Especially when you are looking at those conf files, it's easier to see whether that cluster is the multi- or single-site version. Of course, you should use the "splunk btool server list" command and check what it shows.
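As a hedged alternative to btool, the effective stanza can also be inspected with the rest command from the search bar (this sketch assumes access to the local REST configs endpoint):

| rest /services/configs/conf-server/clustering splunk_server=local
| fields title multisite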
Hi, as others already said, you could use the DS to push apps to the deployer, and then it pushes those to the SHC members, but we don't encourage you to do it. The DS's main function is to manage UFs, and just those. You could also use it to manage HFs and individual servers, but there are some things you must know, or otherwise there could be side effects. What is the issue you are trying to solve with the DS -> Deployer -> SHC solution? Maybe there is a better way to solve it? r. Ismo
Strictly theoretically speaking, it would probably be possible to do what you want using a classic dashboard, a lot of custom JS, and possibly a custom search command. The thing is, it's so unusual and custom that there's a fair chance no one has ever tried anything like it, and you'd have to write everything from scratch yourself. But as @gcusello already pointed out, it's completely opposite to the normal Splunk data workflow. What's your use case?
Hi @rohithvr19, this is the opposite of the normal way Splunk works: Splunk isn't a client of external platforms to be queried when needed. The usual way of working is:

- schedule the ingestion of logs from the external source (e.g. Zabbix) and save the extraction in an index,
- run a search in a dashboard and display the logs.

It's the same approach as with DB Connect: you can run SQL queries directly, but the correct approach is to schedule the queries and search on the indexed results. Why? Because your approach is very, very slow and the results aren't saved in any archive, so you have to run the API script every time, and it consumes a large amount of resources. Use the Splunk Add-on for Zabbix ( https://splunkbase.splunk.com/app/5272 ) to extract the logs and then create your own dashboards. Ciao. Giuseppe
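Once the add-on is ingesting, the dashboard panel becomes an ordinary search over already-indexed data. A minimal sketch, where the index, sourcetype, and field names are assumptions rather than the add-on's documented ones:

index=zabbix sourcetype=zabbix:auditlog earliest=-15m@m
| table _time host user action
| sort - _time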
Is it possible to create a button in a Splunk dashboard that, when clicked, runs a script to export logs from Zabbix and display them on the dashboard? The dashboard should only be visible after the button is clicked. Has anyone implemented something like this before? Please help, as I’m really stuck on this!
How can one understand it or interpret it?
Please share the event as raw text, not in the search app's rendered format. Regardless, you should not need any regex to deal with this data, because Splunk has already extracted everything. Secondly, you do not need to consider logs{}.action, because your requirement only concerns status "Open" and "Escalated". Whatever actions have been taken is irrelevant to the filter. In other words, given status and id like the following:

_time                id     status
2025-01-10 23:24:57  xxx10  Escalated
2025-01-10 23:17:57  xxx10  Other
2025-01-10 23:10:57  xxx10  Open
2025-01-10 23:03:57  xxx10  Other
2025-01-10 22:56:57  xxx10  Open
2025-01-10 23:30:57  xxx11  Closed
2025-01-10 23:23:57  xxx11  Closed
2025-01-10 23:16:57  xxx11  Open
2025-01-10 23:09:57  xxx11  Escalated
2025-01-10 23:02:57  xxx11  Other
2025-01-10 22:55:57  xxx11  Open
2025-01-10 23:29:57  xxx12  Assigned
2025-01-10 23:22:57  xxx12  Open
2025-01-10 23:15:57  xxx12  Closed
2025-01-10 23:08:57  xxx12  Closed
2025-01-10 23:01:57  xxx12  Open
2025-01-10 22:54:57  xxx12  Escalated
2025-01-10 23:28:57  xxx13  Open
2025-01-10 23:21:57  xxx13  Open
2025-01-10 23:14:57  xxx13  Assigned
2025-01-10 23:07:57  xxx13  Open
2025-01-10 23:00:57  xxx13  Closed
2025-01-10 22:53:57  xxx13  Closed
2025-01-10 23:27:57  xxx14  Assigned
2025-01-10 23:20:57  xxx14  Escalated
2025-01-10 23:13:57  xxx14  Open
2025-01-10 23:06:57  xxx14  Open
2025-01-10 22:59:57  xxx14  Assigned
2025-01-10 22:52:57  xxx14  Open
2025-01-10 23:26:57  xxx15  Open
2025-01-10 23:19:57  xxx15  Open
2025-01-10 23:12:57  xxx15  Assigned
2025-01-10 23:05:57  xxx15  Escalated
2025-01-10 22:58:57  xxx15  Open
2025-01-10 22:51:57  xxx15  Open
2025-01-10 23:25:57  xxx16  Open
2025-01-10 23:18:57  xxx16  Other
2025-01-10 23:11:57  xxx16  Open
2025-01-10 23:04:57  xxx16  Open
2025-01-10 22:57:57  xxx16  Assigned

you only want to count events for ids xxx10 (last status Escalated), xxx13 (Open), xxx15 (Open), and xxx16 (Open). Using eventstats is perhaps the easiest:

| eventstats latest(status) as final_status by id
| search final_status IN (Open, Escalated)
| stats count by id final_status

Here, final_status is thrown in just to confirm that it only contains Open or Escalated. The above mock data will result in

id     final_status  count
xxx10  Escalated     5
xxx13  Open          6
xxx15  Open          6
xxx16  Open          5

Here is the emulation that generates the mock data. Play with it and compare with real data.

| makeresults count=40
| streamstats count as _count
| eval _time = _time - _count * 60
| eval id = "xxx" . (10 + _count % 7)
| eval status = mvindex(mvappend("Open", "Assigned", "Other", "Escalated", "Closed"), -(_count * (_count % 3)) % 5)
``` data emulation above ```

Hope this helps.
Hi @splunklearner, I usually avoid using the DS to manage the SHC Deployer and the Cluster Manager, even if it's possible to deploy apps to them. You have to create a deploymentclient.conf file specialized for the CM or the SHCD, adding:

[deployment-client]
# NOTE: Because of a bug in the way the client works when installing apps
# outside of $SPLUNK_HOME/etc/apps, these apps aren't listed as "installed"
# by the deployment client, meaning that taking an app away from the cluster
# manager's serverclass won't remove it from the manager-apps directory. This
# would have to be done by hand. Updates to existing apps will transfer
# from the deployment server just fine, however.
repositoryLocation = $SPLUNK_HOME/etc/manager-apps
serverRepositoryLocationPolicy = rejectAlways

In this way the DS deploys apps not into the $SPLUNK_HOME/etc/apps folder, but into the folder of the CM (as in the example) or of the SHCD. The real problem is how to run the push command: for the CM it's possible from the GUI, but it isn't possible for the SHCD, so it's easier to use a script. In any case, as @PickleRick also said, I'd avoid going looking for problems! Ciao. Giuseppe
We are facing the same challenge. Did you get a solution to this issue? I have upgraded to 9.2.3 recently.
Can the DS push apps to the Deployer directly, so that the Deployer then pushes them to the cluster SHs? Can you please explain how to push apps from the Deployer to the SHs in Splunk Web?
Ok. So these actually were different ids? Because of your anonymization they looked as if they were the same id. So you have only one event per id? And you want to exclude those that have action=closed or some other values? As far as I can see, your JSONs should parse so that you get a multivalued field some.path{}.log.action, am I right? If so, you can use normal field=value conditions. Just remember that with multivalued fields, key!="myvalue" matches an event where there is at least one value in the field key not equal to myvalue, whereas NOT key="myvalue" requires that none of the values in the field key match myvalue (or the field is empty).
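A quick demonstration of that difference on synthetic data (run each search separately; the two-valued field key is hypothetical):

| makeresults
| eval key=split("Open,Closed", ",")
| search key!="Closed"
``` keeps the event: "Open" is at least one value not equal to "Closed" ```

| makeresults
| eval key=split("Open,Closed", ",")
| search NOT key="Closed"
``` drops the event: one of its values does equal "Closed" ```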
Hi @avi123, I'm not 100% sure I understood the requirements, but I'm giving it a shot here. Let me know if this works for you (note that the "Expired" branch is tested first in case(), otherwise the earlier <= comparisons would swallow it):

| inputlookup Expiry_details_list.csv
| lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address
| eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y")
| eval Current_Time = now()
| eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S")
| eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0)
| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry > 15, "Others",
    true(), "None")
| search alert_type != "None"
| eval email_list = case(
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    alert_type == "Expired", Escalation_Contacts_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address),
    true(), "None")
| eval email_list = split(mvjoin(email_list, ","), ",")
| eval cc_email_list = split(mvjoin(cc_email_list, ","), ",")
| dedup Application_name Environment email_list
| eval email_recipient = mvdedup(email_list)
| eval email_recipient = mvjoin(email_recipient, ",")
| eval email_cc = mvdedup(cc_email_list)
| eval email_cc = mvjoin(email_cc, ",")
| table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc
| fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address
Based on the docs, I can't tell if there's a functional difference between this:

[clustering]
multisite = true

[clustermanager:prod]
multisite = true

[clustermanager:dev]
multisite = false

and this:

[clustering]

[clustermanager:prod]
multisite = true

[clustermanager:dev]

in server.conf on the search heads.
Collect is very time sensitive, as @gcusello pointed out. My search with collect writing to an index was working. I changed _time=now() to use a now() value set 14 eval statements earlier in the search, and it stopped writing to the index. After viewing this thread, I changed it back to these final three lines in the search, and it now successfully writes the results to the index every time:

| eval now=now()
| eval _time=now
| collect index=index output_format=raw spool=true source=yourSource sourcetype=stash
I don't need those, really. I only need the ones that have not been updated, so the status is still Open or Escalated, as I am trying to get a number for the volume of what is still outstanding. So yes, you are correct: these events are associated with the same ID, but there are different IDs. What I want is to exclude all of the IDs where the status has been updated and closed (I don't want them to show the open or escalated event).