Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @rohithvr19 , this is the opposite of the normal way to run Splunk: Splunk isn't a client of external platforms to be called on demand. The usual way is to schedule the ingestion of logs from the external source (e.g. Zabbix), save the extraction in an index, then run a search in a dashboard and display the logs. It's the same approach as with DB Connect: you can run SQL queries on demand, but the correct approach is to schedule the queries and search the indexed results. Why? Because your approach is very slow and the results aren't saved in any archive, so you have to run the API script every time and it consumes a large amount of resources. Use the Splunk Add-on for Zabbix ( https://splunkbase.splunk.com/app/5272 ) to extract the logs and then create your own dashboards. Ciao. Giuseppe
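Once the add-on is ingesting into an index, the dashboard panel becomes an ordinary search over indexed data. A minimal sketch, assuming hypothetical index and sourcetype names (zabbix and zabbix:event are placeholders; adjust to your own configuration):

index=zabbix sourcetype=zabbix:event
| timechart span=1h count by severity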
Is it possible to create a button in a Splunk dashboard that, when clicked, runs a script to export logs from Zabbix and display them on the dashboard? The dashboard should only be visible after the button is clicked. Has anyone implemented something like this before? Please help, as I’m really stuck on this!
How can one understand it or interpret it?
Please share the event as raw text, not in the Search app's rendered format. Regardless, you should not need any regex to deal with this data, because Splunk has already extracted everything. Secondly, you do not need to consider logs{}.action, because your requirement only concerns status "Open" and "Escalated"; what actions have been taken is irrelevant to the filter. In other words, given status and id like the following:

_time               id    status
2025-01-10 23:24:57 xxx10 Escalated
2025-01-10 23:17:57 xxx10 Other
2025-01-10 23:10:57 xxx10 Open
2025-01-10 23:03:57 xxx10 Other
2025-01-10 22:56:57 xxx10 Open
2025-01-10 23:30:57 xxx11 Closed
2025-01-10 23:23:57 xxx11 Closed
2025-01-10 23:16:57 xxx11 Open
2025-01-10 23:09:57 xxx11 Escalated
2025-01-10 23:02:57 xxx11 Other
2025-01-10 22:55:57 xxx11 Open
2025-01-10 23:29:57 xxx12 Assigned
2025-01-10 23:22:57 xxx12 Open
2025-01-10 23:15:57 xxx12 Closed
2025-01-10 23:08:57 xxx12 Closed
2025-01-10 23:01:57 xxx12 Open
2025-01-10 22:54:57 xxx12 Escalated
2025-01-10 23:28:57 xxx13 Open
2025-01-10 23:21:57 xxx13 Open
2025-01-10 23:14:57 xxx13 Assigned
2025-01-10 23:07:57 xxx13 Open
2025-01-10 23:00:57 xxx13 Closed
2025-01-10 22:53:57 xxx13 Closed
2025-01-10 23:27:57 xxx14 Assigned
2025-01-10 23:20:57 xxx14 Escalated
2025-01-10 23:13:57 xxx14 Open
2025-01-10 23:06:57 xxx14 Open
2025-01-10 22:59:57 xxx14 Assigned
2025-01-10 22:52:57 xxx14 Open
2025-01-10 23:26:57 xxx15 Open
2025-01-10 23:19:57 xxx15 Open
2025-01-10 23:12:57 xxx15 Assigned
2025-01-10 23:05:57 xxx15 Escalated
2025-01-10 22:58:57 xxx15 Open
2025-01-10 22:51:57 xxx15 Open
2025-01-10 23:25:57 xxx16 Open
2025-01-10 23:18:57 xxx16 Other
2025-01-10 23:11:57 xxx16 Open
2025-01-10 23:04:57 xxx16 Open
2025-01-10 22:57:57 xxx16 Assigned

you only want to count events for ids xxx10 (last status Escalated), xxx13 (Open), xxx15 (Open), and xxx16 (Open). Using eventstats is perhaps the easiest.

| eventstats latest(status) as final_status by id
| search final_status IN (Open, Escalated)
| stats count by id final_status

Here, final_status is thrown in just to confirm that it only contains Open or Escalated. The above mock data will result in:

id    final_status count
xxx10 Escalated    5
xxx13 Open         6
xxx15 Open         6
xxx16 Open         5

Here is the emulation that generates the mock data. Play with it and compare with real data.

| makeresults count=40
| streamstats count as _count
| eval _time = _time - _count * 60
| eval id = "xxx" . (10 + _count % 7)
| eval status = mvindex(mvappend("Open", "Assigned", "Other", "Escalated", "Closed"), -(_count * (_count % 3)) % 5)
``` data emulation above ```

Hope this helps.
Hi @splunklearner , I usually avoid using the DS to manage the SHC Deployer and the Cluster Manager, even though it's possible to deploy apps to them. You have to create a deploymentclient.conf file specialized for the CM or the SHC Deployer, adding:

[deployment-client]
# NOTE: Because of a bug in the way the client works when installing apps
# outside of $SPLUNK_HOME/etc/apps, these apps aren't listed as "installed"
# by the deployment client, meaning that taking an app away from the cluster
# manager's serverclass won't remove it from the manager-apps directory. This
# would have to be done by hand. Updates to existing apps will transfer
# from the deployment server just fine, however.
repositoryLocation = $SPLUNK_HOME/etc/manager-apps
serverRepositoryLocationPolicy = rejectAlways

In this way the DS deploys apps not into the $SPLUNK_HOME/etc/apps folder but into the folders of the CM (as in the example) or of the SHC Deployer. The real problem is how to run the push command: for the CM it's possible from the GUI, but it isn't possible for the SHC Deployer, so it's easier to use a script. And in any case, as @PickleRick also said, I'd avoid looking for problems for myself! Ciao. Giuseppe
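For the SHC Deployer side, the push has to happen from the command line. A minimal sketch of such a script, where the target URI and credentials are placeholders (sh1.example.com and admin:changeme are assumptions, not real values):

# run on the SHC Deployer after the DS has delivered the apps
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme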
We are facing the same challenge. Did you get a solution to this issue? I have upgraded to 9.2.3 recently.
Can the DS push apps to the Deployer directly, so that the Deployer then pushes them to the cluster SHs? Can you please explain how to push apps from the Deployer to the SHs in Splunk Web?
Ok. So these actually were different ids? Because of your anonymization they looked as if they were the same id. So you have only one event per id? And you want to exclude those that have action=closed or some other values? As far as I can see, your JSONs should parse so that you get a multivalued field some.path{}.log.action, am I right? If so, you can use normal field=value conditions. Just remember that with multivalued fields, key!="myvalue" matches an event where there is at least one value in the field key not equal to myvalue, whereas NOT key="myvalue" requires that none of the values in the field key match myvalue (or the field is empty).
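A minimal run-anywhere sketch of that difference, using a made-up multivalued field named key with the values open and close:

| makeresults
| eval key = split("open,close", ",")
| search key!="close"
``` one result: "open" is a value of key that is not equal to "close" ```

| makeresults
| eval key = split("open,close", ",")
| search NOT key="close"
``` no results: at least one value of key equals "close" ```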
Hi @avi123 , I'm not 100% sure I understood the requirements, but I'm giving it a shot here. Let me know if this works for you:

| inputlookup Expiry_details_list.csv
| lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address
| eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y")
| eval Current_Time = now()
| eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S")
| eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0)
| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry > 15, "Others",
    true(), "None")
| search alert_type != "None"
| eval email_list = case(
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    alert_type == "Expired", Escalation_Contacts_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address),
    true(), "None")
| eval email_list = split(mvjoin(email_list, ","), ",")
| eval cc_email_list = split(mvjoin(cc_email_list, ","), ",")
| dedup Application_name Environment email_list
| eval email_recipient = mvdedup(email_list)
| eval email_recipient = mvjoin(email_recipient, ",")
| eval email_cc = mvdedup(cc_email_list)
| eval email_cc = mvjoin(email_cc, ",")
| table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc
| fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address
Based on the docs, I can't tell if there's a functional difference between this:

[clustering]
multisite = true

[clustermanager:prod]
multisite = true

[clustermanager:dev]
multisite = false

and this:

[clustering]

[clustermanager:prod]
multisite = true

[clustermanager:dev]

in server.conf on search heads.
Collect is very time sensitive, as @gcusello pointed out. My search with collect writing to an index was working. I changed _time=now() to use a now() value set 14 eval statements earlier in the search, and it stopped writing to the index. After viewing this thread, I changed it back to these final three lines in the search, and it now successfully writes the results to the index every time:

| eval now=now()
| eval _time=now
| collect index=index output_format=raw spool=true source=yourSource sourcetype=stash
I don't need those, really. I only need the ones that have not been updated, so the status is still Open or Escalated, as I am trying to get a number for the volume of what is still outstanding. So yes, you are correct: these events are associated with the same ID, but there are different IDs. What I want is to exclude all of the IDs where the status has been updated and closed (I don't want them to show the open or escalated event).
Dedup is rarely the way to go. OK. So you have some JSON events (for future reference, it's better to copy-paste the _raw_ event, not the rendered form from the Splunk UI, into a code block or a preformatted-styled paragraph). I assume all your events for a single alert share the id field, right? I'm not sure, however, what the relation is between those two events, since one seems to contain a subset of the data contained in the other. It looks a bit wasteful if you're not just logging changes in your alert's state but repeating the growing "history" with each subsequent event. What are you trying to get from those events, then?
The DS is not supposed to serve apps to clustered search heads directly. That's what the deployer is for. If by any chance you managed to get clustered SHs to pull apps directly from the DS, you're in for a possible load of problems.
We have a deployment server which deploys apps (which contain configs) to a search head cluster (3 SHs). I am not sure whether the DS distributes apps directly to the SH members, or whether it sends them to the deployer and from there the deployer distributes the apps to the SH members. Please clarify. We have created a role in a DS app which restricts access to a specific index. When we try to push it, that role is not reflected on the SH members. But when we check on the deployer, that app is present under shcluster/apps and the role is updated. It is just not showing in the SH UI. What is the problem? Do we need to manually push the config from the deployer to the SH members every time? We have the config on the deployer as deployer_push_mode=merge_to_default. Does that mean distribution is automated? If not, how do we push config from the deployer to the SH members through Splunk Web? We don't have access to the backend server to run CLI commands.
Yes, of course. Here is my example data. The first is the alert when it first comes in and the second is after it has been reviewed and closed.

Alert Received:

alert_type: search query
asset: { [+] }
asset_term: null
content_created_at: 2017-01-10T11:00:00+00:00
escalated: false
id: XXXXXX112
last_modified: 2025-01-09T16:33:37Z
logs: [
  { action: open, detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
  { action: modify tags, detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
]
metadata: { [+] }
network: domains
severity: 4
status: Open
timestamp: 2025-01-09T16:33:37+00:00
timestamp_modify_tags: 2025-01-09T16:33:37+00:00

Alert Closed:

alert_type: search query
asset: { [+] }
asset_term: null
content_created_at: 2017-01-10T11:00:00+00:00
escalated: false
id: XXXXXX112
last_modified: 2025-01-09T17:10:52Z
logs: [
  { action: close, detail: , id: , subject: , timestamp: 2025-01-09T17:10:52+00:00 }
  { action: modify notes, detail: , id: , subject: , timestamp: 2025-01-09T17:10:48+00:00 }
  { action: assign, detail: , id: , timestamp: 2025-01-09T17:09:25+00:00 }
  { action: open, actor: , detail: , id: , subject: , timestamp: 2025-01-09T16:33:37+00:00 }
  { [+] }
]
metadata: { [+] }
network: domain
severity: 4
status: Closed
timestamp: 2025-01-09T16:33:37+00:00
timestamp_modify_notes: 2025-01-09T17:10:48+00:00
timestamp_modify_tags: 2025-01-09T16:33:37+00:00

I tried initially to just dedup, but that was before I knew it was pulling in multiple events. Since then I have tried the following:

1. I tried doing an mvindex on the status events, but it was still pulling in all of the events.
2. I then tried doing latest(status), but realized that was only going to pull in the actual latest status of the ID, and would still include all of the events.
3. I also tried a subsearch per some guidance from a colleague, which ended up looking like the following:

| dedup id
| where
    [ search index=source
    | stats latest(status) as latest_status by id
    | where latest_status="closed"
    | return $id ]

4. Lastly, I tried going at the metadata, which looked like the following:

| dedup id
| fields id
| format
| rex mode=sed field=search "s/ / OR /g"
| eval search="NOT (id IN (" + search + "))"
| fields search
| format "" "" "" "" "" "" "search"

and turned into this:

[ | search index="source" NOT (status="Open" OR status="Escalated")
| stats count by id
| fields id
| format "" "" "" "" "," "OR" "id!="
| rex mode=sed field=search "s/^(.*)$/NOT (id IN (\1))/"
As @richgalloway said, don't use Splunk to terminate a syslog feed. When you are using a real syslog server, it's better to use a UF instead of an HF to forward those events. Or use SC4S, especially if you don't have experience running a syslog server.
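A minimal sketch of the UF side, assuming a hypothetical syslog server that writes per-host files under /var/log/syslog-ng/ and an index named network (both names are placeholders):

# inputs.conf on the UF running next to the syslog server
[monitor:///var/log/syslog-ng/*/*.log]
sourcetype = syslog
index = network
# take the host name from the 4th path segment: /var/log/syslog-ng/<host>/...
host_segment = 4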
Can you show some sample events and the SPL you have tried?
Hi, when you are using stats, it removes all other fields. Basically you have two options to do this: use timechart together with trendline, or streamstats with a window parameter. r. Ismo
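A minimal sketch of both options, assuming a hypothetical index named web and an hourly count as the metric:

``` moving average with timechart + trendline ```
index=web | timechart span=1h count | trendline sma5(count) as moving_avg

``` moving average with streamstats over a 5-bucket window ```
index=web | timechart span=1h count | streamstats window=5 avg(count) as moving_avg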
Hi all... I got this fixed with the simple logic of the set diff command. Thanks, everyone.
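For readers landing here, a minimal sketch of what a set diff approach looks like, with placeholder index names (index_a and index_b are assumptions):

``` ids that appear in one index but not the other (symmetric difference) ```
| set diff
    [ search index=index_a | fields id ]
    [ search index=index_b | fields id ]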