All Posts



I would like to be able to see the daily traffic flow rate of Splunk Enterprise on my dashboard. Ideally, I would like to be able to see the traffic flow per forwarder, but at the very least I would like to see the overall traffic flow. Is this possible?
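Yes, this is possible. One common approach (a sketch, assuming your indexers' _internal index is searchable from your search head) is to query the tcpin_connections entries in metrics.log, which record inbound kilobytes per forwarder connection:

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| eval forwarder=coalesce(hostname, sourceHost)
| timechart span=1d sum(kb) as total_kb by forwarder
```

Dropping the `by forwarder` clause gives the overall daily traffic instead of the per-forwarder breakdown. The `forwarder` field name here is just a label; `hostname` and `sourceHost` are the fields tcpin_connections events typically carry.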
Splunk APM is part of the Observability Cloud and is available as a SaaS offering only. https://www.splunk.com/en_us/products/pricing/observability.html
The query I provided was an example that must be customized for your environment.  At the very least, the index name "foo" must be changed to the name of the index that holds the event data.  The field names also may need to be changed.  Look at the events to see what is available to you.
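To see which fields are actually available in your events, a quick sketch like this can help (the index name "foo" is again a placeholder):

```spl
index=foo
| fieldsummary
| table field, count, distinct_count
```

fieldsummary lists every extracted field along with how many events carry it, which makes it easy to spot the real names to substitute into the example query.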
Oh okay, I just assumed it was a Splunk lookup. If you are indexing the data from a CSV, then you can probably do something like this (assuming field extractions are in place):

index=<index> sourcetype=<sourcetype>
| table [
    | makeresults
    | fields - _time
    | eval ID=[ | search index=<index> sourcetype=<sourcetype> | stats latest(ID) as ID | return $ID ],
        field_list_id_zero="NAME,STATUS,DATE,ACTION",
        field_list_id_positive="DATE-Changed,ID,NAME,DATE_DOWN,ACTION",
        final_field_list=if( 'ID'==0, 'field_list_id_zero', 'field_list_id_positive' )
    | fields + final_field_list
    | return $final_field_list
    ]

where <index> and <sourcetype> are where your CSV is being indexed.
Wow, fast reply. Thanks. The ID gets set when the CSV file is written. I have a Python program that queries a MySQL database and writes a "0" as the ID if no results are returned from the query. If there is data returned, the ID is taken from the query results (e.g., ID=34). The CSV file is on a remote server; I use the Splunk Universal Forwarder to send the file to Splunk. Is there a way to get this file set up as an input lookup, or does inputlookup require the file to be local to the Splunk server? Thanks for the quick help. EWHolz
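For context, inputlookup does require the lookup file to live on the search head (or be replicated to it); a CSV shipped by a Universal Forwarder arrives as indexed events, not as a lookup. A minimal UF monitor stanza for such a file might look like this (the path, index, and sourcetype are placeholders, not your actual values):

[monitor:///opt/data/results.csv]
index = my_index
sourcetype = my_csv

If you genuinely need lookup semantics, a scheduled search on the search head can rebuild a lookup from the indexed events, e.g. a sketch like `index=my_index sourcetype=my_csv | table ID, NAME, STATUS, DATE, ACTION | outputlookup my_lookup.csv`.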
Not sure exactly how your ID value is being derived in this situation, but you may be able to use a subsearch holding your list of fields for each scenario, and then set up an eval if() function to determine which list to use based on the value in the ID field. Then, with a return command, you can pass that conditional field list back into the parent search after a fields command. Something like this:

| inputlookup <lookup>
| fields [
    | makeresults
    | fields - _time
    | eval
        ``` Not sure how the ID is being derived, but there should be a variety of ways to get it here ```
        ``` From a lookup: ID=[ | inputlookup <lookup> | stats max(ID) as ID | return $ID ] ```
        ``` From a token: ID=$ID_token$ ```
        ``` Hardcoded here for a POC: ```
        ID=1,
        field_list_id_zero="NAME,STATUS,DATE,ACTION",
        field_list_id_positive="DATE-Changed,ID,NAME,DATE_DOWN,ACTION",
        final_field_list=if( 'ID'==0, 'field_list_id_zero', 'field_list_id_positive' )
    | fields + final_field_list
    | return $final_field_list
    ]

[Screenshots: sample output when ID=0, sample output when ID>0]
Hello all, I have a search question. I have a CSV file that returns data. If the ID field shows no data (ID=0), I want a table with 4 columns: NAME, STATUS, DATE, ACTION. These come from the CSV file header line. If ID > 0, I want to show these columns instead: DATE-Changed, ID, NAME, DATE_DOWN, ACTION. I have not yet seen how I might do this. What I need, in a sense, is two searches: one when ID=0, and one when ID>0. Any suggestions? Thanks, EWHOLZ
What is the question?
Hi @Krishanu.Maity, I will be sending you a private message via the Community where I'll be asking you for some information. 
Hello, thank you so much. The event IDs listed are all regarding changes to the system; this report would be the "report that shows Changes to System Sec Config events". Regarding all logs, we have identified the specific ones. I am running the query you suggested, but it's not giving any results and no error messages. Thanks again!

index=foo eventid IN (4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4736, 4737, 4740, 4754, 4755, 4756, 4757, 4758, 4759, 4783, 4784, 4785, 4786, 4787, 4788, 4789, 4791, 631)
| fields user, action, subject, ProcessName
| stats min(_time) as FirstEvent max(_time) as LastEvent count by user, _time, action, subject, ProcessName AND NOT User IN (list_of_users) AND User_Impacted != (AD_Group)
| where NOT (match(Host_Impacted, "sc") OR match(Host_Impacted, "sd") OR match(Host_Impacted, "^sc.+") OR match(Host_Impacted, "^sd.+"))
| table User, _time, EventID, Group, Host, Host_Impacted, Login, VendorMsgID, Domain Impacted)
| stats values(*) as * by User
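A likely reason for zero results: the `AND NOT User IN (...)` clause is attached to the stats by-list rather than the initial search, the where and table commands reference fields (User, Host_Impacted, EventID, ...) that the earlier fields and stats commands have already dropped, and there is a stray closing parenthesis after "Domain Impacted". A reworked sketch, with the caveat that the exact field names (user vs. User, eventid vs. EventID, Domain_Impacted, etc.) are assumptions and must match what your events actually contain:

```spl
index=foo eventid IN (<your_event_id_list>) NOT user IN (<list_of_users>) User_Impacted!=<AD_Group>
| where NOT match(Host_Impacted, "^s[cd]")
| table user, _time, eventid, Group, Host, Host_Impacted, Login, VendorMsgID, Domain_Impacted
| stats values(*) as * by user
```

Filtering in the initial search (before any fields/stats commands remove data) is the key change; the regex `^s[cd]` collapses the four match() calls into one, assuming you only want to exclude hosts beginning with "sc" or "sd".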
This XML file does not appear to have any style information associated with it. The document tree is shown below.

<response>
  <messages>
    <msg type="ERROR">Unauthorized</msg>
  </messages>
</response>
Hi, I am trying to use the OTel collector with the AppD controller, and I am unable to get the access key from the button. Nothing happens when I click the "generate access key" button: it waits 30 seconds for the key, but nothing appears in the UI. Can you please help? Thanks, Krishanu
Hi, you probably have a copy-paste error with the file name, as those don't match. You could check whether the UF has read that file with "splunk list inputstatus". r. Ismo
Hi, it's probably the same, but (at least there) if you have a lot of those in conf files, they could minimally slow down execution time, since the conf files are loaded every time you execute a query. Unless you have thousands of them, though, it probably doesn't matter. r. Ismo
Hi, as @richgalloway said, most of the parameters are still in use. Just look at the version 7.0.0 docs to get the nearest documentation. You'll find those settings in at least the server and inputs conf files. If/when you are talking about intermediate forwarders, you should also ensure that you are using persistent disk queues, not only memory-based ones. Otherwise you lose events if nodes or services go down before the indexers are back up. r. Ismo
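For reference, a persistent queue on an intermediate forwarder is enabled per input stanza in inputs.conf. A minimal sketch (the port and sizes here are illustrative placeholders, not recommendations; size the persistent queue for your expected outage window and disk capacity):

[splunktcp://9997]
queueSize = 10MB
persistentQueueSize = 5GB

With persistentQueueSize set, events that overflow the in-memory queue while the indexers are unreachable spill to disk and are replayed once the output path recovers.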
Awesome! Glad it's working out so far.  Feel free to leave reply if you run into any issues and I we can try to resolve.
From what I can tell in testing this over the last few hours, this solution works really well. I'm still testing it out and validating accuracy, but so far it's great. I was actually working on adding duration, but you definitely beat me to it. Thanks!
Hi @indeed_2000, I'm not sure, but this app isn't downloadable from Splunkbase, so I suppose it's available only in Cloud. You should ask your reference Splunk Partner. Ciao. Giuseppe
You may be able to use streamstats, assuming that there is some degree of distribution of _time between each event.

<base_search>
| rex field=_raw "Processing\s+(?<process>[^\-]+)\-"
| rex field=_raw "Person\s+Name\:\s+(?<person_name>[^\,]+)\,"
| sort 0 +_time
| streamstats reset_before="("isnotnull(process)")" values(process) as current_process
| streamstats window=2 first(_raw) as previous_log
| eval checked_person_name=if( match(previous_log, "\-Check\s+for\s+Person\-"), 'person_name', null() )
| stats min(_time) as _time by current_process, checked_person_name
| fields + _time, current_process, checked_person_name

[Screenshots: the final output, and the table before the final stats aggregation, which shows more context around what the streamstats commands are doing here.]

Note: For this method to work properly, the _time values of the process events shouldn't be exactly the same; there needs to be some step up in time to the next event (even if it is only milliseconds), because the events must be in the correct sequence for streamstats to work as expected.
Main question: is Splunk APM available on-premises, or is it available only as a cloud offering?