All Topics


Hi. On my heavy forwarder I am trying to understand how logs are arriving on a particular source type. I go to Settings > Source types, search for it, find it, and edit it. But it doesn't tell me anything about how those .csv files from the various hosts are getting to the heavy forwarder. The universal forwarder's inputs.conf file on the host does not reference the .csv files. Is there anything else I can do on the heavy forwarder to find out how the hosts are sending to it? It's not syslog.
Search results:

Job,Id
aaa,1234
ccc,2345
ddd,9879
fff,6743
eee,8754
zzz,4006

Lookup file:

Job1,Job2,Job3
ccc,eee,zzz
ddd,fff,aaa

The output table should look like this:

Job1,id1,Job2,id2,Job3,id3
ccc,2345,eee,8754,zzz,4006
ddd,9879,fff,6743,aaa,1234

I tried join, append, and appendcols, but all of them return incorrect results.
Hi all, curious if anyone has any SPL that can track a particular domain's SSL certificate and where it is being used. Any help with this is GREATLY appreciated. Example: I need to know every place the "exampledomain.com" SSL certificate is being used. Thanks again!
When I have a case statement like this, it "works": it runs and puts values in the iSeries column, but they are wrong.

| eval Platform=case((source="A" OR source="B" OR source="C"), "iSeries", true(), "Other")

When I add an AND to it so that it has to fulfill both conditions, no values are put in the iSeries column and everything goes to Other.

| eval Platform=case((source="A" OR source="B" OR source="C") AND (dest=X OR dest=Y OR dest=Z), "iSeries", true(), "Other")

What am I doing wrong?
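A likely culprit, offered as an assumption about the data: inside eval, an unquoted token such as X is treated as a field name, not a string literal, so dest=X compares dest against a (probably nonexistent) field called X, which is never true. If X, Y, and Z are meant to be literal values, quoting them should behave as intended:

```spl
| eval Platform=case((source="A" OR source="B" OR source="C") AND (dest="X" OR dest="Y" OR dest="Z"), "iSeries", true(), "Other")
```

The first version "works" only because its single condition happens to use quoted literals throughout.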
Hi, I am very new to Splunk and need some help understanding Splunk command execution structure.

Case: We have input data coming from a DB and being ingested into Splunk. The data has some patterns (including dynamic and static values). We maintain a lookup table containing partial strings (error messages) that need to be checked against the ingested data. If a pattern is found, we need to take the string from the lookup table and get the count of similar errors that occurred in a day.

Approach: Using the lookup table and the wildcard match_type in transforms.conf, I am able to match static errors in the ingested data, but dynamic errors do not match. For example, if a pattern like "30032_SomeID_23448:Name:test--curt:fields" appears in the ingested data, I add the partial string (*_SomeID_*--curt:fields*) to the lookup table so that it matches via the wildcard match_type in transforms.conf. But I also need to extract the string "30032_SomeID_23448:Name:test--curt:fields" from the ingested data, so I am using a regex. I store the regex string in the lookup table so that on a match I can retrieve the corresponding regex and pass it into the Splunk search for execution. Please correct me if my logic is right for this task, or tell me if there is a better way in Splunk.

Regex stored in the lookup table: ": (?<ename>.+?) Field"

I retrieve this in the search through the lookup command and try to execute it by passing the string from the lookup output into rex:

| lookup application_lookup email_body AS email_body OUTPUT email_body AS email_body_lookup application_name alert_type rge_col
| rex field=email_body + rge_col

but I get the error below:

Error in 'rex' command: Encountered the following error while compiling the regex '+': Regex: quantifier does not follow a repeatable item.

If someone can guide me, that would be a great help. Thanks in advance! @MuS @wrangler2x
I need to create a search that counts IPs which return events for two different fields in the same index. Search 1 will not contain field1=ABC when Search 2 contains field2=123.

Search 1: index=weblogs field1=ABC
Search 2: index=weblogs field2=123
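One way to sketch this in a single search, assuming each event carries a clientip field (replace it with the actual IP field name): pull both populations in one pass, count each condition per IP, then keep only the IPs that matched the second condition but never the first:

```spl
index=weblogs (field1=ABC OR field2=123)
| stats sum(eval(if(field1="ABC",1,0))) as abc_count,
        sum(eval(if(field2="123",1,0))) as match_count by clientip
| where abc_count=0 AND match_count>0
| stats dc(clientip) as ip_count
```

The single-pass approach avoids join/subsearch limits when the index is large.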
Greetings! I just wanted to know the steps for adding an input to a UF using the CLI. Thank you in advance.
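For reference, a monitor input can be added with the Splunk CLI on the forwarder itself; the path, index, and sourcetype below are placeholders:

```shell
# add a file/directory monitor on the universal forwarder
$SPLUNK_HOME/bin/splunk add monitor /var/log/myapp.log -index main -sourcetype my_sourcetype

# verify what is being monitored
$SPLUNK_HOME/bin/splunk list monitor
```

This writes the equivalent stanza into inputs.conf, so it can also be reviewed or edited there afterwards.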
Hi Splunk Answers folks, I have a question regarding DB Connect and was curious if anyone had any insight. After reading the documentation, I'm still a bit unsure how to use the DB Output feature included in DB Connect.

I have DB Connect installed on one of our HFs, but in order for the output command to work, I'll need to search the data that is in our Splunk Cloud instance. The HF is currently configured to send data to the indexers, but I'm stuck on figuring out how to run a query on the HF that will let me send the returned data to a DB. Currently, I'm unable to pull any data on the HF.

Thank you! DFurtaw
I have a search that evals a calculation from other fields into a "Duration" field for netflow data. Is there a way to populate duration in the Network Traffic data model with the results of the calculation? The data model currently has firewall data in it, but I'd like to add netflow as well. Thanks!
I'm performing a REST search that ends with a | table command. When I configure the script for CSV format, I get 5 events. Raw format: 5 events. Splunk Web: 5 statistics/events. But when I switch the format to JSON, I get 10 events, 5 of which are duplicates.

Is there any valid reason why JSON should be different from all the other output types? I've read solutions that suggest going to the config files, but that is not available to me. Hopefully there is a way, inline with my search, to tell Splunk that I want just the data that shows up in my table. That would be ideal. Thanks so much.
Hello, I'm trying to send rsyslog logs via SSL to an indexer (Splunk version 8). The logs are received by the indexer but are not readable (which is my problem: the logs are not decrypted). I've configured inputs.conf like this:

tcp-ssl://8514]
sourcetype = syslog
disabled = false
index = linux

[SSL]
password =
#requireClientCert = false
rootCA = $SPLUNK_HOME/etc/auth/xxx.ca-bundle
serverCert = $SPLUNK_HOME/etc/auth/xxx.crt

netstat shows tcp/8514 open for splunkd on the indexer, and btool shows:

/opt/splunk/etc/apps/search/local/inputs.conf [tcp://8514]
/opt/splunk/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunk/etc/apps/search/local/inputs.conf connection_host = dns
/opt/splunk/etc/system/local/inputs.conf host = xxxxx.fr
/opt/splunk/etc/apps/search/local/inputs.conf index = linux
/opt/splunk/etc/apps/search/local/inputs.conf sourcetype = syslog

Thank you for your help
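Worth noting, as a hedged observation: the btool output shows the effective stanza as [tcp://8514] (a plain TCP input), which would explain encrypted bytes arriving unreadable. For comparison, a minimal SSL input stanza might look like the following; note the stanza header needs both brackets, and the certificate paths are placeholders:

```ini
# inputs.conf (a sketch, assuming the certs under $SPLUNK_HOME/etc/auth are valid)
[tcp-ssl://8514]
sourcetype = syslog
index = linux
disabled = false

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/xxx.crt
rootCA = $SPLUNK_HOME/etc/auth/xxx.ca-bundle
sslPassword = <certificate key password>
```

After editing, re-running btool should show a [tcp-ssl://8514] stanza rather than [tcp://8514] if the header parsed correctly.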
Hi, I am trying to achieve the logic for the scenario below, but I am running into a conflict.

Table:

id      | start_time | end_time   | Ov_status  | value   | value_status
xyz.123 | 2020-07-22 |            | Inprogress | myvalue | Failed
xyz.123 | 2020-07-22 | 2020-07-22 | completed  | yourval | Completed
abc.321 | 2020-07-22 |            | Inprogress | isval   | Inprogress

Here I have used the case statement below to get Ov_status:

| eval Ov_status=case(isnotnull(start_time) AND isnull(end_time), "Inprogress", isnotnull(start_time) AND isnotnull(end_time), "completed")

Now I want: if value_status is Failed for any value of an id (xyz.123), the Ov_status should be reset to Failed.
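One possible sketch, assuming value_status uses the literal value "Failed": flag failed rows, propagate the flag across all rows of the same id with eventstats, then override Ov_status:

```spl
| eval Ov_status=case(isnotnull(start_time) AND isnull(end_time), "Inprogress",
                      isnotnull(start_time) AND isnotnull(end_time), "completed")
| eval failed_flag=if(value_status="Failed", 1, 0)
| eventstats max(failed_flag) as any_failed by id
| eval Ov_status=if(any_failed=1, "Failed", Ov_status)
| fields - failed_flag any_failed
```

eventstats attaches the per-id maximum back onto every row, so every row of xyz.123 sees any_failed=1 once one of its rows has failed.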
So suppose that every day Splunk takes in a report that houses 9 different fields, one of which is called 'status'. Status can be 'New', 'Closed', or 'Open'. I'm trying to show a timechart with the count per day of reports that have 'Closed' and 'New' status, along with the difference between the two (each day). So a file with report_date '2020-07-23' is ingested into Splunk and shows we had 5 'New' reports and 7 'Closed' reports, so the difference should be 2 for that day. How do I go about doing this in my search query?

index=blah... | timechart count(report_date) by status | fields - OPEN

This is where I'm stuck: how do I get the difference of only NEW and CLOSED included in my graph? Thanks in advance
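One hedged sketch: after `timechart ... by status`, each status value becomes its own column, so the difference can be computed with eval. The column names must match the status values exactly, including case (the single quotes tell eval these are field names):

```spl
index=blah ...
| timechart span=1d count by status
| fields - Open
| eval Difference = 'Closed' - 'New'
```

If the raw values are actually upper-case (NEW/CLOSED/OPEN), use those spellings in the eval and fields clauses instead.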
We have a number of alerts set up with Splunk. One of them monitors the state of VPNs from our Cisco routers. One of these routers has a couple of VPNs that are actually supposed to be down (until they get traffic sent over them), so we keep getting a false positive on the alert. This is the query:

sourcetype="cisco:ios" AND "down"

I am VERY new to this and to writing queries. What is the syntax I need to add to this query so that it ignores this one particular router? Thank you for any reply.
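A minimal sketch, assuming the router is identified by its host field (router05 is a placeholder for the actual host name):

```spl
sourcetype="cisco:ios" "down" NOT host="router05"
```

If the router appears under a different field (for example dvc or src), swap host for that field name.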
Hello, I would like to know if I can forward data based on sourcetype between two indexers, or between an indexer and a search head. I would like to forward only data of a certain sourcetype. Thank you for your help
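For reference, selective forwarding by sourcetype is usually done with _TCP_ROUTING on the instance that parses the data. A sketch under the assumption that the output group is named my_indexers and the sourcetype is my_sourcetype (both placeholders):

```ini
# outputs.conf
[tcpout:my_indexers]
server = indexer.example.com:9997

# props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_indexers

# transforms.conf
[route_to_indexers]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = my_indexers
```

Events of other sourcetypes follow the default output group, so routing only this sourcetype elsewhere requires keeping it out of defaultGroup.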
Hi, I have tried logging in to the Splunk free trial instance using the default credentials (username: admin) and also with my Splunk account credentials, but I always get the "Invalid username/password" error. Could you please help me fix this?
I have a generic search that looks for logins, and there is a field that has two values: "authentication" for a successful login and "failed login" for a failed login. So I modified an existing search that looks for >=3 attempts with success >0 and failed >=3 within 15 minutes, like so:

index="foo" host="bar" Application="app1"
| dedup _time
| eval time = _time
| bin time span=15m
| stats count("Activity Name") as Attempts, count(eval(match("Activity Name","FAILED LOGIN"))) as Failed, count(eval(match("Activity Name","AUTHENTICATION"))) as Success earliest(_time) as FirstAttempt latest(_time) as LatestAttempt by Username time
| rename time as _time
| search Attempts>=3 AND Success>0 AND Failed>=3
| eval FirstAttempt=strftime(FirstAttempt,"%x %X")
| eval LatestAttempt=strftime(LatestAttempt,"%x %X")

For some reason it does not like the count(eval(match(...))) parts: if I shorten the search to the following, I see results for Attempts but nothing for Success or Failed.

index="foo" host="bar" Application="app1"
| dedup _time
| eval time = _time
| bin time span=15m
| stats count("Activity Name") as Attempts, count(eval(match("Activity Name","FAILED LOGIN"))) as Failed, count(eval(match("Activity Name","AUTHENTICATION"))) as Success

Any help would be greatly appreciated. Thx
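One thing to check, offered as an assumption about the data: inside eval, double-quoted "Activity Name" is a string literal, not the field, so match() is testing the literal text "Activity Name" and never the field's value. Field names containing spaces must be single-quoted in eval. match() is also case-sensitive by default, so an inline (?i) flag helps if the raw values are lower-case:

```spl
| stats count('Activity Name') as Attempts,
        count(eval(match('Activity Name', "(?i)FAILED LOGIN"))) as Failed,
        count(eval(match('Activity Name', "(?i)AUTHENTICATION"))) as Success,
        earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt
        by Username time
```

The rest of the original search (bin, rename, threshold filter) should work unchanged around this stats clause.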
Hello all, using the Splunk Security Essentials app from Splunkbase, we would like to add additional phases to the Manage Bookmarks page. Is there a lookup we can modify to add additional phases? We found some arrays in the JavaScript, but cannot follow all the logic.
Hi, I am having an issue where the Microsoft O365 Reporting Add-On keeps stopping. We found that a restart of the input in the Web UI usually fixes the issue, so I was wondering whether it is possible to restart those inputs through an API call. From the internal logs, I found the command Splunk performs when you disable the inputs in the add-on:

admin [22/Jul/2020:09:49:19.674 -0400] "POST /servicesNS/nobody/TA-MS_O365_Reporting/data/inputs/ms_o365_message_trace/ms_o365_message_trace/disable HTTP/1.1"

I have tried different variations of the command below in PowerShell, but haven't had any luck so far. Is it possible to call these Web UI commands through the API?

Invoke-Restmethod -Uri http://localhost:8088/servicesNS/nobody/TA-MS_O365_Reporting/data/inputs/ms_o365_message_trace/ms_o365_message_trace/disable -Method POST

Invoke-Restmethod : {"text":"The requested URL was not found on this server.","code":404}
At line:1 char:1
+ Invoke-Restmethod -Uri http://localhost:8088/servicesNS/nobody/TA-MS ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thank you, Oliver
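Two hedged observations worth testing: the splunkd management REST API normally listens on HTTPS port 8089 (8088 is the HTTP Event Collector, which would explain the 404), and these endpoints require authentication. A sketch with curl, assuming admin credentials and the input names from the log line:

```shell
# disable the input (management port 8089 over https, not 8088)
curl -k -u admin:changeme -X POST \
  https://localhost:8089/servicesNS/nobody/TA-MS_O365_Reporting/data/inputs/ms_o365_message_trace/ms_o365_message_trace/disable

# re-enable it to effect a restart
curl -k -u admin:changeme -X POST \
  https://localhost:8089/servicesNS/nobody/TA-MS_O365_Reporting/data/inputs/ms_o365_message_trace/ms_o365_message_trace/enable
```

In PowerShell the equivalent would be Invoke-RestMethod against the same https://...:8089 URI with a -Credential parameter.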
Hi all, I need to show the number of concurrently logged-in users within the last 30 days. What I would like is a line chart showing, for each day, an overview with the most important spikes. So far I have implemented the query below, which works as expected but takes more than a minute to load, since it shows the concurrent users for each minute of the day, for each day of the month. I don't need to see the status for every minute of the day, which is why I'm thinking about grouping the data per day.

sourcetype=my_log source=/var/log/mylog.log
| bucket _time span=1m
| stats dc(cID) by _time
| rename dc(cID) as concurrent_users

cID is the unique identifier string per user.
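One way to sketch this, keeping the per-minute resolution for measuring concurrency but plotting only the daily peak (same field names as above):

```spl
sourcetype=my_log source=/var/log/mylog.log
| bucket _time span=1m
| stats dc(cID) as concurrent_users by _time
| timechart span=1d max(concurrent_users) as peak_concurrent_users
```

This still computes concurrency per minute, but the chart renders only 30 points, one per day, each showing that day's highest spike.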