All Topics


Hello my friends. I have a report that uses the "ldapsearch" command; it runs every day. The problem is that the report sometimes executes fine, and the next day it is stuck in the "running" status. Its behavior is inconsistent: sometimes it completes right away, sometimes it keeps running.
I have this query:

my search
| rex field=line ".*customerId\":(?<customer_id>[0-9]+)"
| dedup customer_id
| table customer_id

It returns multiple rows and generates a table:

customer_id
-----------
1
2
3
4
5

I also have another query that returns a single row with an array of ids:

Synced accounts: [ 1, 3, 5 ]

My questions are:
1) How can I convert the row from query 2 into a table of ids?
2) How can I do a left join between the results, so that the table shows only the ids from query 2?

customer_id
-----------
1
3
5

Thanks in advance, Elad
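A possible approach (a sketch, not from the original post; <base search 2> is a placeholder for the second search): extract the array with rex, split it into a multivalue field, expand it to rows, and use an inner join so only ids present in both results survive:

my search
| rex field=line "customerId\":(?<customer_id>[0-9]+)"
| dedup customer_id
| join type=inner customer_id
    [ search <base search 2>
      | rex "Synced accounts:\s*\[(?<ids>[^\]]+)\]"
      | eval customer_id=split(ids, ",")
      | mvexpand customer_id
      | eval customer_id=trim(customer_id)
      | table customer_id ]
| table customer_id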
I have this event:

Caused by: java.sql.SQLException: Io exception: Socket closed

I want to extract "java.sql.SQLException". Can you please help?
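A sketch of one way to do this (the field name exception_class is mine): capture everything between "Caused by:" and the next colon:

... | rex "Caused by:\s+(?<exception_class>[^:]+):"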
Hi all, I'm currently thinking about what to monitor at the application level on Splunk servers using Nagios. Can you give me some ideas and possibilities? I could not find any good ideas in the "Splunk Add-on for Nagios" documentation, and I would like an overview of what is best to monitor with Nagios versus Splunk's own self-monitoring. I would appreciate it if you could point me in the right direction. Best, Oj.
Hello, we have a problem with the monitoring of a simple file with five fields. The problem is with the date field, which Splunk can't match, as shown in the attached image. Thanks in advance for your help. Best regards.
" ERROR'>=' not supported between instances of 'HTTPError' and 'int', we got this error on one HF, but it works fine on other HFs. I tried reinstalled this add-on still didn't work. Do you have any... See more...
" ERROR'>=' not supported between instances of 'HTTPError' and 'int', we got this error on one HF, but it works fine on other HFs. I tried reinstalled this add-on still didn't work. Do you have any ideas? 
I have a field, message, whose value is in JSON format, and I need to extract all the values in the JSON.

{
guessedService: ejj
logGroup: /aws/ejj/cluster
logStream: kube-apt-15444d2f8c4b216a9cb69ac
message: {"kind":"Event","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/jej/endpoints/eji.com-aws-eji","verb":"update","user":{"username":"system:serviceaccount:efs:efs-provisioner","uid":"ab5d27b4c-71a4f77323b0","groups":["system:serviceaccounts","system:serviceaccounts:eji","system:authenticated"]},"sourceIPs":["10.0.0.0"],"userAgent":"eji-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"endpoints","namespace":"edd","name":"dds.com-aws-edds","uid":"44ad8-899f-fbc1f4befb2f","apiVersion":"v1","resourceVersion":"8852157"},"responseStatus":{"metadata":{},"code":200}}

From the message field I need to extract kind, stage, requestURI, and so on; the fields inside the JSON are dynamic (there can be more in other events). I need help extracting these fields at index time using props and transforms. Thanks.
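A caveat and a sketch (not from the original post): index-time extraction of arbitrary, dynamic JSON keys via props and transforms is not really feasible, because indexed fields must be declared ahead of time. At search time, spath handles dynamic keys; the index and sourcetype names below are assumptions:

index=your_index sourcetype=your_sourcetype
| spath input=message
| table kind stage requestURI verb responseStatus.code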
I was trying to install the UF on one of the Windows servers, but it is failing:

11-17-2021 10:18:53.591 +0300 FATAL HTTPServer - Could not bind to port 8089

deploymentclient.conf:

[target-broker:deploymentServer]
targetUri = x.x.x.x:8089

[deployment-client]
clientName = xxxxx
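A hedged guess, not confirmed by the post: "Could not bind to port 8089" usually means another process (often a full Splunk instance on the same host) already owns the management port. A sketch of moving the forwarder's management port, assuming 8090 is free:

# $SPLUNK_HOME/etc/system/local/web.conf on the forwarder
[settings]
mgmtHostPort = 127.0.0.1:8090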
Hi, I wrote the Splunk query below to monitor a server log:

index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
| timechart span=2m count
| timewrap d series=short
| addtotals s*
| eval daysAvg=round(Total/14.0,0)
| eval yesterday_time=strftime(_time,"%H:%M")
| table _time, yesterday_time, s0, daysAvg, s6
| outputlookup openapi_login_last_days_lam.csv

However, my query relies on the selected time range to compute daysAvg; in this case the range is 14 days, hence eval daysAvg=round(Total/14.0,0). I want to calculate daysAvg dynamically, so I don't have to change the value whenever I apply another range. To achieve that, I wrote this search to calculate the time range:

index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
| stats earliest(_time) as earliest_time
| eval latest_time=now()
| eval difference=floor((latest_time-earliest_time)/(3600*24))
| table earliest_time, latest_time, difference

Finally, I combined the two searches like this:

index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
| timechart span=2m count
| timewrap d series=short
| addtotals s*
| append
    [ search index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
        | stats earliest(_time) as earliest_time
        | eval earliest=earliest_time ]
| eval latest_time=now()
| eval daysAvg=round(Total/14.0,0)
| eval yesterday_time=strftime(_time,"%H:%M")
| table _time, yesterday_time, s0, daysAvg, s6, latest_time, earliest

But earliest from the subsearch is not passed to the outer search. Please help me. Thank you.
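A possible approach (a sketch, not from the thread): instead of append, compute the day count in a subsearch and hand it to the outer search as a literal with return, so eval can use it directly:

index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
| timechart span=2m count
| timewrap d series=short
| addtotals s*
| eval days=[ search index="abc" sourcetype="abc" login "response.status"=200 source="abc.log"
    | stats earliest(_time) as earliest_time
    | eval days=floor((now()-earliest_time)/86400)
    | return $days ]
| eval daysAvg=round(Total/days, 0)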
I have a JSON object value timetaken=63542 in milliseconds, and I need to convert the stats max(timetaken) value from ms to seconds. I tried the below, but it is not working:

search envId=* message=PostLogin*
| stats min(timetaken), max(timetaken), Avg(timetaken)
| eval max_time=round((max(timetaken)/1000),2), min_time=round((min(timetaken)/1000),2)
| table max_time, min_time, Avg(timetaken)

The above produces the values still in ms, even though we convert them to seconds.
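A sketch of one likely fix (not from the thread): eval cannot reference aggregates like max(timetaken) directly, so alias the stats outputs first and convert the aliases:

search envId=* message=PostLogin*
| stats min(timetaken) as min_tt max(timetaken) as max_tt avg(timetaken) as avg_tt
| eval min_time=round(min_tt/1000, 2), max_time=round(max_tt/1000, 2), avg_time=round(avg_tt/1000, 2)
| table max_time min_time avg_time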
Hello all, I have been getting the date and time in the format below. How do I convert it to the given readable format?

20210901225446 -> 2021-09-01 22:54:46
20210901224509 -> 2021-09-01 22:45:09

Thank you.
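A sketch (the field name raw_time is a placeholder): parse the compact string with strptime, then format it with strftime:

| eval readable=strftime(strptime(raw_time, "%Y%m%d%H%M%S"), "%Y-%m-%d %H:%M:%S")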
Hello all, I have been facing a problem with the below extraction, where it works on some events and not on others. Please help with how this can be fixed.

Below are the different kinds of alerts.

The extraction works as expected on this alert:

50271234,00004105,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769359,"MS932","Server-I ジョブ(Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B:@5V689)を開始します(host: UXC510, JOBID: 56620)","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","JOBNET","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX","User:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","START","20210901225500",""

The same extraction does not work on the below alerts: it extracts the underlined red fields, which is not expected, instead of the green-marked fields that I need:

50271233,00004125,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769358,"MS932","KAVS0278-I ジョブ(AJSROOT1:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B:@5V689)のサブミットを開始します","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","JOBNET","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX","User:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","START","20210901225500",""

50271226,00004106,00000000,3088,"20210901225446","20210901225446",4,-1,-1,"SYSTEM","","psd240",316413750,"MS932","Server-I ジョブ(Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信:@50R6189)が正常終了しました(host: PSC666, code: 0, JOBID: 88039)","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信","JOBNET","Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック","AJSROOT1:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信","END","20210901225446","20210901225446","0"

Please help in resolving this.
Howdy, I've been researching how to allow time for the next sequential event to occur, but have not found a way. Let's say event X occurred and the next event to take place is Y, but Y is still null. If Y happens within 24 hours, give Length_of_Time in minutes. The issue: if Y still has not occurred following X, I want to give it 24 hours from the time X happened before marking it as a failure or error. So far this is what I have (note: 86400 is 24 hours in seconds):

| eval X = strptime(StartTime,"%Y-%m-%d %H:%M:%S.%q"), Y = strptime(EndTime,"%Y-%m-%d %H:%M:%S.%6N")
| eval Length_of_Time = if(isnull(Y) AND Y-X < 86400 AND 86400<=X, round((X-Y)/60,0))
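A sketch of one way to express this (not from the original post): use case() so a completed Y yields the duration in minutes, a missing Y inside the 24-hour window stays null, and anything older is marked failed:

| eval X=strptime(StartTime, "%Y-%m-%d %H:%M:%S.%q"), Y=strptime(EndTime, "%Y-%m-%d %H:%M:%S.%6N")
| eval Length_of_Time=case(
    isnotnull(Y), round((Y-X)/60, 0),
    now() - X < 86400, null(),
    true(), "failed")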
Hey, I'm trying to get the time difference between when an event was received and a string representation of the time in the event. Here's an example of the event:

{
"action": "created",
"alert": {
"number": 818,
"created_at": "2021-11-16T21:52:12Z",
"url": "https://somewebsite.com"
}
}

The issue is converting the time in "alert.created_at" from string to epoch. Once I can get the epoch representation, calculating the difference from _time is easy. I'm working off this eval statement, but can't get it to work:

| eval strtime=strptime(alert.created_at, "%Y-%m-%dT%H:%M:%SZ")
| table strtime

Any thoughts? Thanks!
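A sketch of the usual fix: field names containing dots must be wrapped in single quotes inside eval; otherwise the expression is parsed as a concatenation of the fields alert and created_at:

| eval strtime=strptime('alert.created_at', "%Y-%m-%dT%H:%M:%SZ")
| eval lag_sec=_time - strtime
| table strtime lag_sec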
Hey all. We're evaluating Splunk SOAR and are looking at highly automated configuration management. Part of the setup is creating tenants, but I can't seem to find any documentation on using the REST API to do so. The /rest/container endpoint documentation makes reference to a /rest/tenant endpoint, but there is no actual information on /rest/tenant. Am I looking in the wrong place or is there documentation hidden away? Thank you.
I have a Splunk query that parses the msg field, fetches the fields from the result, and displays them in a table (PFA). Now, the issue is: each row has a unique time, but more than one row could have the same field values except the time, as shown in the attached file. Can we enhance the query so that, when multiple rows share the same fields except time, we keep just one row with those fields and list the times (separated by commas)? For example, if two rows are

Value1, time1, Value2, Value3
Value1, time2, Value2, Value3

then they could be represented as

Value1, {time1, time2}, Value2, Value3

This would reduce the space that two (or more) rows take on the dashboard page. Here is the existing query:

index=myIndex "ERROR * ---" "taskExecutor-*"
| rex field=msg "^(?<Time>\S+\s+\S+)\s+\S+\s+(?<Error_Code>\d+)[^\]]+\]\s+(?<Service_Name>\S+)\s+:\s+(?<Error_Message>.+)"
| table Error_Message Error_Code Service_Name Time
| eventstats count as Count by Error_Message Error_Code Service_Name
| sort -Count

Any help would be appreciated.
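A sketch (not from the thread): replacing table plus eventstats with a single stats keeps one row per unique field combination and collects the times into a list:

index=myIndex "ERROR * ---" "taskExecutor-*"
| rex field=msg "^(?<Time>\S+\s+\S+)\s+\S+\s+(?<Error_Code>\d+)[^\]]+\]\s+(?<Service_Name>\S+)\s+:\s+(?<Error_Message>.+)"
| stats values(Time) as Times count as Count by Error_Message Error_Code Service_Name
| eval Times=mvjoin(Times, ", ")
| sort -Count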
Hi all, we are currently testing disaster recovery of our environment. We have a full backup of the KV store, apps, and passwd for the search head instance. We use local technical users with tokens to authenticate and edit KV stores via the REST API. In the backup we found system/JsonWebTokensV1/JsonWebTokensV10.json and restored it. Now we see the tokens in the GUI, but we get 500 errors when trying to log in with them. The JSON structure of the KV store backup only seems to hold metadata about the tokens, like description and id. But where are the tokens actually stored? What files have to be recovered on a completely new instance? Thanks for your help in advance, Andreas
Hi Splunk Community, it's been a while since I last used Splunk and regex, and now I'm struggling with both. The fields I need to use ("resourceId") contain two user IDs and timestamps (e.g., "owner-10785-user-3801-key-1637099215"). I'm looking to keep the IDs and remove the timestamps (basically everything after "owner-19803-user-8925-"). I came up with this clumsy thing:

index=main
| eval resourceId1=replace(resourceId, "user-(?<user_id>\d+)", "")
| eval resourceId2=replace(resourceId1, "owner-(?<owner_id>\d+)", "")
| table resourceId2

It kind of works; the only problem is that it gives me the opposite result: it removes all the IDs and leaves the timestamps, like this:

resourceId2
--key-1637100297
--1637100120.0929909
--key-1637100118

But I need the opposite. Can anyone please help?
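A sketch of one approach: capture the prefix you want to keep and replace the whole value with the capture group:

index=main
| eval resourceId2=replace(resourceId, "^(owner-\d+-user-\d+-).*", "\1")
| table resourceId2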
When I start Splunk after a reboot of the server, splunkd runs fine but the web server does not start.

./splunk cmd btool web list --debug | grep startwebserver
/opt/splunk/etc/apps/SplunkLightForwarder/default/web.conf startwebserver = 0

Is there a way to manually start the web server? I run Ubuntu. Thanks, Laurence
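A hedged note (not from the post): the btool output shows the SplunkLightForwarder app setting startwebserver = 0, which is what keeps Splunk Web down. Overriding it in system/local (which takes precedence over app defaults) and restarting should bring it back:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
startwebserver = 1

Alternatively, if the light forwarder app isn't needed: ./splunk disable app SplunkLightForwarder, then ./splunk restart.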
Does anyone have a Splunkbase app that will perform this function? I wish to email a CSV file (possibly zipped, too) into Splunk to be consumed. Is this possible at all? Thanks! Mike B