All Topics


I have a JSON field timetaken=63542, which is in milliseconds. I need to convert the stats max(timetaken) value from milliseconds to seconds. I tried the search below, but it is not working:

search envId = * message=PostLogin*
| stats min(timetaken), max(timetaken), Avg(timetaken)
| eval max_time=round((max(timetaken)/1000),2), min_time=round((min(timetaken)/1000),2)
| table max_time, min_time, Avg(timetaken)

The search above still produces the values in milliseconds, even though I convert them to seconds.
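A possible fix (an untested sketch): after stats, the field is literally named max(timetaken), so the eval treats max(timetaken) as a function call on a field that no longer exists and returns null. Renaming the aggregates with "as" avoids the problem:

search envId=* message=PostLogin*
| stats min(timetaken) as min_ms, max(timetaken) as max_ms, avg(timetaken) as avg_ms
| eval max_time=round(max_ms/1000,2), min_time=round(min_ms/1000,2), avg_time=round(avg_ms/1000,2)
| table max_time, min_time, avg_time

The names min_ms/max_ms/avg_ms are arbitrary; any names work as long as the later eval references the same ones.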
Hello all,

I have been getting the date and time in the format shown below. How do I convert it to the readable format on the right?

20210901225446 -> 2021-09-01 22:54:46
20210901224509 -> 2021-09-01 22:45:09

Thank you.
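One way to do this (a sketch; the field name raw_time is an assumption, substitute your own): parse the compact string with strptime and format it back with strftime.

| eval readable_time=strftime(strptime(raw_time, "%Y%m%d%H%M%S"), "%Y-%m-%d %H:%M:%S")

If the value is stored as a number rather than a string, wrap it in tostring() first so strptime receives a string.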
Hello all,

I have been facing a problem where a field extraction works on some events but not on others. Please help me figure out how this can be fixed.

Below are the different kinds of alerts.

The extraction works as expected on this alert:

50271234,00004105,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769359,"MS932","Server-I ジョブ(Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B:@5V689)を開始します(host: UXC510, JOBID: 56620)","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","JOBNET","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX","User:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","START","20210901225500",""

The same extraction does not work on the alerts below: it extracts the wrong fields (underlined in red in my screenshot) instead of the intended ones (marked in green).

50271233,00004125,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769358,"MS932","KAVS0278-I ジョブ(AJSROOT1:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B:@5V689)のサブミットを開始します","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","JOBNET","Server:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX","User:/新基幹_本番処理/値札発行/04_値札指示データ連携_午前1TAX/V9B01_B","START","20210901225500",""

50271226,00004106,00000000,3088,"20210901225446","20210901225446",4,-1,-1,"SYSTEM","","psd240",316413750,"MS932","Server-I ジョブ(Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信:@50R6189)が正常終了しました(host: PSC666, code: 0, JOBID: 88039)","Information","User","/App/App/Server","JOB","Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信","JOBNET","Server:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック","AJSROOT1:/新基幹_本番処理/MCS/監視/09_注文送信未更新項目チェック/EDI送信情報リスト_HULFT送信","END","20210901225446","20210901225446","0"

Please help in resolving this.
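Since the original extraction isn't shown, this is only a guess at the cause: the working event contains ジョブ(Server:/...) while the failing events contain ジョブ(AJSROOT1:/...), so an extraction anchored on the literal root name "Server" would fail on the others. A sketch that accepts either root name (job_root and job_path are placeholder field names):

| rex "ジョブ\((?<job_root>[^:]+):(?<job_path>[^:)]+)"

Test it with | table job_root job_path against both kinds of events before relying on it.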
Howdy,

I've been researching how to allow time for the next sequential event to occur, but have not found a way. Let's say event X has occurred and the next event to take place is Y, but Y is still null. Once Y happens (within 24 hours), I want Length_of_Time in minutes. The issue is the case where Y has not yet occurred after X: I want to give it 24 hours from X before marking it as a failure/error. So far this is what I have:

| eval X = strptime(StartTime,"%Y-%m-%d %H:%M:%S.%q"), Y = strptime(EndTime,"%Y-%m-%d %H:%M:%S.%6N")
note: 86400 is 24 hrs in seconds
| eval Length_of_Time = if(isNull(Y) AND Y-X < 86400 AND 86400<=X, round((X-Y)/60,0))
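A sketch of one possible interpretation (assuming StartTime/EndTime both use microsecond precision, and that rows still inside their 24-hour window should stay blank):

| eval X = strptime(StartTime,"%Y-%m-%d %H:%M:%S.%6N"), Y = strptime(EndTime,"%Y-%m-%d %H:%M:%S.%6N")
| eval Length_of_Time = case(
    isnotnull(Y), round((Y-X)/60, 0),
    isnull(Y) AND (now()-X) < 86400, null(),
    true(), "FAILED")

The first branch gives the elapsed minutes once Y exists, the second leaves the row blank while Y is still within its 24-hour window, and the last marks anything older than 24 hours with no Y as failed.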
Hey, I'm trying to get the time difference between when an event was received and a string representation of the time inside the event.

Here's an example of the event:

{ "action": "created", "alert": { "number": 818, "created_at": "2021-11-16T21:52:12Z", "url": "https://somewebsite.com" } }

The issue is converting the time in "alert.created_at" from string to epoch. Once I'm able to get the epoch representation, calculating the difference from _time is easy.

I'm working off this eval statement, but can't get it to work:

| eval strtime=strptime(alert.created_at, "%Y-%m-%dT%H:%M:%SZ") | table strtime

Any thoughts? Thanks!
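A likely fix (sketch): field names containing dots have to be wrapped in single quotes inside eval, otherwise the dot is treated as the concatenation operator on two nonexistent fields and the result is null.

| eval strtime=strptime('alert.created_at', "%Y-%m-%dT%H:%M:%SZ")
| eval diff_seconds=_time - strtime
| table _time strtime diff_seconds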
Hey all. We're evaluating Splunk SOAR and are looking at highly automated configuration management. Part of the setup is creating tenants, but I can't seem to find any documentation on using the REST API to do so. The /rest/container endpoint documentation makes reference to a /rest/tenant endpoint, but there is no actual information on /rest/tenant. Am I looking in the wrong place or is there documentation hidden away? Thank you.
I have a Splunk query that parses the msg field, extracts fields from the result, and displays them in a table (see attached).

The issue is that each row has a unique time, but more than one row can have the same values in every other field, as shown in the attached file. Can we enhance the query so that when several rows differ only in time, we keep just one row with those fields and add the times as a comma-separated list on that final row?

For example, if two rows are

Value1, time1, Value2, Value3
Value1, time2, Value2, Value3

then they could be represented as

Value1, {time1, time2}, Value2, Value3

This would reduce the space that two (or more) such rows take on the dashboard page. Here is the existing query:

index=myIndex "ERROR * ---" "taskExecutor-*"
| rex field=msg "^(?<Time>\S+\s+\S+)\s+\S+\s+(?<Error_Code>\d+)[^\]]+\]\s+(?<Service_Name>\S+)\s+:\s+(?<Error_Message>.+)"
| table Error_Message Error_Code Service_Name Time
| eventstats count as Count by Error_Message Error_Code Service_Name
| sort -Count

Any help would be appreciated.
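A possible rewrite (sketch, reusing the same extraction): replace the table/eventstats pair with a stats that groups on everything except Time and collects the times into one multivalue field.

index=myIndex "ERROR * ---" "taskExecutor-*"
| rex field=msg "^(?<Time>\S+\s+\S+)\s+\S+\s+(?<Error_Code>\d+)[^\]]+\]\s+(?<Service_Name>\S+)\s+:\s+(?<Error_Message>.+)"
| stats list(Time) as Times, count as Count by Error_Message Error_Code Service_Name
| eval Times=mvjoin(Times, ", ")
| sort -Count

Use values(Time) instead of list(Time) if the times should be deduplicated and sorted.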
Hi all,

We are currently testing disaster recovery of our environment. We have a full backup of the KV store, apps, and passwd for the search head instance. We use local technical users with tokens to authenticate and edit KV store collections via the REST API.

In the backup we found system/JsonWebTokensV1/JsonWebTokensV10.json and restored it. Now we see the tokens in the GUI, but we get 500 errors when trying to log in with them. The JSON structure of the KV store backup only seems to hold metadata about each token, like description and ID.

Where are the tokens actually stored? Which files have to be recovered on a completely new instance?

Thanks for your help in advance,
Andreas
Hi Splunk Community,

It's been a while since I last used Splunk and regex, and now I'm struggling with both. The field I need to use ("resourceId") contains two user IDs and a timestamp (e.g., "owner-10785-user-3801-key-1637099215"). I'm looking to keep the IDs and remove the timestamps (basically everything after "owner-19803-user-8925-"). I came up with this clumsy thing:

index=main
| eval resourceId1=replace(resourceId, "user-(?<user_id>\d+)", "")
| eval resourceId2=replace(resourceId1, "owner-(?<owner_id>\d+)", "")
| table resourceId2

It kind of works, except that it gives me the opposite result: it removes all the IDs and leaves the timestamps, like this:

resourceId2
--key-1637100297
--1637100120.0929909
--key-1637100118

But I need the opposite. Can anyone please help?
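A sketch of one way to invert this: instead of deleting the ID parts, capture them and replace the whole value with the captured prefix (assuming every value starts with owner-<digits>-user-<digits>-):

index=main
| eval ids=replace(resourceId, "^(owner-\d+-user-\d+)-.*$", "\1")
| table ids

SPL's replace() supports backreferences like \1, so the eval keeps only the first capture group and drops everything after it.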
When I start Splunk after a reboot of the server, Splunk itself runs fine but the web server does not start.

./splunk cmd btool web list --debug | grep startwebserver
/opt/splunk/etc/apps/SplunkLightForwarder/default/web.conf startwebserver = 0

Is there a way to manually start the web server? I run Ubuntu.

Thanks,
Laurence
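The btool output suggests the SplunkLightForwarder app is turning the web UI off. A sketch of two possible fixes (pick one, then restart); this assumes the instance is meant to run the web UI rather than act as a light forwarder:

# option 1: disable the app that sets startwebserver = 0
./splunk disable app SplunkLightForwarder
./splunk restart

# option 2: override the setting in $SPLUNK_HOME/etc/system/local/web.conf
[settings]
startwebserver = 1

Settings in etc/system/local take precedence over app defaults, so either approach should bring the web server back after a restart.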
Does anyone have a Splunkbase app that will perform this function? I wish to email a CSV file (possibly zipped) to Splunk so that it gets consumed and indexed. Is this possible at all?

Thanks!
Mike B
Here is my query. I'm doing two searches that are independent of each other; in both, I restrict the time to a certain hour and then group by day.

index="first search"
| eval date_hour=strftime(_time, "%H")
| eval dateday=strftime(_time, "%d")
| search date_hour>=10 date_hour<11
| stats count as totalFail by dateday
| append [search index="second search"
  | eval date_hour=strftime(_time, "%H")
  | search date_hour>=10 date_hour<11
  | eval date_day=strftime(_time, "%d")
  | stats count as totalProcess by date_day
  | eval failureRate = totalFail/totalProcess]
| table dateday, totalFail, totalProcess, failureRate

I'm trying to achieve two things: 1) get the data output "correctly" as a table (i.e., data uniform across rows), and 2) get a simple percentage calculation to work. Right now the table is not formatted correctly (10 rows instead of 5) and the percentage calculation doesn't appear to work. Here is the desired output:

Day | Fail | Total | Percentage
10 | 1 | 10 | 10%
11 | 2 | 10 | 20%
12 | 0 | 10 | 0%
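A sketch of one way to avoid append entirely: search both indexes at once and split the counts with conditional aggregates, so every day ends up on a single row (the index names are kept as written, as placeholders).

index="first search" OR index="second search"
| eval date_hour=strftime(_time, "%H"), dateday=strftime(_time, "%d")
| where date_hour="10"
| stats count(eval(index="first search")) as totalFail, count(eval(index="second search")) as totalProcess by dateday
| eval failureRate=round(100*totalFail/totalProcess, 1)."%"
| table dateday, totalFail, totalProcess, failureRate

The append version produced 10 rows because the two result sets were stacked rather than merged, and the failureRate eval ran inside the subsearch, where totalFail does not exist yet.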
This functionality worked before the upgrade. After the upgrade, logon still shows "Tab-completion of "splunk <verb> <object>" is available":

$ source ./setSplunkEnv
Tab-completion of "splunk <verb> <object>" is available.

But when I try to use it, bash returns: completion: function 'fSplunkComplete' not found.

The file /opt/splunk/bin/setSplunkEnv and the referenced /opt/splunk/share/splunk/cli-command-completion.sh are both present.
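A diagnostic worth trying (an assumption, not a confirmed fix): the completion function is expected to be defined by the cli-command-completion.sh script, so sourcing it directly in the same shell and re-checking may show whether setSplunkEnv simply failed to load it.

source /opt/splunk/share/splunk/cli-command-completion.sh
# then check whether the function now exists
type fSplunkComplete

If the function appears after sourcing the script manually, the problem is in how the upgraded setSplunkEnv wires the two files together.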
Hello,

I wonder if anyone has this app working for RSS feeds: https://splunkbase.splunk.com/app/2646/#/details

"Broad feed support: the input supports all of the major feed types (RSS, ATOM, RDF) and will automatically determine the type of the feed and import it automatically."

I was only able to ingest the BBC News and Cisco Webex status feeds. The feeds I am interested in all fail with the same error:

https://www.csoonline.com/in/index.rss
https://feeds.feedburner.com/securityweek
http://krebsonsecurity.com/feed/
https://threatpost.com/feed/
https://www.darkreading.com/rss/all.xml
https://feeds.feedburner.com/TheHackersNews
https://www.theregister.com/security/headlines.atom
https://nvd.nist.gov/feeds/xml/cve/misc/nvd-rss.xml
https://www.bleepingcomputer.com/feed/
https://www.infosecurity-magazine.com/rss/news

It does not look like a DNS error, since it works for the BBC and Webex URLs, and I get the same error from a test machine fully open to the internet. Supported Splunk versions for the app: 7.2, 7.3, 8.0, 8.1, 8.2.

HTTP trace from the browser:

Request URL: https://www.csoonline.com/in/index.rss
Request Method: GET
Status Code: 200 OK
Remote Address: 172.22.59.131:80

Error trace from the input:

2021-11-16 19:25:53,176 ERROR Unable to get the feed, url=https://www.infosecurity-magazine.com/rss/news
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/syndication/bin/syndication.py", line 350, in run
    results, last_entry_date_retrieved = self.get_feed(feed_url.geturl(), return_latest_date=True, include_later_than=last_entry_date, logger=self.logger, username=username, password=password, clean_html=clean_html)
  File "/opt/splunk/etc/apps/syndication/bin/syndication.py", line 167, in get_feed
    d = feedparser.parse(feed_url)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/api.py", line 241, in parse
    data = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/api.py", line 141, in _open_resource
    return http.get(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
  File "/opt/splunk/etc/apps/syndication/bin/syndication_app/feedparser/http.py", line 200, in get
    f = opener.open(request)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 1241, in https_open
    context=self._context)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno -2] Name or service not known>

Troubleshooting notes from https://lukemurphey.net/projects/splunk-syndication-input/wiki/Troubleshooting :

If you experience problems with the input, run the following search to see both the output from the input and the modular input logs together, in order to see if the logs indicate what is wrong:

(index=main sourcetype=syndication) OR (index=_internal sourcetype="syndication_modular_input")

If you have debug logging enabled, then you can see details with the following:

index=_internal sourcetype="syndication_modular_input"
| rex field=_raw "(?<action>((Skipping)|(Including)))"
| search count>0 OR action=Including
| table date latest_date title action count
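Since the traceback ends in "[Errno -2] Name or service not known", which is a name-resolution failure raised from Splunk's bundled Python rather than from a browser, one hedged check is whether that Python can resolve the failing hosts at all (for example, if outbound traffic normally goes through a proxy that the browser knows about but the modular input does not):

/opt/splunk/bin/splunk cmd python -c "import socket; print(socket.gethostbyname('www.csoonline.com'))"
/opt/splunk/bin/splunk cmd python -c "import socket; print(socket.gethostbyname('www.bbc.co.uk'))"

If the BBC host resolves and the others do not, the difference lies in name resolution or proxying on the Splunk server itself, not in the app.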
I have Splunk results in the following format:

2021-11-13 01:02:50.127 ERROR 23 --- [ taskExecutor-2] c.c.p.r.service.RedisService : The Redis Cache had no record for key: null Returning empty list.
2021-10-22 21:11:51.996 ERROR 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/XYZ.FILE - 4: Failure
2021-10-22 02:05:14.426 ERROR 22 --- [ taskExecutor-1] c.c.p.r.service.SftpService : Could not delete file: /-/XYZ.FILE - 4: Failure

I want to create a visualization in the format shown in the attached screenshot. The count variable is based on the error message only: since "Could not delete file: /-/XYZ.FILE - 4: Failure" appeared twice, its count is 2. As the logs grow and this message occurs more often, the count should increase too.

I tried using erex and substr but failed miserably. Any help on how to form the Splunk query for this visualization would be appreciated. Thanks.
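A sketch of one way to get the per-message counts (the index name is a placeholder, and the rex assumes the events always follow the ERROR <number> --- [ taskExecutor-N] <class> : <message> layout shown above):

index=your_index "ERROR" "taskExecutor-"
| rex "ERROR\s+\d+\s+---\s+\[\s*taskExecutor-\d+\]\s+(?<Service_Name>\S+)\s+:\s+(?<Error_Message>.+)$"
| stats count by Error_Message
| sort -count

Saving this as a bar or pie chart panel should give the counts needed for the visualization, and the count will keep growing as new matching events are indexed.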
I have run into a very interesting situation and have no clue what is going wrong. I forward data from a particular host, and a search like this returns all results:

index=win host=MYHOST

This search gives no results:

index=win host=MYHOST sourcetype=mysourcetype

BUT in a real-time search it does give me the results! The inputs setup on the host looks like this for mysourcetype:

[mylogsource]
disabled = 0
index = win
sourcetype = mysourcetype
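A hedged first check: see what sourcetype the events actually end up with once indexed (props/transforms can rewrite it), and whether the event timestamps are being parsed into the future, which would explain why only a real-time search shows them.

index=win host=MYHOST
| stats count by sourcetype

index=win host=MYHOST
| eval lag_seconds=_indextime - _time
| stats min(lag_seconds) max(lag_seconds) by sourcetype

A large negative lag (events timestamped in the future) means a normal time-range search over "last X minutes" will miss them even though a real-time search picks them up as they arrive.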
I need to find the data retention settings for Splunk Enterprise and ES in my environment. Could you share an SPL query for this, if there is one? Thanks a million.
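A sketch using the REST endpoint for index definitions (run it on the search head; retention per index is governed mainly by frozenTimePeriodInSecs and maxTotalDataSizeMB):

| rest /services/data/indexes
| eval retention_days=round(frozenTimePeriodInSecs/86400, 1)
| table splunk_server title frozenTimePeriodInSecs retention_days maxTotalDataSizeMB
| sort title

Whichever of the two limits is hit first (age or size) is what actually rolls data to frozen, so both columns matter when reading the output.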
What's the best way to set a SEDCMD in props.conf to remove spaces and add an underscore in just the CSV header line?

For example, I have a CSV header like this:

field 1, field 2, field 3

and I want the fields to be

field_1, field_2, field_3

but just for the header, not for the data in the fields. I found the following in an old post, but it changes everything that has a space, even the data:

inputs.conf
[monitor://c:\temp\sed-csv.log]
disabled = false
index = sedcsv
sourcetype = sedcsv

props.conf
[sedcsv]
SEDCMD-replacespace = s/ /_/g

https://community.splunk.com/t5/Splunk-Search/CSV-Field-Extraction-with-spaces-in-field-name/m-p/245974
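SEDCMD runs against every event after line breaking, so there is no clean way to restrict it to only the header line. A hedged alternative, assuming the file is read with indexed extractions: name the columns yourself in props.conf and skip the original header, so the spaces never become field names in the first place.

props.conf
[sedcsv]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = field_1, field_2, field_3
# skip the original header row so it is not indexed as a data event
PREAMBLE_REGEX = ^field 1,

The FIELD_NAMES list here covers only the three columns from the example; a real file would need the full list in order.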
Hello. I've noticed that in many solutions when there is a need for a value from previous row, streamstats with window=1 is used. For example - https://community.splunk.com/t5/Splunk-Search/Unable-to-subtract-one-days-hours-from-previous-days-to-create/m-p/575093/highlight/true#M200392 In similar cases I tended to use autoregress which behaves more or less the same. The question is - what are pros/cons of each of those commands? Do they have some non-obvious limitations? Is any "better" than the other?
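For concreteness, the two roughly equivalent forms for "grab the previous row's value" (a sketch; value and prev_value are placeholder field names, and each line is an alternative, not a pipeline):

| streamstats window=1 current=f last(value) as prev_value
| autoregress value as prev_value p=1

One practical difference worth noting: streamstats can compute any aggregate and can group with a by clause or limit itself with time_window, while autoregress only copies prior values of a field forward. So autoregress is the lighter, simpler tool and streamstats the more general one.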
Hello,

I am trying to figure out how many good IP addresses vs. bad IP addresses there are based on Tenable Security Center results (severity=low, medium, high, critical). A good scan should show results at multiple severity levels, whereas a bad scan does not show as many.

I would like to populate as many fields as possible with the SPL query below. More importantly, I would like to separate good vs. bad (credentialed) scan results from Tenable Security Center (ACAS). What I mean is that once a scan has run, you can tell a good scan from a bad one: a good scan pulls multiple vulnerabilities across the severity levels, while a bad scan pulls far fewer, with severity counts that are very low or close to nothing at all.

I created an SPL query that provides the 26 standard data fields: IP, repository.dataFormat, netbiosName, dnsName, AWS, hostname, macAddress, OS_Type, OS_Version, operatingSystem, SystemManufacture, SystemSerialNumber, SystemModel, AWSAccountNumber, AWSInstanceID, AWSENI, passFail, plugin_id, pluginName, repository.name, cpe, low, medium, high, critical, total, Country, lat, lon.

SPL query:

earliest=-7d@d index=acas sourcetype="tenable:sc:vuln"
| rex field=operatingSystem "^(?P<OS_Type>\w+)\.(?P<OS_Version>.*)$"
| rex field=dnsName "^(?P<hostname>\w+)\.(?P<domain>.*)$"
| rex field=system "^(?P<manufacture>\w+)\.(?P<serialnumber>.*)$"
| rex field=pluginText "\<cm\:compliance-result\>(?<status>\w+)\<\/cm\:compliance-result\>"
| eval AWS=if(like(dnsName,"clou%"),"TRUE","FALSE")
| iplocation ip
| eventstats count(eval(severity="informational")) as informational, count(eval(severity="low")) as low, count(eval(severity="medium")) as medium, count(eval(severity="high")) as high, count(eval(severity="critical")) as critical by ip
| dedup ip
| eval total = low+medium+high+critical
| table ip, repository.dataFormat, netbiosName, dnsName, AWS, hostname, macAddress, OS_Type, OS_Version, operatingSystem, SystemManufacture, SystemSerialNumber, SystemModel, AWSAccountNumber, AWSInstanceID, AWSENI, passFail, plugin_id, pluginName, repository.name, cpe, low, medium, high, critical, total, Country, lat, lon
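For the good-vs-bad classification itself, a hedged sketch building on the query above: once total exists per IP, tag each host against a threshold (the cutoff of 10 findings is purely an assumption to illustrate the idea; tune it to what a known-good credentialed scan looks like in your data).

| eval scan_quality=if(total >= 10, "good", "bad")
| eventstats count(eval(scan_quality="good")) as good_ips, count(eval(scan_quality="bad")) as bad_ips

Appending these two lines before the final table (and adding scan_quality, good_ips, bad_ips to it) would give both the per-IP verdict and the overall counts in one result set.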