All Topics

Hi all, I keep getting "DateParserVerbose [6827 merging] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (75) characters of event. Defaulting to timestamp of previous event" warnings. The timestamp in the logs looks like: 2021/10/28T16:06:08.183-07:00

props.conf looks like:
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = true
MAX_TIMESTAMP_LOOKAHEAD = 75
MAX_DAYS_AGO = 36500
MAX_DAYS_HENCE = 36500
TIME_FORMAT = %d-%b-%y %I.%M.%S.%6Q %p
SHOULD_LINEMERGE = false
TRUNCATE = 500000

Anyone know what my TIME_FORMAT should be instead?
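For a timestamp shaped like 2021/10/28T16:06:08.183-07:00, a minimal props.conf sketch could look like the following. The stanza name is a placeholder, the sketch assumes the timestamp sits at the very start of each event, and the %:z offset variable is an assumption based on Splunk's enhanced strptime support (plain %z may be needed on some versions):

[your:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y/%m/%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 30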
Hi All, I'm trying to work out best practice with regards to alert throttling and maximum time frames. I'm trying to determine whether, if we were to throttle something for 2 weeks, we would actually be better off filtering in a different way, either by using a lookup or a subsearch. I'd like to know where the values that are used for throttling are stored, and whether there are any performance considerations we need to account for when throttling for longer periods.
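For reference, throttling is normally configured on the saved search itself. A minimal savedsearches.conf sketch for a two-week suppression keyed on a user field (the stanza name and field name below are placeholders, not settings from the question) might look like:

[Example Alert]
alert.suppress = 1
alert.suppress.period = 14d
alert.suppress.fields = user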
Greetings, I'm looking to craft a correlation that allows me to compare the results between two separate searches. Here's the use case: I have 2 indexes, one containing Threat Intelligence data (including domain names, to be specific for this case), while the other index holds all DNS requests. I'm looking to craft a Splunk correlation that reads each domain within the DNS requests, compares each of those domains to the Threat Intelligence data, and sees if there are any matches.

For instance, maybe something along the lines of the logic below:
index=Threat_Intelligence | table DomainName | where DomainName IN [search index=DNS | table RequestedDomain]

FYI: The latest Threat Intelligence feeds are pulled every single morning and are updated within Splunk. I thought about using lookup tables or KV Store lookups, but we're pulling in several files each morning, 2 of which are close to 1 GB in size. It looks like Splunk Cloud caps these lookups at 10,000 events by default, and I've read to be cautious about increasing this limit.
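One possible shape for this correlation, sketched with the index and field names reused from the question and assuming the threat list fits within the default subsearch limits, is to let a subsearch over the threat index generate the filter for the DNS search:

index=DNS [ search index=Threat_Intelligence | rename DomainName AS RequestedDomain | fields RequestedDomain ]
| stats count by RequestedDomain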
Hi. I am trying to run this in Splunk Cloud: | rest /services/search/jobs | search isRealTimeSearch=1 But I am getting this: Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability. I have looked at users and roles and that capability is not in the list to choose. It is in the Splunk Cloud documentation but simply isn't there to select. Any ideas why? Thanks, Keith
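If the goal is only to list search jobs on the search head, one way to make that restriction explicit rather than a warning (a sketch, not a fix for the missing capability) is to point the rest command at the local instance:

| rest /services/search/jobs splunk_server=local
| search isRealTimeSearch=1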
index=myindex | eval createdepoch = strptime(created, "%Y-%m-%d") | eval _time = createdepoch | search earliest=-90d@d | table _time

This returns no results. Can anyone tell me why this wouldn't work?
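In case it helps to frame the question, an alternative that filters on the recomputed _time with an eval comparison rather than a time modifier (a sketch only, assuming the created field is a plain %Y-%m-%d date) would be:

index=myindex
| eval createdepoch = strptime(created, "%Y-%m-%d")
| eval _time = createdepoch
| where _time >= relative_time(now(), "-90d@d")
| table _time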
Oct 28 20:08:57 XXX.XXX.com Microsoft-Windows-Security-Auditing[4]: EventID: 4663 An attempt was made to access an object. Subject: Security ID: XXX Account Name: John Account Domain: XXX

My question is: how do I extract the "Account Name" value from this? I tried creating a new field extraction with the space delimiter, but if I selected John above, it wouldn't pull the account name from the rest of the log entries. Thanks in advance!
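A regex-based extraction tends to survive varying log layouts better than a space delimiter. Appended to the existing search, a minimal sketch (account_name is just a placeholder field name) would be:

| rex "Account Name:\s+(?<account_name>\S+)"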
I have data as below:

Request-all-Headers = Accept - */* Authorization - Bearer m6CsheaxrlMKIBH3vZ0EXk5G3rw6 Content-Type - application/json Host - api.ingrammicro.com IM-CorrelationID - 213.45245849 IM-CountryCode - TN IM-CustomerNumber - 44-999999 IM-SenderID - Global Reward Solutions simulateStatus - IM::SHIPPED X-Forwarded-For - 10.0.0.0X-Forwarded-Port - 123 X-Forwarded-Proto - https

and the rex below works on regex101:

IM-CountryCode\s+-\s+(?P<country>[A-Z]{2})\s+IM-CustomerNumber\s+-\s+(?P<custno>[0-9]+-[0-9]{6})

But when I try the same in Splunk, it is not able to extract the fields. My Splunk query is below:

index=test sourcetype="test" | rex field=Request-all-Headers "IM-CountryCode\s+-\s+(?P<country>[A-Z]{2})" | rex field=Request-all-Headers "IM-CustomerNumber\s+-\s+(?P<custno>[0-9]+-[0-9]{6})"
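A variant worth comparing against runs the same patterns against the raw event instead of the extracted field, in case Request-all-Headers is not actually present as a search-time field (a sketch only, reusing the index and sourcetype from the question):

index=test sourcetype="test"
| rex field=_raw "IM-CountryCode\s+-\s+(?<country>[A-Z]{2})"
| rex field=_raw "IM-CustomerNumber\s+-\s+(?<custno>[0-9]+-[0-9]{6})"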
I'm currently on Splunk version 8.2.2.1 with version 5.1.2 of the Splunk add-on for Unix. According to the documentation, I need to upgrade to version 6 first, then version 7, before upgrading to the latest version 8. I cannot find the older versions of this add-on. Does anybody know where to get these? Thanks in advance for your time.
Hi, I'm continuously receiving the error "Regex: syntax error in subpattern name (missing terminator)" when attempting to search with a 'rex' operation. I've gone through several different message boards and nothing seems to resolve the issue. Any help would be greatly appreciated! My intention is to grab the "Http-Method" value from the raw event.

Search:
[Search...] | rex field=_raw "Method: (?<Http-Method>.*)"

Sample Event:
2021-10-28 10:55:39,505 1109468116 [http-bio-8443-exec-9] INFO o.a.c.i.LoggingInInterceptor - Inbound Message ---------------------------- ID: 41087 Address: [...Sensitive Information Removed...] Encoding: ISO-8859-1 Http-Method: POST Content-Type: application-xml Headers: [...Sensitive Information Removed...]
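That particular error usually points at the capture group name itself: PCRE subpattern names may not contain hyphens. A variant using an underscore in the group name (http_method is only a placeholder name) would be:

[Search...] | rex field=_raw "Http-Method: (?<http_method>\S+)"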
Hi, I would like to determine a field from different areas of a log; e.g. see below for my expectations. Note: You can be sure these three

T INFO id=1 sourcetype=userservice FirstName=Vinod
T+1 INFO id=2 sourcetype=loginservice User 'Vinod' logged in
T+2 INFO id=3 sourcetype=userservice FirstName=Jason
T+3 INFO id=4 sourcetype=loginservice User 'Jason' logged in.
T+4 INFO id=5 sourcetype=userservice User deleted: Jason

Output:
Name | Count
Vinod | 2
Jason | 3
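One way to approach this (a sketch only; the index is a placeholder and the rex patterns assume the log lines look exactly like the samples above) is to normalise the name out of each event shape and then count:

index=your_index (sourcetype=userservice OR sourcetype=loginservice)
| rex field=_raw "User '(?<login_name>[^']+)' logged in"
| rex field=_raw "User deleted: (?<deleted_name>\w+)"
| eval Name = coalesce(FirstName, login_name, deleted_name)
| stats count by Name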
I have the following data that I am trying to convert to a time series by Type, with the last Status brought forward.

Raw data:
Timestamp Status Type
10/28/2021 12:00 down B
10/28/2021 12:10 up A
10/28/2021 12:30 up B
10/28/2021 13:10 down B
10/28/2021 13:30 up B

After transformation I need a data point every 10 minutes for each Type, with the previous Status brought forward. Note I could have 40-50 different Types.

Example transformation:
Timestamp Status Type
10/28/2021 12:00 down B
10/28/2021 12:10 down B
10/28/2021 12:20 down B
10/28/2021 12:30 up B
10/28/2021 12:40 up B
10/28/2021 12:50 up B
10/28/2021 13:00 up B
10/28/2021 13:10 down B
10/28/2021 13:20 down B
10/28/2021 13:30 up B
10/28/2021 13:40 up B

Any Ideas?
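A pattern that often gets close to this (a sketch only; the index is a placeholder, and it assumes one data point per 10-minute bucket per Type is acceptable) is to bucket with timechart and fill the empty buckets forward, then flip the chart back into rows:

index=your_index
| timechart span=10m latest(Status) by Type limit=0
| filldown
| untable _time Type Status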
Hi all, I have two sourcetypes: WinEventLog and XmlWinEventLog. Both were displaying as very hard to read XML data in the events. I was able to correct the WinEventLog data by changing renderXml=true to false in the input configuration, but it did not fix the XmlWinEventLog sourcetype data. I thought it might be props.conf > KV_MODE = xml, but that also did not correct the parsing problem. Any assistance would be greatly appreciated! /Paul
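For comparison, a minimal inputs.conf sketch with XML rendering turned off for a Windows Event Log input (the stanza name is only an example; the real stanza depends on which channel is being collected) might look like:

[WinEventLog://Security]
renderXml = false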
Hi all, I'm trying to get syslog to send from my Aruba HP-2530-8 switch to Splunk. I installed the Aruba add-ons and have my data input set up for TCP port 514 with sourcetype aruba:syslog. I think I'm missing something here; any help is appreciated, and thanks for reading.
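One thing often worth double-checking is the transport: many switches send syslog over UDP by default. A minimal inputs.conf sketch for a UDP 514 listener (assuming UDP is what the switch actually uses) would be:

[udp://514]
sourcetype = aruba:syslog
connection_host = ip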
I'm trying to find a way to reverse the order of values for a multivalue field. Use the following SPL as the base search:

| makeresults
``` Create string of characters, separated by comma ```
| eval mv_string = "banana,apple,orange,peach"
``` Split string into multivalue, using comma as the delimiter ```
| eval mv_ascending = split(mv_string, ",")

My goal is to have a multivalue field that I can mvjoin() in this order: "peach,orange,apple,banana". In programming languages like Python, you can use slicing to reverse the direction of a list (i.e., multivalue). However, it seems mvindex() is a watered-down version of this; to my knowledge, this SPL function doesn't allow reversing the order. You can grab different index values with mvindex(), but it's always in the original list order. Anyone else come across this?
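One approach that appears workable on Splunk 8.0+ (a sketch relying on mvrange and mvmap; the field names below are placeholders appended to the base search above) is to iterate over the indexes in reverse:

| eval idx = mvrange(0, mvcount(mv_ascending))
| eval mv_descending = mvmap(idx, mvindex(mv_ascending, mvcount(mv_ascending) - 1 - idx))
| eval mv_joined = mvjoin(mv_descending, ",")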
I usually get many "skipped searches" reported with the ES indicated as the host, which I understand. Lately I get many skipped searches reported, but a search head like SH01 is indicated as the host instead. Please help me understand. Thank you.
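To see exactly where and why searches are being skipped, one starting point is the scheduler logs in _internal (a sketch; the fields used here follow the standard scheduler sourcetype):

index=_internal sourcetype=scheduler status=skipped
| stats count by host, app, savedsearch_name, reason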
Hi, I want to insert the time range picker value, like $time$, into the query behind a dynamic input. Requesting help with the query into which $time$ will get injected, so that the dynamic input widget does not use the GUI time range picker.
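If this is a Simple XML dashboard, one common pattern (a sketch only, assuming a time input whose token is named time, plus placeholder index and field names) is to pass the token's earliest/latest parts into the populating search of the dynamic input:

<input type="dropdown" token="my_dynamic_input">
  <label>Dynamic input</label>
  <search>
    <query>index=your_index | stats count by some_field</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <fieldForLabel>some_field</fieldForLabel>
  <fieldForValue>some_field</fieldForValue>
</input>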
Hello, I am using "Splunk_TA_juniper" and I noticed a new problem with timestamps: there is a one-hour offset in the timestamp compared to the time in the event. For instance, when I have an event whose _raw value starts with "Oct 28 15:12:37 fw-01-gra RT_FLOW:  ...", the timestamp is "2021-10-28T16:12:37.000+02:00" (16h instead of 15h). In addition, the event only appears an hour after it is received by the indexer, in fact once the timestamp value becomes less than the current time. This behaviour is new; when I examine events from September (for instance), the timestamp matches the time in the event. I tried restarting Splunk and the forwarder; nothing changed. I haven't modified the configuration files for a long time, and I don't know what to do. Do you have an idea of what is going on or a possible solution? Regards, Denis
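Since the raw syslog header carries no explicit offset, one thing to compare against is the TZ assigned in props.conf for the sourcetype these events land in. A hedged sketch only; the stanza name below is an example and the timezone is a placeholder, so check what Splunk_TA_juniper actually uses:

[juniper:junos:firewall]
TZ = Europe/Paris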
Hello Splunk Community! I have an alert set up to report failed login attempts by a user > 4 times in 5 minutes.

Alert query:
index=win_os sourcetype="Security" EventCode=4625 | bin span=5m _time | stats count dc(user) by _time, user, Logon_Type, dest, src, Failure_Reason | where count > 3 | sort user | table _time, user, count, Logon_Type, dest, src, Failure_Reason

Alert settings:
Alert Type: Scheduled. Hourly, at 0 minutes past the hour.
Trigger Condition: Number of Results is > 0

Issue: the last time this alert ran, I got results only from the 3 PM attempts; the alert PDF did not report the results from 2:55 PM.

Actual query result: [screenshot]
Alert PDF that came in email: [screenshot]

Any idea why the complete results were not shown from 2:55 PM when the alert triggered at the hour?
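For completeness, the way the hourly window is defined can matter here. A savedsearches.conf sketch where the alert looks back over the previous full hour rather than from the moment it runs (the stanza name and values are only examples, not the settings from the question) would be:

[Failed Logon Alert]
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h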
Hello, is it possible to call the /services/data/lookup_edit/lookup_contents service to create a lookup in Splunk Cloud? Thanks!
Hello, we receive somewhere between 3-5 messages from every pod every minute. We have a situation where some of the pods go zombie and stop writing messages.

Here's the query:
index namespace pod="pod-xyz" message="incoming events" | timechart count by pod span=1m

I want help with this query to detect when the count in a one-minute interval goes to zero.
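One way to surface the silent minutes (a sketch only, keeping the base search copied from the question and assuming each pod has at least some events in the search range so timechart builds a series for it) is to flip the chart back into rows and keep only the zero buckets:

index namespace pod="pod-xyz" message="incoming events"
| timechart span=1m count by pod
| untable _time pod count
| where count == 0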