All Posts



@livehybrid - This curl tool sounds useful. And @Zoe_, you just need to add | outputlookup <your-lookup-name> at the end of @livehybrid's query.
@livehybrid - json_array_to_mv - that sounds interesting.
@gcusello - in my search query I thought it showed that I have a lookup containing all the holidays that I wanted to mute, so yes, I do have it. I just wanted to question this line: NOT (date_wday="saturday" OR date_wday="sunday"). Why Saturday and Sunday? My cron schedule is 0 6 * * 1-5, so it runs Monday-Friday; shouldn't that already cover it? Could I just use:

index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output HolidayDate
| eval should_alert=if(isnull(HolidayDate), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"
@Andre_ - FYI, I haven't tried these configs on my side, so you may need to read about them in the spec files and the Splunk docs. Also, I'm not sure how metrics-based queries will be used for role-based restriction.

# props.conf.example
[em_metrics]
METRICS_PROTOCOL = statsd
STATSD-DIM-TRANSFORMS = user, queue, app_id, state

# transforms.conf.example
[statsd-dims:user]
REGEX = \Quser:\E(?<user>[^,\]]*)

I hope this helps!!! Kindly upvote if it does!!!
@davidco - Did you check connectivity from the Spark server to the Splunk service on the Splunk HEC port, e.g. via telnet or curl?
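A minimal sketch of that check, assuming curl is available on the Spark server and the default HEC port 8088 (the hostname below is a placeholder, not from this thread). HEC exposes a health endpoint that answers even without an auth token, so any HTTP response at all proves TCP/TLS reachability:

```shell
# Placeholder host; replace with your actual Splunk HEC endpoint.
SPLUNK_HOST="splunk.example.com"
HEC_PORT=8088

# The HEC health endpoint responds without authentication,
# so a response here proves network reachability of HEC.
curl -sk --connect-timeout 5 "https://${SPLUNK_HOST}:${HEC_PORT}/services/collector/health" \
  && echo "HEC reachable" \
  || echo "HEC not reachable (check firewall/port)"
```

For a pure TCP check, `telnet $SPLUNK_HOST 8088` or `nc -zv $SPLUNK_HOST 8088` works as well.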
I have a server pushing audit log data to syslog; to log in to the server you need SAML. My question is: how do I pull the successful and unsuccessful logins of those SAML users in Splunk?

Thank you.
I was afraid of that.  Makes it hard for me because I don't have access to the source side of things for most things coming into HEC.
@WorapongJ - Yes, in both cases you will lose data. And I know you are trying to understand the impact of it on Splunk. But there is usually a recovery option available for KV store/Mongo, depending on what has happened or what the issue is.

I hope this helps!!!
@gn694 - I don't think there is any direct way or internal log you can use for what you need, unless you can see a difference in the data in terms of indexed fields, or you check on the source side.
Is there any way to tell whether data coming into Splunk's HEC was sent to the event or raw endpoint? You can't really tell from looking at the events themselves, so I was hoping there was a way to tell based on something like the token, sourcetype, source, or host. I have tried searching the _internal index and have not found anything helpful.
The query did not error, but it also returned 0 events. Any other way? I have created a lookup table.
Hi @CMAzurdia

Typically, success/failed login attempts are recorded by the Identity Provider (IdP) rather than Splunk; however, you can see successful logins to Splunk from SAML users with the following query:

index=_internal method=POST uri=/saml/acs
| table _time user clientip

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hello Splunk team, I need a search query that can pull back the successful and unsuccessful login attempts of users logging into a server using SAML. I also need to create a dashboard of the results. If any additional information is needed, please let me know. Do I need to extract a field of all the users using SAML? v/r cmazurdia
> 9.4.1, 9.3.3, 9.2.5, and 9.1.8 - so if we are on the 9.4.1, 9.3.3, 9.2.5, and 9.1.8 versions, we are in the fix?

Yes.

> Last month in the "Splunk Security Advisories" it said to patch up to 9.4.1, 9.3.3, 9.2.5, and 9.1.8, so if we are on those versions, we are in the fix?

I think the new advisory is just saying the fix is in 9.4.0, 9.3.2, 9.2.4 or 9.1.7 and above. However, if you are already on the 9.4.1, 9.3.3, 9.2.5, and 9.1.8 versions and above, you can ignore the new email.
> What is it you are trying to achieve here?

I would just like to know the impact in case I encounter a KV Store status failure. How can I identify which apps, such as ES, might be affected if I remove or clear KV store data?
9.3.3 is fine. 9.4.x/9.3.2/9.2.4/9.1.7 and above has the fix.
I have added even more settings to the props.conf:

MAX_DAYS_AGO = 10951
MAX_DAYS_HENCE = 10950
MAX_DIFF_SECS_AGO = 2147483646
MAX_DIFF_SECS_HENCE = 2147483646

and checked the _internal index, but there are no warnings. Unfortunately, there are no improvements:

2025-04-23 17:06:05  2023-12-01T00:00:00  11557603686635900
2025-04-23 17:06:05  2023-11-01T00:00:00  11341507392715400
2025-04-23 17:06:05  2023-10-01T00:00:00  11116993118051800
2025-04-23 17:06:05  2023-09-01T00:00:00  10521042084168300
2025-04-23 17:06:05  2023-08-01T00:00:00  10017490857052000
2025-04-23 17:06:05  2023-07-01T00:00:00  9691291660267240

Isn't there a workaround to force _time to take its value from a specific field?
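One possible workaround (a sketch only, not tested against this data): an index-time INGEST_EVAL transform can overwrite _time with a value parsed out of the raw event. The stanza names, the assumption that the sourcetype is mysqld_error, and the %Y-%m-%dT%H:%M:%S format (based on the sample rows above) are all my assumptions; adjust them to your data:

```ini
# props.conf (must be on the indexer or heavy forwarder, not a UF)
[mysqld_error]
TRANSFORMS-force_time = force_time_from_field

# transforms.conf
[force_time_from_field]
# Hypothetical: extract the first ISO-8601 timestamp from _raw and use it as _time
INGEST_EVAL = _time=strptime(replace(_raw, "^.*?(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}).*$", "\1"), "%Y-%m-%dT%H:%M:%S")
```

The cleaner fix is usually TIME_PREFIX/TIME_FORMAT in props.conf so the normal timestamp extractor finds the right field, but INGEST_EVAL gives you a hard override when that keeps failing.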
The source is the mysql error logfile. The sourcetype is the Splunk native "mysqld_error".

I have 12 database servers with a universal forwarder indexing the mysql error logfile:
- 7 servers working fine (instant indexing and correct tz)
- 5 servers having the same problem.

inputs.conf
[default]
host = MYSQL01
[monitor:///dblog/errorlog/mysql-error.log]
disabled = false
sourcetype = mysqld_error
index = mysql-errorlog

props.conf
[monitor:///dblog/errorlog/mysql-error.log]
LEARN_SOURCETYPE = false
If the "delay" is consistent and seems to be rounded to full hours (in some cases smaller subdivisions but that's rare) it's usually the case with timezone problems. There can be multiple causes for ... See more...
If the "delay" is consistent and seems to be rounded to full hours (in some cases smaller subdivisions but that's rare) it's usually the case with timezone problems. There can be multiple causes for this: 1) The source might be reporting no timezone information or even a wrong one. 2) The sourcetype might not be properly configured for timestamp recognition at all 3) The sourcetype might not assign proper timezone in case there is no timezone information in the original events. So it all depends on details of your particular case. You haven't provided too many details so we can't tell which one it is.