All Posts


Hi @CMAzurdia  Typically successful/failed login attempts are recorded by the Identity Provider (IdP) rather than Splunk, however you can see successful logins to Splunk from SAML users with the following query:

index=_internal method=POST uri=/saml/acs | table _time user clientip

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
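If you also need this in dashboard form, a minimal sketch that counts successful Splunk-side SAML logins per user per day (reusing the same _internal fields as above; failed attempts would still have to come from the IdP's own logs) could look like:

index=_internal method=POST uri=/saml/acs
| timechart span=1d count by user

This can be dropped into a dashboard panel as-is; adjust the span to whatever granularity you want for the trend.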
Hello Splunk team, I need a search query that can pull back data on successful and unsuccessful login attempts by users logging into a server using SAML. I also need to create a dashboard of the results. If any additional information is needed, please let me know. Do I need to extract a field for all the users using SAML? v/r cmazurdia
>9.4.1, 9.3.3, 9.2.5, and 9.1.8 so if we are on the 9.4.1, 9.3.3, 9.2.5, and 9.1.8 versions, we are in the fix?
Yes.
>Last month in the "Splunk Security Advisories" it said to patch up to 9.4.1, 9.3.3, 9.2.5, and 9.1.8 so if we are on the 9.4.1, 9.3.3, 9.2.5, and 9.1.8 versions, we are in the fix?
I think the new advisory is just saying the fix is in 9.4.0, 9.3.2, 9.2.4, or 9.1.7 and above. However, if you are already on 9.4.1, 9.3.3, 9.2.5, or 9.1.8 and above, you can ignore the new email.
>What is it you are trying to achieve here?
I would just like to know the impact in case I encounter a KV Store status failure. How can I identify which apps, such as ES, might be affected if I remove or clear KV Store data?
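As a starting point for scoping the impact, a sketch like the one below lists how many KV Store collections each installed app defines (it assumes your role can read the storage/collections/config REST endpoint); apps that show up here, such as ES, are the ones whose lookups and stored state would be affected by clearing the KV Store:

| rest /servicesNS/-/-/storage/collections/config
| stats count AS collections BY eai:acl.app
| sort - collections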
9.3.3 is fine. 9.4.x/9.3.2/9.2.4/9.1.7 and above have the fix.
I have added even more settings to the props.conf:

MAX_DAYS_AGO = 10951
MAX_DAYS_HENCE = 10950
MAX_DIFF_SECS_AGO = 2147483646
MAX_DIFF_SECS_HENCE = 2147483646

and checked _internal, but there are no warnings. Unfortunately there are no improvements:

2025-04-23 17:06:05   2023-12-01T00:00:00 11557603686635900
2025-04-23 17:06:05   2023-11-01T00:00:00 11341507392715400
2025-04-23 17:06:05   2023-10-01T00:00:00 11116993118051800
2025-04-23 17:06:05   2023-09-01T00:00:00 10521042084168300
2025-04-23 17:06:05   2023-08-01T00:00:00 10017490857052000
2025-04-23 17:06:05   2023-07-01T00:00:00 9691291660267240

Isn't there a workaround to force _time to take its value from a specific field?
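If the raw events always begin with that ISO-8601 timestamp, one workaround sketch is to pin timestamp extraction to it explicitly in props.conf on the parsing tier (the sourcetype name below is taken from the search shown later in this thread; adjust if yours differs):

[size_summaries]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
MAX_DAYS_AGO = 10951

Note this only affects data indexed after the change; events already indexed keep the _time they were given.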
The source is the MySQL error logfile. The sourcetype is the Splunk native "mysqld_error".

I have 12 database servers with a universal forwarder indexing the MySQL error logfile:
7 servers working fine (instant indexing and correct tz)
5 servers having the same problem.

inputs.conf
[default]
host = MYSQL01
[monitor:///dblog/errorlog/mysql-error.log]
disabled = false
sourcetype = mysqld_error
index = mysql-errorlog

props.conf
[monitor:///dblog/errorlog/mysql-error.log]
LEARN_SOURCETYPE = false
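For what it's worth, props.conf does not recognise monitor:// stanzas; timestamp and timezone settings are keyed by sourcetype (or source::/host:: patterns) and take effect where parsing happens, i.e. on the indexers or heavy forwarders rather than on the universal forwarders. A minimal sketch of a timezone stanza for this sourcetype, assuming the affected servers write local time with no timezone in the log and Europe/Berlin is the correct zone:

[mysqld_error]
TZ = Europe/Berlin

Per-host overrides are also possible with [host::...] stanzas if only some of the 12 servers need it.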
If the "delay" is consistent and seems to be rounded to full hours (in some cases smaller subdivisions but that's rare) it's usually the case with timezone problems. There can be multiple causes for ... See more...
If the "delay" is consistent and seems to be rounded to full hours (in some cases smaller subdivisions but that's rare) it's usually the case with timezone problems. There can be multiple causes for this: 1) The source might be reporting no timezone information or even a wrong one. 2) The sourcetype might not be properly configured for timestamp recognition at all 3) The sourcetype might not assign proper timezone in case there is no timezone information in the original events. So it all depends on details of your particular case. You haven't provided too many details so we can't tell which one it is.
Check your _internal for warnings. Typically when the events are that old, you need to tweak the age/difference related settings (MAX_DAYS_AGO is one of them). Otherwise Splunk decides that it must have parsed the time wrongly because the timestamp doesn't make sense and, depending on the situation, assumes the current timestamp or copies over the last one.
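A quick sketch of that _internal check, looking for timestamp-parsing warnings (DateParserVerbose is the component that usually logs them):

index=_internal sourcetype=splunkd log_level=WARN component=DateParserVerbose
| stats count BY host
| sort - count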
Remember that searches might query all indexes even if they don't have a verbatim "index=*" in them. There are several possible cases which might cause that behaviour:
1) Default indexes defined for a role (you should not do that, but it is possible)
2) Eventtype
3) index IN (*)
4) macro
5) Data model
And please try to set a more descriptive topic for the thread next time.
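For case 1, a quick sketch to check which roles have default search indexes assigned (assuming your role is allowed to read the authorization REST endpoint):

| rest /services/authorization/roles
| table title srchIndexesDefault srchIndexesAllowed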
I forgot to say that when I was writing the SPL, I did an mvexpand on the field column so I can just look at each field individually for that line in the log. Then I can alert only on something that is bad. But having the host and the value to compare against was where I had issues.
What should the plan be for customers who recently upgraded to 9.3.3?
Thanks for the reply. That is helpful information; I am currently working with this and will post an update once it's done. I am looking to onboard Citrix WAF logs into Splunk. Do you have any suggestions?
Hi @chrisitanmoleck, did you try to configure the Default Timezone for your user (in Account Settings)? Ciao. Giuseppe
Sorry it took so long to get back. The second option is starting to get me where I need to be. I appreciate the code. How do I keep the host from the original log and have a second column that holds the value I want to compare the columns to? I am using ITSI, but originally I thought that if I were looking at the event in this custom log, using things we all know:

LOG:
host      CPU   MeM   UsePct   Swapused
Apple1    5     3     2        7
Apple2    4     1     12       9
Apple3    1     2     4        8

Lookup:
host   fieldName   Comparefield
*      CPU         7
*      MEM         4
*      Swapused    2

Code I thought I could do, foreach line in the log:
If (<field>Log <= <fieldName>Lookup, "OK", <fieldName>"Error")
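One way to sketch that comparison in SPL is to flip the metric columns into rows with untable, pull the threshold in from the lookup by field name, and then flag each value. The index, sourcetype and lookup names below are placeholders, and it assumes Comparefield holds the maximum allowed value per metric and that the metric names in the log and the lookup match exactly (MeM vs MEM would need aligning, since lookup matching is case-sensitive by default):

index=custom_metrics sourcetype=my_custom_log
| table host CPU MeM UsePct Swapused
| untable host fieldName value
| lookup metric_thresholds fieldName OUTPUT Comparefield AS threshold
| eval status=if(value <= threshold, "OK", "Error")
| table host fieldName value threshold status

Because the host column survives untable, each result row keeps the original host alongside the metric name, its value, the threshold, and the OK/Error status, which makes it straightforward to alert only on the Error rows.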
Hello @livehybrid, First, thanks for your help. I tried the query, but it didn't work; I got no results. I even tested the HEC via curl, and everything seems normal.
Hello Giuseppe, the server is connected to an NTP server. There are no timezone settings in the Splunk configuration files under $SPLUNK_HOME/etc/system/local. Adding TZ = Europe/Berlin in props.conf doesn't solve the problem.
Many thanks for your answer. Yes, the props.conf is on the indexer and works fine for the other fields (ltsserver and size), so the config is taken into account. The only thing which doesn't work is the _time. I tried, as you suggested, increasing MAX_DAYS_AGO to the maximum (10951), but I am always getting the same results:

index=lts sourcetype=size_summaries | table _time _raw

2025-04-23 14:47:13   2019-02-01T00:00:00 3830390070938120
2025-04-23 14:47:13   2019-01-01T00:00:00 3682803389795110
2025-04-23 14:47:13   2018-12-01T00:00:00 3583659674663620
2025-04-23 14:47:13   2018-11-01T00:00:00 3500420740998170
2025-04-23 14:47:13   2018-10-01T00:00:00 3439269816258840
2025-04-23 14:47:13   2018-09-01T00:00:00 3365435411968590
Hi @livehybrid! Thanks for your response.

>Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1).

Yes, there are:

[root@splunk-1 splunk]# ll /opt/splunk/var/lib/splunk/kvstore/mongo | grep wt | wc -l
54

But my question is actually about the mongo version. Here it is on Splunk 8.1:

[root@splunk-1 splunk]# /opt/splunk/bin/splunk cmd mongod --version
db version v3.6.17-linux-splunk-v4

So I want to upgrade it to 4.2 via the command /opt/splunk/bin/splunk migrate but still no luck and no error info.
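Before and after any migration attempt, it may be worth confirming what Splunk itself reports for the KV Store; this assumes the standard CLI:

/opt/splunk/bin/splunk show kvstore-status

If the status (or the [kvstore] stanza in server.conf) still reports the mmapv1 storage engine, the documented storage-engine migration is a separate command, splunk migrate kvstore-storage-engine --target-engine wiredTiger (worth verifying against the docs for your exact version). As far as I know, the mongod binary version itself is bundled with and tied to the Splunk Enterprise version you are running, rather than something splunk migrate changes on its own.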