The source is the MySQL error logfile. The sourcetype is the Splunk-native "mysqld_error". I have 12 database servers with a universal forwarder indexing the MySQL error logfile: 7 servers work fine (instant indexing and correct timezone), 5 servers show the same problem.

inputs.conf:

[default]
host = MYSQL01

[monitor:///dblog/errorlog/mysql-error.log]
disabled = false
sourcetype = mysqld_error
index = mysql-errorlog

props.conf:

[monitor:///dblog/errorlog/mysql-error.log]
LEARN_SOURCETYPE = false
If the "delay" is consistent and seems to be rounded to full hours (in some cases smaller subdivisions, but that's rare), it's usually a timezone problem. There can be multiple causes for this:

1) The source might be reporting no timezone information, or even a wrong one.
2) The sourcetype might not be properly configured for timestamp recognition at all.
3) The sourcetype might not assign the proper timezone when there is no timezone information in the original events.

So it all depends on the details of your particular case. You haven't provided many details, so we can't tell which one it is.
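For cause 3), a sourcetype-level TZ override in props.conf (on the forwarder or the first parsing tier) is the usual fix. A minimal sketch, assuming the sourcetype from this thread and assuming the source events carry no timezone of their own:

```ini
# props.conf - illustrative example, adjust the zone to where the server actually lives
[mysqld_error]
# Interpret timestamps that lack timezone info as Europe/Berlin
TZ = Europe/Berlin
```

TZ only applies when the extracted timestamp itself has no zone information; if the events embed a (wrong) offset, the source's clock/zone configuration needs fixing instead.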
Check your _internal index for warnings. Typically, when the events are that old, you need to tweak the age/difference-related settings (MAX_DAYS_AGO is one of them). Otherwise Splunk decides that it must have parsed the time wrongly because the timestamp doesn't make sense and, depending on the situation, assumes the current timestamp or copies over the previous event's.
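The age-related knobs live alongside the timestamp settings in props.conf; a sketch, assuming a sourcetype stanza of your own (the values here are illustrative, not recommendations):

```ini
# props.conf - hypothetical sourcetype name
[my_sourcetype]
# Accept events up to ~30 years old (default is 2000 days; 10951 is the maximum)
MAX_DAYS_AGO = 10951
# Allow a timestamp up to 30 days behind the previous event in the same stream
MAX_DIFF_SECS_AGO = 2592000
```

These settings take effect at index time only, so already-indexed events keep their old _time.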
Remember that searches might query all indexes even if they don't contain a verbatim "index=*". There are several possible cases which might cause that behaviour:

1) Default indexes defined for a role (you should not do that, but it is possible)
2) An eventtype
3) index IN (*)
4) A macro
5) A data model

And please try to set a more descriptive topic for the thread next time.
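For case 1), you can inspect each role's default search indexes over REST; a sketch, assuming you have permission to query the authorization endpoint:

```spl
| rest /services/authorization/roles splunk_server=local
| table title srchIndexesDefault srchIndexesAllowed
```

Any role whose srchIndexesDefault includes a wildcard will silently search those indexes whenever a user of that role omits an index= clause.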
I forgot to say: when I was writing the SPL, I did an mvexpand on the field column so I could look at each field individually for that line in the log. Then I can alert only on something that is bad. But keeping the host and the value to compare the columns against is where I have issues.
Thanks for the reply. That is helpful information; I'm currently working with it and will update once it's done. I am looking to onboard Citrix WAF logs into Splunk. Do you have any suggestions?
Sorry it took so long to get back. The second option is starting to get where I need to be. I appreciate the code. How do I keep the host from the original log and have a second column that holds the value I want to compare the columns to? I am using ITSI, but originally I thought I could do it by looking at the event in this custom log using things we all know.

LOG:

host    CPU  MeM  UsePct  Swapused
Apple1  5    3    2       7
Apple2  4    1    12      9
Apple3  1    2    4       8

Lookup:

host  fieldName  Comparefield
*     CPU        7
*     MEM        4
*     Swapused   2

Code (pseudocode): I thought I could do a foreach over each line in the log:

If (<field>Log <= <fieldName>Lookup, "OK", <fieldName> "Error")
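One way to sketch that comparison in SPL is to flip the lookup so each threshold becomes a column, then compare per field with foreach. This is a rough sketch, not a tested answer: it assumes a lookup file named thresholds.csv with columns fieldName and Comparefield, a hypothetical index/sourcetype for the log, and that the thresholds apply to all hosts (host=*):

```spl
index=my_custom_log sourcetype=my_metrics
| appendcols
    [| inputlookup thresholds.csv
     | eval {fieldName} = Comparefield
     | stats first(CPU) as thr_CPU first(MEM) as thr_MEM first(Swapused) as thr_Swapused ]
| eventstats first(thr_CPU) as thr_CPU first(thr_MEM) as thr_MEM first(thr_Swapused) as thr_Swapused
| foreach CPU MEM Swapused
    [ eval <<FIELD>>_status = if('<<FIELD>>' <= 'thr_<<FIELD>>', "OK", "<<FIELD>> Error") ]
| table host CPU MEM Swapused *_status
```

The eval {fieldName} trick turns each lookup row into a column named after fieldName, and eventstats spreads the thresholds onto every event so the host column from the original log survives the comparison.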
Hello @livehybrid, first, thanks for your help. I tried the query, but it didn't work; I got no results. I even tested the HEC via curl, and everything seems normal.
Hello Giuseppe, the server is connected to an NTP server. There are no timezone settings in the Splunk configuration files under $SPLUNK_HOME/etc/system/local. Adding TZ = Europe/Berlin in props.conf doesn't solve the problem.
Many thanks for your answer. Yes, the props.conf is on the indexer and works fine for the other fields (ltsserver and size), so the config is being taken into account. The only thing which doesn't work is the _time. I tried, as you suggested, increasing MAX_DAYS_AGO to the maximum (10951), but I am always getting the same results:

index=lts sourcetype=size_summaries | table _time _raw

_time                _raw
2025-04-23 14:47:13  2019-02-01T00:00:00 3830390070938120
2025-04-23 14:47:13  2019-01-01T00:00:00 3682803389795110
2025-04-23 14:47:13  2018-12-01T00:00:00 3583659674663620
2025-04-23 14:47:13  2018-11-01T00:00:00 3500420740998170
2025-04-23 14:47:13  2018-10-01T00:00:00 3439269816258840
2025-04-23 14:47:13  2018-09-01T00:00:00 3365435411968590
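For events like these, where the timestamp sits at the very start of _raw in ISO form, explicit timestamp settings usually help. A sketch, assuming the sourcetype from the search above; this must live on the indexer/parsing tier, needs a restart, and only affects newly indexed data:

```ini
# props.conf - timestamp extraction for events beginning with 2019-02-01T00:00:00 ...
[size_summaries]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
MAX_DAYS_AGO = 10951
```

If _time still shows the index time after this, the data may be arriving over a path that bypasses parsing on this indexer (e.g. already cooked from a heavy forwarder), in which case the settings belong on that tier instead.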
Hi @livehybrid! Thanks for your response. You asked: "Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1)." Actually there are:

[root@splunk-1 splunk]# ll /opt/splunk/var/lib/splunk/kvstore/mongo | grep wt | wc -l
54

But my question is actually about the mongod version. Here it is on Splunk 8.1:

[root@splunk-1 splunk]# /opt/splunk/bin/splunk cmd mongod --version
db version v3.6.17-linux-splunk-v4

So I want to upgrade it to 4.2 via the command:

/opt/splunk/bin/splunk migrate

but still no luck, and no error information at all.
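Before retrying the migration, it may help to confirm what Splunk itself reports about the KV store; a sketch using the standard Splunk CLI (paths assume the /opt/splunk install from your output):

```
# Current KV store health, storage engine, and server version as Splunk sees it
/opt/splunk/bin/splunk show kvstore-status

# The KV store's own log often holds the error that the migrate command swallows
tail -n 100 /opt/splunk/var/log/splunk/mongod.log
```

If kvstore-status reports the mmapv1 storage engine, the server version cannot move to 4.2 until the storage engine has been migrated to WiredTiger first.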
Hi @becksyboy, yes, the Change DM is probably the best fit, but the Authentication DM may also be useful; it depends on your use cases. Ciao. Giuseppe
Hello, I received this same error when upgrading from 9.3 to 9.4 of Splunk Enterprise. I found a helpful article posted by Splunk Support that resolved my issue. Please see the link below. http://splunk.my.site.com/customer/s/article/File-Integrity-checks-found-41-files-that-did-not-match-the-system-provided-manifest
Hi @becksyboy, are you using the CrowdStrike Falcon FileVantage Technical Add-on ( https://splunkbase.splunk.com/app/7090 )? If yes, this add-on should already be CIM compliant, but it isn't, because the add-on contains no tags.conf or eventtypes.conf. Anyway, to normalize data I usually use the SA-CIM_vladiator app ( https://splunkbase.splunk.com/app/2968 ), which guides you through the normalization activity. Ciao. Giuseppe
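If you end up adding the CIM plumbing yourself, it is just an eventtype plus tags. A sketch, assuming a hypothetical sourcetype of crowdstrike:filevantage (check what the add-on actually assigns) and targeting the Change data model, which is selected by the "change" tag:

```ini
# eventtypes.conf
[crowdstrike_filevantage_change]
search = sourcetype=crowdstrike:filevantage

# tags.conf
[eventtype=crowdstrike_filevantage_change]
change = enabled
```

Field aliases/extractions mapping the vendor fields onto the Change DM's field names would still be needed on top of the tags.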
Hi @Kimjong9 Yes - you can use things like $result.yourFieldName$ in the payload of the message; however, it cannot contain markdown or HTML - it will just be rendered as text.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
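As a concrete illustration, a result token in an alert message might look like this; a sketch assuming a hypothetical saved search whose results contain a host field, using Splunk's built-in email alert action (the field names are placeholders):

```ini
# savedsearches.conf - hypothetical alert using result tokens in the message
[my_alert]
action.email = 1
action.email.to = ops@example.com
action.email.message.alert = Alert fired on $result.host$ with value $result.yourFieldName$
```

The $result.*$ tokens resolve from the first row of the alert's search results at trigger time.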
Hello, some of the forwarder installations are behaving strangely. They take an hour for their data to be indexed and displayed in Splunk Web. Additionally, the timestamp is offset by 60 minutes. For most Splunk forwarders, the data is displayed in Splunk Web almost immediately, and the times there match. Reinstalling the affected forwarders did not help. Do you have a solution?