All Posts



I forgot to say: when I was writing the SPL, I ran mvexpand on the field column so I can look at each field individually for that line in the log. Then I can alert only on something that is bad. But having the host and the value to compare side by side is where I have issues.
What should be the plan for customers who recently upgraded to 9.3.3?
Thanks for the reply. That is helpful information; I am currently working on this and will post an update once it is done. I am looking to onboard Citrix WAF logs into Splunk. Do you have any suggestions?
Hi @chrisitanmoleck , did you try to configure the Default Timezone for your user (in Account Settings)? Ciao. Giuseppe
Sorry it took so long to get back. The second option is starting to get where I need to be, and I appreciate the code. How do I keep the host from the original log, and add a second column that has the value I want to compare the columns to? I am using ITSI, but originally I thought of it as looking at the event in this custom log using things we all know.

LOG:
host      CPU   MeM   UsePct   Swapused
Apple1    5     3     2        7
Apple2    4     1     12       9
Apple3    1     2     4        8

Lookup:
host   fieldName   Comparefield
*      CPU         7
*      MEM         4
*      Swapused    2

Code I thought I could do, foreach field in the log:
If (<field>Log <= <fieldName>Lookup, "OK", <fieldName>" Error")
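A minimal SPL sketch of that comparison, keeping host on every row: turn the metric columns into rows with untable, then pull the threshold in from the lookup. The index, sourcetype, and lookup file name (thresholds.csv, with columns fieldName and Comparefield) are illustrative assumptions, not confirmed names from the thread:

```
index=custom sourcetype=apple_stats
| table host CPU MEM Swapused
| untable host fieldName value
| lookup thresholds.csv fieldName OUTPUT Comparefield
| eval status=if(value <= Comparefield, "OK", fieldName." Error")
```

After untable, each row is one host/fieldName/value triple, so the lookup join and the if() comparison happen per host and per metric without a foreach.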
Hello @livehybrid , First, thanks for your help. I tried the query, but it didn't work; I got no results at all. I even tested the HEC via curl, and everything seems normal.
Hello Giuseppe, the server is connected to an NTP server. There are no timezone settings in the Splunk configuration files under $Splunk_Home/etc/system/local. Adding TZ = Europe/Berlin in props.conf doesn't solve the problem.
Many thanks for your answer. Yes, the props.conf is on the indexer and works fine for the other fields (ltsserver and size), so the config is taken into account. The only thing which doesn't work is the _time. I tried, as you suggested, to increase MAX_DAYS_AGO to the maximum (10951), but I am always getting the same results:

index=lts sourcetype=size_summaries | table _time _raw

_time                 _raw
2025-04-23 14:47:13   2019-02-01T00:00:00 3830390070938120
2025-04-23 14:47:13   2019-01-01T00:00:00 3682803389795110
2025-04-23 14:47:13   2018-12-01T00:00:00 3583659674663620
2025-04-23 14:47:13   2018-11-01T00:00:00 3500420740998170
2025-04-23 14:47:13   2018-10-01T00:00:00 3439269816258840
2025-04-23 14:47:13   2018-09-01T00:00:00 3365435411968590
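For reference, a props.conf sketch that points timestamp extraction at the leading ISO date visible in _raw above (the exact settings are an assumption based on the sample shown, not confirmed config):

```
[size_summaries]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
MAX_DAYS_AGO = 10951
```

This must live on the first full Splunk instance the data passes through (indexer or heavy forwarder), and it only affects data indexed after a restart; already-indexed events keep their _time.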
Hi @livehybrid ! Thanks for your response.

You asked: "Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1)."

Yes, there are:

[root@splunk-1 splunk]# ll /opt/splunk/var/lib/splunk/kvstore/mongo | grep wt | wc -l
54

But my question is actually about the mongod version. Here it is on Splunk 8.1:

[root@splunk-1 splunk]# /opt/splunk/bin/splunk cmd mongod --version
db version v3.6.17-linux-splunk-v4

So I want to upgrade it to 4.2 via the command:

/opt/splunk/bin/splunk migrate

but still no luck, and no error info at all.
Hi @becksyboy , yes probably the Change DM is the best fit, but probably also the Authentication DM is useful, it depends on your Use Cases. Ciao. Giuseppe
Hi @gcusello , yep, we noticed the TA did not do CIM mapping. In terms of FIM monitoring, would you say the Change DM is the best fit? It seems like it to me.
Hello, I received this same error in upgrading from 9.3 to 9.4 versions of Splunk Enterprise.  I found a helpful article posted by Splunk Support that resolved my issue.  Please see the link below.   http://splunk.my.site.com/customer/s/article/File-Integrity-checks-found-41-files-that-did-not-match-the-system-provided-manifest 
Hi @becksyboy , are you using the CrowdStrike Falcon FileVantage Technical Add-On ( https://splunkbase.splunk.com/app/7090 )? If yes, this add-on should already be CIM compliant, but that isn't true, because the add-on contains no tags.conf or eventtypes.conf. Anyway, to normalize data I usually use the SA-CIM_vladiator app ( https://splunkbase.splunk.com/app/2968 ), which guides you through the normalization activity. Ciao. Giuseppe
Hi @chrisitanmoleck , as a first check, verify the timezone of the forwarder. Ciao. Giuseppe
Hi @Kimjong9  Yes - you can use things like $result.yourFieldName$ in the payload of the message; however, it cannot contain markdown or HTML - it will just be rendered as text.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
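As a sketch, tokens like that can go straight into an alert action parameter in savedsearches.conf; the stanza name, recipient, and field names below are hypothetical examples, not from the original question:

```
[My CPU Alert]
action.email = 1
action.email.to = ops@example.com
action.email.subject = Host $result.host$ CPU alert
action.email.message.alert = Host $result.host$ is at $result.cpu_pct$% CPU
```

Each $result.fieldName$ token is replaced with the value from the first result row of the triggering search, rendered as plain text.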
Hello, Some of the forwarder installations are behaving strangely: it takes an hour for their data to be indexed and displayed in Splunk Web. Additionally, the timestamp is offset by 60 minutes. For most Splunk forwarders, the data is displayed in Splunk Web almost immediately, and the times there also match. Reinstalling the affected forwarders did not help. Do you have a solution?
Hi @Andre_  How are you currently achieving this for event-based data? You should be able to set an index-time field for your metric data with INGEST_EVAL or REGEX/WRITE_META. I guess if you need to use your lookup then you'll need to use INGEST_EVAL. Check out the following community post for an example of this, if you haven't already seen it: https://community.splunk.com/t5/Getting-Data-In/ingest-eval-lookup-example/m-p/534975 Also worth a read is https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L79C2-L79C34
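A minimal sketch of the INGEST_EVAL-plus-lookup approach, following the pattern in the linked examples; the lookup file, field names, and sourcetype here are illustrative assumptions:

```
# transforms.conf
[add_site_field]
INGEST_EVAL = site=json_extract(lookup("host_sites.csv", json_object("host", host), json_array("site")), "site")

# props.conf
[my_metrics_sourcetype]
TRANSFORMS-add_site = add_site_field
```

lookup() returns a JSON object, so json_extract pulls out the matched column; the resulting field is written at index time. The host_sites.csv lookup must be deployed to the indexing tier (or heavy forwarder) where this runs.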
Hi, If you are sending logs from on-prem Panorama consoles to Splunk using the Palo Alto add-on, the logs will go to pan:traffic. However, if you are sending logs from the Strata console via HEC, the logs will be in JSON format and the right sourcetype to use is pan:firewall_cloud.
Hi @amitrinx  You can use the following to split them into single events:

| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw

Full example with sample data:

| windbag
| head 1
| eval _raw="[ { \"email\": \"example@example.com\", \"event\": \"delivered\", \"ip\": \"XXX.XXX.XXX.XX\", \"response\": \"250 mail saved\", \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"tls\": 1, \"twilio:verify\": \"XXXX\" }, { \"email\": \"example@example.com\", \"event\": \"processed\", \"send_at\": 0, \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"twilio:verify\": \"XXXX\" } ]"
| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw
Hi All, Has anyone managed to map CrowdStrike Falcon FileVantage (FIM) logs to a data model? If so, could you share your field mappings? We were looking at the Change DM; would this be the best option? Thanks