All Posts

Hello Giuseppe, the server is connected to an NTP server. There are no timezone settings in the Splunk configuration files under $SPLUNK_HOME/etc/system/local. Adding TZ = Europe/Berlin in props.conf doesn't solve the problem.
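For reference, a minimal sketch of the kind of stanza being described (the sourcetype name is a placeholder; the stanza must sit on the instance that parses the data, i.e. the indexer or a heavy forwarder):

# props.conf (sourcetype name assumed for illustration)
[your:sourcetype]
TZ = Europe/Berlin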
Many thanks for your answer. Yes, the props.conf is on the indexer and works fine for the other fields (ltsserver and size), so the config is being picked up. The only thing that doesn't work is _time. I tried, as you suggested, increasing MAX_DAYS_AGO to the maximum (10951), but I always get the same results:

index=lts sourcetype=size_summaries | table _time _raw

_time                 _raw
2025-04-23 14:47:13   2019-02-01T00:00:00 3830390070938120
2025-04-23 14:47:13   2019-01-01T00:00:00 3682803389795110
2025-04-23 14:47:13   2018-12-01T00:00:00 3583659674663620
2025-04-23 14:47:13   2018-11-01T00:00:00 3500420740998170
2025-04-23 14:47:13   2018-10-01T00:00:00 3439269816258840
2025-04-23 14:47:13   2018-09-01T00:00:00 3365435411968590
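Since every _raw above begins with an ISO-8601 timestamp, one plausible fix - a sketch only, not verified against this feed - is to point the timestamp extractor at it explicitly:

# props.conf on the indexer (assumes the timestamp is at the start of the event)
[size_summaries]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
MAX_DAYS_AGO = 10951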
Hi @livehybrid! Thanks for your response.

"Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1)."

Yes, there are:

[root@splunk-1 splunk]# ll /opt/splunk/var/lib/splunk/kvstore/mongo | grep wt | wc -l
54

But my question is actually about the mongod version. Here it is on Splunk 8.1:

[root@splunk-1 splunk]# /opt/splunk/bin/splunk cmd mongod --version
db version v3.6.17-linux-splunk-v4

So I want to upgrade it to 4.2 via:

/opt/splunk/bin/splunk migrate

but still no luck, and no error output at all.
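A quick way to see what the KV store itself reports (the command exists on 8.x; exact output fields vary by version):

/opt/splunk/bin/splunk show kvstore-status

The serverVersion line there should confirm whether the migration actually changed anything.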
Hi @becksyboy, yes, the Change DM is probably the best fit, but the Authentication DM may also be useful; it depends on your use cases. Ciao. Giuseppe
Hi @gcusello, yep, we noticed the TA did not do CIM mapping. In terms of FIM monitoring, would you say the Change DM is the best fit? It seems like it to me.
Hello, I received this same error when upgrading Splunk Enterprise from 9.3 to 9.4. I found a helpful article posted by Splunk Support that resolved my issue. Please see the link below. http://splunk.my.site.com/customer/s/article/File-Integrity-checks-found-41-files-that-did-not-match-the-system-provided-manifest
Hi @becksyboy, are you using the CrowdStrike Falcon FileVantage Technical Add-On (https://splunkbase.splunk.com/app/7090)? If yes, this add-on should already be CIM compliant, but in practice it isn't, because the add-on contains no tags.conf or eventtypes.conf. Anyway, to normalize data I usually use the SA-CIM_vladiator app (https://splunkbase.splunk.com/app/2968), which guides you through the normalization activity. Ciao. Giuseppe
Hi @chrisitanmoleck, as a first check, verify the timezone of the forwarder. Ciao. Giuseppe
Hi @Kimjong9 Yes - you can use things like $result.yourFieldName$ in the payload of the message; however, it cannot contain Markdown or HTML - it will just be rendered as text.
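As a quick illustration (field names assumed), a message payload might look like:

Alert fired on host $result.host$: $result.count$ failures for user $result.user$

Each $result.<field>$ token is substituted with the value from the first result row, rendered as plain text.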
Hello, some of the forwarder installations are behaving strangely: it takes an hour for their data to be indexed and displayed in Splunk Web, and the timestamp is offset by 60 minutes. For most Splunk forwarders, the data appears in Splunk Web almost immediately and the times match. Reinstalling the affected forwarders did not help. Do you have a solution?
Hi @Andre_ How are you currently achieving this for event-based data? You should be able to set an index-time field for your metric data with INGEST_EVAL or REGEX/WRITE_META. I guess if you need to use your lookup then you'll need INGEST_EVAL. Check out the following community post for an example of this, if you haven't already seen it: https://community.splunk.com/t5/Getting-Data-In/ingest-eval-lookup-example/m-p/534975 Also worth a read is https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L79C2-L79C34
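A minimal sketch of the INGEST_EVAL route, with assumed stanza and field names (the lookup-driven variant is covered in the links above):

# transforms.conf (names are placeholders)
[add_env_indexed_field]
INGEST_EVAL = env:=if(match(host, "^prod"), "production", "non-production")

# props.conf
[your:metric:sourcetype]
TRANSFORMS-add_env = add_env_indexed_field

The := operator writes the result as an indexed field rather than a search-time one.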
Hi, if you are sending logs from on-prem Panorama consoles to Splunk and using the Palo Alto add-on, the logs will go to pan:traffic. However, if you are sending logs from the Strata console via HEC, the logs will be in JSON format and the right sourcetype to use is pan:firewall_cloud.
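For illustration, setting that sourcetype on a HEC event might look like this (host, token, and event body are placeholders):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"sourcetype": "pan:firewall_cloud", "event": {"sample": "event"}}'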
Hi @amitrinx You can use the following to split them into single events:

| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw

Full example with sample data:

| windbag
| head 1
| eval _raw="[ { \"email\": \"example@example.com\", \"event\": \"delivered\", \"ip\": \"XXX.XXX.XXX.XX\", \"response\": \"250 mail saved\", \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"tls\": 1, \"twilio:verify\": \"XXXX\" }, { \"email\": \"example@example.com\", \"event\": \"processed\", \"send_at\": 0, \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"twilio:verify\": \"XXXX\" } ]"
| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw
Hi all, has anyone managed to map CrowdStrike Falcon FileVantage (FIM) logs to a data model? If so, could you share your field mappings? We were looking at the Change DM; would this be the best option? Thanks
Hi @ribentrop Based on your kvstore status output it looks like the upgrade has already been completed. I think you would see that message if there were no collections left to convert to WiredTiger. Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1).
Hi @SN1 It sounds like you want to maintain a lookup of alarms which you have dealt with. It's hard to say exactly without your existing search, but I would do the following (see the sketch after this list):

- Use a lookup command to match the event - use the OUTPUTNEW capability to output a field in the lookup as a new field name (e.g. | lookup myLookup myField1 myField2 OUTPUTNEW myField1 AS matchedField)
- Use the where command to keep only the events where matchedField is empty/null

This should result in just a list of events that were NOT in the lookup.
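A minimal end-to-end sketch, assuming a lookup named handled_alarms keyed on a field alarm_id (all names are placeholders):

index=main sourcetype=alarms
| lookup handled_alarms alarm_id OUTPUTNEW alarm_id AS matchedField
| where isnull(matchedField)

Only alarms with no match in the lookup survive the where clause.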
Hi @punkle64 Please can you confirm that your props.conf is on your HF or indexer - not the UF? The index-time parsing is done on the first "full" instance of Splunk the data reaches (heavy forwarder / indexer). The other thing you might need to check is increasing the MAX_DAYS_AGO value - it could be that the detected date is too far in the past and Splunk is defaulting to the modified time.
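One quick way to confirm which props.conf settings the HF/indexer actually applies to the sourcetype (stanza name assumed from this thread):

$SPLUNK_HOME/bin/splunk btool props list size_summaries --debug

The --debug flag shows which file each effective setting comes from.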
Hi @newnew20241018 I think your print statement is going to corrupt the response fed back and will prevent valid JSON/XML being rendered. Try removing this and see if that resolves the issue:

print(results_list)

Note - persistent endpoints are...persistent...so if you edit the file you might need to kill the persistent process, if it's still running, before you get a clean rendering of the output again. If you're using Linux then you can check with:

ps -aux | grep persistent
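For context, a minimal sketch of a persistent REST handler that returns its payload instead of printing it (class and variable names are assumed, not taken from the original script):

from splunk.persistconn.application import PersistentServerConnectionApplication
import json

class MyHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super().__init__()

    def handle(self, in_string):
        results_list = ["example"]  # placeholder for the real results
        # No print() here: anything written to stdout corrupts the HTTP response.
        return {"payload": json.dumps(results_list), "status": 200}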
Hi @WorapongJ Both of these will result in an empty KV store, although with the first you will have a copy of it wherever you moved it to. What is it you are trying to achieve here? For KV store troubleshooting check out https://docs.splunk.com/Documentation/Splunk/latest/Admin/TroubleshootKVstore
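If the goal is a backup/restore rather than a reset, the CLI route looks like this (available on recent versions; the archive name is a placeholder):

$SPLUNK_HOME/bin/splunk backup kvstore -archiveName mybackup
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName mybackup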
Hi @sverdhan You can use the _audit index to find these. It's not possible to search for a literal asterisk in Splunk, but you can use a match function within where to filter, as below. Note: the NOT "index=_audit" is there to stop your own searches for asterisk searches from coming back!

index=_audit info=granted NOT "index=_audit" NOT typeahead
| where match(search, ",*index\s?=\s?\*")