All Posts

Hello, I received this same error when upgrading from Splunk Enterprise 9.3 to 9.4. I found a helpful article posted by Splunk Support that resolved my issue. Please see the link below.

http://splunk.my.site.com/customer/s/article/File-Integrity-checks-found-41-files-that-did-not-match-the-system-provided-manifest
Hi @becksyboy, are you using the CrowdStrike Falcon FileVantage Technical Add-On (https://splunkbase.splunk.com/app/7090)? If so, this add-on should already be CIM compliant, but in practice it isn't, because it ships without a tags.conf or eventtypes.conf. To normalize data I usually use the SA-CIM_vladiator app (https://splunkbase.splunk.com/app/2968), which guides you through the normalization activity.

Ciao.

Giuseppe
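To make that concrete, here is a minimal sketch of the stanzas the add-on would need for the Change datamodel to pick its events up - the sourcetype and eventtype names are assumptions, so adjust them to what the add-on actually writes:

# eventtypes.conf (hypothetical sourcetype - check what the add-on really uses)
[crowdstrike_filevantage_change]
search = sourcetype="crowdstrike:filevantage"

# tags.conf - tag the eventtype so the Change datamodel constraints match it
[eventtype=crowdstrike_filevantage_change]
change = enabled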
Hi @chrisitanmoleck, as a first check, verify the timezone of the forwarder.

Ciao.

Giuseppe
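If the OS timezone looks right but events are still parsed in the wrong zone, one option is to force it in props.conf on the parsing tier - a sketch in which the sourcetype and timezone are placeholders:

# props.conf on the heavy forwarder / indexer
[your:sourcetype]
TZ = Europe/Rome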
Hi @Kimjong9

Yes - you can use tokens like $result.yourFieldName$ in the payload of the message; however, it cannot contain markdown or HTML - it will just be rendered as plain text.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
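For example, a plain-text payload along these lines works, where host, count, and user are placeholder field names from your search results:

Alert fired on host $result.host$: $result.count$ failed logins for user $result.user$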
Hello, some of our forwarder installations are behaving strangely. It takes an hour for their data to be indexed and displayed in Splunk Web, and additionally the timestamps are offset by 60 minutes. For most of our Splunk forwarders the data is displayed in Splunk Web almost immediately, and the times there also match. Reinstalling the affected forwarders did not help. Do you have a solution?
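A quick way to tell an ingestion delay apart from a timestamp-parsing offset is to compare event time with index time - a sketch, where the index and host are placeholders:

index=your_index host=affected_host
| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag BY host

A consistent lag of roughly 3600 seconds usually points at a timezone mismatch (see the TZ setting sketched above) rather than a real delivery delay.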
Hi @Andre_

How are you currently achieving this for event-based data? You should be able to set an index-time field for your metric data with INGEST_EVAL or REGEX/WRITE_META. If you need to use your lookup, then you'll need INGEST_EVAL. Check out the following community post for an example of this if you haven't already done so: https://community.splunk.com/t5/Getting-Data-In/ingest-eval-lookup-example/m-p/534975

Also worth a read is https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L79C2-L79C34

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
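P.S. a minimal sketch of the ingest-time lookup pattern from those links - the lookup file, its host/env columns, the stanza names, and the sourcetype are all assumptions, and the CSV must exist on the indexing tier:

# transforms.conf
[enrich_metric_env]
INGEST_EVAL = env=json_extract(lookup("my_lookup.csv", json_object("host", host), json_array("env")), "env")

# props.conf
[your:metric:sourcetype]
TRANSFORMS-enrich = enrich_metric_env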
Hi, if you are sending logs from on-prem Panorama consoles to Splunk and using the Palo Alto add-on, the logs will go to pan:traffic. However, if you are sending logs from the Strata console via HEC, the logs will be in JSON format and the right sourcetype to use is pan:firewall_cloud.
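For the HEC route, a sketch of setting that sourcetype explicitly in the event payload - the hostname, port, token, and event body are placeholders:

curl https://your-splunk:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"sourcetype": "pan:firewall_cloud", "event": {"sample": "log"}}'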
Hi @amitrinx

You can use the following to split them into single events:

| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw

Full example with sample data:

| windbag
| head 1
| eval _raw="[ { \"email\": \"example@example.com\", \"event\": \"delivered\", \"ip\": \"XXX.XXX.XXX.XX\", \"response\": \"250 mail saved\", \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"tls\": 1, \"twilio:verify\": \"XXXX\" }, { \"email\": \"example@example.com\", \"event\": \"processed\", \"send_at\": 0, \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"twilio:verify\": \"XXXX\" } ]"
| eval events=json_array_to_mv(_raw)
| mvexpand events
| rename events as _raw

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi All, has anyone managed to map CrowdStrike Falcon FileVantage (FIM) logs to a datamodel? If so, could you share your field mappings? We were looking at the Change DM - would this be the best option? Thanks
Hi @ribentrop

Based on your kvstore status output it looks like the upgrade has already been completed. I think you would see that message if there were no collections to be converted to WiredTiger. Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection* / index* files (the old mmapv1 engine).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
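For example, a quick listing (the path assumes a default installation):

ls -l $SPLUNK_HOME/var/lib/splunk/kvstore/mongo
# .wt files indicate WiredTiger; collection.* / index.* files indicate the old mmapv1 engine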
Hi @SN1

It sounds like you want to maintain a lookup of alarms which you have dealt with. It's hard to say exactly without your existing search, but I would do the following:
- Use a lookup command to match the event - use the OUTPUTNEW capability to output a field in the lookup as a new field name (e.g. | lookup myLookup myField1 myField2 OUTPUTNEW myField1 AS matchedField)
- Use the where command to keep only the events where matchedField is empty/null
This should result in just a list of events that were NOT in the lookup; see the sketch below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
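Putting those two steps together - a sketch in which the index, lookup, and field names are placeholders:

index=your_index sourcetype=your_alarms
| lookup handled_alarms alarm_id OUTPUTNEW alarm_id AS matchedField
| where isnull(matchedField)

Anything that survives the where is an alarm that is not yet in the lookup.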
Hi @punkle64

Please can you confirm that your props.conf is on your HF or indexer - not the UF? The index-time parsing will be done on the first "full" instance of Splunk it reaches (heavy forwarder / indexer).

The other thing you might need to check is increasing the MAX_DAYS_AGO value - it could be that the date detected is too far in the past and Splunk is defaulting to the modified time.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
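For reference, a sketch of that second check - the sourcetype is a placeholder, and MAX_DAYS_AGO defaults to 2000 days:

# props.conf on the HF/indexer
[your:sourcetype]
MAX_DAYS_AGO = 3650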
Hi @newnew20241018

I think your print statement is going to corrupt the response fed back and will prevent valid JSON/XML being rendered. Try removing this line and see if that resolves the issue:

print(results_list)

Note - persistent endpoints are...persistent...so if you edit the file you might need to kill the persistent process, if it's still running, before you get a clean rendering of the output again. If you're using Linux then you can check with ps aux | grep persistent

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @WorapongJ

Both of these will result in an empty KV Store, although with the first you will still have a copy of it wherever you moved it to. What is it you are trying to achieve here?

For KV Store troubleshooting, check out https://docs.splunk.com/Documentation/Splunk/latest/Admin/TroubleshootKVstore

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @sverdhan

You can use the _audit index to find these. It's not possible to search for a literal asterisk in the base search (it is treated as a wildcard), but you can use the match() function within where to filter, as below. Note: the NOT "index=_audit" is there to stop your own searches for index=* from coming back!

index=_audit info=granted NOT "index=_audit" NOT typeahead
| where match(search, "index\s*=\s*\*")

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
You could look through the _internal index to see what searches have been performed. This only tells you what has been executed, not what could potentially execute - i.e. there could still be alerts which haven't run yet but may run in the future and which use index=*.
Please explain where the data for this table comes from, e.g. the search used. Also, how do you "solve" a "severity", and how does that mean it is removed from this table? Please explain where "somewhere else" is and how your "confirmation" is performed. Please explain how rollback works (or is expected to work).
Hello guys,

I need a Splunk query that lists all the alerts that have index=* in their query. Unfortunately, I can't use REST services, so kindly suggest how I can do this without using REST.
Hi @Zoe_

You may find the Webtools Add-on helpful here: you can use its custom curl command to request your data, parse it into a table, and then use outputlookup to save it. Here is an example I have used previously. The SPL for this is:

| curl uri=https://raw.githubusercontent.com/livehybrid/TA-aws-trusted-advisor/refs/heads/main/package/lookups/trusted_advisor_checks.csv
| rex field=curl_message max_match=1000 "(?<data>.+)\n?"
| mvexpand data
| fields data
| rex field=data "^(?<id>[^,]+),(?<name>\"[^\"]+\"|[^,]+),(?<category>\"[^\"]+\"|[^,]+),(?<description>\".*\"|[^,]+)$"
| fields - data

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I'm trying to understand the Splunk KV Store, to determine what happens when it fails to start or shows a "failure to restore" status. I've found two possible solutions, but I'm not sure whether either of them will delete all data in the KV Store.

Solution 1:
- ./splunk stop
- mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo /path/to/copy/kvstore/mongo_old
- ./splunk start

Solution 2:
- ./splunk stop
- ./splunk clean kvstore --local
- ./splunk start