All Posts

Hi @amitrinx

You can use the following to split them into single events:

| eval events=json_array_to_mv(_raw) | mvexpand events | rename events as _raw

Full example with sample data:

| windbag | head 1 | eval _raw="[ { \"email\": \"example@example.com\", \"event\": \"delivered\", \"ip\": \"XXX.XXX.XXX.XX\", \"response\": \"250 mail saved\", \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"tls\": 1, \"twilio:verify\": \"XXXX\" }, { \"email\": \"example@example.com\", \"event\": \"processed\", \"send_at\": 0, \"sg_event_id\": \"XXXX\", \"sg_message_id\": \"XXXX\", \"sg_template_id\": \"XXXX\", \"sg_template_name\": \"en\", \"smtp-id\": \"XXXX\", \"timestamp\": \"XXXX\", \"twilio:verify\": \"XXXX\" } ]" | eval events=json_array_to_mv(_raw) | mvexpand events | rename events as _raw

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
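For readers outside SPL, the same split can be sketched in plain Python. This is a hedged analogue, not the Splunk implementation: json_array_to_mv plus mvexpand is emulated by json.loads, and the sample array is trimmed to two fields per event.

```python
import json

# Trimmed stand-in for the _raw field: a JSON array of event objects,
# shaped like the SendGrid sample in the post.
raw = ('[{"email": "example@example.com", "event": "delivered"}, '
       '{"email": "example@example.com", "event": "processed"}]')

# Emulate json_array_to_mv + mvexpand + rename: one serialized object
# per resulting event, each becoming its own _raw.
events = [json.dumps(obj) for obj in json.loads(raw)]
for e in events:
    print(e)
```

Each element of `events` corresponds to one event after mvexpand.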
Hi All, Has anyone managed to map CrowdStrike Falcon FileVantage (FIM) logs to a Data Model? If so, could you share your field mappings? We were looking at the Change DM; would this be the best option? Thanks
Hi @ribentrop

Based on your kvstore status output it looks like the upgrade has already been completed. I think you would see that message if there are no collections to be converted to wiredTiger. Are there subdirectories and files in $SPLUNK_HOME/var/lib/splunk/kvstore/mongo? Look for .wt files (WiredTiger), or collection*, index* files (old mmapv1).
Hi @SN1

It sounds like you want to maintain a lookup of alarms which you have dealt with. It's hard to say exactly without your existing search, but I would do the following:

- Use a lookup command to match the event - use the OUTPUTNEW capability to output a field in the lookup as a new field name (e.g. | lookup myLookup myField1 myField2 OUTPUTNEW myField1 AS matchedField)
- Use the where command to keep only events where matchedField is empty/null (e.g. | where isnull(matchedField))

This should result in just a list of events that were NOT in the lookup.
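The steps above amount to an anti-join against the lookup; a minimal Python sketch of the same logic, with illustrative field names (not from the original search):

```python
# The lookup of alarms already dealt with, keyed on two fields
# (equivalent to myField1/myField2 in the lookup command above).
handled = {("host1", "disk_full"), ("host2", "cpu_high")}

# Incoming events to check against the lookup.
events = [
    {"host": "host1", "alarm": "disk_full"},  # in the lookup -> dropped
    {"host": "host3", "alarm": "mem_low"},    # not in lookup -> kept
]

# Keep only events whose key pair is absent from the lookup,
# mirroring OUTPUTNEW + where isnull(matchedField).
unhandled = [e for e in events if (e["host"], e["alarm"]) not in handled]
print(unhandled)
```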
Hi @punkle64

Please can you confirm that your props.conf is on your HF or Indexer - not the UF? The index-time parsing will be done on the first "full" instance of Splunk the data reaches (Heavy Forwarder / Indexer).

The other thing you might need to check is increasing the MAX_DAYS_AGO value - it could be that the detected date is too far in the past and Splunk is defaulting to the modified time.
Hi @newnew20241018

I think your print statement is going to corrupt the response fed back and will prevent valid JSON/XML being rendered. Try removing this and see if that resolves the issue:

print(results_list)

Note - persistent endpoints are...persistent...so if you edit the file you might need to kill the persistent process if it's still running before you get a clean rendering of the output again. If you're using Linux then you can check with ps aux | grep persistent
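To illustrate the point about print: a small sketch (not the exact persistconn protocol) of keeping stdout clean by sending diagnostics to stderr and returning the payload as a JSON string instead of printing it:

```python
import json
import sys

def build_reply(results_list):
    # Diagnostics go to stderr (or a log file), never stdout, so the
    # framed reply that Splunk reads from the handler stays intact.
    sys.stderr.write("returning %d results\n" % len(results_list))
    return {"payload": json.dumps(results_list), "status": 200}

# Hypothetical result set, shaped like the lookup rows in the question.
reply = build_reply([{"number": "42"}])
```

The payload round-trips cleanly because nothing else was written to stdout.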
Hi @WorapongJ

Both of these will result in an empty KV Store, although with the first you will have a copy of it wherever you moved it to. What is it you are trying to achieve here?

For KV Store troubleshooting check out https://docs.splunk.com/Documentation/Splunk/latest/Admin/TroubleshootKVstore
Hi @sverdhan

You can use the _audit index to find these. It's not possible to search for a literal asterisk in Splunk, but you can use a match function within where to filter as below. Note, the NOT "index=_audit" is to stop your own searches for asterisks from coming back!

index=_audit info=granted NOT "index=_audit" NOT typeahead | where match(search, "index\s?=\s?\*")
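The match pattern can be checked quickly outside Splunk. A Python sketch with made-up search strings (Splunk's match uses PCRE, but this pattern behaves the same in Python's re):

```python
import re

# The regex from the where/match clause above: index, optional single
# space, =, optional single space, literal asterisk.
pattern = re.compile(r"index\s?=\s?\*")

searches = [
    "search index=* | stats count",      # matches
    "search index = * error",            # matches (one space each side)
    "search index=web sourcetype=acc",   # does not match
]

flagged = [s for s in searches if pattern.search(s)]
print(flagged)
```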
You could look through the _internal index to see what searches have been performed. This only tells you what has been executed, not what could potentially execute, i.e. there could still be alerts which haven't run but may run in the future which use index=*
Please explain where the data for this table comes from, e.g. the search used. Also, how do you "solve" a "severity", and how does this mean it is removed from this table? Please explain where "somewhere else" is and how your "confirmation" is performed. Please explain how rollback works (or is expected to work).
Hello guys, I need a Splunk query that lists out all the alerts that have index=* in their query. Unfortunately, I can't use REST services, so kindly suggest how I can do it without using rest.
Hi @Zoe_

You may find the Webtools Add-on helpful here; you can use the custom curl command in the app to request your data and then parse it into a table, then use outputlookup to save it. Here is an example I have used previously. The SPL for this is:

| curl uri=https://raw.githubusercontent.com/livehybrid/TA-aws-trusted-advisor/refs/heads/main/package/lookups/trusted_advisor_checks.csv
| rex field=curl_message max_match=1000 "(?<data>.+)\n?"
| mvexpand data
| fields data
| rex field=data "^(?<id>[^,]+),(?<name>\"[^\"]+\"|[^,]+),(?<category>\"[^\"]+\"|[^,]+),(?<description>\".*\"|[^,]+)$"
| fields - data
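As a cross-check of the quoted-field handling the rex above performs: CSV fields wrapped in quotes may contain commas, and Python's csv module parses that shape natively. The sample rows here are made up, only mimicking the id/name/category/description layout of the lookup file:

```python
import csv
import io

# Made-up rows in the same four-column shape as trusted_advisor_checks.csv;
# quoted fields contain commas that a naive split(",") would break on.
text = ('check1,"Security Groups, Unrestricted",Security,"Flags open ports, broadly"\n'
        'check2,Idle Instances,Cost,Finds idle instances')

rows = list(csv.reader(io.StringIO(text)))
for row in rows:
    print(row)
```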
I'm trying to understand Splunk KV Store to determine what happens when it fails to start or shows a "failure to restore" status. I've found two possible solutions, but I'm not sure whether either command will delete all data in the KV Store.

Solution 1:
- ./splunk stop
- mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo /path/to/copy/kvstore/mongo_old
- ./splunk start

Solution 2:
- ./splunk stop
- ./splunk clean kvstore --local
- ./splunk start
Hi,

I installed the Python SDK in my app and registered an endpoint in restmap.conf. I'd like to receive an answer in JSON format for the lookup file through search, and I'd like to use this response data in another Splunk app.

But the following error message is returned: 'bad character (49) in reply size'

If I print a simple search without using the class SearchHandler(PersistentServerConnectionApplication), the result is good. But if I use the endpoint, the following errors always occur.

Why is this error occurring?

This is my code.

My restmap.conf:

[script:search-number]
match = /search-number
script = search_handler.py
scripttype = persist
handler = search_handler.SearchHandler

My search_handler.py:

# import .env
from config import search_env

env = search_env()
HOST = env['HOST']
PORT = env['PORT']
USERNAME = env['USERNAME']
PASSWORD = env['PASSWORD']

import json
import time
from splunk.persistconn.application import PersistentServerConnectionApplication
import splunklib.client as client
import splunklib.results as results
from splunklib.results import JSONResultsReader

class SearchHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(SearchHandler, self).__init__()

    def handle(self, args):
        try:
            service = client.connect(
                host=HOST,
                port=PORT,
                username=USERNAME,
                password=PASSWORD,
            )
            search_query = '| inputlookup search-numbers.csv'
            jobs = service.jobs
            job = jobs.create(search_query)
            while not job.is_done():
                time.sleep(1)
            reader = JSONResultsReader(job.results(output_mode='json'))
            results_list = [item for item in reader if isinstance(item, dict)]
            print(results_list)
            return {
                'payload': results_list,
                'status': 200
            }
        except Exception as e:
            return {
                'payload': {'error': str(e)},
                'status': 500
            }

Is there example code that searches a CSV file using an endpoint?

https://github.com/splunk/splunk-app-examples/tree/master/custom_endpoints/hello-world
This example does not use search.
I'm a front-end developer who doesn't know Python very well...
I have the following source log files:

[root@lts-reporting ~]# head /nfs/LTS/splunk/lts12_summary.log
2014-07-01T00:00:00 78613376660548
2014-08-01T00:00:00 94340587484234
2014-09-01T00:00:00 105151971182496
2014-10-01T00:00:00 104328846250489
2014-11-01T00:00:00 124100293157039
2014-12-01T00:00:00 150823795700989
2015-01-01T00:00:00 178786111756322
2015-02-01T00:00:00 225445840948631
2015-03-01T00:00:00 248963904047438
2015-04-01T00:00:00 274070504403562
[root@lts-reporting ~]# head /nfs/LTS/splunk/lts22_summary.log
2014-07-01T00:00:00 87011545030617
2014-08-01T00:00:00 112491174858354
2014-09-01T00:00:00 114655842870462
2014-10-01T00:00:00 102729950441541
2014-11-01T00:00:00 124021498471043
2014-12-01T00:00:00 147319995334181
2015-01-01T00:00:00 182983059554298
2015-02-01T00:00:00 234679634668451
2015-03-01T00:00:00 252420788862798
2015-04-01T00:00:00 288156185998535

On the universal forwarder I have the following inputs.conf stanzas:

## LTS summaries
[monitor:///nfs/LTS/splunk/lts12_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

## LTS summaries
[monitor:///nfs/LTS/splunk/lts22_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

I have the following Splunk props stanza:

## LTS Size Summaries
[size_summaries]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG = NONE
EXTRACT-ltsserver = /nfs/LTS/splunk/(?<ltsserver>\w+)_summary.log in source
EXTRACT-size = (?m)^\S+\s+(?<size>\d+)

When indexing the files for the first time the events get parsed with the correct _time (the first field in every line of the log), but when a new event gets logged all the events get assigned the latest modification time of the log file. I have tried deleting the events by sourcetype on the indexer and restarting Splunk to see if anything changes, but I get exactly the same behaviour.
Unfortunately I cannot delete the full fishbucket of the index as I have other source types in the same index which would be lost. Is there a way to force the indexer to parse the first field of the events as _time?
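As a sanity check that the TIME_FORMAT in the props stanza does match the first field of each line, the same pattern can be tried against a sample line in Python (strptime uses the same %-directives as Splunk's TIME_FORMAT):

```python
from datetime import datetime

# A sample line from lts12_summary.log: timestamp, then the size value.
line = "2014-07-01T00:00:00 78613376660548"

# Parse the first whitespace-delimited field with the stanza's pattern,
# TIME_FORMAT = %Y-%m-%dT%H:%M:%S
timestamp_str = line.split()[0]
ts = datetime.strptime(timestamp_str, "%Y-%m-%dT%H:%M:%S")
print(ts)
```

If strptime raises here, the TIME_FORMAT would be wrong; since it parses cleanly, the format itself is not the problem.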
So I have a dashboard, and in a drilldown I am showing the severity on the servers. Now I want that whenever a severity is solved, that severity is removed from the drilldown and stored somewhere else for confirmation. From this table, if I solve any severity, I should be able to remove it from here and store it somewhere else. And if I have removed it by mistake, I can roll back.
Does anybody have experience building an automation to import a CSV from a GitHub location into a Splunk lookup file? The CSV files are constantly changing, and I need to automate daily updates.
Misprint here: in normal cases Splunk replies with something like "[App Key Value Store migration] Starting migrate-kvstore.".
Hello, Splunkers! I've just changed storageEngine to wiredTiger on my single instance.

[root@splunk-1 opt]# /opt/splunk/bin/splunk version
Splunk 8.1.10.1 (build 8bfab9b850ca)
[root@splunk-1 opt]# /opt/splunk/bin/splunk show kvstore-status --verbose

This member:
backupRestoreStatus : Ready
date : Wed Apr 23 09:56:56 2025
dateSec : 1745391416.331
disabled : 0
guid : 3FA11F27-42E0-400A-BF69-D15F6B534708
oplogEndTimestamp : Wed Apr 23 09:56:55 2025
oplogEndTimestampSec : 1745391415
oplogStartTimestamp : Wed Apr 23 09:50:13 2025
oplogStartTimestampSec : 1745391013
port : 8191
replicaSet : 3FA11F27-42E0-400A-BF69-D15F6B534708
replicationStatus : KV store captain
standalone : 1
status : ready
storageEngine : wiredTiger

KV store members:
127.0.0.1:8191
configVersion : 1
electionDate : Wed Apr 23 09:55:23 2025
electionDateSec : 1745391323
hostAndPort : 127.0.0.1:8191
optimeDate : Wed Apr 23 09:56:55 2025
optimeDateSec : 1745391415
replicationStatus : KV store captain
uptime : 95

Now I'm trying to upgrade the mongo version from 3.6 to v4.2. According to mongod.log my current version is:

2025-04-23T06:55:21.374Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4

Now, according to the docs, I'm trying to migrate to another version of mongo manually, but I get the following message:

[root@splunk-1 opt]# /opt/splunk/bin/splunk migrate migrate-kvstore
[App Key Value Store migration] Collection data is not available.

What does Splunk mean by that? "Collection data is not available". I have several collections in my Splunk. I haven't found any such case in the Community. In normal cases Splunk replies with something like "[App Key Value Store migration] Starting migrate-kvstore.". It seems that I am doing something wrong in general. Thanks
Thanks all for the help, I will try a regex that matches both. I learned a lot with you guys, thanks!