All Posts

I'm trying to understand the Splunk KV Store and what happens when it fails to start or shows a "failure to restore" status. I've found two possible solutions, but I'm not sure whether either of them will delete all the data in the KV Store.

Solution 1:
- ./splunk stop
- mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo /path/to/copy/kvstore/mongo_old
- ./splunk start

Solution 2:
- ./splunk stop
- ./splunk clean kvstore --local
- ./splunk start
Hi,

I installed the Python SDK in my app and registered an endpoint in restmap.conf. I'd like to receive a response in JSON format for a lookup file through a search, and I'd like to use this response data in another Splunk app.

But the following error message is returned: 'bad character (49) in reply size'

If I run a simple search without using the SearchHandler(PersistentServerConnectionApplication) class, the result is fine. But if I use the endpoint, this error always occurs. Why is this error occurring?

This is my code.

My restmap.conf:

[script:search-number]
match = /search-number
script = search_handler.py
scripttype = persist
handler = search_handler.SearchHandler

My search_handler.py:

# import .env
from config import search_env

env = search_env()
HOST = env['HOST']
PORT = env['PORT']
USERNAME = env['USERNAME']
PASSWORD = env['PASSWORD']

import json
import time

from splunk.persistconn.application import PersistentServerConnectionApplication
import splunklib.client as client
import splunklib.results as results
from splunklib.results import JSONResultsReader


class SearchHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(SearchHandler, self).__init__()

    def handle(self, args):
        try:
            service = client.connect(
                host=HOST,
                port=PORT,
                username=USERNAME,
                password=PASSWORD,
            )
            search_query = '| inputlookup search-numbers.csv'
            jobs = service.jobs
            job = jobs.create(search_query)
            while not job.is_done():
                time.sleep(1)
            reader = JSONResultsReader(job.results(output_mode='json'))
            results_list = [item for item in reader if isinstance(item, dict)]
            print(results_list)
            return {
                'payload': results_list,
                'status': 200
            }
        except Exception as e:
            return {
                'payload': {'error': str(e)},
                'status': 500
            }

Is there example code for searching a CSV file using an endpoint? https://github.com/splunk/splunk-app-examples/tree/master/custom_endpoints/hello-world - this example does not use search.

I'm a front-end developer who doesn't know Python very well...
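A hedged observation on the code above, not a confirmed diagnosis: two things that commonly trip up persistent REST handlers are (a) writing to stdout (the print() call), since the persistent-handler protocol exchanges framed messages over that channel, and (b) returning a non-string payload. Below is a minimal sketch of a handle() method that returns the lookup rows as a JSON string; the connection settings and the config module are taken from the original post and assumed to exist.

import json
import time

from splunk.persistconn.application import PersistentServerConnectionApplication
import splunklib.client as client
from splunklib.results import JSONResultsReader

# Connection settings loaded the same way as in the original post.
from config import search_env

env = search_env()


class SearchHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(SearchHandler, self).__init__()

    def handle(self, args):
        try:
            service = client.connect(
                host=env['HOST'], port=env['PORT'],
                username=env['USERNAME'], password=env['PASSWORD'],
            )
            job = service.jobs.create('| inputlookup search-numbers.csv')
            while not job.is_done():
                time.sleep(1)
            reader = JSONResultsReader(job.results(output_mode='json'))
            rows = [item for item in reader if isinstance(item, dict)]
            # No print() here: stdout carries the framed handler reply,
            # so stray output can corrupt it.
            return {'payload': json.dumps(rows), 'status': 200}
        except Exception as e:
            return {'payload': json.dumps({'error': str(e)}), 'status': 500}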
I have the following source log files:

[root@lts-reporting ~]# head /nfs/LTS/splunk/lts12_summary.log
2014-07-01T00:00:00 78613376660548
2014-08-01T00:00:00 94340587484234
2014-09-01T00:00:00 105151971182496
2014-10-01T00:00:00 104328846250489
2014-11-01T00:00:00 124100293157039
2014-12-01T00:00:00 150823795700989
2015-01-01T00:00:00 178786111756322
2015-02-01T00:00:00 225445840948631
2015-03-01T00:00:00 248963904047438
2015-04-01T00:00:00 274070504403562
[root@lts-reporting ~]# head /nfs/LTS/splunk/lts22_summary.log
2014-07-01T00:00:00 87011545030617
2014-08-01T00:00:00 112491174858354
2014-09-01T00:00:00 114655842870462
2014-10-01T00:00:00 102729950441541
2014-11-01T00:00:00 124021498471043
2014-12-01T00:00:00 147319995334181
2015-01-01T00:00:00 182983059554298
2015-02-01T00:00:00 234679634668451
2015-03-01T00:00:00 252420788862798
2015-04-01T00:00:00 288156185998535

On the universal forwarder I have the following inputs.conf stanzas:

## LTS summaries
[monitor:///nfs/LTS/splunk/lts12_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

## LTS summaries
[monitor:///nfs/LTS/splunk/lts22_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

I have the following Splunk props stanza:

## LTS Size Summaries
[size_summaries]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG = NONE
EXTRACT-ltsserver = /nfs/LTS/splunk/(?<ltsserver>\w+)_summary.log in source
EXTRACT-size = (?m)^\S+\s+(?<size>\d+)

When indexing the files for the first time, the events get parsed with the correct _time (the first field in every line of the log), but when a new event gets logged, all the events get assigned the latest modification time of the log file. I have tried deleting the events by sourcetype on the indexer and restarted Splunk to see if anything changes, but I get exactly the same behaviour. Unfortunately I cannot delete the full fishbucket of the index, as I have other source types in the same index which would be lost. Is there a way to force the indexer to parse the first field of the events as _time?
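A hedged note on the stanza above: DATETIME_CONFIG = NONE disables the timestamp extraction processor, so TIME_PREFIX / TIME_FORMAT would be ignored and events typically fall back to the time supplied by the input (such as the file's modification time), which matches the behaviour described. A minimal sketch of the parse-time portion of the stanza with that setting removed, assuming it lives on the parsing tier (indexer or heavy forwarder) rather than the universal forwarder:

## LTS Size Summaries - sketch, assuming timestamp extraction from the first field is wanted
[size_summaries]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%dT%H:%M:%S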
So I have a dashboard, and in the drilldown I am showing the severity for each server. Whenever a severity is resolved, I want it removed from the drilldown and stored somewhere else for confirmation. From this table, if I resolve any severity, I should be able to remove it from here and store it somewhere else, and if I have removed it by mistake, I should be able to roll it back.
Does anybody have experience building an automation to import a CSV from a GitHub location into a Splunk lookup file? The CSV files are constantly changing, and I need to automate daily updates.
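One possible approach, sketched minimally and under assumptions: fetch the raw CSV from GitHub and replace the lookup file in the app's lookups directory on the search head, scheduled via cron or Task Scheduler. The URL, app name, and file name below are placeholders.

# fetch_lookup.py - sketch: pull a CSV from GitHub and refresh a Splunk lookup file.
# Assumptions: the raw-file URL, app name, and lookup name are placeholders, and the
# script runs on the search head with write access to the app's lookups directory.
import os
import shutil
import tempfile
import urllib.request

RAW_URL = "https://raw.githubusercontent.com/<org>/<repo>/main/servers.csv"  # placeholder
LOOKUP_PATH = os.path.join(
    os.environ.get("SPLUNK_HOME", "/opt/splunk"),
    "etc", "apps", "my_app", "lookups", "servers.csv"  # placeholder app/lookup
)

def refresh_lookup():
    # Download to a temp file first so a failed fetch never truncates the live lookup.
    with urllib.request.urlopen(RAW_URL, timeout=30) as resp:
        data = resp.read()
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(data)
        tmp_path = tmp.name
    shutil.move(tmp_path, LOOKUP_PATH)

if __name__ == "__main__":
    refresh_lookup()

If the environment favours the Splunk REST API or a lookup-editing app instead of direct file writes, the fetch step stays the same and only the write step changes.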
Misprint here: in normal cases Splunk replies with something like "[App Key Value Store migration] Starting migrate-kvstore."
Hello, Splunkers! I've just changed the storageEngine to wiredTiger on my single instance.

[root@splunk-1 opt]# /opt/splunk/bin/splunk version
Splunk 8.1.10.1 (build 8bfab9b850ca)
[root@splunk-1 opt]# /opt/splunk/bin/splunk show kvstore-status --verbose

 This member:
    backupRestoreStatus : Ready
    date : Wed Apr 23 09:56:56 2025
    dateSec : 1745391416.331
    disabled : 0
    guid : 3FA11F27-42E0-400A-BF69-D15F6B534708
    oplogEndTimestamp : Wed Apr 23 09:56:55 2025
    oplogEndTimestampSec : 1745391415
    oplogStartTimestamp : Wed Apr 23 09:50:13 2025
    oplogStartTimestampSec : 1745391013
    port : 8191
    replicaSet : 3FA11F27-42E0-400A-BF69-D15F6B534708
    replicationStatus : KV store captain
    standalone : 1
    status : ready
    storageEngine : wiredTiger

 KV store members:
    127.0.0.1:8191
        configVersion : 1
        electionDate : Wed Apr 23 09:55:23 2025
        electionDateSec : 1745391323
        hostAndPort : 127.0.0.1:8191
        optimeDate : Wed Apr 23 09:56:55 2025
        optimeDateSec : 1745391415
        replicationStatus : KV store captain
        uptime : 95

Now I'm trying to upgrade the mongo version from 3.6 to v4.2. According to mongod.log, my current version is:

2025-04-23T06:55:21.374Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4

Following the docs, I'm trying to migrate to another version of mongo manually, but I get the following message:

[root@splunk-1 opt]# /opt/splunk/bin/splunk migrate migrate-kvstore
[App Key Value Store migration] Collection data is not available.

What does Splunk mean by that? "Collection data is not available". I have several collections in my Splunk. I haven't found any such case in the Community. In normal cases Splunk replies in something like "Collection data is not available". It seems that I am doing something wrong in general.
Thanks
Thanks all for the help, I will try a regex that matches both. I learn a lot with you guys, thanks!!!!
It's a KV store collection and can be found at $SPLUNK_HOME/etc/apps/TA-Akamai_SIEM/default/collections.conf
@DaltonCarmon  When you change the Splunk password, either via the GUI or via the CLI, the $SPLUNK_HOME\etc\passwd file is updated and thereafter user-seed.conf is ignored. However, if $SPLUNK_HOME\etc\passwd is ever deleted, user-seed.conf will again specify the default admin login password. Place user-seed.conf in C:\Program Files\Splunk\etc\system\local (not default). Files in local override default and are meant for custom configurations.   https://docs.splunk.com/Documentation/Splunk/latest/Admin/User-seedconf    To configure the default username and password, place the user-seed.conf file in $SPLUNK_HOME\etc\system\local. You must restart Splunk for these settings to take effect.   Note: If the $SPLUNK_HOME\etc\passwd file exists, the configurations in user-seed.conf will be ignored.
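For reference, a minimal user-seed.conf sketch (the password value is a placeholder; Splunk only reads this file while $SPLUNK_HOME\etc\passwd does not exist, and a restart is required):

[user_info]
USERNAME = admin
PASSWORD = <your-new-password>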
@amitrinx  Pls check this, I used the makeresults command for dummy data.

| makeresults
| eval raw_json="[
  {\"user\":\"user1@example.com\",\"status\":\"sent\",\"ip_address\":\"192.168.1.10\",\"reply\":\"Message accepted\",\"event_id\":\"EVT001\",\"message_id\":\"MSG001\",\"template_id\":\"TPL001\",\"template_name\":\"welcome\",\"smtp_code\":\"250\",\"time\":\"2025-04-23T10:00:00Z\",\"encryption\":true,\"service\":\"email_service\"},
  {\"user\":\"user2@example.com\",\"status\":\"queued\",\"ip_address\":\"192.168.1.20\",\"reply\":\"Queued for delivery\",\"event_id\":\"EVT002\",\"message_id\":\"MSG002\",\"template_id\":\"TPL002\",\"template_name\":\"reset_password\",\"smtp_code\":\"451\",\"time\":\"2025-04-23T10:05:00Z\",\"encryption\":false,\"service\":\"notification_service\"},
  {\"user\":\"user3@example.com\",\"status\":\"failed\",\"ip_address\":\"192.168.1.30\",\"reply\":\"Mailbox not found\",\"event_id\":\"EVT003\",\"message_id\":\"MSG003\",\"template_id\":\"TPL003\",\"template_name\":\"alert\",\"smtp_code\":\"550\",\"time\":\"2025-04-23T10:10:00Z\",\"encryption\":true,\"service\":\"security_service\"},
  {\"user\":\"user4@example.com\",\"status\":\"opened\",\"ip_address\":\"192.168.1.40\",\"reply\":\"Email opened\",\"event_id\":\"EVT004\",\"message_id\":\"MSG004\",\"template_id\":\"TPL004\",\"template_name\":\"newsletter\",\"smtp_code\":\"200\",\"time\":\"2025-04-23T10:15:00Z\",\"encryption\":true,\"service\":\"marketing_service\"}
]"
| spath input=raw_json path={} output=event
| mvexpand event
| spath input=event
| table user status reply service
I am currently working with data from the SendGrid Event API that is being ingested into Splunk. The data includes multiple email events (e.g., delivered, processed) wrapped into a single event, and this wrapping seems to happen randomly.

Here is a sample of the data structure:

[
  {
    "email": "example@example.com",
    "event": "delivered",
    "ip": "XXX.XXX.XXX.XX",
    "response": "250 mail saved",
    "sg_event_id": "XXXX",
    "sg_message_id": "XXXX",
    "sg_template_id": "XXXX",
    "sg_template_name": "en",
    "smtp-id": "XXXX",
    "timestamp": "XXXX",
    "tls": 1,
    "twilio:verify": "XXXX"
  },
  {
    "email": "example@example.com",
    "event": "processed",
    "send_at": 0,
    "sg_event_id": "XXXX",
    "sg_message_id": "XXXX",
    "sg_template_id": "XXXX",
    "sg_template_name": "en",
    "smtp-id": "XXXX",
    "timestamp": "XXXX",
    "twilio:verify": "XXXX"
  }
]

I am looking for a query that can help me extract the email, event, and response (reason) fields from this data, even when multiple events are wrapped into a single event entry.

Could anyone please provide guidance on the appropriate Splunk query to achieve this?
Hello, we have a few hundred hosts and a handful of customers. I have a CSV file with serverName,customerID. I've been able to add the customerID to incoming events using props.conf/transforms.conf on the HF, but I have had no luck with metric data. Background: I'd like to use the customerID later for search restriction in roles. Any suggestions on where to start troubleshooting? Kind regards, Andre
@PickleRick thanks for your response. Yes, it's configured properly, but tcpdump showed nothing coming to port 514. It seems the problem might be on the UCS side. As someone on the Cisco community suggested, I tried running "ethanalyzer local interface mgmt capture-filter "port 514" limit-captured-frames 0 detail" on the UCS side, but it looks like the UCS itself is not generating any traffic to send out on port 514, and hence there is no data on the rsyslog side.
I was sending an alert using the Teams app on Splunkbase, which posts a card message to Teams. I want to send a plaintext message using a webhook, because the customer wants to receive a plaintext message rather than a card message. Can I use the $result.field$ token for the message content in the payload? I need to use the fields from the search results table.

Goals:
1. Post a plaintext message to MS Teams as a notification feature
2. Use the fields in the table of the notification search results as tokens
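For illustration only, a minimal Python sketch of posting a plaintext message to an incoming-webhook URL, assuming the classic Office 365 connector format that accepts a simple {"text": ...} body (newer Workflows-based webhooks may expect an Adaptive Card instead). The URL and message are placeholders, and whether $result.field$ tokens are substituted into a payload depends on the alert action used, so this is sketched as a standalone script or custom alert action rather than the built-in webhook action:

# post_teams_text.py - sketch: send a plaintext notification to a Teams incoming webhook.
# Assumptions: WEBHOOK_URL is a placeholder, and the webhook accepts the classic
# {"text": "..."} payload used by Office 365 connectors.
import json
import urllib.request

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."  # placeholder

def post_plaintext(message):
    body = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

if __name__ == "__main__":
    # In a custom alert action, the message would be built from result fields.
    post_plaintext("Alert fired: host=web01 severity=high")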
Hi @davidco  It'd be worth validating the Splunk receiving end and the logs available. Please could you check for HEC errors using: index=_internal reply!=0 HttpInputDataHandler For more info on reply codes see https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/TroubleshootHTTPEventCollector Any error reply codes here may provide more insights.   Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @ganesanvc  Looking at the square braces there, it looks like you're running the sub-search part in the SPL search box; try removing the [ and ] so that we can see if that works independently.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @addOnGuy  I think the target would be <yourApp>/bin/ta_ignio_integration_add_on/ If you look in that folder - is the previous version of splunk-sdk in there? pip install --upgrade splunk-sdk --target <yourAppLocation>/bin/ta_ignio_integration_add_on/    Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @JoaoGuiNovaes  I think every 30 days is way too infrequent for this - you would want the service accounts added fairly soon after they're first seen so the info can be used in other searches. Personally I would run it more frequently, e.g. hourly, or every 4 hours. I usually look back (earliest) equivalent to the time since the previous run, minus an extra 10 mins to account for lag, so something like earliest=-70m latest=-10m (a 60 minute period, running every hour).  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @danielbb  It could be something like a field extraction happening after the line breaking which is causing this, or something else. Without access to your instance, we could do with seeing some sample logs along with a btool output ($SPLUNK_HOME/bin/splunk btool props list <sourceTypeName>) for your event's sourcetype.  The thread you posted from 2013 looks like it could have been related to the events having a line-break in them. Please let us know if you're able to provide a sample + props output.  Thanks