All Topics

Hello guys, I need a Splunk query that lists all the alerts that have index=* in their query. Unfortunately, I can't use REST services, so kindly suggest how I can do it without using REST.
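For context, this is the direction I was considering: searching the audit index for scheduled searches whose executed search string contains index=* (a sketch only; it relies on index=_audit being readable and it only catches alerts that have actually run):

index=_audit action=search info=granted savedsearch_name=*
| where like(search, "%index=*%")
| stats latest(_time) AS last_run BY savedsearch_name, user
| convert ctime(last_run)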
I’m trying to understand the Splunk KV Store to determine what happens when it fails to start or shows a "failure to restore" status. I’ve found two possible solutions, but I'm not sure whether either command will delete all data in the KV Store.

Solution 1:
- ./splunk stop
- mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo /path/to/copy/kvstore/mongo_old
- ./splunk start

Solution 2:
- ./splunk stop
- ./splunk clean kvstore --local
- ./splunk start
Hi,

I installed the Python SDK in my app and registered an endpoint in restmap.conf. I'd like to receive a JSON response for a lookup file through a search, and use this response data in another Splunk app.

But the following error message is returned: 'bad character (49) in reply size'

If I run a simple search without using the SearchHandler(PersistentServerConnectionApplication) class, the result is good. But if I use the endpoint, the error above always occurs. Why is this error occurring?

This is my code.

My restmap.conf:

[script:search-number]
match = /search-number
script = search_handler.py
scripttype = persist
handler = search_handler.SearchHandler

My search_handler.py:

# import .env
from config import search_env

env = search_env()
HOST = env['HOST']
PORT = env['PORT']
USERNAME = env['USERNAME']
PASSWORD = env['PASSWORD']

import json
import time

from splunk.persistconn.application import PersistentServerConnectionApplication
import splunklib.client as client
import splunklib.results as results
from splunklib.results import JSONResultsReader


class SearchHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(SearchHandler, self).__init__()

    def handle(self, args):
        try:
            service = client.connect(
                host=HOST,
                port=PORT,
                username=USERNAME,
                password=PASSWORD,
            )
            search_query = '| inputlookup search-numbers.csv'
            jobs = service.jobs
            job = jobs.create(search_query)
            while not job.is_done():
                time.sleep(1)
            reader = JSONResultsReader(job.results(output_mode='json'))
            results_list = [item for item in reader if isinstance(item, dict)]
            print(results_list)
            return {
                'payload': results_list,
                'status': 200
            }
        except Exception as e:
            return {
                'payload': {'error': str(e)},
                'status': 500
            }

Is there example code for searching CSV files using an endpoint?

https://github.com/splunk/splunk-app-examples/tree/master/custom_endpoints/hello-world
This example doesn't use search.

I'm a front-end developer who doesn't know Python very well...
I have the following source log files:

[root@lts-reporting ~]# head /nfs/LTS/splunk/lts12_summary.log
2014-07-01T00:00:00 78613376660548
2014-08-01T00:00:00 94340587484234
2014-09-01T00:00:00 105151971182496
2014-10-01T00:00:00 104328846250489
2014-11-01T00:00:00 124100293157039
2014-12-01T00:00:00 150823795700989
2015-01-01T00:00:00 178786111756322
2015-02-01T00:00:00 225445840948631
2015-03-01T00:00:00 248963904047438
2015-04-01T00:00:00 274070504403562

[root@lts-reporting ~]# head /nfs/LTS/splunk/lts22_summary.log
2014-07-01T00:00:00 87011545030617
2014-08-01T00:00:00 112491174858354
2014-09-01T00:00:00 114655842870462
2014-10-01T00:00:00 102729950441541
2014-11-01T00:00:00 124021498471043
2014-12-01T00:00:00 147319995334181
2015-01-01T00:00:00 182983059554298
2015-02-01T00:00:00 234679634668451
2015-03-01T00:00:00 252420788862798
2015-04-01T00:00:00 288156185998535

On the universal forwarder I have the following inputs.conf stanzas:

## LTS summaries
[monitor:///nfs/LTS/splunk/lts12_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

## LTS summaries
[monitor:///nfs/LTS/splunk/lts22_summary.log]
_TCP_ROUTING = druid
index = lts
sourcetype = size_summaries

I have the following props.conf stanza:

## LTS Size Summaries
[size_summaries]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
DATETIME_CONFIG = NONE
EXTRACT-ltsserver = /nfs/LTS/splunk/(?<ltsserver>\w+)_summary.log in source
EXTRACT-size = (?m)^\S+\s+(?<size>\d+)

When indexing the files for the first time, the events get parsed with the correct _time (the first field in every line of the log), but when a new event gets logged, all the events get assigned the latest modification time of the log file. I have tried deleting the events by sourcetype on the indexer and restarting Splunk to see if anything changes, but I get exactly the same behaviour. Unfortunately, I cannot delete the full fishbucket of the index, as I have other source types in the same index which would be lost. Is there a way to force the indexer to parse the first field of the events as _time?
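For reference, a stripped-down variant of the stanza that I plan to test, keeping only the timestamp settings and dropping DATETIME_CONFIG = NONE in case that setting is overriding the TIME_FORMAT extraction (this is an assumption on my part, not confirmed behaviour):

[size_summaries]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%dT%H:%M:%S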
So I have a dashboard, and in the drilldown I am showing the severity on the servers. Whenever a severity is resolved, I want it removed from the drilldown and stored somewhere else for confirmation. From this table, if I resolve any severity, I should be able to remove it from here and store it somewhere else, and if I have removed it by mistake, I should be able to roll it back.
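Roughly what I have in mind, as a sketch only (open_severities.csv and resolved_severities.csv are lookup names I made up, and the token and field names are placeholders):

| inputlookup open_severities.csv
| search host="$host$" severity="$severity$"
| eval resolved_time=now()
| outputlookup append=true resolved_severities.csv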
Does anybody have experience building an automation to import a CSV from a GitHub location into a Splunk lookup file? The CSV files are constantly changing, and I need to automate daily updates.
Hello, Splunkers! I've just changed the storageEngine to wiredTiger on my single instance.

[root@splunk-1 opt]# /opt/splunk/bin/splunk version
Splunk 8.1.10.1 (build 8bfab9b850ca)

[root@splunk-1 opt]# /opt/splunk/bin/splunk show kvstore-status --verbose
This member:
backupRestoreStatus : Ready
date : Wed Apr 23 09:56:56 2025
dateSec : 1745391416.331
disabled : 0
guid : 3FA11F27-42E0-400A-BF69-D15F6B534708
oplogEndTimestamp : Wed Apr 23 09:56:55 2025
oplogEndTimestampSec : 1745391415
oplogStartTimestamp : Wed Apr 23 09:50:13 2025
oplogStartTimestampSec : 1745391013
port : 8191
replicaSet : 3FA11F27-42E0-400A-BF69-D15F6B534708
replicationStatus : KV store captain
standalone : 1
status : ready
storageEngine : wiredTiger

KV store members:
127.0.0.1:8191
configVersion : 1
electionDate : Wed Apr 23 09:55:23 2025
electionDateSec : 1745391323
hostAndPort : 127.0.0.1:8191
optimeDate : Wed Apr 23 09:56:55 2025
optimeDateSec : 1745391415
replicationStatus : KV store captain
uptime : 95

Now I'm trying to upgrade the MongoDB version from 3.6 to v4.2. According to mongod.log, my current version is:

2025-04-23T06:55:21.374Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4

Following the docs, I'm trying to migrate to the other MongoDB version manually, but I get the following message:

[root@splunk-1 opt]# /opt/splunk/bin/splunk migrate migrate-kvstore
[App Key Value Store migration] Collection data is not available.

What does Splunk mean by "Collection data is not available"? I have several collections in my Splunk, and I haven't found any similar case in the Community, nor an explanation of when Splunk normally replies with something like "Collection data is not available". It seems that I am doing something wrong in general.

Thanks
I am currently working with data from the SendGrid Event API that is being ingested into Splunk. The data includes multiple email events (e.g., delivered, processed) wrapped into a single event, and this wrapping seems to happen randomly.

Here is a sample of the data structure:

[
  {
    "email": "example@example.com",
    "event": "delivered",
    "ip": "XXX.XXX.XXX.XX",
    "response": "250 mail saved",
    "sg_event_id": "XXXX",
    "sg_message_id": "XXXX",
    "sg_template_id": "XXXX",
    "sg_template_name": "en",
    "smtp-id": "XXXX",
    "timestamp": "XXXX",
    "tls": 1,
    "twilio:verify": "XXXX"
  },
  {
    "email": "example@example.com",
    "event": "processed",
    "send_at": 0,
    "sg_event_id": "XXXX",
    "sg_message_id": "XXXX",
    "sg_template_id": "XXXX",
    "sg_template_name": "en",
    "smtp-id": "XXXX",
    "timestamp": "XXXX",
    "twilio:verify": "XXXX"
  }
]

I am looking for a query that can help me extract the email, event, and response (reason) fields from this data, even when multiple events are wrapped into a single event entry.

Could anyone please provide guidance on the appropriate Splunk query to achieve this?
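For illustration, roughly the kind of query I am after, assuming the whole JSON array arrives as a single event (the index and sourcetype are placeholders):

index=sendgrid sourcetype=sendgrid:events
| spath path={} output=events
| mvexpand events
| spath input=events
| table email, event, response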
Hello, We have a few hundred hosts and a handful of customers. I have a CSV file with serverName,customerID. I've been able to add the customerID to incoming events using props.conf/transforms.conf on the HF, but I have had no luck with metric data. Background: I'd like to use the customerID later for search restriction in roles. Any suggestions on where to start troubleshooting? Kind Regards, Andre
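For what it's worth, this is roughly how I am checking whether the dimension shows up on the metric data points (the index name is a placeholder):

| mstats latest(_value) AS latest_value WHERE index=customer_metrics metric_name="*" BY metric_name, host, customerID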
I was sending an alert using the Teams app on Splunkbase, which posts a card message to Teams. I want to send a plain-text message using a webhook, because the customer wants to receive a plain-text message rather than a card message. Can I use the $result.field$ token for the message content in the payload? I need to use the fields in the search results table. Goals: 1. Post a plain-text message to MS Teams as a notification. 2. Use the fields in the table of the notification search results as tokens.
Hi all,

We want to test whether a cluster bundle on the cluster manager requires a restart of the cluster peers, using the REST API.

In the first step we run a POST against:
https://CLM:8089/services/cluster/manager/control/default/validate_bundle?output_mode=json
with check-restart=true in the body, and check json.entry[0].content.checksum to get the checksum of the new bundle. If there is no checksum, there is no new bundle.

Second, we check the checksum against GET:
https://CLM:8089/services/cluster/manager/info?output_mode=json
json.entry[0].content.last_validated_bundle.checksum
json.entry[0].content.last_dry_run_bundle.checksum
to verify that the bundle check and the restart test are completed, and consider json.entry[0].content.last_check_restart_bundle_result to decide whether the restart is necessary or not.

Unfortunately we see that the value of json.entry[0].content.last_check_restart_bundle_result changes even when last_validated_bundle.checksum and last_dry_run_bundle.checksum are set to the correct values.

To make a long story short: we see that the red value is changing while the green one is not, which is unexpected for us. Tested against v9.2.5 and v9.4.1. At the moment it looks like a timing issue to me, and I want to avoid sleep() code.

Is there a more solid way to check whether a restart is necessary or not?

Best regards,

Andreas
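For comparison, the same values should also be visible via the rest search command on the manager itself (a sketch only; whether the nested content keys flatten to exactly these field names is an assumption on our side):

| rest /services/cluster/manager/info splunk_server=local
| table last_validated_bundle.checksum, last_dry_run_bundle.checksum, last_check_restart_bundle_result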
We want to use splunk-library-javalogging to send logs via Log4j to the Splunk service.

Environment: Spark with Log4j 2 in Azure Databricks ----> Splunk Enterprise

The config file log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO" packages="com.splunk.logging,com.databricks.logging.log4j" shutdownHook="disable">
  <Appenders>
    ...
    <SplunkHttp name="http-input"
                url="https://url-service"
                token="xxxx-xxxx-xxxx-xxx-xxx--xxxx"
                host=""
                index="my-index"
                source="spark-work"
                sourcetype="httpevent"
                messageFormat="text"
                middleware="HttpEventCollectorUnitTestMiddleware"
                connect_timeout="5000"
                termination_timeout="1000"
                disableCertificateValidation="true">
      <PatternLayout pattern="%m%n"/>
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      ...
    </Root>
    ...
    <Logger name="splunk.log4j" level="DEBUG">
      <AppenderRef ref="http-input"/>
    </Logger>
  </Loggers>
</Configuration>

We use the library splunk-library-javalogging-1.11.8.jar with okhttp-4.11.0.jar, okio-3.5.0.jar, and okio-jvm-3.5.0.jar.

We based the configuration on this example: https://github.com/splunk/splunk-library-javalogging/blob/main/src/test/resources/log4j2.xml

Currently it doesn't work. We checked HEC via curl: we sent a message from Databricks to Splunk HEC and received it without problems. Does anyone have experience with this, or can anyone help us with some guidance or advice? Thanks
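For reference, these are the searches we have been using on the Splunk side to see whether HEC is receiving or rejecting the requests (sketches; HttpInputDataHandler is the component name we believe splunkd uses for HEC errors, and my-index matches the index in the appender config):

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

index=my-index sourcetype=httpevent earliest=-1h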
Not sure this is even possible, but I'll ask anyway... I have application(s) that are sending JSON data into Splunk, for example:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3",
  "key4": "value1"
}

As you can see, the value for "key4" is the same as "key1". So, in my example, I don't want to ingest the complete JSON payload, but only:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}

Can this be done?
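One direction that might work is a SEDCMD in props.conf on the indexer or heavy forwarder, which rewrites _raw before indexing. A sketch only: the sourcetype name is a placeholder, and the regex assumes "key4" is never the first key in the object and that its value contains no escaped quotes.

[my_json_sourcetype]
SEDCMD-drop_key4 = s/,\s*"key4"\s*:\s*"[^"]*"//g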
I am trying to learn Splunk Enterprise. I created the account and logged in with no problem. I downloaded the demo data and did some stuff with that. This was over a few days. Finally, today I was not able to log in; it said my login info was incorrect. I figured I had messed up the password somehow, and reset it by going to the command line and using the command del /f /q "C:\Program Files\Splunk\etc\passwd". Now on the Splunk Enterprise page there is a note saying 'No users exist. Please set up a user'. How? And have I lost the demo data?
Hi, I have a small air-gapped lab with about 2 Linux servers (not including the Splunk server) and 25 Windows machines. I have deployed Splunk and am ingesting logs from all Linux and Windows clients, as well as from a network switch, a VMware server, and its hosts. I am able to send logs from the network switch and the VMware hosts directly into Splunk using "Data Inputs -> TCP" and by picking different ports for each service, but for the Cisco UCS chassis the only syslog settings I can configure are the syslog server name and the log level. So I set up an rsyslog server on the same machine as Splunk Enterprise. It seems to be running, but I don't see logs from the Cisco UCS. I have checked firewall rules as well, and all seems to be configured properly. Any tips about running rsyslog and the Splunk server on the same machine, and about sending Cisco UCS logs to rsyslog/Splunk, would be appreciated. Unfortunately, I can't provide much info as this is an air-gapped lab.
Hello, I set up 2 reports to run early this AM. It looks like both reports ran, according to Splunk. The problem I have now is finding the actual .csv files on the Splunk server so I can scp them. Thanks.
Hi team, I have a question related to Splunk SOAR. I'm working on a new community app that will include an on-poll action. This action will ingest a large number of events into SOAR. I came across a document that mentions a few limits, including that 61k events were tested. I just wanted to check if anyone knows what configuration was used for that test? (For example, what environment or specs were in place when they tested the 61k ingestion?)
Hi, I'm unsure what the root cause is, as I was trying to make a minor adjustment in transforms.conf to ignore the [ ]. Previously I was able to view fields like Id and Name and their values, but currently nothing shows. I tried redoing props.conf, transforms.conf, and inputs.conf by adding parameters one by one, and it still didn't work.
Hi Team, The proxy connectivity test for WHOIS RDAP is failing in the Splunk SOAR UI.

Testing Connectivity

App 'WHOIS RDAP' started successfully (id: 1745296324951) on asset: 'asset_whoisrdp' (id: 22)
Loaded action execution configuration
Querying...
Whois query failed. Error: HTTP lookup failed for https://rdap.arin.net/registry/ip/8.8.8.8.
No action executions found.

I have configured the proxy settings at the global environment level. How can I fix this issue?
Use iplocation or geostats to show, within a range of 100 kilometers (roughly 0.89 degrees of longitude and 0.91 degrees of latitude), which regions users have logged in from, along with the login time, IP address, and login method.
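A sketch of the kind of search this describes (the index, sourcetype, field names, and centre coordinates are placeholders; the 0.89/0.91 degree deltas come from the task statement):

index=login_events sourcetype=auth
| iplocation src_ip
| where abs(lat - 31.23) <= 0.91 AND abs(lon - 121.47) <= 0.89
| table _time, src_ip, City, Region, Country, login_method

Or, for a map-style aggregation:

index=login_events sourcetype=auth
| iplocation src_ip
| geostats latfield=lat longfield=lon count by login_method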