All Topics


Hi, I have created an email alert with a cron schedule of every 4 hours, but I can see that even when there are search results, the email is sometimes not triggered. I also made sure to use simpler Splunk commands so the search runs a bit faster. Can someone please suggest what could cause an email to be skipped like this?
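One way to confirm whether the scheduler itself is skipping the search (a minimal sketch; substitute your alert's saved-search name):

index=_internal sourcetype=scheduler savedsearch_name="your_alert_name" status=skipped
| stats count by reason

The reason field in the scheduler log usually says why a run was skipped (for example, concurrency limits).
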
When I click on some correlation rules in Content Management in Splunk ES, I get the following error and the rule does not open:

Cannot read properties of undefined (reading 'entry')

Can you please tell me what might be causing this issue and how I can solve it?

We have some staging servers in the cloud that are turned off after business hours. Is there any method with the Deployment Server to ignore these nodes and not report them as missing?

Hello community, I'd like to ask for support to get past a conditional formatting problem. I have 3 different products in a group: Product A, B, and C, and I need to apply a different formula (compensation factor) to each of them, e.g.:

PRODUCT A = group/3.33*100
PRODUCT B = group/3.061*100
PRODUCT C = group/3.0*100

I could only do this when I created a search for a single PRODUCT. But how do I include all the PRODUCTS with their different formulas (compensation factors)?

| where group="PRODUCT_A"
| eval ProductGroup=group/3.33*100

Thanks
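A minimal sketch using eval with case(), mirroring the formulas above; it assumes group identifies the product, and value is a hypothetical numeric field the factor is applied to (swap in your real field names):

| eval ProductGroup=case(group=="PRODUCT_A", value/3.33*100,
                         group=="PRODUCT_B", value/3.061*100,
                         group=="PRODUCT_C", value/3.0*100)

case() evaluates its conditions in order and applies only the first matching branch, so all three products can be handled in one search without separate where clauses.
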
Hi, We are trying to integrate Splunk Cloud with our Atlassian Jira Cloud instance. We have configured the app 'Jira Service Desk Simple Add-On' (https://splunkbase.splunk.com/app/4958/), and under 'Trigger Actions' I am able to see this action and can create/open a ticket in Jira via this option. But I want to create a ticket in Jira manually via a Splunk query using the 'sendalert' command. When I try to do so, I get the error 'Error in 'sendalert' command: Alert script returned error code 3.' Maybe the fields that I'm providing are not correct. Could someone help me fix this issue?

|sendalert jira_service_desk jira_account="JiraCloud" projectKey=“SOR” summary=“My Header” issueTypeName=“Task” priority=“Medium” labels="Security"

It would be great if someone could tell me which fields I should pass in this query in order to create a ticket in Jira Cloud.
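One thing worth ruling out first: the command as pasted wraps several values in curly (typographic) quotes, which the SPL parser does not treat as string delimiters. A sketch of the same invocation with straight ASCII quotes throughout (parameter names are taken from the attempt above and not verified against the add-on's documentation):

|sendalert jira_service_desk jira_account="JiraCloud" projectKey="SOR" summary="My Header" issueTypeName="Task" priority="Medium" labels="Security"
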
Hello, we have a problem with persistent queues in our infrastructure. We have TCP inputs sending SSL traffic to a heavy forwarder which acts as an intermediate forwarder. We do not parse on the HF! All we do is put the data from TCP directly into the index queue. That works perfectly fine for nearly 1 TB of data per day, but sometimes the source pushes nearly 1 TB per hour, which obviously overwhelms the HF, hence the persistent queue. We have the following inputs.conf:

[tcp-ssl:4444]
index = xxx
persistentQueueSize = 378000MB
sourcetype = xxx
disabled = false
queue = indexQueue

I expect all files in /opt/splunk/var/run/splunk/tcpin/ for port 4444 not to exceed the allocated size of 378 GB. But as can be seen below, the total size of all files for port 4444 is 474 GB, way more than the allocated 378 GB. Some files say corrupted, probably because we hit our disk limit on the server and Splunk couldn't write to those files anymore. Has anyone else experienced this behavior before? Thanks in advance and best regards, Eric
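As an aside, queue fill levels on the forwarder can be watched through the standard metrics.log data; a minimal sketch (the host filter is a placeholder):

index=_internal source=*metrics.log group=queue name=indexqueue host=<your_hf>
| timechart max(current_size_kb) AS current_size_kb max(max_size_kb) AS max_size_kb

A current_size_kb pinned at max_size_kb confirms the in-memory queue is saturating and spilling into the persistent queue during the 1 TB/hour bursts.
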
Has anyone developed eventtypes and tags for the sourcetype defined by the Proofpoint TAP Modular Input ([proofpoint_tap_siem])? I was surprised the add-on doesn't include them.

Hello, I have a Splunk Cloud deployment and the alerts are not firing. I have searched for information, and using the search

index=_internal sourcetype=scheduler status="skipped" savedsearch_name="search_name"

you can see why the alerts are not going off. It says that the maximum disk usage quota for this user has been reached. The thing is that these alerts have no owner; the owner is "nobody", so if I am not mistaken the maximum disk usage quota is the default one. I understand it is not recommended to change the default maximum disk usage quota. I need these alerts to trigger; what can I do to fix this problem? Thanks in advance and best regards.
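For reference, the quota being hit is the per-role srchDiskQuota setting from authorize.conf; a minimal sketch (the role name and value are placeholders, not a recommendation):

[role_power]
srchDiskQuota = 500

It caps the megabytes of disk that search artifacts belonging to users of that role may occupy, so clearing out old artifacts or adjusting it for the owning role are the usual levers.
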
I'm trying to run the below command on my search head cluster deployer:

splunk start-shcluster-migration kvstore -storageEngine wiredTiger -isDryRun true

I receive the following message:

Admin handler 'shclustercaptainkvstoremigrate' not found.

This is after I have edited $SPLUNK_HOME/etc/system/local/ on each search head in the cluster, following "Migrate the KV store storage engine" in the Splunk documentation:

[kvstore]
storageEngineMigration = true

Please advise.
We use the Siemplify add-on to ingest alerts from Splunk into Siemplify; however, the fields arrive in Siemplify badly garbled and are impossible to read. Does anyone know how to map the field values from Splunk to Siemplify?

Hello, we are using the Splunk HTTP appender in our MuleSoft applications, with the index sb-xylem. We observed that some of our worker nodes were hung in PROD, and Mule support attributed this to hung Splunk threads. To reproduce the issue we run a high load of 5000 requests in 30 seconds. Without the Splunk appender everything works fine, but as soon as we enable Splunk logging and run the load test again, requests fail and our Mule apps cannot handle the load; even after increasing capacity we could not get past 2000 requests. MuleSoft support said that, based on logs and thread dumps, the Splunk appender appears to be the cause, with a lot of threads waiting, possibly for a response from Splunk. Hoping to get some insights into any odd behavior like slow requests.

In a playbook, I have a decision tree:

If option A -> Check list -> If value exists in custom list -> Do nothing
Else if option B -> Check list -> If value exists in custom list -> Delete that list entry

Checking the SOAR Phantom app actions, I see several options for lists, but no option to "remove/delete list item" (see attached pic). How do I go about deleting items from a custom list? Thanks! (SOAR Cloud 5.3.1)
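If custom code is acceptable, the playbook automation API has list helpers beyond what the app actions expose; a sketch of a custom-function body using phantom.delete_from_list (the list name, argument names, and behavior here are assumptions from memory, so verify against the Playbook API docs for your release):

import phantom.rules as phantom

# Remove the row(s) whose value matches; 'my_list' is a placeholder list name.
phantom.delete_from_list(list_name='my_list', value='value_to_remove', remove_row=True)
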
Is anyone else running into boot-start/permissions issues with the 9.0.0 UF running on Linux using init.d scripts for boot-start?

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"

I am also finding that "./splunk disable boot-start" does not correctly remove the /etc/init.d/splunk script and, contrary to documentation, the UF 9.0.0 uses systemd as default: https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/ConfigureSplunktostartatboottime

The systemd scripts also seem to fail to get the permissions they need, even when enabling boot-start as root. A key error I am seeing is "Failed to create the unit file" when running the install. But it seems to be a total fail.

## When upgrading (from 8.2.5)

runuser -l splunk -c "/opt/splunkforwarder/bin/splunk stop"
tar -xzvf /tmp/splunkforwarder-9.0.0-6818ac46f2ec-Linux-x86_64.tgz -C /opt
chown -R splunk:splunk /opt/splunkforwarder/
runuser -l splunk -c "/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt"
runuser -l splunk -c "/opt/splunkforwarder/bin/splunk status"
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"

(NOTE: Seems to be non-impacting)

### When doing a new install

tar -xzvf /tmp/splunkforwarder-9.0.0-6818ac46f2ec-Linux-x86_64.tgz -C /opt
chown -R splunk:splunk /opt/splunkforwarder

[root]# sudo -H -u splunk /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
This appears to be your first time running this version of Splunk.
IMPORTANT: Because an admin password was not provided, the admin user will not be created.
You will have to set up an admin username/password later using user-seed.conf.
Creating unit file...
Current splunk is running as non root, which cannot operate systemd unit files. Please create it manually by 'sudo splunk enable boot-start' later.
Failed to create the unit file. Please do it manually later.
Splunk> Now with more code!

# sudo -H -u splunk /opt/splunkforwarder/bin/splunk status
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
splunkd is running (PID: 3132350).
splunk helpers are running (PIDs: 3132354).

# sudo -H -u splunk /opt/splunkforwarder/bin/splunk stop
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforwarder"
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
. [ OK ]
Stopping splunk helpers...
[ OK ]
Done.

# /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
Systemd unit file installed by user at /etc/systemd/system/SplunkForwarder.service.
Configured as systemd managed service.

# systemctl start SplunkForwarder.service
Job for SplunkForwarder.service failed because the control process exited with error code.
See "systemctl status SplunkForwarder.service" and "journalctl -xe" for details.

# systemctl status SplunkForwarder.service
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2022-06-21 12:58:55 UTC; 27s ago
  Process: 3141480 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/SplunkForwarder.service (code=exited, status=0/SUCCES>
  Process: 3141478 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/SplunkForwarder.service (code=exited, status=0/SUCCESS)
  Process: 3141477 ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd (code=exited, status=203/EXEC)
  Process: 3141475 ExecStartPre=/bin/bash -c chown -R splunk:splunk /opt/splunkforwarder (code=exited, status=0/SUCCESS)
 Main PID: 3141477 (code=exited, status=203/EXEC)

Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Failed with result 'exit-code'.
Jun 21 12:58:55 <host> systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Service RestartSec=100ms expired, scheduling restart.
Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Scheduled restart job, restart counter is at 5.
Jun 21 12:58:55 <host> systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Start request repeated too quickly.
Jun 21 12:58:55 <host> systemd[1]: SplunkForwarder.service: Failed with result 'exit-code'.
Jun 21 12:58:55 <host> systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.

Hi All, I am new to Splunk and not a developer, so first up apologies for any poor syntax or coding practices.

What am I trying to do?
The information I need to show when a batch starts and ends is in different formats in different logs. I am trying to come up with a table that shows how long it takes to run each batch of transactions.

What is in the logs?
There is a batch id in each of the logs, but in a different format, so I use regex to extract it. This is what I want to group on.
There is one unique string per batch which contains "Found the last", which is my end time.
For each transaction in the batch there is a log entry which contains "After payload". If there are 100 entries in the batch, there are 100 logs with this message. I want to use the first of these logs as my start time.

How am I trying to do it?
I am filtering out any unnecessary logs by only looking for logs that have the messages I want, which works:

source="batch-queue-receiver.log"
| rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
| rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
| rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
| where Batchid != ""
| where info = "Found the last" OR info = "After payload "

I then want to use transaction to group by batch. This works, but because I have multiple entries per batch it takes the last entry, not the first, so my duration is much smaller than expected:

source="batch-queue-receiver.log"
| rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
| rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
| rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
| where Batchid != ""
| where info = "Found the last" OR info = "After payload "
| transaction Batchid startswith="After payload conversion" endswith="Found the last message of the batch" mvlist=true
| table Batchid duration

I then try to dedup, but get no values returned:

source="batch-queue-receiver.log"
| rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
| rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
| rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
| where Batchid != ""
| where info = "Found the last" OR info = "After payload "
| dedup info Batchid sortby +_time
| table Merchantid Batchid _time info _raw
| transaction Batchid startswith="After payload conversion" endswith="Found the last message of the batch" mvlist=true
| table Batchid duration

If I remove the transaction but keep the dedup, I get only two messages per Batchid (what I want), so I am not sure what is going wrong. It appears that I can't do a transaction after a dedup, but it is probably something else I am not aware of. Any help would be appreciated.

source="batch-queue-receiver.log"
| rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
| rex field=_raw "[\? ]MerchantId: (?<Merchantid>.{7}+)"
| rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
| where Batchid != ""
| where info = "Found the last" OR info = "After payload "
| dedup info Batchid sortby +_time
| table Batchid _time info
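As an aside on the duration calculation itself, a stats-based sketch that sidesteps transaction entirely (same extractions as above; min and max of _time per batch give start and end):

source="batch-queue-receiver.log"
| rex field=_raw "[\? ]Batch\s?[iI][dD]\s?: (?<Batchid>.{7}+)"
| rex field=_raw "[\? ]INFO a.c.s.b.q..+- (?<info>.{14}+)"
| where Batchid != "" AND (info = "Found the last" OR info = "After payload ")
| stats min(_time) AS start max(_time) AS end by Batchid
| eval duration = end - start
| table Batchid duration

Because stats takes the earliest and latest matching event per Batchid, the first "After payload" and the final "Found the last" event define the window without any dedup.
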
Hi all, Has the old "searchWhenChanged" parameter been brought over into the new Dashboards? If not, is there an alternative I can use to get my dashboard to refresh/search when an input changes or if I hit the carriage return/enter key? Thanks in advance.
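
For reference, this is the classic Simple XML behavior in question; a minimal snippet of the old syntax (not Dashboard Studio):

<input type="text" token="user_tok" searchWhenChanged="true">
  <label>User</label>
</input>

With searchWhenChanged="true", panels using $user_tok$ re-ran as soon as the input changed or Enter was pressed.
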
Hi, Is there a way to target which application's lookup you want to use? Let's say there are 3 applications, A, B, and C, where A and B each have a device.csv, but with different data in them. Depending on the requirement, application C sometimes needs device.csv from A and other times needs it from B. That is, I can't use permissions to restrict the lookup, as application C needs access to both. Is it possible to prepend the application to the lookup or csv at search time, so that I can define which lookup I want to access? Something like:

| inputlookup A::device.csv

I tried this and it didn't work.

regards
-brett
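One common workaround (a sketch, untested against your setup) is to give each file a uniquely named lookup definition in its own app and share those definitions globally, then reference them by definition name from app C. In app A's transforms.conf:

[device_lookup_a]
filename = device.csv

In app B's transforms.conf:

[device_lookup_b]
filename = device.csv

Then, from app C:

| inputlookup device_lookup_a

The stanza names here are placeholders; the point is that inputlookup accepts a lookup definition name, and distinct names avoid the filename collision.
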
Hello, I've been running Splunk on VMware since March this year, connected to a pfSense. I recently checked on it and I'm no longer able to access the web interface (http://localhost:8000). I've been reading some posts about how certain ports need to be opened, but I'm not sure I would have to do that on the pfSense, since it was already working earlier without opening the ports. How should I continue with the troubleshooting? Thanks! Best regards, Sara
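A first pass worth trying on the Splunk host itself (standard commands; paths assume a default install location):

$SPLUNK_HOME/bin/splunk status
ss -tlnp | grep 8000
tail -50 $SPLUNK_HOME/var/log/splunk/web_service.log

If splunkd is up but nothing is listening on 8000, the problem is on the Splunk side (for example, Splunk Web disabled or the port changed in web.conf) rather than on the pfSense.
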
When I am trying to create a new app on the deployment server through the bin directory, I get the following error:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
An error occurred: Could not create Splunk settings directory at '/root/.splunk'

We recently upgraded to 9.0.0. Could you please provide the best way to resolve this?
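For context, the CLI keeps per-user session files under $HOME/.splunk, so the error points at the effective user's home directory not being writable. A sketch of things to check (the app name and install path are placeholders):

# confirm who you are and where the CLI will write
id
echo $HOME

# or run the CLI as the splunk user with its own home directory
sudo -H -u splunk /opt/splunk/bin/splunk create app myapp
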
Hello, I am working off the back of this question from 2021 that has no answer. I have created a new custom entity type with one vital metric, and I can see the information in the Infrastructure Overview dashboard with no issues for the entities of that type. But when I select a single entity and drill down, I do not seem to have an entity dashboard associated with it, as the OOTB entity types do. What are the steps to create an entity dashboard for a specific entity type, so that I can see the metric data trend when I drill down into a single entity? Thank you! Andrew

We have a question: we need to forward Tripwire logs to Splunk. I have already enabled syslog on the Tripwire side and opened the connection, but still nothing shows up.
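For completeness, something on the Splunk side has to be listening for that syslog traffic; a minimal inputs.conf sketch for a direct network input (the port, sourcetype, and index are assumptions to adjust, and a syslog server in front of a forwarder is the more common production pattern):

[udp://514]
sourcetype = tripwire
index = network
disabled = false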