All Topics


Hello Team, I am creating an index via the Splunk Cloud REST API. The index gets created, but it is not visible to me in Splunk Cloud Web. Is there an access issue with my account? I have the following roles: 1) apps, 2) can_delete, 3) enable_automatic_ui_updates, 4) ite_internal_admin, 5) power, 6) sc_admin, 7) tokens_auth, 8) user. Thanks, Venkata
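For reference, index creation on the REST side goes through the documented services/data/indexes endpoint; here is a minimal Python sketch of the request shape (the host, port, and token are placeholders, not taken from the post, and the request is only built, not sent):

```python
# Sketch of the create-index call against the Splunk REST API.
# Host, port, and token below are illustrative placeholders.

def build_create_index_request(host, port, index_name, token):
    """Return the URL, headers, and form body for a create-index POST."""
    url = f"https://{host}:{port}/services/data/indexes"
    headers = {"Authorization": f"Bearer {token}"}
    body = {"name": index_name, "output_mode": "json"}
    return url, headers, body

url, headers, body = build_create_index_request(
    "example.splunkcloud.com", 8089, "my_new_index", "REDACTED_TOKEN"
)
print(url)
```

Comparing the app context and permissions of the index created this way against one created in the UI may help narrow down why it is not visible.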
The AppDynamics Java Agent is one type of bytecode injection (BCI) agent. So, what does it do with the Java application? From what I have seen, it intercepts HTTP requests and modifies them by adding a header for local proxy authentication.
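As a purely conceptual illustration of the header-injection step described above (not the agent's actual mechanism, which happens at the bytecode level inside the JVM), the transformation can be sketched like this:

```python
# Conceptual sketch only: an interceptor that adds a proxy-auth header
# to a request's headers before it is sent. All names are illustrative.

def with_proxy_auth(headers, token):
    """Return a copy of the headers with a local-proxy auth header added."""
    enriched = dict(headers)  # leave the caller's headers untouched
    enriched["Proxy-Authorization"] = f"Bearer {token}"
    return enriched

original = {"Accept": "application/json"}
sent = with_proxy_auth(original, "agent-token")
print(sent["Proxy-Authorization"])  # the injected header
```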
I have a subquery that gives the output in the example below.

Subquery: [ search index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer") | fields error.requestId | rename error.requestId as requestId | dedup requestId | format ]

Output: ( requestId="jjadjdfjjedd_jehdfjdjfhj" ) OR ( requestId="jgjfnfdn_jrhfjdbfd" ) ...

I need to edit the format that is returned from the first query. Is there a way to change the search to something less specific, such as (*jjadjdfjjedd_jehdfjdjfhj*) OR (*jgjfnfdn_jrhfjdbfd*)? I need to find all events that include the requestId, not just events where it appears in that specific field.
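To make the target concrete, here is plain Python showing the shape of the wildcard clause being asked for, built from a list of request IDs (the IDs are the sample values from the post):

```python
def wildcard_clause(request_ids):
    """Join request IDs into an OR-ed wildcard search clause."""
    return " OR ".join(f"(*{rid}*)" for rid in request_ids)

ids = ["jjadjdfjjedd_jehdfjdjfhj", "jgjfnfdn_jrhfjdbfd"]
print(wildcard_clause(ids))
# (*jjadjdfjjedd_jehdfjdjfhj*) OR (*jgjfnfdn_jrhfjdbfd*)
```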
I am working on producing a table that calculates the number of incidents resolved by each analyst. What my query does is produce a table with three columns and a count of 'Class Names': Analyst / Class Name / count. What I am trying to do is produce an output with four columns, with a count under each discrete class-name column and a row total on the right: Analyst / Class Name 1 / Class Name 2 / Total Count (of all class names). <query> | table "Analyst" "Class Name" | stats count by Analyst "Class Name"
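The reshaping described here (one column per class name, plus a row total) is a pivot. A minimal stdlib Python sketch of the same aggregation, using made-up sample data, shows the intended result shape:

```python
from collections import Counter

# Illustrative (analyst, class_name) pairs -- sample data only.
rows = [
    ("Ana", "Phishing"), ("Ana", "Phishing"), ("Ana", "Malware"),
    ("Ben", "Malware"),
]

counts = Counter(rows)
analysts = sorted({a for a, _ in rows})
classes = sorted({c for _, c in rows})

# One output row per analyst: a count per class, then the row total.
for analyst in analysts:
    per_class = {c: counts[(analyst, c)] for c in classes}
    total = sum(per_class.values())
    print(analyst, per_class, "total:", total)
```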
Is there a way to connect with help in real time? Please assist.
I have a Splunk server which runs both the Deployment Server and the License Master on version 8.2.4. Due to the CVE released relating to the Deployment Server, I would like to upgrade this server to 9.0, and I don't expect any issues between the DS and the universal forwarders. My concern is with the License Master, as this component communicates with the rest of the deployment. The latest version-compatibility documentation is found here: Configure a license master - Splunk Documentation, which is for 8.2.6, as there is no version (as of today, 6/21/2022) of this page for version 9.0. Upgrading the entire Splunk Enterprise deployment right now is not feasible (we are planning to complete it in the near future), but upgrading this one component to mitigate the vulnerability is a priority. Should I expect any issues with licensing after the upgrade of the License Master when the major versions don't match the rest of the deployment?
Today I saw something strange. I was preparing a small workshop for a customer and wanted to show the performance difference between index=_internal | stats count and | tstats count where index=_internal. I was completely baffled when the second search repeatedly showed me a count of 0. If I run the search on any other Splunk instance I have access to, it shows me more or less the same number for both searches (of course they can differ slightly, since _internal is dynamic, so a difference of a few dozen entries is perfectly understandable). But this one showed 0 with tstats. Has anyone encountered something like that? I didn't have time to investigate further; I hope I get some time tomorrow to look into it, but I'm puzzled. To make things more mysterious, for other indexes tstats shows proper counts. It's just the _internal index that claims it has no events. It's an 8.2.6 clustered (both indexer cluster and shcluster) installation.
Hi all, is it possible to retrieve the (Splunk SOAR) instance details inside a playbook? For instance, when sending an email, I want to be able to tell whether the playbook ran in the dev or prod environment. Is there a list of all the global environment variables? Thanks in advance
Hi, I have created an email alert with a cron schedule of every 4 hours, but I can see that even when there are search results, the email randomly fails to trigger. I also made sure to use simpler Splunk commands, which execute a bit faster. Can someone please suggest what could cause such skipping of emails?
When I click on some correlation rules in Content Management in Splunk ES, I get the following error and the rule does not open: Cannot read properties of undefined (reading 'entry'). Can you please tell me what might be causing this issue and how I can solve it?
We have some staging servers in the cloud that are shut down after business hours. Is there any method in the Deployment Server to ignore these nodes and not report them as missing?
Hello community, I'd like to ask for support with a conditional calculation. I have 3 different products in a group: Product A, B, and C, and I need to apply a different formula (compensation factor) to each, e.g.: PRODUCT A = group/3.33*100, PRODUCT B = group/3.061*100, PRODUCT C = group/3.0*100. I could only do this when I created a search for a single PRODUCT. But how do I include all the PRODUCTS with their different formulas (compensation factors)? | where group="PRODUCT_A" | eval ProductGroup=group/3.33*100 Thanks
Hi, we are trying to integrate Splunk Cloud with our Atlassian Jira Cloud instance. We have configured the app 'Jira Service Desk Simple Add-On' (https://splunkbase.splunk.com/app/4958/), and under 'Trigger Actions' I am able to see this action and can create/open a ticket in Jira via this option. But I want to create a ticket in Jira manually via a Splunk query using the 'sendalert' command. When I tried to do so, I got the error 'Error in 'sendalert' command: Alert script returned error code 3.' Maybe the fields that I'm providing are not correct. Could someone help me fix the issue that I'm facing? |sendalert jira_service_desk jira_account="JiraCloud" projectKey=“SOR” summary=“My Header” issueTypeName=“Task” priority=“Medium”  labels="Security"  It would be great if someone could provide the fields that I should mention as part of this query in order to create a ticket in Jira Cloud.
Hello, we have a problem with persistent queues in our infrastructure. We have TCP inputs sending SSL traffic to a heavy forwarder which acts as an intermediate forwarder. We do not parse on the HF! All we do is put the data from TCP directly into the index queue. That mostly works perfectly fine for nearly 1 TB of data per day. But sometimes the source pushes nearly 1 TB per hour, which obviously overwhelms the HF, hence the persistent queue. We have the following inputs.conf:

[tcp-ssl:4444]
index = xxx
persistentQueueSize=378000MB
sourcetype = xxx
disabled=false
queue = indexQueue

I expect all files in "/opt/splunk/var/run/splunk/tcpin/" for port 4444 not to exceed the allocated size of 378 GB. But as can be seen below, the total size of all files for port 4444 is 474 GB! Way more than the allocated 378 GB. Some files are reported as corrupted, probably because we hit our disk limit on the server and Splunk couldn't write to those files anymore. Has anyone else experienced this behavior before? Thanks in advance and best regards, Eric
Has anyone developed eventtypes and tags for the sourcetype defined by the Proofpoint TAP Modular Input ([proofpoint_tap_siem])? I was surprised the add-on doesn't include them.
Hello, I have a Splunk Cloud deployment and the alerts are not firing. I have searched for information, and using the search index=_internal sourcetype=scheduler status="skipped" savedsearch_name="search_name" you can see why the alerts are not going off. It says that the maximum disk usage quota for this user has been reached. The thing is that these alerts have no owner; the owner is "nobody", so if I am not mistaken the maximum disk usage quota is the default one. I think changing the default maximum disk usage quota is not recommended. I need these alerts to trigger; what can I do to fix this problem? Thanks in advance and best regards.
I'm trying to run the below command on my search head cluster deployer:

splunk start-shcluster-migration kvstore -storageEngine wiredTiger -isDryRun true

I receive the following message:

Admin handler 'shclustercaptainkvstoremigrate' not found.

This is after I have edited $SPLUNK_HOME/etc/system/local/ on each search head in the cluster, following Migrate the KV store storage engine - Splunk Documentation:

[kvstore]
storageEngineMigration=true

Please advise.
We use the Siemplify add-on to ingest alerts from Splunk into Siemplify; however, the fields in Siemplify come through badly garbled and are impossible to read. Does anyone know how to map the field values from Splunk to Siemplify?
Hello, we are using the Splunk HTTP appender in our MuleSoft applications with the index sb-xylem. We observed that some of our worker nodes were hung in PROD, and Mule support said it was due to hung Splunk threads. To reproduce the issue, we ran a high-load test of 5,000 requests in 30 seconds. Without the Splunk appender everything works fine, but as soon as we enable Splunk logging and run the load test again, requests fail and our Mule apps cannot handle the load. We even tried increasing capacity, but still could not pass 2,000 requests. MuleSoft support said that, based on logs and thread dumps, the Splunk appender appears to be causing the issue, with many threads waiting, possibly for a response from Splunk. Hoping to get some insights into any odd behavior, like slow requests.
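One general pattern for keeping application threads from blocking on a slow logging backend is queue-based asynchronous logging: callers only enqueue records, and a background thread does the delivery. A minimal stdlib Python sketch of that pattern (not MuleSoft- or appender-specific; the sink here is a stand-in for the real backend):

```python
import logging
import logging.handlers
import queue

class ListSink(logging.Handler):
    """Stand-in for a slow backend; just records messages."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

# Application threads only enqueue records (non-blocking);
# a background listener thread performs the actual delivery.
log_queue = queue.Queue(maxsize=10000)
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

sink = ListSink()
listener = logging.handlers.QueueListener(log_queue, sink)
listener.start()

logger.info("request handled")  # returns immediately; I/O happens elsewhere
listener.stop()                 # joins the worker thread, flushing the queue
print(sink.messages)
```

Whether the Mule appender supports an equivalent buffered/async mode is worth checking with its documentation; a bounded queue with a drop policy trades log completeness for request-thread liveness.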
In a playbook, I have a decision tree: if Option A -> check list -> if value exists in custom list -> do nothing. Else if Option B -> check list -> if value exists in custom list -> delete that list entry. Checking the SOAR Phantom app actions, I see several options for lists, but no option to "remove/delete list item" (see attached pic). How do I go about deleting items from a Custom List? Thanks! (SOAR Cloud 5.3.1)