All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, when we are trying to push our app bundle, the following error message occurs:

splunk apply shcluster-bundle --answer-yes -target https://HOSTNAME123:8089
Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=specificappname on target=https://XXXXXXXXX:8089: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"Read Timeout"}]

We checked the folder structure of "specificappname" on the deployer and on the SHC nodes and couldn't find anything unusual. The deployment is automated via a script and was running successfully before. There are also no replication errors on the SHC.

Is there any way to debug this problem or find out what exactly is failing?
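As a generic starting point for the debugging question, a minimal sketch of a search over the deployer's internal logs around the time of the push (the app name is taken from the error above; exact component names vary by version, so this simply free-text matches on the app):

```
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) "specificappname"
| table _time host component _raw
```

Running the same search on the first SHC member the deployer contacts may show what it was doing when the read timed out.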
Hi Experts,

I'm using Splunk Dashboard Studio and I have multiple tables. I would like to hide the tables that have no results. Could you advise a workaround for achieving this in Dashboard Studio?

Thanks
Hi Splunkers, I need help coming up with the logic for getting values from two lookups into my current search. I'm working on a search that has a field "customer", and I need to bring in their ids from two different lookups. Basically, I have to check both lookups for the ids and write them into a field called "ID" in my current search. TIA

Search:
customer    ID
a
b
c
d
e

lookup 1:
customer    ids
a           1
b           2
c           3

lookup 2:
customer    ids
d           4
e           5
a           1
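A minimal sketch of one common approach, assuming both lookups are available as lookup definitions named lookup1 and lookup2 (placeholder names) and that each customer appears in at least one of them:

```
... your current search ...
| lookup lookup1 customer OUTPUT ids AS id1
| lookup lookup2 customer OUTPUT ids AS id2
| eval ID=coalesce(id1, id2)
| fields - id1, id2
```

coalesce() takes the first non-null value, so customers found in lookup1 keep that id and the rest fall back to lookup2.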
I have 2 event types with these fields: 1. id_cse_event: sqsmessageid, timestamp; 2. scim: sqs_message_id, timestamp. I want to find all the messages published by id_cse_events in scim using the message id, then find the difference between the timestamps. This is the query I have written:

sourcetype=id-cse-events
| where isnotnull(sqsMessageId)
| eval sqsmsgid=sqsMessageId
| eval id_cse_time=timeStamp
| table sqsmsgid, id_cse_time
| map [search sourcetype=scim
    | fields line.message.sqs_message_id, line.timestamp
    | search line.message.sqs_message_id="$sqsmsgid$"
    | eval time_diff_in_seconds=strptime(id_cse_time,"%Y-%m-%dT%H:%M:%S")-strptime(line.timestamp,"%Y-%m-%dT%H:%M:%S")
  ] maxsearches=9999
| table line.message.sqs_message_id, time_diff_in_seconds

Example values:
id_cse_time = 2023-01-27T09:55:45.970831Z
scim timestamp = 2023-01-27T08:24:28.601+0000

The events are getting matched, but I don't see any table with the message id and time diff. Can anyone help?
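A minimal sketch of a map-free alternative, since map only passes the $sqsmsgid$ token into the subsearch (id_cse_time is not defined there, so time_diff_in_seconds comes out null). This correlates both sourcetypes with stats instead; the field names and timestamp formats are assumptions taken from the example values above:

```
(sourcetype=id-cse-events) OR (sourcetype=scim)
| eval msgid=coalesce(sqsMessageId, 'line.message.sqs_message_id')
| eval cse_time=if(sourcetype=="id-cse-events", strptime(timeStamp, "%Y-%m-%dT%H:%M:%S.%6QZ"), null())
| eval scim_time=if(sourcetype=="scim", strptime('line.timestamp', "%Y-%m-%dT%H:%M:%S.%3Q%z"), null())
| stats earliest(cse_time) AS cse_time earliest(scim_time) AS scim_time BY msgid
| eval time_diff_in_seconds=cse_time-scim_time
| table msgid, time_diff_in_seconds
```

Because both event types are pulled in one search, the time difference is computed per message id without running thousands of subsearches.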
How can I write a query that retrieves data for a particular time window on each of the last 6 days? Suppose I want to get the data from 12:00 AM to 4:00 PM for each of the last 6 days. Please help with this.
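A minimal sketch of one approach (the index name is a placeholder): search the whole 6-day range, then keep only events whose hour of day falls within the window.

```
index=your_index earliest=-6d@d latest=now
| eval hour=tonumber(strftime(_time, "%H"))
| where hour >= 0 AND hour < 16
```

Here hours 0-15 cover 12:00 AM up to (but not including) 4:00 PM; adjust the boundary if 4:00 PM itself should be included.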
Hi,

Unless I am mistaken, Splunk ES contains a collection of add-ons. In combination, these add-ons provide the dashboards, searches, and tools that summarize the security posture of the enterprise, allowing users to monitor and act on security incidents and intelligence.

Does this mean that Splunk ES works without any forwarder? How is the correlation done between these add-ons and the enterprise infrastructure? Is it automatic? Is the data sent to the indexers, like with Splunk Enterprise, or just to a search head?

Sorry for these questions, but I am a rookie with Splunk ES and I need to understand how the security events are ingested.

Thanks
Hey community, I'm a new member, and I'm sending you greetings. I hope we can learn good stuff from each other.
Hi, I want to ask about the SLA that I am referring to at the following URL: https://www.splunk.com/ja_jp/legal/splunk-cloud-service-level-schedule.html

My questions are:
1. Service credit billing is a customer's confidential information, but it is stated that the customer should contact Splunk. Is the "customer" in this statement the end user, who should contact Splunk directly?
2. Credits will be returned in time (up to 1 month) according to the availability % per quarter, but will the contract period be extended?
3. It states that a complete description is required when making a service credit claim. Is there any sample report content or regulatory form for it?

Thanks,
Emmy
I have a sample.csv file with about 30,000 rows and these columns (sample data):

data          value1   value2
5600012345    abc      xxx
7890012345    fsfs     rwrr

I have the query below:

index="b2c" | rex field=path1.path2.details "code=\'(?<data>[^\n\r\']{10})"

I can see the extracted 'data' field in the fields list. I want to match the 'data' values against the CSV file and return a table with 'data' and the other fields from both the event and the CSV file. How do I use the inputlookup or lookup command to search on the extracted field? Thanks for the help in advance.
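A minimal sketch, assuming sample.csv has been uploaded as a lookup table file (or exposed through a lookup definition) whose key column is named data:

```
index="b2c"
| rex field=path1.path2.details "code=\'(?<data>[^\n\r\']{10})"
| lookup sample.csv data OUTPUT value1 value2
| where isnotnull(value1)
| table data value1 value2
```

The where clause keeps only events whose extracted data value actually matched a row in the CSV; drop it to keep every event.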
Hi Community!

I'm currently doing the Okta-Splunk Enterprise integration. I've studied the documentation about this, but I have a question about creating the certificate to encrypt the communication between Okta and Splunk. Is that possible, and which certificate type is needed for this?

Thanks!
Greetings. My Splunk instance parses messages that have a JSON array field:

```
{ "tags": ["info", "foo", "bar"] }
```

Let's say I want to search for events where precisely the second index of the tags field has the value "foo".

Having consulted the Splunk docs, I found Array and object expressions. I tried using them, and all of my queries ended poorly. Eventually I was pointed to the multivalue eval functions, which worked. Using the multivalue functions left me with many questions:

1. Why is my JSON array parsed as a multivalue? Why is it not an array?
2. If I execute `typeof('tags')`, I get "Invalid". Why? Shouldn't it be Array or Multivalue?
3. If I execute `typeof('tags{}')`, I get "Multivalue". Why? What did that operator do, and why was it required?

More or less, as a polyglot programmer with a decade of experience, I found Splunk operations on collections to be not just unintuitive but counterintuitive. Beyond my three explicit question categories above, if compelled, let me know about other best known practices around searching array-ish fields.
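For reference, a minimal sketch of the multivalue approach that eventually worked for this kind of query (the index name is a placeholder; tags{} is Splunk's path notation for the elements of a JSON array):

```
index=your_index
| spath path=tags{} output=tags
| where mvindex(tags, 1) = "foo"
```

mvindex() is 0-based, so index 1 selects the second element of the array.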
Hi All,

I'm trying to make my dashboard dynamic. For example, if the search query responds with 5 values, I want 5 rows and panels to be created dynamically in the dashboard. Is it possible to have the panels created based on the query output? Please assist me with this, and I can add more details if needed.
Hybrid and multi-cloud deployments are the new reality for many organizations today that want to get the most out of their on-premises and cloud investments. With the pace of hybrid and multi-cloud deployments on the rise, how can one ensure that one's cloud is fully optimized and not fraught with security and reliability concerns? Are there any best practices and recommendations for migrating self-managed Splunk Enterprise deployments to Splunk Cloud Platform (Splunk platform capabilities delivered as a service) efficiently and smoothly?
I feel like I'm dancing circles around the solution to this problem. I created a field named "Duration" with rex that holds system recovery time in the 1d 1h 1m format, but it doesn't always have all values; it can also be 1d 1m, 1h 1m, 1m, 1h, or 1d (with values other than 1). I want to show the average downtime over 60 days by system.

index=....... earliest=-60d latest=0h
| rex field=issue ".*\((?P<Duration>\d[^\)]+"
| rex field=Duration "((?P<Days>\d{0,2})d\s*)?((?P<Hours>\d{0,2})h\s*)?((?P<Mins>\d{0,2})m)?"
| where isnotnull(Duration)
| eval D=tonumber(Days)*1440
| eval H=tonumber(Hours)*60
| eval M=tonumber(Mins)
| stats sum(D) as DT sum(H) as HT sum(M) as MT count(event_id) as Events by System
| addtotals fieldname=TTotal
| eval HTime=TTotal/60

This gets me the numbers I need, but I'm having trouble displaying the average time by System. It still needs to be divided by event_id per system, and I need this to be an ongoing report, so I can't do it manually. | stats avg(HTime) by System only gives me the HTime value per system, not the average per event per system. Suggestions?
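A minimal sketch of one way to get a per-event average: convert each event's Duration to minutes first (treating missing units as 0 via coalesce), then average per System. Field names follow the original search; the index is left as a placeholder.

```
index=... earliest=-60d latest=0h
| rex field=issue ".*\((?P<Duration>\d[^\)]+"
| rex field=Duration "((?P<Days>\d{0,2})d\s*)?((?P<Hours>\d{0,2})h\s*)?((?P<Mins>\d{0,2})m)?"
| where isnotnull(Duration)
| eval total_mins=coalesce(tonumber(Days),0)*1440 + coalesce(tonumber(Hours),0)*60 + coalesce(tonumber(Mins),0)
| stats avg(total_mins) AS AvgMins count(event_id) AS Events BY System
| eval AvgHours=round(AvgMins/60, 2)
```

Summing per event before stats means the average is already per event per system, so no manual division is needed.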
Dear experts,

I am searching my bot index, which contains conversation-id; the rest of the fields are stored as payload. Using spath I am able to extract the required fields from payload into a table. Now, for trend analysis, I want to use the timechart command to see the number of users per month; however, it's not working. Below is the query for your reference; I need help with it:

index=idx_chatbot logpoint=response-in AND service="journeyService" OR service="watsonPostMessage"
| spath input=payload output=displayname path=context.displayName
| spath input=payload output=Country path=context.countryCode
| spath input=payload output=Intent path=intents{}.intent
| spath input=payload output=ticketResponse path=response.createTicketResponse.Message
| table conversation-id timestamp service duration logpoint userFeedback displayname text Country Intent category ticketResponse payload
| dedup conversation-id
| timechart span=1mon count(displayName)
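A minimal sketch of one likely fix, under two assumptions about the query above: the spath output is displayname (lowercase) while timechart references displayName, and the table command drops _time, which timechart requires. Dropping the table step and using dc() to count distinct users gives:

```
index=idx_chatbot logpoint=response-in (service="journeyService" OR service="watsonPostMessage")
| spath input=payload output=displayname path=context.displayName
| dedup conversation-id
| timechart span=1mon dc(displayname) AS users_per_month
```

The parentheses around the OR also make the service filter unambiguous.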
Our teams have noticed an issue with chart legend interactions since we upgraded to Splunk 9.0.3 (from 8.1.x). When the legend is a long list of series, attempting to scroll the legend list by clicking on the "scroll button" is instead mistakenly interpreted as a click on one of the legend's series, causing a drilldown search. This is seen, at least, in the Search Assistant.

Has anyone else seen this? Is it a known (to Splunk) problem? Any idea which versions/situations do or don't exhibit the bug?
Hello everyone,

I have the same issue as this person: https://community.appdynamics.com/t5/Licensing-including-Trial/I-haven-t-received-controller-info-after-trial-set-up/td-p/49483

My email: [Redacted]
My coworker's email (he also has this issue): [Redacted]

Could you send us the necessary information to activate the Controller?

Best regards,
Marcelo Contin

^ Post edited to remove the email addresses. Please don't share your or others' emails in community posts, for security and privacy reasons.
I am wondering if anyone has this issue or use case. We are trying to see if we can have a system that would alert us when a host has stopped sending logs, based on the specific index it belongs to. For example, we would like to know if a firewall has stopped sending logs within 30 minutes, and likewise for a host on a less continuous feed; for example, host A of index=trickle_feed has not sent in 4 hours, etc.

We are good with the logic on those searches. What I am really looking for is direction on how you create those alerts and assign them to someone to be followed up on. What other tools might you be using for the triaging and tracking of the alert/incident/ticket/work while the feed for the quiet host is being restored?
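For reference on the detection side (not the assignment/tracking workflow being asked about), a minimal sketch of a quiet-host search using the index and threshold from the example:

```
| tstats latest(_time) AS last_seen WHERE index=trickle_feed BY host
| eval hours_silent=round((now() - last_seen) / 3600, 1)
| where hours_silent > 4
```

Saved as a scheduled alert, any rows returned are hosts that have gone quiet past the threshold.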
Hi folks,

Our on-premise 5.3.1 SOAR's ingest daemon is behaving oddly in terms of memory management, and I was wondering if someone can give me any pointers on where to look for what is going wrong. In essence, ingestd keeps using more and more virtual memory until it maxes out at 256 GB and then stops ingesting more data. Restarting the service does resolve the issue.

I am thinking the root cause might be hiding in 3 places:

- Poorly written playbooks: I suspect something might be wrong with the playbooks that we have. We have playbooks running as often as every 5 minutes, so I suppose they could cause resource starvation. Not sure how to dive deeper for potential memory leaks here, though.
- Something going wrong with the ingestion of containers, or insufficient clean-up of closed containers: is it possible that just closing containers without deleting them after X amount of time can cause this?
- Some weird bug that we've hit: not sure how likely this is, but I saw that in version 5.3.4 a bug regarding memory usage was fixed (PSAAS-9663), so it is on my list if nothing else turns up.

One relevant point: this started occurring after migration from 4.9.x to our current version, so I have no idea if this is linked to the fact that we migrated to Python 3 playbooks or to the particular product version.

Any pointers on where/how to start looking for the root cause are appreciated. Cheers.
A new Splunk user here. I am trying to install the Splunk UF on Ubuntu. I get this error while trying to run the package for the first time:

Could not open log file "/opt/splunkforwarder/var/log/splunk/first_install.log" for writing (2).

I saw some articles online, but the suggestions did not resolve the issue for me. If I can get a step-by-step guide on resolving this, I will be grateful. Thank you.