All Topics

How are you doing procedures for Notable Events? The description field doesn't support paragraph breaks, so I'd been using Next Steps as my space for procedures. With the upgrade to 7.3.0, my Next Steps all have  {"version":1,"data":"  prepended at the start. If I try to update them, it appears Splunk upgrades the text to the new version, line breaks are no longer supported, and my procedures turn into giant blobs of text.
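For reference, the newer Next Steps value appears to be a JSON wrapper around the procedure text. A minimal sketch of what a well-formed value might look like (treating the escaping of line breaks as \n as an assumption, inferred from the prefix shown above):

```json
{"version": 1, "data": "1. Validate the alert against the source host.\n2. Check for related notables.\n3. Escalate to Tier 2 if confirmed."}
```

Whether the UI renders those \n sequences as visible line breaks after the format upgrade is exactly the open question here.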
Hi Splunkers, I have to calculate daily ingested volume in a Splunk Enterprise environment. Here on the community I found a lot of posts, and related answers, to a similar question, daily license consumption, but I don't know if that is what I need.

I mean: we know that once data is ingested by Splunk, a compression factor is applied, and in a non-clustered environment it is more or less 50%. So, for example, if I ingest 100 GB of data per day, the final size on disk will be about 50 GB. Well, I have to calculate total GB BEFORE compression is applied. So, in my example above, the search/method I need should NOT return 50 GB as the final result, but 100 GB. Moreover, in my current environment I have an indexer cluster.

So, what is not clear is: is daily consumed license what I need? I mean: when I look at the daily license consumed by my environment, is the GB figure the ingested volume BEFORE compression, or the compressed one?
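License usage is metered on raw data volume before compression, so a sketch along these lines (assuming access to the _internal index containing license_usage.log from the license master) should return daily pre-compression GB:

```spl
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields - bytes
```

The `b` field in license_usage.log is the metered byte count per usage event; summing it per day approximates daily ingest before compression, independent of how many cluster copies end up on disk.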
I am trying to create a transaction where my starting and ending events have exactly the same time. In _raw the time is "Wed Feb 21 08:15:01 CST 2024". My current SPL is:

| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")

But my transaction only contains the starting event. So I added the following, which made no difference:

| eval _time = case(match(_raw, "COMPLETED OK"), _time + 5, match(_raw, "ABORTED"), _time + 5, true(), _time)
| sort _time
| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")

With those changes, when I look at the events in the Time column they are 5 seconds apart, yet transaction still does not associate them:

2/21/24 8:15:01.000 AM (starting event)
2/21/24 8:15:06.000 AM (ending event)
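One thing worth noting: transaction expects events in descending time order (the default search order), so re-sorting ascending with | sort _time can itself prevent matching. A stats-based sketch sidesteps the ordering problem entirely; field names are taken from the question, and grouping one job run per host/aJobName pair is an assumption:

```spl
index=your_index "START of script" OR "COMPLETED OK" OR "ABORTED, exiting with status"
| eval phase=case(match(_raw, "START of script"), "start",
                  match(_raw, "COMPLETED OK|ABORTED, exiting with status"), "end")
| stats earliest(_time) AS start_time latest(_time) AS end_time values(phase) AS phases by host aJobName
| eval duration=end_time-start_time
```

If multiple runs of the same job can appear in one search window, you would additionally need a run identifier (or streamstats) to keep runs apart.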
Does anyone have any experience with automated testing of Splunk dashboards? I'm looking for something to test whether all drilldowns and dropdowns work, and preferably a data check that the numbers add up.
I just realized that the NIX TA is being deployed to our forwarders via the deployment apps, to the indexers via the master apps, and to the SHs via the SH apps. It was a surprise for me to realize that the TA is not being deployed to the deployment server and deployer, the license master, or the cluster master. So then, how can the TA be deployed to all Splunk servers?
Splunk Add-on for MySQL Database: what role/permissions are required from the MySQL DBA to use this add-on? What role should be assigned to the user created on the MySQL server so that Splunk DB Connect can communicate with it?
I'm trying to run a base search but it is throwing an error, the reason being that I have two search tags inside a panel.

Base search:

<search id="basesearch">
  <query>index=main source=xyz</query>
  <earliest>$EarliestTime$</earliest>
  <latest>$LatestsTime$</latest>
</search>

Panel search:

<chart depends="$abc$">
  <title>Chart1</title>
  <search>
    <done>
      <eval token="abc">"computer1"</eval>
    </done>
    <search base="basesearch">
      <query>| search host="INFO" OR host="ERROR" panel=$panel1$ | timechart span=$TimeSpan$m count by panel usenull=f useother=f | eventstats sum("host") as _host</query>
    </search>
    <earliest>$InputTimeRange.earliest$</earliest>
    <latest>$InputTimeRange.latest$</latest>
  </search>
  <option name="charting.axisTitleY.visibility">collapsed</option>
  <option name="charting.chart">column</option>
  <option name="charting.drilldown">all</option>
  <option name="charting.fieldColors">{"host":0xFFFF00}</option>
  <option name="charting.legend.placement">bottom</option>
  <option name="refresh.display">progressbar</option>
</chart>

Warning message: Node <search> is not allowed here

The done section is required in the panel, so I cannot remove it. Is there a way to use a base search this way?
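A post-process search cannot be nested inside another <search> element; in Simple XML the <search base="..."> element itself can carry the <done> handler. A sketch of one possible restructuring, keeping the question's ids and tokens (and assuming the eval was meant to set the abc token):

```xml
<chart depends="$abc$">
  <title>Chart1</title>
  <search base="basesearch">
    <query>search host="INFO" OR host="ERROR" panel=$panel1$ | timechart span=$TimeSpan$m count by panel usenull=f useother=f</query>
    <done>
      <eval token="abc">"computer1"</eval>
    </done>
  </search>
  <option name="charting.chart">column</option>
</chart>
```

Note that a post-process search inherits the base search's time range, which is why the <earliest>/<latest> elements are dropped here; if the panel needs its own time range, the base search approach may not fit.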
Hello everyone, I was wondering if there are any ways to back up / version-control dashboards that were created directly on Splunk Cloud, either to a local git repository or at least so that they can be recovered/rolled back if a user/administrator edits or deletes one. So far I found this app: https://splunkbase.splunk.com/app/5061, but I think that app is more like a dashboard itself and doesn't really provide any of the use cases that I just described. BR, Andreas
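One hedged approach, assuming your role has REST search capability on the Cloud stack, is to export the dashboard XML via the views REST endpoint on a schedule and commit the results to git externally:

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| table eai:acl.app title eai:data
```

The eai:data column holds the full Simple XML source of each dashboard, so pulling this via a scripted REST call (or a scheduled search with an export step) gives you something diffable and restorable.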
Hi, could anyone please guide me on detecting an attacker moving laterally in the environment? It can be a real challenge. How can we write the correlation search, and are there any prerequisites that need to be followed? Thanks in advance.
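As one common starting point, a hedged sketch (assuming Windows Security logs with network logons, and treating the field names and the threshold as illustrative assumptions, not tuned values) looks for a single account fanning out to many destinations in a short window:

```spl
index=wineventlog sourcetype=WinEventLog:Security EventCode=4624 Logon_Type=3
| bin _time span=1h
| stats dc(dest) AS dest_count values(dest) AS dests by user src _time
| where dest_count > 5
```

The main prerequisites for any lateral-movement correlation are reliable authentication logging from endpoints and domain controllers, and (for ES specifically) mapping those events to the Authentication data model so the search can run accelerated.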
hi, I have this situation:

index="idx" [| inputlookup name.csv | table id name ]

idx:
id    name
1a2   aaa
1A2   aaa
12a   bbb

lookup:
id    name
1a2   aaa

The result is that it extracts the first 2 lines. How do I extract just the first line? Thank you, Simone
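Search-time field matching is case-insensitive, which is why both 1a2 and 1A2 come back. A hedged sketch that re-checks the match case-sensitively after the search (relying on the assumption that the CSV lookup keeps its default case_sensitive_match=true behavior):

```spl
index="idx" [| inputlookup name.csv | table id ]
| lookup name.csv id OUTPUTNEW name AS lookup_name
| where isnotnull(lookup_name)
```

The subsearch still narrows the event set cheaply; the lookup then only annotates rows whose id matches the CSV exactly, and the where clause drops the rest.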
Can someone please help with the regex that can be used to view the below event in tabular format?

Event:

INFO > 2024-02-02 16:12:12,222 - [application logs message]:
==============================================
Part 1.    session start is completed
Part 2.    Before app message row count    : 9000000
           Before app consolidation row count    : 8888800
Part 3.    append message completed
Part 4.    After app message flush row count : 0
           After app message flush row count : 1000000
==============================================

How can we use regex to get the fields from the above event and show them in a table like below?

parts     message                          count
Part 1    session start is completed
Part 2    Before app message row count     9000000
                                           8888800
Part 3    append message completed
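A hedged starting point (the whitespace classes and the index name are assumptions, since the real event's exact spacing may differ) is a multi-match rex to split out the "Part N" entries, then a second rex to separate message from count:

```spl
index=app_logs "application logs message"
| rex max_match=0 "(?<entry>Part \d\.\s+[^\r\n]+)"
| mvexpand entry
| rex field=entry "(?<part>Part \d)\.\s+(?<message>[^:]+?)(?:\s*:\s*(?<count>\d+))?$"
| table part message count
```

The continuation lines (the second count under Part 2 and Part 4) are not anchored by a "Part N." prefix, so they would need an extra pattern or a pre-processing eval to fold them under their part; this sketch covers the anchored rows only.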
Hi, my requirement is to get 30-minute results using timechart span=30m starting from the time I have specified. The start time can be, say, 11:34 AM or 11:38 AM or 11:42 AM, etc. But instead I am getting results in 30-minute intervals aligned to the clock, i.e. 11:30 AM, 12 PM, 12:30 PM, 1 PM, etc.
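The bucketing commands have an aligntime option that shifts bucket boundaries away from clock alignment; a sketch assuming the search's earliest time is the desired start point:

```spl
... | bin _time span=30m aligntime=earliest
    | stats count by _time
```

With aligntime=earliest, a search starting at 11:34 AM buckets at 11:34, 12:04, 12:34, and so on. On recent versions timechart accepts aligntime directly (... | timechart span=30m aligntime=earliest count); the bin-plus-stats form is the fallback if yours does not.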
Hello Community, I am encountering an issue where logs are not being received for two regions but are successfully received for another region. Upon further investigation, we didn't observe any errors in splunkd.log, the inputs and outputs configs are in place, and there are no disk space issues either. What could be the possible reason for this, if anyone can help? All our indexers and SHs are hosted in Splunk Cloud.
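One hedged way to narrow this down from the search head is to check whether the affected forwarders are connecting to the Cloud indexers at all (the host filter is a placeholder for an affected forwarder's name):

```spl
index=_internal host=<affected_forwarder> sourcetype=splunkd
    (component=TcpOutputProc OR component=AutoLoadBalancedConnectionStrategy)
| stats count by component log_level
```

If the affected forwarders show no _internal events at all, the problem is upstream of the indexers (network path, certificates, or the forwarder itself) rather than in the indexing tier.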
I'm trying to build an alert that looks at the number of logs from the past three days and compares it to the number of logs from the three days before that. I want to trigger an alert if the log count drops by 30% or more in any period. I've seen this done with search and with _index, but I'm unsure which way is best. I don't want to build almost 100 searches for 100 different source types; I'd much rather do it by the twenty-something indexes. I'm not sure if ML is the right way to do this, but I've seen times when logs stop flowing and it isn't noticed for days, and I want to prevent that from happening. Any help is appreciated.
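A single tstats search can cover all indexes at once, which avoids the per-sourcetype sprawl; a hedged sketch (the 30% threshold and the day alignment are taken from the question, everything else is an assumption to adapt):

```spl
| tstats count WHERE index=* earliest=-6d@d latest=@d BY index _time span=1d
| eval window=if(_time >= relative_time(now(), "-3d@d"), "recent", "previous")
| stats sum(count) AS events BY index window
| xyseries index window events
| where previous > 0 AND (previous - recent) / previous >= 0.30
```

tstats reads the index-time metadata rather than raw events, so this stays cheap even across every index; scheduling it daily and alerting when results are returned gives the "logs stopped flowing" safety net without ML.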
I am using Splunk Enterprise version 9.1.0.1. My search query is:

index="webmethods_prd" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" InterfaceName=USCUSTOMERPO Status=Success OR Status=Failure | eval timestamp=strftime(_time, "%F") | chart limit=30 dc(TxID) over Sender_ID by timestamp

In the result I am getting an incomplete Sender_ID: Splunk cut the value at the space in Sender_ID, but it should actually be the full name. How can I preserve the full Sender_ID here?   Avik
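If the automatic key=value extraction is stopping at the first space, one hedged fix is to re-extract the field with rex before charting; the comma delimiter here is an assumption about the log format and may need adjusting:

```spl
... | rex "Sender_ID=(?<Sender_ID>[^,]+)"
    | chart limit=30 dc(TxID) over Sender_ID by timestamp
```

If the values are instead being truncated in the chart labels rather than the field itself, compare the raw field in a plain table first ( | table Sender_ID ) to confirm where the truncation happens.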
Hello, please, in Splunk Enterprise I would like to know if it is possible to apply INGEST_EVAL processing at the indexer layer for data that is coming to the indexer from HEC (HTTP Event Collector). Thanks.
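In general INGEST_EVAL is wired up the same way for HEC data as for other inputs, via props.conf and transforms.conf on the indexing tier. A sketch under the assumption that the HEC input assigns a known sourcetype (the stanza and field names below are placeholders):

```ini
# props.conf (on the indexer, keyed by the HEC input's sourcetype)
[my_hec_sourcetype]
TRANSFORMS-ingest_eval = add_ingest_fields

# transforms.conf
[add_ingest_fields]
INGEST_EVAL = ingest_epoch=time(), raw_len=len(_raw)
```

One caveat to verify for your case: data sent to the HEC /event endpoint skips some parsing steps compared to /raw, so it is worth testing that the transform fires for the specific endpoint your senders use.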
We are working to link server information to the services in the ServiceNow CMDB. We are looking for an example of how to build the relationships between CIs.
Hello all, while developing my Splunk add-on I've run into a blocker concerning the time picker on search. Currently, changing the time from the default "Last 24 Hours" to any other value has no effect, and the search always returns all of the elements from my KV store. I've read through about a dozen forum threads but haven't found a clear answer to this problem. Any help with which settings/files need to be configured would be appreciated! I am developing on Splunk Enterprise 9.1.3.
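This is expected behavior in one important sense: KV store records have no inherent _time, so inputlookup is never filtered by the time picker automatically. A hedged sketch of the usual workaround, assuming the collection stores an epoch timestamp field (here called event_epoch, a placeholder name):

```spl
| inputlookup my_collection
| eval _time=event_epoch
| addinfo
| where _time >= info_min_time AND (info_max_time="+Infinity" OR _time <= info_max_time)
```

addinfo exposes the search's time range as info_min_time/info_max_time, so the where clause applies the picker's bounds manually; without a stored timestamp field in the collection there is nothing for the time picker to filter on.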
I created a table that outputs the organization, threshold, count, and response time. If the response time is greater than the threshold, I want the response time value to turn red. However, the threshold is different for every organization. Is there a way to dynamically set the threshold on the table so that the Response Time column turns red based on its respective threshold?

For example, organization A will have a threshold of 3, while organization B will have a threshold of 10. I want the table to display all the organizations, the count, the response time, and the threshold.

index | stats count as "volume" avg as "avg_value" by organization, _time, threshold

Kindly help.
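Simple XML color formats can only look at a single cell's value, not at another column, so one hedged workaround is to encode the per-row comparison into the cell itself and color on that marker; field names follow the question, and avg(response_time) is an assumption about the underlying field:

```spl
index=your_index
| stats count AS volume avg(response_time) AS avg_value BY organization threshold
| eval avg_value=if(avg_value > threshold, round(avg_value,2)." !", tostring(round(avg_value,2)))
```

A <format> element with an expression-type colorPalette can then match the "!" marker (e.g. match(value, "!")) and render those cells red; the alternative is a custom JS table renderer that reads the threshold column directly.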