All Topics

Does anyone know why we are getting such errors for a few of our DB inputs? Is there a setting somewhere to increase the number of HEC inputs on this HF for DBX usage?
I am querying a change in a value each week over the last 4 weeks. I need to know the value from the week before the search window to work out the change correctly.

index=ind sourcetype=src (type=instrument) earliest=-5w@w+1d latest=@w+1d
| bucket _time span=7d
| stats max(reading) as WeekMax by _time
| streamstats current=f last(WeekMax) as LastWeekMax
| eval WeekDelta = WeekMax - LastWeekMax
| eval WeekDelta = if(WeekDelta < 0, 0.000000, WeekDelta)
| table _time, WeekMax, WeekDelta

I don't want to show the row for the week before the search window (the -5th week). Any tips on how to change this query so it only shows results for the last 4 weeks while still calculating the change correctly? Thanks
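A minimal sketch of one way to hide that leading week: the -5th week is the only bucket with no previous week, so its LastWeekMax is null after streamstats, and filtering on that field drops it (this assumes the -5th week always has data; if it can be empty, the first in-window week would be dropped instead):

index=ind sourcetype=src (type=instrument) earliest=-5w@w+1d latest=@w+1d
| bucket _time span=7d
| stats max(reading) as WeekMax by _time
| streamstats current=f last(WeekMax) as LastWeekMax
| where isnotnull(LastWeekMax)
| eval WeekDelta = WeekMax - LastWeekMax
| eval WeekDelta = if(WeekDelta < 0, 0.000000, WeekDelta)
| table _time, WeekMax, WeekDelta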
I basically have the exact same question as https://community.splunk.com/t5/Dashboards-Visualizations/How-to-have-a-panel-use-an-offset-from-a-time-picker/m-p/351003. BUT I need to actually change the value in the timerange picker token. E.g. if I select a time range of "Last 4 hours" and my modification is to add an hour, then $token_time.earliest$ should not be "-4h" but "-5h".
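One possible approach, sketched rather than definitive: let a hidden search resolve the picker token into epoch times with addinfo, shift the resolved start with relative_time, and publish the result as a new token for the panels. The token name and the "-1h" offset below are illustrative assumptions:

<search>
  <query>| makeresults | addinfo | eval adj_earliest=relative_time(info_min_time, "-1h")</query>
  <earliest>$token_time.earliest$</earliest>
  <latest>$token_time.latest$</latest>
  <done>
    <set token="adjusted_time.earliest">$result.adj_earliest$</set>
  </done>
</search>

Panels can then use $adjusted_time.earliest$ for their earliest time while the picker itself stays unchanged.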
Even though I am providing accurate inputs, the Speakatoo API is not working as expected for me. Seeking assistance to resolve this issue.
Hi, I want to schedule a Splunk alert; please let me know if the below is possible:
1. When the first alert is received for an xxx error, the query should check whether this is the first occurrence of the error in the last 24 hours; if yes, an alert email can be triggered. It should not wait for count > 15 in an hour; it should trigger an email immediately.
2. If the error is not the first occurrence, then based on a threshold we should send only one email for more than 15 failures in an hour or so; basically a Splunk alert for the xxx error with the threshold: trigger when count > 15 in the last 1 hour.
Please help with this.
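One way this might be split into two scheduled alerts, sketched with placeholder index, sourcetype, and error-string values. The first runs frequently and fires only when the current event is the sole occurrence in the last 24 hours:

index=your_index sourcetype=your_sourcetype "xxx error" earliest=-24h
| stats count
| where count=1

The second is the hourly threshold alert, firing when failures exceed 15 in the last hour:

index=your_index sourcetype=your_sourcetype "xxx error" earliest=-1h
| stats count
| where count>15

Both would be saved with the trigger condition "number of results > 0"; throttling on the second one prevents repeated emails within the hour.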
Hi SMEs, hope you are doing great. I am curious how to check the daily data consumption (GB/day) from a specific Heavy Forwarder using a Splunk search, when there are multiple HFs in the deployment. Thanks in advance.
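A sketch of one common approach, assuming the receiving tier's metrics.log is searchable: sum the tcpin_connections throughput on the indexers by sending host (your_hf_hostname is a placeholder for the heavy forwarder you want to measure):

index=_internal source=*metrics.log* group=tcpin_connections
| eval sender=coalesce(hostname, sourceHost)
| search sender="your_hf_hostname"
| timechart span=1d sum(kb) as KB
| eval GB=round(KB/1024/1024, 2)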
Please help me work out the time format for the string below in props.conf. I am confused by the last three patterns (533+00:00).

2023-12-05T04:21:21,533+00:00

Thanks in advance.
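A sketch for the props.conf time settings, assuming the timestamp sits at the start of the event (the stanza name is a placeholder): 533 is milliseconds, matched by %3N, and +00:00 is a timezone offset with a colon, matched by %:z:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S,%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 30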
Hi All,

Problem description: Search Head physical memory utilization is increasing 2% per day.

Instance deployment: Running Splunk Enterprise version 9.0.3 with 2 un-clustered Search Heads. The main SH with this issue has 48 CPU cores | physical memory 32097 MB | search concurrency 10 | CPU usage 2% | memory usage 57% | Linux 8.7. It is used to search across a cluster of 6 indexers.

I've had Splunk look into it; they reported this could be due to an internal bug fixed in 9.0.7 and 9.1.2 (Jira SPL-241171). The actual bug fix is in the following Jira: SPL-228226: SummarizationHandler::handleList() calls getSavedSearches for all users, which uses a lot of memory, causing OOM at Progressive. A workaround of setting do_not_use_summaries = true in limits.conf did not fix the issue. The splunkd server process seems to be the main component increasing its memory usage over time. A Splunk process restart lowers memory usage, which then trends upward again at a slow rate.

If anyone could share a similar experience so we can validate the Splunk support solution of upgrading to 9.1.2 based on the symptoms described above, it would be appreciated. Thanks
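For anyone wanting to watch which process is actually growing, a minimal sketch using Splunk's own resource-usage introspection data (the host name is a placeholder; field names follow the splunk_resource_usage sourcetype):

index=_introspection host=your_search_head sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=1h max(data.mem_used) as mem_used_mb by data.process_type

Trending max(data.mem_used) per process type over several days should show whether the growth follows searches, the main splunkd daemon, or something else.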
Hi Team, we're encountering a problem with the Incident Review History tab in Splunk ES. Clicking on Incident Review, then a specific notable (like 'Tunnelling Via DNS'), followed by History, and then clicking 'View all review activity for this Notable Event' results in an empty history being displayed for all notables. Any leads on this would be highly appreciated. Note: we recently upgraded Splunk ES from 7.0.0 to 7.1.2. Regards, VK18
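A quick sanity check, offered as a sketch (assumes the default ES lookup name): the History tab is driven by the incident review audit data, so confirming the backing lookup still returns rows helps separate a data problem from a post-upgrade UI problem:

| inputlookup incident_review_lookup
| sort - time
| head 10

If rows come back here but the tab is still empty, the issue is more likely in the view itself than in the stored review activity.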
When I apply ingest actions and specify the host field with an IP address, it works fine. But when I try to use _raw and, for instance, filter on "Teardown ICMP connection", the preview shows the affected events; yet when I check hours or days later, Splunk is still ingesting the messages I filtered using _raw as the field.
I'm using current Splunk Cloud. It appears the older "Splunk Add-on for AWS" can stream in CloudWatch log-group data through Inputs > Custom Data Type > CloudWatch Logs. This asks for comma-separated log groups to feed from, and presumably sets up ingest for them. Data Manager has a CloudWatch Logs section, but it appears to only cover:

AWS CloudTrail
AWS Security Hub
Amazon GuardDuty
IAM Access Analyzer
IAM Credential support
Metadata (EC2, IAM, Network ACLs, EC2 security groups)

Am I just missing something in Data Manager? Does it support ingesting CloudWatch log groups? Should I use the "Splunk Add-on for AWS"? Or should I forgo both and instead use the Splunk log driver with the container tasks as per https://repost.aws/knowledge-center/ecs-task-fargate-splunk-log-driver (posted a year ago)? Thank you!
I'm trying to have a timechart showing the count of events by a category, grouped by week. The search time is controlled by a radio button on the dashboard, with options from 1 to 12 weeks and the end date set to @w. I then have a drilldown that shows a table with more info about each event for that category in that time range.

mysearch ....
| dedup case_id
| timechart span=1w count by case_category

The chart looks fine, but when I click on certain sections to load the drilldown, much more data appears than the count in the timechart suggested. For instance, looking at Nov 19-25, the timechart shows 26 events, but the drilldown shows 61. When I open the drilldown search in Search, the issue seems to involve expanding the time range beyond one week. If I change the range from Nov 19-25 to Nov 19-27, the data from Nov 22-24 is either erased or reduced.

Nov 19-25 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 19 **
Nov 23: 20 **
Nov 24: 1 **
Nov 25: null

Nov 19-28 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 5 **
Nov 23: null **
Nov 24: null **
Nov 25: null
Nov 26: null
Nov 27: 35
Nov 28: 1
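One likely cause, sketched under the assumption that the same case_id appears on multiple days: dedup case_id keeps only the first event the search encounters per case, so changing the time range changes which occurrence survives and shifts counts between days. Collapsing each case to one deterministic row first, e.g. its earliest occurrence, should make the timechart and the drilldown agree:

mysearch ....
| stats earliest(_time) as _time earliest(case_category) as case_category by case_id
| timechart span=1w count by case_category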
Greetings Community! I have a question regarding the Splunk Cloud License (classic), particularly around what happens when the license expires.
- Is there a message informing us that the license is about to expire?
- After the expiration date, is there any grace period provided?
- If I decide not to renew the license, can we somehow download the company data before its total removal? Or do I lose all indexed data once my license expires?
Thanks in advance for any information on this matter. Kind Regards, Marcelo
I want to run an Enrichment playbook inside a custom function. I'm looking to pass in a list of devices and call the playbook once per device, passing in a single deviceId each time. What is the best way to do this?
I am getting the error message "WARNING: web interface does not seem to be available!" I just installed Splunk on my Mac.
I am trying to make a query which will give me the count of unique file names, with the date in columns and one-hour time spans in rows. Below is my query:

index="app_cleo_db" origname="GEAC_Payroll*"
| rex "\sorigname=\"GEAC_Payroll\((?<digits>\d+)\)\d{8}_\d{6}\.xml\""
| search origname="*.xml"
| eval Date = strftime(_time, "%Y-%m-%d %H:00:00")
| eval DateOnly = strftime(_time, "%Y-%m-%d")
| transaction DateOnly, origname
| timechart count by DateOnly

But it is giving me an output with the date as well as the timestamp in the rows, like below:

_time | 2023-12-02 | 2023-12-03
2023-12-02 00:00:00 | 8 | 0
2023-12-02 00:30:00 | 0 | 0
2023-12-02 01:00:00 | 0 | 7
2023-12-02 01:30:00 | 0 | 0
2023-12-02 02:00:00 | 6 | 0
2023-12-02 02:30:00 | 0 | 0
2023-12-03 00:00:00 | 2 | 0
2023-12-03 00:30:00 | 0 | 5
2023-12-03 01:00:00 | 0 | 0
2023-12-03 01:30:00 | 0 | 20
2023-12-03 02:00:00 | 0 | 0
2023-12-03 02:30:00 | 34 | 0

I want the result to look like this:

_time | 2023-12-02 | 2023-12-03
00:00:00 | 0 | 0
01:00:00 | 0 | 0
02:00:00 | 0 | 0
03:00:00 | 0 | 0
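A sketch of one way to get hour-of-day rows with date columns: timechart always keys rows off the full _time value, so switching to chart over an hour-of-day string may give the desired shape. Replacing transaction with dedup, so each unique file name counts once per day, is an assumption about the intent here:

index="app_cleo_db" origname="GEAC_Payroll*.xml"
| eval HourOfDay = strftime(_time, "%H:00:00")
| eval DateOnly = strftime(_time, "%Y-%m-%d")
| dedup DateOnly, origname
| chart count over HourOfDay by DateOnly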
Hi All, I have a Splunk search query executing in the background (I used the "Send to background" option). While it was running, my VPN got disconnected; after some time I reconnected to the VPN and the query is still running in the background. My question is: will it give me complete results or incomplete results? Thanks
Hi, I am trying to implement a glass table for one of our use cases. The use case has a complex architecture, but it seems I don't have much choice: there is only a simple arrow. I need a flexible option so I can bend the arrow or have multiple staggered arrows. I tried joining multiple arrows, but it's very difficult and time-consuming, as a small change requires adjusting multiple arrows. Just looking for options: is there a content pack or a better way to connect services in a glass table? This is just a simple example; my use case is far more complex.
Got a search like this (I've obfuscated it a bit):

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host, "^.*\.device\.mycompany\.com$")

Got a great-looking stats table, and I'm really pleased with the performance of tstats. Awesome. I want to graph the results... easy, right? Well, no: I cannot for the life of me break down a, say, 60-minute span by host, despite the fact that I've got this awesome, oven-ready, totally graphable stats table. So I am trying:

| tstats count where index IN (index1, index2, index3) by _time, host
| where match(host, "^.*\.device\.mycompany\.com$")
| timechart count by host

but the count is counting the hosts, whereas I want to "count the count". Any ideas? This will be a super simple one, I expect; I've got a total mental block on this.
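A sketch of the usual fix: each row coming out of tstats already carries a count field, so the timechart should sum that field rather than count rows. Bucketing _time inside tstats with span=1h (an addition to the original search) keeps the rows aligned with the timechart span:

| tstats count where index IN (index1, index2, index3) by _time span=1h, host
| where match(host, "^.*\.device\.mycompany\.com$")
| timechart span=1h sum(count) as count by host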
I have installed a Splunk Enterprise free trial in a VM as the root user. I know the best practice is to avoid running Splunk as root, in case the underlying OS gets compromised and the attacker gains root-level access. I am following the docs online, and they say that once you install Splunk as root, don't start Splunk yet; instead, add a new user and then change ownership of the Splunk folder to that new non-root user. But before I do that: when Splunk is installed, I checked its ownership and it is already set to splunk. Does this mean Splunk has already configured a non-root user automatically upon installation? If so, how would I make sure it has read access to the local files I want to monitor?