All Posts



Hi @phularah, the Forwarder license was created exactly for this purpose: to input, parse, and forward data without local indexing. You don't need a Splunk Enterprise license on the HF; you only need one on the Indexers. Ciao. Giuseppe
Hi @Dharani, yes it's possible. You should: set up an action so that when the first alert is triggered, it writes an event to a summary index (better) or to a lookup; then add a condition to your alert that checks, by reading from the summary index, that the alert wasn't already triggered. If you have Enterprise Security, you don't need the summary index and you can use the Notable index instead. Ciao. Giuseppe
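A rough SPL sketch of the suppression condition described above, assuming a hypothetical summary index named summary and a marker value search_name="xxx_error_alert" (both are placeholders to adapt; the alert's trigger action would write the marker event, e.g. with | collect index=summary):

```
<your base search for the xxx error>
| stats count AS errors
| appendcols
    [ search index=summary search_name="xxx_error_alert" earliest=-24h
      | stats count AS already_fired ]
| fillnull value=0 already_fired
| where errors > 0 AND already_fired=0
```

With "trigger when number of results > 0", this fires only when errors occurred and no marker event was written in the last 24 hours.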
Hi, I want to schedule a Splunk alert; please let me know if the following is possible: When the first alert is received for an xxx error, the query should check whether this is the first occurrence of the error in the last 24 hours; if yes, the alert email can be triggered. If it is not the first occurrence, then based on a threshold we should only send one email for more than 15 failures in an hour or so. The second point is basically to set up a Splunk alert for the xxx error with a threshold: trigger when count > 15 in the last 1 hour. The first point is: when the first occurrence of the error arrives, it should not wait for count > 15 over 1 hour; it should immediately trigger an email. Please help with this.
Hi, the document was clear. I need to know whether AppDynamics is capturing the Windows service application [with error transactions] or not; I am unable to get the error transaction details. Thanks in advance.
@Amolsbhalerao You may have a stale splunk.pid file that is not being cleared on startup; this can prevent the splunkd process from starting. If that is the case, make a backup copy of the splunk.pid file in the /splunkforwarder/var/run/splunk directory, remove the original, and start Splunk again. Run the ps -ef | grep splunk command to verify the process is running.
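The steps above might look like this on the forwarder host (a sketch: paths assume a default install under /opt/splunkforwarder, so adjust to your layout):

```shell
# Back up and clear the stale PID file, then restart the forwarder
cd /opt/splunkforwarder/var/run/splunk
cp splunk.pid /tmp/splunk.pid.bak    # keep a backup copy
rm splunk.pid                        # remove the stale PID file
/opt/splunkforwarder/bin/splunk start
ps -ef | grep '[s]plunkd'            # verify splunkd is running
```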
Thanks guys, changing the settings on the HF solved this issue.
Thanks for the valuable query. A few points here: 1. I am unable to locate my HF under the h field (searched by IP as well as by hostname). 2. How can I put a restriction on a per-day basis, e.g. to create a bar chart of license consumption during the week? 3. Another way I'd like to look at it: I mainly want to calculate data ingestion where the index names share a common prefix, like index="test*", and I found a field, idx, to query the same. However, how do I add up all that data and show it in a graph? 4. Also, I think this is the license usage in GB: | eval licenseGB =round(license/1024/1024/1024,3). Why did you rename it to TB?
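For points 2 and 3, a sketch of a per-day chart filtered by index prefix (this assumes the idx field in license_usage.log holds the index name for type=Usage events; verify the field in your environment):

```
index=_internal source="*license_usage.log" type=Usage idx="test*"
| timechart span=1d sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024,3)
| fields - bytes
```

To break the daily totals down per forwarder as well, replace the timechart with | bin _time span=1d | stats sum(b) AS bytes by _time h.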
@uagraw01 The +00:00 is an offset from UTC. The %z variable should parse this in -/+HHMM (or -/+HH:MM) format.
@azteksites I am still confused by the 00:00 (in the last two patterns).
Hi @pm2012 you can use the following query: index=_internal source="*license_usage.log" type=Usage h="<forwarder name>" | rename _time as Date | eval Date=strftime(Date,"%b-%y") | stats sum(b) as license by Date h | eval licenseGB =round(license/1024/1024/1024,3) | rename licenseGB as TB
You can try the following TIME_FORMAT value to parse the timestamp: TIME_FORMAT = %Y-%m-%dT%H:%M:%S,%3N%z
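As a quick sanity check outside Splunk, the same layout parses with Python's strptime; note that Python uses %f for the fractional seconds where Splunk uses %3N, and Python's %z accepts the +00:00 colon form (Python 3.7+):

```python
from datetime import datetime

# Sample timestamp from the question: %f consumes the ",533" milliseconds
# and %z consumes the "+00:00" UTC offset.
raw = "2023-12-05T04:21:21,533+00:00"
dt = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S,%f%z")
print(dt.isoformat())  # 2023-12-05T04:21:21.533000+00:00
```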
Thanks for the answer. By the way, have you missed %M? It should be like this: %Y-%m-%dT%H:%M:%S,%3Q+00:00
Hi @uagraw01, it seems the ,533 is milliseconds. 2023-12-05T04:21:21,533+00:00 %Y-%m-%dT%H:%S,%3Q+00:00
Hi SMEs, hope you are doing great. I am curious to know how to check the daily data consumption (GB/day) from a specific Heavy Forwarder using a Splunk search, when there are multiple HFs in the deployment. Thanks in advance.
Please help me get the time format for the string below in props.conf. I am confused by the last three patterns (533+00:00). 2023-12-05T04:21:21,533+00:00 Thanks in advance.
I tried this method but it's giving me servers that are monitored @ITWhisperer
Hi All. Problem description: Search Head physical memory utilization is increasing 2% per day. Instance deployment: running Splunk Enterprise version 9.0.3 with 2 un-clustered Search Heads; the main SH with this issue has 48 CPU cores | Physical Mem 32097 MB | Search Concurrency 10 | CPU usage 2% | Memory usage 57% | Linux 8.7. It is used to search across a cluster of 6 indexers. I've had Splunk look into it; they reported this could be due to an internal bug fixed in 9.0.7 and 9.1.2 (Jira SPL-241171). The actual bug fix is in the following Jira: SPL-228226: SummarizationHandler::handleList() calls getSavedSearches for all users, which uses a lot of memory, causing OOM at Progressive. A workaround of changing limits.conf with do_not_use_summaries = true did not fix the issue. The splunkd server process seems to be the main component increasing its memory usage over time. A Splunk process restart lowers and resets the memory usage, but it trends upwards again at a slow rate. If anyone could share a similar experience so we can validate the Splunk support solution of upgrading to 9.1.2 based on the symptoms described above, it would be appreciated. Thanks
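To put numbers on the splunkd growth, one option is to chart its resident memory from the introspection logs. This is a sketch using field names from Splunk's _introspection resource-usage data (data.mem_used is reported in MB; the host value is a placeholder), so verify them in your environment:

```
index=_introspection host=<your_search_head> sourcetype=splunk_resource_usage
    component=PerProcess data.process=splunkd
| timechart span=1h max(data.mem_used) AS mem_used_mb
```

A steady upward slope that resets only on restart would match the leak pattern described above.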
Hi @VK18, if the notable is new and not being worked on by analysts/the concerned team, then you will find empty results. Check a notable that has been changed from New to another status, such as 'In progress' or Closed; you should be able to find the history. ----- Srikanth Yarlagadda
Hi Team, we're encountering a problem with the Incident Review History tab in Splunk ES. Clicking Incident Review, then a specific notable (like 'Tunnelling Via DNS'), then History, and then 'View all review activity for this Notable Event' results in an empty history being displayed for all notables. Any leads on this would be highly appreciated. Note: we recently upgraded Splunk ES from 7.0.0 to 7.1.2. Regards, VK18
Thanks for the tag. Yeah, there is some confusion with the latest release because we added the ability to set a default API key as well as others, and the way we implemented it is causing the issue. Basically we need users to enter a default ORG and default KEY, but we wanted to give them the option to add others as well, so yeah, I need to rework the setup page. The screenshots under the Installation section on GitHub should help clarify: https://github.com/bentleymi/ChatGPT-4-Splunk