All Topics

We have been using WEF as our collection point for a while. We started out small but have expanded the range of events over time. We have ~5,000 hosts forwarding to a single collector. The collector is busy, but seems to be healthy based on conventional Windows indicators. However, we have some data loss between the centralized event collector and Splunk (Cloud). Logs show up in the WEF collection log but never make it to the index. First, are there any performance tuning suggestions you can offer for the UF on a WEF collector? Second, can you think of any way to check the processing of a single event once it enters the UF and heads to the indexer?
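A common first step for a busy WEF collector is to lift the UF's default throughput cap and, if CPU headroom allows, add a second ingestion pipeline. Both are standard UF settings, though the values below are only illustrative:

```
# limits.conf on the collector's UF -- the default 256 KBps cap is a
# frequent bottleneck on high-volume hosts (0 = unlimited)
[thruput]
maxKBps = 0

# server.conf -- a second pipeline can help if the host has spare cores
[general]
parallelIngestionPipelines = 2
```

For tracing a single event, there is no end-to-end per-event ID, but enabling indexer acknowledgment (useACK in outputs.conf) at least confirms delivery, and the UF's metrics.log in index=_internal shows queue blocking if events are backing up before they leave the host.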
Is there any way we can add a filter to a subsearch saved search so that we won't skip any data/records, since it's limiting the events? E.g., I am using a savedsearch under the join command, but it's limiting the data. Thanks in advance.
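For reference, the join subsearch limits live in limits.conf: by default a join subsearch is truncated at 50,000 rows and 60 seconds. You can raise them (values below are illustrative), but restructuring the search with stats over a shared field usually avoids the cap entirely:

```
# limits.conf (raise with care; values illustrative)
[join]
subsearch_maxout = 200000
subsearch_maxtime = 300
```

A stats-based alternative, assuming both datasets share a field such as id, would be `(search A) OR (search B) | stats values(*) as * by id`, which has no subsearch row limit.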
The "where" command seems to check only one condition; it doesn't work like this in my search: . . . . | where NOT (id_old = id OR user = username). But if the conditions are checked separately, everything works correctly. Help, please.
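One common cause of this symptom: in where, an unquoted name on the right-hand side is treated as a field, not a string. So both comparisons below are field-to-field; if you actually meant to compare against literal text, the literal must be quoted (field names here are the ones from the question):

```
| where NOT (id_old = id OR user = username)    <- username is read as a field name
| where NOT (id_old = id OR user = "username")  <- "username" is a literal string
```

Also, if one of the fields is missing from an event, the whole expression evaluates to null and the event is filtered out, which can make it look like only one condition is being checked.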
Client's F5 Load Balancer is writing data to our Splunk Syslog Heavy Forwarder, but when searching from the Splunk Search Head the data is incomplete/missing. We did a packet capture (tcpdump) on the Syslog server from the F5 Load Balancer and copied the syslog-ng output for the F5 host. Our assumption is that the Syslog server is receiving all the syslog messages sent from the F5 host, but syslog-ng is not writing all of them to file. In the packet capture, the Syslog server received 800+ syslog messages, but only wrote 68 syslog messages to file. Any suggestion as to why this is happening? Or any suggestion on how to troubleshoot this issue?
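A frequent culprit is syslog-ng dropping messages when a destination queue overflows under bursts; the counters from `syslog-ng-ctl stats` (the "dropped" column) will confirm it. Enabling flow control and enlarging the destination queue is a common first step. These are standard syslog-ng options, though the names, paths, and values below are illustrative:

```
# syslog-ng.conf (sketch; adjust source/filter/destination names to your config)
destination d_f5 {
    file("/var/log/remote/$HOST/messages.log" log-fifo-size(20000));
};
log {
    source(s_net);
    filter(f_f5);
    destination(d_f5);
    flags(flow-control);   # apply back-pressure (TCP only) instead of dropping
};
```

Note that flow control only helps if the F5 sends over TCP; with UDP, loss can also occur in the kernel socket buffer before syslog-ng ever sees the packets.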
Hello, I am working with a large form that essentially takes inputs and creates a record of a scan. The part I am focusing on is a set of two inputs that I pass as tokens into a search that runs a collect function at the end. The problem I am running into is that after I enter a value in the first input and tab into the next input, it seems the function runs. This causes the second input not to be collected in the data unless I quickly enter a value and tab out of the input box. If I can do it quickly, the value gets passed. It will not pass a value if the second input has been entered but still has focus. Is there any way to halt the collect function until both inputs are entered?
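In Simple XML, one way to guarantee both tokens are set before the collect search fires is a submit button with autoRun disabled and searchWhenChanged off. A sketch with hypothetical token and index names:

```
<form>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="first_value" searchWhenChanged="false"/>
    <input type="text" token="second_value" searchWhenChanged="false"/>
  </fieldset>
  <search>
    <!-- runs only after Submit is clicked, so both tokens are populated -->
    <query>| makeresults | eval a="$first_value$", b="$second_value$"
           | collect index=scan_records</query>
  </search>
</form>
```

This way the search never runs on a partially filled form, regardless of how quickly the user tabs between inputs.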
What is the difference between using the Spool vs. OneShot CLI commands? Unfortunately I'm unable to install UFs or directly poll the logs, and I need to index tar.gz archives. Is there a performance benefit? Does using spool allow the Splunk indexer to index the data in the background?
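Both hand a file to Splunk for one-time ingestion; the practical difference is foreground vs. background. `add oneshot` holds the CLI while the file is read, whereas a file dropped into the spool directory is picked up asynchronously by the built-in batch input (and deleted after indexing). The index/sourcetype names below are illustrative:

```
# Foreground: CLI call, the file is read immediately
splunk add oneshot /tmp/logs.tar.gz -index myindex -sourcetype mylogs

# Background: the batch input watching $SPLUNK_HOME/var/spool/splunk
# indexes the file when it gets to it, then removes it
cp /tmp/logs.tar.gz $SPLUNK_HOME/var/spool/splunk/
```

So for bulk loads of many archives, spool behaves more like a queue, while oneshot gives immediate per-file feedback.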
Hi there, I am planning to move our frozen bucket location from a local drive to a share on another server, and I just have a few questions regarding this. Is it as simple as editing the indexes for this, and will a UNC path work OK if the permissions are set, or must it be a mapped local drive? Thanks in advance!
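In principle it is just the coldToFrozenDir setting per index in indexes.conf. A UNC path can work, but only if splunkd runs as a domain account with rights to the share (Local System generally cannot reach network shares). A sketch with an illustrative server/share name:

```
# indexes.conf (path illustrative)
[myindex]
coldToFrozenDir = \\archive-server\splunk_frozen\myindex
```

A restart is needed for the change to take effect, and it's worth verifying that splunkd can actually write to the share before old buckets start freezing.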
Hi, I was just curious whether the Splunk Universal Forwarder has any dependency on JRE/JDK, as I am planning to upgrade JRE/JDK on my Windows machines. If there is a dependency, how do I go about performing the upgrade? Would I have to stop the Splunk Universal Forwarder service first before upgrading, or can I just upgrade?
Hey all! I've inherited a Splunk instance that has been running for about 8 years now. There are instances of Splunk_TA_windows all over it - most are 4.8.3, but a couple are 8.0.0 and 8.1.2. (The overall Splunk instance is currently running 7.2.)

In the process of investigation, I discovered that our Active Directory domain controllers had Universal Forwarders installed on them using the GUI installer. They were set to collect Windows event logs, but no other configuration was made. As a result, a ton of logging is flowing into our "main" index; in fact, the only thing in the inputs.conf file is the IP address of the host. Thanks to the help and pointers of many, I've determined that this is definitely "not good" and that I should instead have some filters/blacklists in place. I've gotten the controllers in question hooked up to our deployment server, so I want to push some apps to them via that.

My question is: should I deploy the entire Splunk_TA_windows app to the domain controllers? Or should I just push custom apps that contain the filtering/settings I want, and leave Splunk_TA_windows to the Heavy Forwarders, Indexers, and Search Heads we plan on using? Or should I do both?

I've consulted a few other resources, such as:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Is-it-a-best-practice-to-use-the-Splunk-Add-on-for-Microsoft/td-p/427679 (best practice to use Splunk_TA_windows)
https://docs.splunk.com/Documentation/WindowsAddOn/8.1.2/User/AbouttheSplunkAdd-onforWindows (deploy and use documentation)
https://www.splunk.com/en_us/blog/tips-and-tricks/working-with-active-directory-on-splunk-universal-forwarders.html (working with AD on Splunk Universal Forwarders)

Digging around, I'm seeing that some Windows logging is already being put into the "ActiveDirectory" sourcetype, but not from any configuration I can find applying to the system, so I assume Splunk is just recognizing them as AD events.

My biggest concern is that I want to build a "baseline" that is easy to maintain going forward. I know from my Data Admin training that deployed add-ons are evaluated in reverse-lexicographical order (i.e., "Splunk_TA_Windows" has lower priority than "institution_windows_core"), so I should be able to stack things... but again, I just want to make sure I'm following what people recommend. (I may also be using this forum as a "rubber ducky" situation.)
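For what it's worth, a common pattern matching the stacking approach described above is: deploy the unmodified Splunk_TA_windows everywhere its inputs and extractions are needed, plus a small local app (named so it wins precedence) that carries only your index assignments and blacklists. A sketch using the standard WinEventLog blacklist syntax - the EventCodes and index name are purely illustrative:

```
# institution_windows_core/local/inputs.conf
[WinEventLog://Security]
index = wineventlog
# drop high-volume noise (EventCodes illustrative -- pick your own)
blacklist = 4648,4658,4690
```

Keeping the overrides out of Splunk_TA_windows itself means TA upgrades never clobber your baseline.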
Hello All, I have created a couple of correlation searches and ensured that "Notable" is selected under the Adaptive Response section of these searches so that they create notables, but they are still not visible in the drop-down list of the Incident Review dashboard. When I run the searches manually, they haven't yet produced any events or results because a matching event hasn't yet occurred - but shouldn't their names at least be visible in Incident Review if they are enabled? Do I need to wait for the searches to produce an event, and only then will they populate in IR? I have made sure that the lookup file these searches use is set to Global permissions.
https://docs.splunk.com/Documentation/DashApp/0.9.0/DashApp/chartsImage says: "When you upload an image, it is stored in the KV store. Because of this, only Enterprise admins, Cloud sc_admins, and power users can upload or delete images. If you don't have the correct role assigned to upload images, you can ask someone with the correct role to add it for you." Is there some permission I can assign to users to allow them to upload images without asking an admin/power user?
After upgrading to 8.2, it seems there are over 600 fixup tasks in the Cluster Master. We have a Cluster Master and four indexers - two sites with two indexers each. The Cluster Master was placed into maintenance mode during the upgrade; we upgraded one site at a time while pointing the forwarders to the indexers that were not being upgraded, then swapped for the second site. There are 633 fixup tasks, and they have been there for about a day now. Manually selecting "roll bucket", and trying to roll the buckets with the splunk _internal call command on the indexers, did nothing either. What can I do to resolve this?
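As a starting point for diagnosis, the cluster master exposes the pending fixups with their reasons over REST, which usually shows whether buckets are waiting on the replication or search factor (run this on the CM; the level argument is one of the documented fixup levels, and the field names may vary by version):

```
| rest /services/cluster/master/fixup level=search_factor
| table bucket reason initial.timestamp
```

It's also worth confirming maintenance mode was actually lifted after the upgrade (`splunk show maintenance-mode` on the cluster master), since fixup activity is deliberately suppressed while it is on.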
Hi All, I have an application running on localhost; it's basically an API. When I trigger a request using Postman, the application connects to Azure and downloads a file. Can this process be monitored by Splunk such that I can see the requests and responses as logs in Splunk Enterprise? Basically, I need the logs of the requests and responses of the application running on my localhost. Which Splunk feature supports this kind of functionality?
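Splunk doesn't capture HTTP traffic off the wire; the usual approach is to have the application write its request/response details to a log file and let Splunk monitor that file (via a Universal Forwarder on the same machine, or a monitor input on a local Splunk Enterprise install). The path, index, and sourcetype below are hypothetical:

```
# inputs.conf (path/index/sourcetype hypothetical)
[monitor://C:\myapi\logs\requests.log]
index = api_logs
sourcetype = myapi:requests
```

If the app can't easily log to a file, the HTTP Event Collector (HEC) is the other common route: the app posts its request/response records directly to Splunk's HEC endpoint.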
Hello Everyone! I am currently trying to create a dashboard to show the overall availability of a synthetic test for the current month and compare it to previous months. I looked at Dashboard Studio to do a time comparison but ran into limitations in calculating availability due to the lack of metric expressions. In a standard dashboard, we are finding no way to set widget-specific time ranges for previous months. Any info/ideas would be greatly appreciated.
Hello everyone, I was just curious whether there are any sure-fire best practices for health rules pertaining to business transactions. Since I started using AppD, I have been using 2 standard deviations as a warning and 3 standard deviations as a critical. Of course, every app is different and there are nuances. I was just wondering if anyone has a good rule of thumb they like to use, or if current best practices are documented anywhere? Thanks
I am looking for an app that can help create a full picture of the organization (domains such as network/endpoint/email/firewall). We are collecting the data from different sources (syslog, ArcSight, apps).
I upgraded from 7.2 to 8.0 and then from 8.0 to 8.2. After the upgrade to our distributed deployment, I am getting bombarded with email health alerts:

"sum_top3_cpu_percs__max_last_3m" is red due to the following: "Sum of 3 highest per-cpu iowaits reached red threshold of 15"
"avg_cpu__max_perc_last_3m" is red due to the following: "System iowait reached red threshold of 3"
"single_cpu__max_perc_last_3m" is red due to the following: "Maximum per-cpu iowait reached red threshold of 10"

I was getting them on my indexers yesterday, but this morning it seems to be our Enterprise Security SH, our deployment server, and our regular search head. I am unable to disable these alerts due to our company's policy. What can I do to either a) resolve this CPU/iowait issue or b) change the alert settings? I don't notice a difference in performance; I'm just curious what's causing this CPU usage spike. Because it seems to me - as in the example of avg_cpu__max_perc_last_3m - that if the CPU usage is above 3%, it is going to alert me?
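On the second option: these thresholds are set per health-check feature in health.conf, so they can be raised rather than the alerts disabled. Note that the indicators measure iowait percentage (CPU time spent waiting on disk I/O), not total CPU usage, which is why the default red thresholds look so low. Values below are illustrative:

```
# health.conf (values illustrative; raises thresholds instead of disabling)
[feature:iowait]
indicator:avg_cpu__max_perc_last_3m:yellow = 5
indicator:avg_cpu__max_perc_last_3m:red = 10
indicator:single_cpu__max_perc_last_3m:red = 20
indicator:sum_top3_cpu_percs__max_last_3m:red = 30
```

Frequent iowait alerts after an upgrade are often worth a look at disk latency on the affected hosts before tuning the thresholds away.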
Hello All, we have data coming into Splunk via HEC ingestion, and I need help extracting fields, either at search time or index time. We need line breaking and field extractions. Below is a sample:

INFO 2021-10-27 07:31:00,004 [[MuleRuntime].io.4090: [bcom_membermasterbatch1].schedulerjobstatusFlow.BLOCKING @7a0bb47e] d4fff913-36f7-11ec-ba0c-11010ad55507org.mule.extension.jsonlogger.JsonLogger: {
"correlationId" : "e4ggf523-27h7-11ec-ba0c-33333ad55333",
"message" : "no key retrived",
"tracePoint" : "START",
"priority" : "INFO",
"elapsed" : 0,
"locationInfo" : {
"lineInFile" : "222",
"component" : "json-logger:logger",
"fileName" : "schedulerjobstatus.xml",
"rootContainer" : "schedulerjobstatusFlow"
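Assuming every event starts with the log level and timestamp (as in the sample), a props.conf sketch for line breaking and timestamping might look like this. The sourcetype name is hypothetical, and the extraction simply captures the embedded JSON for search-time use:

```
# props.conf (sourcetype name hypothetical)
[mule:jsonlogger]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=(?:INFO|WARN|ERROR|DEBUG)\s\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^\w+\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
# search-time: pull the JSON body into one field (field name hypothetical)
EXTRACT-json_payload = (?<json_payload>\{.*)
```

From there, `| spath input=json_payload` at search time extracts correlationId, tracePoint, etc., without any index-time work.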
| eval _raw = msg
| rex "InputAmountToCredit\"\:\"(?<PayloadAmount>[^\"]+)"
| rex "Request\#\:\s*(?<ID1>\d+) with (?<Status>\w+.\w+)"
| rex "CRERequestId\"\:\"(?<ID2>[^\"]+)"
| eval ID=coalesce(ID1, ID2)
| stats latest(Status) as Status values(PayloadAmount) as Amount by ID
| stats count by Status
| eval _time=relative_time(now(), "-1d@d")