All Posts

I have an inputlookup that has a list of pod names that we expect to be deployed to an environment. The list looks something like:

pod_name_lookup,importance
poda,non-critical
podb,critical
podc,critical

We also have data in Splunk that gives us pod_name, status, and importance. Results from the search below would look like this:

index=abc sourcetype=kubectl | table pod_name, status, importance

pod_name               status    importance
poda-284489-cs834      Running   non-critical
podb-834hgv8-cn28s     Running   critical

Note that podc was not found.

I need to be able to compare the results from this search to the list from the inputlookup and show that podc was not found in the results and that it is a critical pod. I also need to count how many critical and non-critical pods are not found, as well as table the list of missing pods.

I have tried several iterations of searches but haven't come across one that allows me to compare a search result to an inputlookup using a partial match. eval result=if(like(pod_name_lookup...etc is close, but like() requires a literal pattern rather than the wildcard value of a field. Thoughts?
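One possible approach (a sketch only) is to start from the lookup so that expected-but-missing pods survive the comparison, and derive a base pod name from the running pods with rex. The lookup file name pod_name_lookup.csv and the rex that strips everything after the first hyphen are assumptions based on the sample data above:

| inputlookup pod_name_lookup.csv
| rename pod_name_lookup AS pod_base
| eval expected=1
| append
    [ search index=abc sourcetype=kubectl
      | rex field=pod_name "^(?<pod_base>[^-]+)"
      | stats count AS found by pod_base ]
| stats values(importance) AS importance, max(expected) AS expected, max(found) AS found by pod_base
| where expected=1 AND isnull(found)
| table pod_base, importance

Appending | stats count by importance after the where clause would give the count of missing critical vs. non-critical pods.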
You could run a stats count by query. Queries that are found by both detections will have count=2, while queries that are found by only one will have count=1. Then you can filter for count=1 to remove the hundreds of queries that are found by both detections.

| stats count by query
| where count=1
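As a sketch of the full pipeline under that approach (the two detection searches are placeholders for whatever produces the query field in each detection, and the dedup guards against a single detection emitting the same query twice):

<first detection search>
| dedup query
| append
    [ search <second detection search>
      | dedup query ]
| stats count by query
| where count=1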
You could do that, if it gives you what you need.
It appears that the Fortinet FortiWeb Add-On receives the data from a UDP data input. The instructions on the Splunkbase page describe how to set a syslog log export configuration on FortiWeb. You could install this app on your indexers or a heavy forwarder to receive the logs directly from your FortiWeb device(s), but it's generally better to have a separate syslog server to collect logs rather than rely on Splunk's udp input. Your current log pipeline looks good. You could then install this app on your indexer tier so that the indexers perform index-time operations on the logs after receiving them from your syslog server. This app can also go on your search head to provide macros, eventtypes, and other knowledge objects used for searching. Because the app does not have any input configurations, it does not make sense to install it on a universal forwarder.
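For reference, the two ingestion options described above might look roughly like the following inputs.conf stanzas on the receiving Splunk instance. The port, paths, index, and sourcetype are placeholders; check the add-on's documentation for the sourcetype it actually expects:

# Option 1: Splunk listens for syslog directly on a UDP input (simpler, less resilient)
[udp://514]
sourcetype = <fortiweb_sourcetype>
index = fortiweb
connection_host = ip

# Option 2: a syslog server (rsyslog/syslog-ng) writes per-device files and Splunk monitors them
[monitor:///var/log/fortiweb/*/fortiweb.log]
sourcetype = <fortiweb_sourcetype>
index = fortiweb
host_segment = 4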
We're so glad you're here! The Splunk Community is a place to connect, learn, give back, and have fun! It features Splunk enthusiasts from all kinds of backgrounds, working at just about every kind of organization, and working in a variety of roles and functions. If you've landed here, you belong here. And we welcome you!

This space is home to several community programs and is supported by both a team at Splunk and a growing group of our community members, including the SplunkTrust. Please connect with any and all of us; we've made it pretty easy to tell who's who by their Ranks and profiles.

Meet the Splunk Community Team! Meet the folks who make up our Community Team! If you ever have any questions, concerns, or just want someone to digitally high-five, we're here for you!

Anam S, Community Manager (Splunk Answers)
Renee W, Sr. Community Manager (SplunkTrust)
Brian W, Sr. Community Technology Manager
Gretchen F, Sr. Content Manager, Community (Blogs & Announcements)
Kara D, Associate Community Manager (User Groups)
Jenny B, Community Specialist (Slack)

Looking for a spot to introduce yourself? Drop us a comment below and let us know where you're joining us from!

To get started...

Have a look around! You can navigate through our community and programs by using the main navigation, and you can learn a little more about specific programs and areas in this post.
Review our Community Guidelines! These spell out some of our expectations and requirements of all community members, so be sure to take a few minutes to review them, and be sure to abide by them.
Ask questions! Splunk Answers is the place to ask questions, get answers, and find technical solutions for any product in the Splunk portfolio.
Join us on Slack! There, you'll have even more opportunities to ask questions, get answers, and connect with your fellow Splunk practitioners.

Again, we're so glad you're here!

-- Splunk Community Team
Are you saying that all the Application logs are not forwarding, or just the application logs for a specific source? There is a known issue with forwarder 9.0.4 where the event logs for Windows Defender will stop forwarding (until the next restart) but other logs will continue to forward. Perhaps this issue is related. https://docs.splunk.com/Documentation/Splunk/9.0.4/ReleaseNotes/KnownIssues Could you try updating your forwarder version to see if that fixes the issue?
Hi All,

We have Windows event and other application logs ingested into Splunk.

There is no problem with the Windows event logs, but for our application-related logs, the logs stop suddenly and then start reporting again, even though the log file in Windows is being continuously updated with recent entries. The modified time does not get updated because of a Windows feature, but the modified time is not the issue: the logs start rolling in even when the modified time stays the same while the log file has the latest entries.

We are currently using Splunk forwarder version 9.0.4. Can someone please help in triaging this issue? It is a problem with only one specific source on this Windows host; other sources (Windows event logs) are flowing in properly.
It needs the same access as you have in the GUI. If you need to access data, then you need access to those indexes, and the same goes for internal indexes. The only exception is scheduled reports that run as owner; those should work as long as you have access to the reports themselves. But if they aren't scheduled, that won't work.
Since it's the same index with two different sourcetypes, could the SPL be built differently?

index=firewall (sourcetype=collector OR sourcetype=metadata) enforcement_mode=block event_type="error"
| table event_type, hostname, ip

Thank you
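One way to correlate the two sourcetypes without a subsearch is to normalize the host field and flag, per host, whether a blocking metadata event exists. This is only a sketch: it assumes the host name lives in host1 for the metadata sourcetype and in host for the collector sourcetype (as mentioned later in this thread), and that the error events come from the collector sourcetype:

index=firewall (sourcetype=collector OR sourcetype=metadata)
| eval match_host=coalesce(host1, host)
| eventstats max(eval(if(sourcetype="metadata" AND enforcement_mode="block", 1, 0))) AS is_blocked by match_host
| where sourcetype="collector" AND event_type="error" AND is_blocked=1
| table event_type, match_host, ip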
Hi @tony.lao If the reply helped answer your question, please click the “Accept as Solution” button on the reply. This confirmation that the question was answered alerts the community and helps build that bank of expertise for everyone in the community.  If the reply did not answer your question, jump back into the conversation to keep it going. 
host is a field sometimes populated by Splunk to identify where logs were ingested from - could this be your issue?
Hello @isoutamo, does this | tstats command require access to the data indexes, or just to the internal logs? Thanks.
Thank you. Because in the different data sources I see the host name under different fields, i.e. "host1" in metadata and just "host" in collector, I added a rename:

index=firewall event_type="error"
    [ search index=firewall sourcetype="metadata" enforcement_mode=block
      | rename host1 as host
      | dedup host
      | table host
      | format ]
| table event_type, host, ip

Now I am back to square 1: it runs but no events are produced and it never finishes.
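For reference, the subsearch can be run on its own to see exactly what filter it hands to the outer search; the field name in that output is what the outer events must contain. A minimal sketch using the field names from this thread:

index=firewall sourcetype="metadata" enforcement_mode=block
| rename host1 as host
| dedup host
| table host
| format

If this returns something like ( ( host="abc" ) OR ( host="def" ) ) but the collector error events do not carry their host name in a field literally called host, the outer search will match nothing.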
Sorry, I missed a line, try this

index=firewall event_type="error"
    [ search index=firewall sourcetype="metadata" enforcement_mode=block
      | dedup host
      | table host
      | format ]
| table event_type, host, ip
Thank you. Unfortunately, the proposed change produced 0 events.
I have solved the issue.

I needed to add quotes to the AccountType:

| where AccountType IN ("$AccountType$")

I also needed to change the delimiter:

<delimiter>,</delimiter>

This solved the problem for me! Thank you!
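For reference, an equivalent Simple XML pattern quotes each selected value with valuePrefix/valueSuffix, in which case the search can use IN ($AccountType$) without extra quotes. The label and choice values below are made up for illustration:

<input type="multiselect" token="AccountType">
  <label>Account Type</label>
  <choice value="admin">admin</choice>
  <choice value="service">service</choice>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>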
Try something like this

index=firewall event_type="error"
    [ search index=firewall sourcetype="metadata" enforcement_mode=block
      | dedup host
      | format ]
| table event_type, host, ip
| where ErrorType != ""
It's empty in the field (screenshot attached). For some of the transactions we have multiple error types, both with empty values and with values. For the same transaction, the events below exist, one with an empty value and one with a value.

"timestamp" : "2024-03-21T17:33:53.993Z",
"content" : {
  "ErrorType" : "",
  "ErrorMsg" : ""
}

"timestamp" : "2024-03-21T17:33:20.786Z",
"content" : {
  "ErrorType" : "HTTP:NOT_FOUND",
  "ErrorMsg" : "HTTP /glimport' failed: not found (404)."
},
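One way to keep only the populated error per transaction (a sketch only) is to extract the nested fields, drop the empty values, and collapse by the transaction identifier. The spath paths follow the JSON above; transactionId and the base search are placeholders for whatever applies in this environment:

<your base search>
| spath path=content.ErrorType output=ErrorType
| spath path=content.ErrorMsg output=ErrorMsg
| where isnotnull(ErrorType) AND ErrorType!=""
| stats values(ErrorType) AS ErrorType, values(ErrorMsg) AS ErrorMsg by transactionId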
So, the expectation is to get the AccountTypes the user selected on the dashboard from the multiselect filter.