All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Here's a simple example:

<form version="1.1">
  <label>HostDropdown</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="hosts" searchWhenChanged="true">
      <label>Host Types</label>
      <choice value="prodhost*">Production</choice>
      <choice value="qahost*">QA</choice>
      <choice value="testhost*">Test</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=aaa source="/var/log/test1.log" host=$hosts$ | stats count by host</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

I suggest you work through this example and read the documentation that describes Simple XML panels: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML
I have two logs below. Log a appears throughout the environment and would be shown for all users; log b is limited to specific users. I only need times for users in log b.

log a: There is a file has been received with the name test2.txt
log b: The file has been found at the second destination C://user/test2.txt

I am trying to write a query that captures the time between log a and log b without doing a subsearch. So far I have:

index=a, env=a, account=a ("There is a file" OR "The file has been found")
| field filename from log b
| field filename2
| eval Endtime = _time

Here is where I am lost. I was hoping to use if/match/like/eval to capture the start time where the log b filename can be found in log a. I have this so far:

| eval Starttime = if(match(filename,"There is%".filename2."%"),_time,0)

I am not getting any 1s, just 0s. I am pretty sure the problem is "There is%".filename2."%". How do I correct it?
Thank you @marnall and @yuanliu. Yes, multiselect provides the same functionality, but I was going for the same look and feel. Simple XML vs Studio:
The event.url field stores all the URLs found in the logs. I want to create a new field called url_domain that captures only the domain of the URLs stored in event.url. As a temporary workaround, I run the following from the search bar:

| rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)"

What should I add to props.conf so that this extraction is applied permanently for the sourcetype "sec-web"?
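A minimal sketch of what a permanent search-time extraction could look like in props.conf, assuming a search-time extraction is acceptable and the stanza deploys wherever searches run; the class name after EXTRACT- is arbitrary and chosen here for illustration:

```
# props.conf (search-time field extraction; deploy to the search tier)
[sec-web]
# "in event.url" tells Splunk to run the regex against that source field
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url
```

After a restart or a debug/refresh, url_domain should be extracted automatically for the sourcetype without the explicit rex.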
So, I created a savedsearch and it was working fine. But I had to change its SPL, and when I ran it again it still showed the old results, not the results of the new SPL. Why? Do I have to wait for the changes to take effect?
Hello Fellow Splunkers, I'm fairly new to ITSI and was wondering if this could be achieved. I'm looking to create a report that lists all the Services I have in ITSI along with their associated entities, as well as any associated alerts or severities. Is there a query that could achieve this? Any pointers are very much appreciated! Any pointers on where I could find the underlying data and bring it together in a search would be very helpful too. Thanks!
@tscroggins, Is the suggested configuration restricted to certain Splunk versions? We have tried different options, but we are not seeing the CSV formatted as expected, even after the instances were restarted. Thanks in advance. We have run the reports as simply as possible, e.g.: "index=os earliest=-5m | timechart span=1m values(host)" Regards
As in outlook.com ? If so, there is an article here describing how to connect to it via SMTP: https://support.microsoft.com/en-us/office/pop-imap-and-smtp-settings-for-outlook-com-d088b986-291d-42b8-9564-9c414e2aa040 Enter the required credentials to your Splunk email settings, and it should work.
Could you try this SEDCMD in the props.conf file? (Make sure the stanza is changed to match the sourcetype of the logs.)

[your_sourcetype]
SEDCMD-maskpasswords = s/password: ([^;]+);cpassword: ([^;]+);/password: ####;cpassword: ####;/g
@marnall - @richgalloway's solution is creating false alerts, triggering even when servers are up. Can you provide another solution such that the alert is not triggered during the maintenance window even though servers are down, but is triggered outside the window when at least one server is down?
How do I take a dashboard global time (i.e. - $global_time.earliest$, $global_time.latest$) and convert it into a date to be used when searching a lookup file that only has a date column (i.e. - 04/15/2024)?
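A sketch of one approach: resolve the panel's time range to epoch values with addinfo, then format the result to match the lookup's date column. The lookup file name dates.csv, its column name date, and the output field some_field are illustrative, and this assumes the lookup stores dates in MM/DD/YYYY format:

```
index=your_index earliest=$global_time.earliest$ latest=$global_time.latest$
| addinfo
| eval lookup_date = strftime(info_min_time, "%m/%d/%Y")
| lookup dates.csv date AS lookup_date OUTPUT some_field
```

Using addinfo rather than the raw token avoids problems when the token holds a relative time such as -24h instead of an epoch value, since info_min_time/info_max_time are always resolved to epoch seconds.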
Not that I've seen. I've reached out to the developer and have a case logged with Splunk, so I'll post once there's an update.
We need to easily identify the SQL submitted by DB Connect. We'd like to use Oracle's SET_MODULE procedure. How do we accomplish this in DB Connect? call DBMS_APPLICATION_INFO.SET_MODULE ( module_name => 'Splunk_HF', action_name => 'DMP_Dashboard' ); <put our DB Input SQL here>
@richgalloway has a good solution. I think the "is_maintenance window" field in the condition has a typo so watch for that. Are either of you getting _time values when using "| eval current_time = _time" after tstats? There are no _time fields specified in the first tstats command. Perhaps it would work better with "| eval current_time = now()"
I have an inputlookup with a list of pod names that we expect to be deployed to an environment. The list looks something like:

pod_name_lookup,importance
poda,non-critical
podb,critical
podc,critical

We also have data in Splunk that gives us pod_name, status, and importance. Results from the search below would look like this:

index=abc sourcetype=kubectl | table pod_name, status, importance

poda-284489-cs834 Running non-critical
podb-834hgv8-cn28s Running critical

Note podc was not found. I need to compare the results of this search to the list from the inputlookup and show that podc was not found in the results and that it is a critical pod. I also need to count how many critical and non-critical pods are missing, as well as table the list of missing pods. I have tried several iterations of searches but haven't come across one that lets me compare a search result to an inputlookup using a partial match. eval result=if(like(pod_name_lookup... etc. is close, but like() requires a pattern, not the wildcard value of a field. Thoughts?
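A sketch of one way to do the comparison, assuming deployed pod names always follow the pattern base-hash-hash (so the base name can be stripped out with rex) and the lookup file is named pods.csv; both assumptions are illustrative:

```
index=abc sourcetype=kubectl
| rex field=pod_name "^(?<pod_base>.+?)-[^-]+-[^-]+$"
| stats count by pod_base
| append
    [| inputlookup pods.csv
     | rename pod_name_lookup AS pod_base
     | eval count=0 ]
| stats sum(count) AS found, values(importance) AS importance by pod_base
| where found=0
```

Appending the lookup rows with count=0 means any pod that appears only in the lookup (never in the events) ends up with found=0, giving the table of missing pods; a final | stats count by importance would then produce the critical vs. non-critical totals.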
You could stats count by query. Queries that are found by both detections will have count=2, while queries that are found by only one will have count=1. Then you can filter for count=1 to remove the hundreds of queries that are found by both detections. | stats count by query | where count = 1  
You could do that, if it gives you what you need.
It appears that the Fortinet FortiWeb Add-On receives the data from a UDP data input. The instructions on the Splunkbase page describe how to set a syslog log export configuration on FortiWeb. You could install this app on your indexers or a heavy forwarder to receive the logs directly from your FortiWeb device(s), but it's generally better to have a separate syslog server to collect logs rather than rely on Splunk's udp input. Your current log pipeline looks good. You could then install this app on your indexer tier so that the indexers perform index-time operations on the logs after receiving them from your syslog server. This app can also go on your search head to provide macros, eventtypes, and other knowledge objects used for searching. Because the app does not have any input configurations, it does not make sense to install it on a universal forwarder.
Are you saying that all Application logs are not forwarding, or just the application logs for a specific source? There is a known issue with forwarder 9.0.4 where the event logs for Windows Defender stop forwarding (until the next restart) while other logs continue to forward. Perhaps this issue is related: https://docs.splunk.com/Documentation/Splunk/9.0.4/ReleaseNotes/KnownIssues Could you try updating your forwarder version and seeing if that fixes the issue?
Hi All,

We have Windows event logs and other application logs ingested into Splunk. There is no problem with the Windows event logs, but for our application logs the ingestion stops suddenly and then starts reporting again, even though the log file on Windows is continuously updated with recent entries. The file's modified time does not get updated because of a Windows feature, but that is not the issue: the logs start rolling in even when the modified time is unchanged and the file has the latest entries.

We are using Splunk forwarder version 9.0.4. Can someone please help triage this issue? It is a problem with only one specific source on this Windows host; other sources (Windows event logs) are flowing in properly.