All Topics

How do I change my wineventlogs to output like this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
  <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
  <EventID>4625</EventID>
  <Version>0</Version>
  <Level>0</Level>
  <Task>12544</Task>
  <Opcode>0</Opcode>
  <Keywords>0x8010000000000000</Keywords>
  <TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" />
  <EventRecordID>67620</EventRecordID>
  <Correlation />
  <Execution ProcessID="552" ThreadID="4700" />
  <Channel>Security</Channel>
  <Computer>***</Computer>
  <Security />
</System>

instead of this...

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">- <System><Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" /><EventID>4625</EventID><Version>0</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8010000000000000</Keywords><TimeCreated SystemTime="2016-07-29T11:54:00.714207700Z" /><EventRecordID>67620</EventRecordID><Correlation /><Execution ProcessID="552" ThreadID="4700" /><Channel>Security</Channel><Computer>***</Computer><Security /></System>
I need to first issue an alert for overheat temperature 24 hours in advance for the affected locations, for their forecast to be above 100 F (long-term query). Then I need to query the window of the next 2 hours to 8 hours (near-term forecast) for the more recent temperature forecast for the same set of locations. If the recent forecast for a location has dropped below the threshold of 100 F, I need to issue an alert to cancel the previous alert. If a location's recent forecast is above 100 F, but the prior forecast was below 100 F (no alert had been issued), I need to issue a new alert for the location. Effectively, the near-term query needs to access the results of the long-term query (or redo the previous long-term query) to compare them with the recent forecast results. (I'm especially not clear how to compare two queries' results in Splunk.) How could I implement a solution with Splunk? Thanks for pointers!

Let's build an example to develop the solution. Assume the operation time in question is 8:00 AM on July 14, 2022, so the 24-hours-in-advance long-term forecast should have been made at 8:00 AM on July 13, 2022 (long-term forecast). The time window for the short-term forecast should be 0:00 AM (8-8) to 6:00 AM (8-2) (8 to 2 hours before) on the same day.

Here are the requirements more concisely:

1. Hourly, the forecasts for 24 hours ahead for all locations shall be collected and evaluated. If the 24-hours-ahead temperature will be over the threshold (100 F), an alert shall be sent for the to-be-overheated locations.

2. Also hourly, the forecasts for the window of the next 2 hours to the next 8 hours shall be collected and evaluated. Based on the evaluation of the 2-to-8-hours-ahead forecast, revisions shall be made according to the following rules:
a. If a location's 2-to-8-hours-ahead forecast is below the threshold while an alert had been issued, a cancellation message shall be sent.
b. If a location's 2-to-8-hours-ahead forecast is above the threshold while no alert had been sent, a new alert shall be sent.
c. In the other cases, no operation is needed.

3. At 15-minute intervals, the real-time temperature for the locations shall be collected and evaluated. Based on the evaluation of the real-time temperature, revisions shall be made according to the following rules:
a. If a location's real-time temperature is below the threshold while an alert had been issued, a cancellation message shall be sent.
b. If a location's real-time temperature is above the threshold while no alert had been sent, a new alert shall be sent.
c. In the other cases, no operation is needed.
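One way to let the near-term search see the long-term results is to persist the alert state in a lookup and join against it on the next run. A minimal sketch, assuming a hypothetical weather index with location, temp, and a numeric horizon_hours field (all names are placeholders, not the actual schema). The hourly long-term search records which locations were alerted:

index=weather sourcetype=forecast horizon_hours=24
| stats latest(temp) AS temp BY location
| where temp > 100
| eval alerted_at=now()
| outputlookup overheat_alerts.csv

The hourly near-term search then compares the 2-to-8-hour forecast against that recorded state:

index=weather sourcetype=forecast horizon_hours>=2 horizon_hours<=8
| stats max(temp) AS near_temp BY location
| join type=outer location [| inputlookup overheat_alerts.csv | eval alerted=1]
| fillnull value=0 alerted
| eval action=case(near_temp<=100 AND alerted=1, "cancel", near_temp>100 AND alerted=0, "new_alert", true(), "none")
| where action!="none"

The same join-and-case pattern would cover the 15-minute real-time check.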
I have dashboards that are configured for the input global_time. I also have a time input dropdown, so that when I change the time, all of my dashboards update automatically. Now, through the drilldown setting, I am adding a Link to Custom URL to each dashboard that will open a unique Splunk search page. The problem is, I want to be able to change the hour to 4 hours instead of 24 from the dashboard window, so that when I click the bars on the grid, for example, I get a Splunk search page scoped to 4 hours. And if I change the global time to 7 days, I want a search result over 7 days. I figured I need to change the link with <&earliest=-4%40h&latest=now> in the Link to Custom URL drilldown settings. How can I relate the global_time to that timing?
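For reference, a time input whose token is global_time exposes $global_time.earliest$ and $global_time.latest$ tokens, so a custom drilldown URL can carry whatever range is currently selected instead of a hard-coded one. A minimal Simple XML sketch (the search string is a placeholder):

<drilldown>
  <link target="_blank">search?q=index%3Dmain&amp;earliest=$global_time.earliest$&amp;latest=$global_time.latest$</link>
</drilldown>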
I'm trying to find any new MFA factors (DUO) used by any user in the past X days, in order to create an alert. As an example: a user uses push notifications for every login for X-1 days, then on day X they use a passcode; I want to trigger an alert or have it show up in a report. I'm having an issue wrapping my head around how to search for the first instance of a new value of the field factor in the past X days without specifying the expected value ahead of time (some users use push, some use phone call, some use passcode; I just want to know when they use something different). Any assistance or tips would be helpful.
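A common pattern for "first time seen" is to take the earliest occurrence per user/factor pair over a long baseline and flag pairs whose first occurrence falls inside the alert window, which avoids naming any factor value up front. A minimal sketch, assuming hypothetical index and sourcetype names and fields user and factor:

index=duo sourcetype=duo:authentication earliest=-30d
| stats earliest(_time) AS first_seen BY user, factor
| where first_seen >= relative_time(now(), "-1d")
| convert ctime(first_seen)

Any user/factor combination first observed within the last day comes out as a row, whatever the factor's value is.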
Hey everyone, I've got all our firewall logs going into a separate index. When I perform a search just using the index as a value, for example index="sec-firewalls", the results vary quite a bit. I get nothing for real-time unless I select All time (real-time). Under relative ranges I get nothing for Today, nothing for Last 15 minutes, Last 4 hours, etc. Again, the only option that works is All time. When I'm looking at real-time results, they're about 2h30m behind. I am using the Splunk Add-on for Cisco ASA for this index. Anyone able to assist me with what's happening here? Thanks, Will
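Symptoms like this usually mean the parsed event time (_time) is offset from the wall clock, often a timezone misparse at ingest. A quick diagnostic sketch comparing event time to index time (latest=+1d also catches events stamped in the future):

index="sec-firewalls" earliest=-1d latest=+1d
| eval lag_min = round((_indextime - _time)/60, 1)
| stats min(lag_min) max(lag_min) avg(lag_min) BY host

A steady lag near 150 minutes would point at a fixed timezone/clock offset rather than slow forwarding.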
Most of my operations are based on saved searches, and these are saved a few times weekly or monthly. The available columns should always align. I tried to get the base SPL down so I could have an output with a table showing one column with the result from offset=0 (current iteration) and another column with the result from offset=1 (previous iteration), but I could not get this to work. I was expecting the below:

Available Columns | Value from Offset=0 | Value from Offset=1
# of hosts | 1000 | 955

As an example, the current query would look like this:

| loadjob artifact_offset=0 savedsearch="named_search" ```current week```
| loadjob artifact_offset=1 savedsearch="named_search" ```previous iteration```

Once the table gets figured out, I'm not sure how I could even use the data for a single value visualization, because it would need | timechart count to operate, but my "time" is the value from artifact_offset. So, 2 things:

1. Any help with the table to visualize differences between 2 jobs based on artifact_offset?
2. With that table, would it even be possible to use the outputs to populate the single value visual?

Any help here? Or any other questions I need to answer?
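A second loadjob piped directly after the first replaces the earlier result set; append keeps both runs, and xyseries can pivot them into side-by-side columns. A minimal sketch, assuming the saved search emits a count field (the owner:app:name reference and field names are placeholders):

| loadjob savedsearch="admin:search:named_search" artifact_offset=0
| eval run="Value from Offset=0"
| append [| loadjob savedsearch="admin:search:named_search" artifact_offset=1 | eval run="Value from Offset=1"]
| eval metric="# of hosts"
| xyseries metric run count

For the single value visualization no timechart is needed: a final | eval delta='Value from Offset=0' - 'Value from Offset=1' (single quotes because of the spaces in the field names) yields one number to display.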
Hello, in my search I'm trying to get a series of events (transact, which is in the _raw field) counted out by another field in _raw for GET or POST. This is what I'm currently using:

host="EXAMPLE-*" sourcetype=Hex4 /ps/*
| rex mode=sed field=_raw "s/(\S+)(tx_\S+)(\/\S+)/\1trans\3/g"
| rex mode=sed field=_raw "s/(\S+)(nce_\S+)(\/\S+)/\1nce\3/g"
| rex mode=sed field=_raw "s/(\S+)(dce_\S+)(\/\S+)/\1dvc\3/g"
| rex "POST (?<transact>\S+)"
| stats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST by transact

It does bring up the transactions and columns for GET and POST, but the counts are blank, so I know I'm doing something wrong. Any help would be greatly appreciated! Thank you!
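One likely cause of the blank counts is that method is never extracted anywhere in the pipeline, and transact is only captured from POST lines. A hedged sketch of the last two steps that extracts both fields in one pass (the regex is an assumption about the log layout):

| rex "(?<method>GET|POST)\s+(?<transact>\S+)"
| stats count(eval(method="GET")) AS GET, count(eval(method="POST")) AS POST BY transact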
Hi, I am getting this error when trying to click on the set-up option for the ServiceNow SecOps add-on. It was working at the beginning, when it was installed and I was able to provide the integration details. But later, I am seeing this error pop up when trying to view the set-up configuration. Any lead on this error would be helpful for better understanding the issue.

Error: "Unable to render setup. Most likely, the cause is that the setup.xml file for this app is not configured correctly. For example, it may not specify task and type attributes. Contact the application developer to resolve this issue. setup_stub"

Thanks in advance,
I have a table like the below:

Category | Time | Count of string
A | t-5mins | 18
A | t-10mins | 7
A | t-15mins | 10
A | t-20mins | 1
B | t-5mins | 6
B | t-10mins | 18

I would like to create a table with the latest (max) time and the sum of the count by category, so that I get this:

Category | Max Time | Sum
A | t-5mins | 36
B | t-5mins | 24

I can get the max time and the sum individually into a table, but am having issues getting them both into one table; the time and sum values are coming up blank. Can someone advise please?
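A single stats call can produce both aggregates in one pass, which avoids the blank columns that appear when two separate results are glued together. A minimal sketch, assuming the rows are events carrying _time and a count field (names are placeholders for the actual schema):

... | stats max(_time) AS max_time, sum(count) AS Sum BY Category
| eval "Max Time"=strftime(max_time, "%F %T")
| table Category, "Max Time", Sum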
Hi All,

I have installed the Splunk universal forwarder on a Linux instance. The version is 8.2.4. I have created the admin account as well, through an Ansible play. I am passing the username and password through the Ansible command line, creating the user-seed.conf file and copying the username and password there, this way:

- name: Copy Contents to user-seed.conf file
  copy:
    dest: /opt/splunkforwarder/etc/system/local/user-seed.conf
    content: |
      [user_info]
      USERNAME = "{{ username }}"
      PASSWORD = "{{ password }}"

The user-seed.conf file is getting created successfully. I am starting the Splunk UF through an Ansible play later, so the user-seed.conf file gets deleted and the passwd file is created successfully. But when I try to run the command ./splunk list forward-server, it asks me for a username and password. I gave the same credentials I passed on the Ansible command line, but the login fails. I don't understand what is going wrong. Please help me.

Regards, NVP
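For comparison, the documented user-seed.conf format uses bare values. Since values in Splunk .conf files are taken literally, the quotes around {{ username }} and {{ password }} may end up stored as part of the credentials; that is a guess worth checking, not a confirmed diagnosis:

[user_info]
USERNAME = admin
PASSWORD = changeme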
Hi, I have created one graph for Success and Failed results, but I am not able to change the colors. How can I have red for Failed and green for Success?
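In Simple XML, per-series colors can be pinned with the charting.fieldColors option. A minimal sketch, assuming the series are literally named Failed and Success (the hex values are Splunk's usual red and green):

<option name="charting.fieldColors">{"Failed": 0xDC4E41, "Success": 0x53A051}</option>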
I want to compare the daily temperature measurements at the same period but on different days, as a stacked temperature time series for multiple days. Using timechart, I have the following query to organize the data; as the _time value contains the date information, the resulting visualization is not stacked but one series after another.

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| timechart span=30m max(Temperature) AS Temperature BY Date

I tried to retain only the hour and minutes in _time, making all _time values fall on the date of 2022-07-06. When I executed that query, I got the time series chart stacked, but it shows much of the horizontal space blank! Here is the query alternative:

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| eval hour_min=strftime(_time, "%H:%M")
| eval _time = strptime(hour_min, "%H:%M")
| timechart span=30m max(Temperature) AS Temperature BY Date

How can I improve the visualization to make the time series stacked, with the x-axis free from the dates? Below are the charts needing improvement. Thanks!
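timewrap is built for exactly this day-over-day overlay: run a single timechart and wrap it into one series per day, so the x-axis spans just one day. A minimal sketch:

index="weather" sourcetype=publicweatherdata (Location=C60*)
| timechart span=30m max(Temperature) AS Temperature
| timewrap 1d

Each wrapped day becomes its own series (named after how many days before the latest it falls), which the chart can then stack or overlay without the blank horizontal stretches.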
We have a 10-member (16 CPU, 64 GB RAM) search head cluster in the same data center. 3 members are preferred captains, F5 does not forward traffic to these 3 members, and the parameter captain_is_adhoc_searchhead is configured on all 10 members. Sometimes one of the search heads' load average exceeds 1 because of CPU or memory overuse, and that search head is then unable to respond to the captain's heartbeat in time. That member launches a captain election and can become the new captain even though it is not a preferred captain. The election process is not over until a member with the preferred-captain parameter becomes captain. The search head cluster is unstable during the captain election; many scheduled searches and alerts are skipped, and some critical alerts are missed. How can we solve this problem, and how can we prevent a non-preferred captain from being elected as captain?
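For reference, the server.conf [shclustering] knobs involved would look like the sketch below; the values shown are assumptions, and raising the election timeout trades slower failover for fewer spurious elections triggered by slow heartbeat replies:

[shclustering]
# set preferred_captain = false on the 7 non-preferred members
preferred_captain = true
prevent_out_of_sync_captain = true
# default is 60000; a longer timeout tolerates briefly overloaded members
election_timeout_ms = 120000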
I am new to Splunk and need help directing eStreamer logs to a particular directory in Splunk.
Is persistentQueueSize supported with splunktcp-ssl inputs? When reviewing the documentation around persisting data, it says network-based inputs are supported, but the documentation for inputs.conf seems to indicate they are NOT supported. I have some intermediate forwarders (using Splunk Universal Forwarder 8.2.5) configured with persistentQueueSize, and it looks like it's working: I see active reading/writing of cache files under SPLUNK_HOME/var/run/splunk/splunktcpin/pq__9997_0 and pq__9997_1. Thoughts? Thanks!
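For reference, the configuration in question would look like the sketch below (the sizes are arbitrary examples; whether the splunktcp-ssl stanza honors them is exactly the open question):

[splunktcp-ssl:9997]
queueSize = 1MB
persistentQueueSize = 10GB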
Good Day, I need help to calculate the time difference for field "@timestamp" containing time format 2022-07-14T09:05:08.21-04:00 Example: MYSearch | stats range(@timestamp) as Delay by "log_proce... See more...
Good Day, I need help to calculate the time difference for field "@timestamp" containing time format 2022-07-14T09:05:08.21-04:00 Example: MYSearch | stats range(@timestamp) as Delay by "log_processed.logId" | stats max(Delay) If I do the same with the Splunk _time field, it works perfectly
Running the specific scenario:

- 1 Splunk heavy forwarder with no direct internet access (on-prem Splunk 8.1.7.2)
- Splunk Add-on for Microsoft Cloud Services on the HF (4.3.3)
- 1 internet proxy
- ExpressRoute for private network traffic to Azure

When trying to ingest data from an event hub input on the above setup with the proxy configuration enabled on the add-on:

1. Service principal authentication succeeds (over the internet proxy, as expected).
2. Connection to the event hub fails, because the event hub only accepts connections over the ExpressRoute link and the heavy forwarder tries to connect through a public IP using the configured proxy. I've confirmed the HF resolves the event hub FQDN to a private IP, but it still sends the connection request to the proxy; I've also confirmed this in the add-on code.

When trying to ingest data from an event hub input on the above setup with the proxy configuration disabled on the add-on:

1. Service principal authentication fails (no internet access).

In the above scenario, the add-on needs internet access to get an authentication token from the Microsoft API, but the connection to the event hub to ingest data needs to happen through the ExpressRoute private link. The add-on just seems to do it all one way or the other, depending on whether the proxy configuration is enabled. Is there a solution for this? Having the proxy configuration enabled also breaks all storage account inputs, as they use a SAS key (no internet required for authentication) but are not routed through the ExpressRoute link despite the storage account FQDN being resolved to a private IP.

Regards, Marco
I have two dashboards. The first, lower-level dashboard has a dropdown to select between multiple hosts of the same type, to view diagnostic information among other things. The second, higher-level dashboard has status indicators which represent the overall "health" of hosts. These status indicators are pre-built panels which are present in both the higher-level and lower-level dashboards. The status indicators do not have drilldown functionality. As such, my workaround is to use a statistics table under each status indicator which only displays "Click_Here". This works just fine to send the user to the desired dashboard. However, if the user selects the drilldown related to a particular host, I would like that host to be selected in the lower-level dashboard's dropdown, as this will change what panels and status indicators are displayed dynamically. The lower-level dashboard's dropdown token is used in multiple base searches as well as a number of visualizations. The selection of the dropdown is also used to hide and show panels through a series of <change><condition> tags and tokens. This is an example of my lower-level dropdown XML:

<input type="dropdown" token="HOST_SELECTION">
  <label>Host</label>
  <choice value="host1">host1</choice>
  <choice value="host2">host2</choice>
  <choice value="host3">host3</choice>
  <default>host1</default>
  <initialValue>host1</initialValue>
  <change>
    <condition label="host1">
      <set token="HostType1_Indicator_Tok">true</set>
      <unset token="HostType2_Indicator_Tok">false</unset>
      <set token="Host1_Tok">true</set>
      <unset token="Host2_Tok">false</unset>
      <unset token="Host3_Tok">false</unset>
    </condition>
    Etc...

Any input is appreciated. Thank you.
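Dashboard input tokens can be pre-set through the URL with the form. prefix, so the "Click_Here" drilldown can carry the clicked host straight into the lower-level dashboard's dropdown (which also fires its <change> conditions on load). A minimal sketch; the app and dashboard names are placeholders, and $row.host$ assumes the statistics table has a host column:

<drilldown>
  <link target="_blank">/app/my_app/lower_level_dashboard?form.HOST_SELECTION=$row.host$</link>
</drilldown>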
I would like to run my playbooks after the changes have been introduced, without making commit messages.
Hi Splunkers, I try to get a new internal field "_application" added to certain events. So i added a new field via the _meta to the inputs.conf on the forwarder.     [script:///opt/splunkfo... See more...
Hi Splunkers, I try to get a new internal field "_application" added to certain events. So i added a new field via the _meta to the inputs.conf on the forwarder.     [script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/df_metric.sh] sourcetype = df_metric source = df interval = 300 disabled = 0 index = server_nixeventlog _meta = _application::<application_name>     I also added a new stanza to the fields.conf     [_application] INDEXED = false #* Set to "true" if the field is created at index time. #* Set to "false" for fields extracted at search time. This accounts for the # majority of fields. INDEXED_VALUE = false #* Set to "true" if the value is in the raw text of the event. #* Set to "false" if the value is not in the raw text of the event#.     The fields.conf is deployed to indexer and SH. But i still do not see the event. I tried searching for "_application::<application_name>" "_application=<application_name>" _application::* _application=* Nothing....  Can somebody explain to me where is the Problem?