All Topics
I'm trying to find any new MFA factors (Duo) used by any user in the past X days in order to create an alert. As an example, a user uses push notifications for every login for X-1 days, then on day X they use a passcode; I want to trigger an alert or have it show up in a report. I'm having an issue wrapping my head around how to search for the first instance of a new value for the field factor in the past X days without specifying the expected value ahead of time (some users use push, some use phone call, some use passcode; I just want to know when they use something different). Any assistance or tips would be helpful.
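For reference, a minimal sketch of one approach: compute the first time each user/factor pair was seen over a long lookback, then keep only the pairs whose first appearance falls inside the alert window. The index name and the user/factor field names here are assumptions, not taken from the original post.

index=duo earliest=-30d
``` first time each user used each factor in the lookback window ```
| stats earliest(_time) as first_seen by user, factor
``` keep only factors first seen in the last day, i.e. "new" for that user ```
| where first_seen >= relative_time(now(), "-1d@d")
| convert ctime(first_seen)

Run as a scheduled search, this flags any factor a user has not used before within the lookback, without listing the expected factor values ahead of time.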
Hey everyone, I've got all our firewall logs going into a separate index. When I perform a search just using the index, for example index="sec-firewalls", the results vary quite a bit. I get nothing in real-time unless I select All time (real-time). Under relative time ranges I get nothing for Today, nothing for Last 15 minutes, Last 4 hours, etc. Again, the only option that works is All time. When I'm looking at real-time results, they're about 2hr30m behind. I am using the Splunk Add-on for Cisco ASA for this index. Anyone able to assist me with what's happening here? Thanks, Will
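Events that only appear under All time and run hours behind usually point to a timestamp/timezone parsing mismatch rather than an ingestion problem. A hedged sketch of the usual fix, assuming the firewalls log in a timezone other than the indexer's (the sourcetype name and TZ value below are illustrative):

# props.conf on the indexer or heavy forwarder
[cisco:asa]
TZ = UTC

Comparing _time against _indextime for a few events (| eval lag=_indextime-_time) should confirm whether the offset matches a timezone difference.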
Most of my operations are based on saved searches, and these run a few times weekly or monthly. The columns available should always align. I tried to get the base SPL down so I could have an output table showing one column with results from offset=0 (current iteration) and another column with results from offset=1 (one iteration previous), but I could not get this to work. I was expecting the below:

Available Columns | Value from Offset=0 | Value from Offset=1
# of hosts | 1000 | 955

As an example, the current query would look like this:

| loadjob artifact_offset=0 savedsearch="named_search" ```current week```
| loadjob artifact_offset=1 savedsearch="named_search" ```previous iteration```

Once the table gets figured out, I'm not sure how I could even use the data for a single value visualization, because it would need | timechart count to operate, but my "time" is the value from artifact_offset. So, 2 things:

1. Any help with the table to visualize differences between 2 jobs based on artifact_offset?
2. With that table, would it even be possible to use the outputs to populate the single value visual?

Any help here? Or any other questions I need to answer?
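A hedged sketch of one way to line the two artifacts up side by side, assuming each run of named_search returns a single summary row. Note that loadjob is a generating command and cannot appear mid-pipeline, which is likely why the two-line version fails; append is needed to combine the artifacts:

| loadjob savedsearch="named_search" artifact_offset=0
| eval run="Value from Offset=0"
| append
    [| loadjob savedsearch="named_search" artifact_offset=1
     | eval run="Value from Offset=1"]
| transpose 0 header_field=run column_name="Available Columns"

For the single value visualization, timechart is not required: a single value panel renders the first cell of the first result row, so something like | stats first("Value from Offset=0") as current would feed it directly.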
Hello, in my search I'm trying to get a series of events (transact, which is in the _raw field) counted by another field in _raw for GET or POST. This is what I'm currently using:

host="EXAMPLE-*" sourcetype=Hex4 /ps/*
| rex mode=sed field=_raw "s/(\S+)(tx_\S+)(\/\S+)/\1trans\3/g"
| rex mode=sed field=_raw "s/(\S+)(nce_\S+)(\/\S+)/\1nce\3/g"
| rex mode=sed field=_raw "s/(\S+)(dce_\S+)(\/\S+)/\1dvc\3/g"
| rex "POST (?<transact>\S+)"
| stats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST by transact

It does bring up the transactions and columns for GET and POST, but the counts are blank, so I know I'm doing something wrong. Any help would be greatly appreciated! Thank you!
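The blank counts are consistent with the field method never being extracted: the rex calls above only capture transact (and only from POST lines), so count(eval(method="GET")) has nothing to evaluate. A hedged sketch of a fix, assuming the raw events contain the verb immediately before the path (e.g. "GET /ps/..."):

host="EXAMPLE-*" sourcetype=Hex4 /ps/*
| rex mode=sed field=_raw "s/(\S+)(tx_\S+)(\/\S+)/\1trans\3/g"
| rex mode=sed field=_raw "s/(\S+)(nce_\S+)(\/\S+)/\1nce\3/g"
| rex mode=sed field=_raw "s/(\S+)(dce_\S+)(\/\S+)/\1dvc\3/g"
``` capture both the HTTP method and the transaction path ```
| rex "(?<method>GET|POST)\s+(?<transact>\S+)"
| stats count(eval(method="GET")) as GET, count(eval(method="POST")) as POST by transact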
Hi, I am getting this error when clicking the set-up option for the ServiceNow SecOps add-on. It was working at the beginning when it was installed, and I was able to provide the integration details. But later, I see this error pop up when trying to view the set-up configuration. Any lead on this error would be helpful to better understand the issue. Error: "Unable to render setup. Most likely, the cause is that the setup.xml file for this app is not configured correctly. For example, it may not specify task and type attributes. Contact the application developer to resolve this issue. setup_stub" Thanks in advance,
I have a table like the below:

Category | Time | Count of string
A | t-5mins | 18
A | t-10mins | 7
A | t-15mins | 10
A | t-20mins | 1
B | t-5mins | 6
B | t-10mins | 18

I would like to create a table with the latest (max) time and the sum of the count by category, so that I get this:

Category | Max Time | Sum
A | t-5mins | 36
B | t-5mins | 24

I can get the max time and the sum individually into a table, but am having issues getting them both into one table; the time and sum values are coming up blank. Can someone advise please?
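A hedged sketch, assuming the field names match the table headers above. Field names containing spaces must be quoted in stats, and a quoting slip is a common cause of silently blank columns; also note that max() on a string like "t-5mins" compares lexicographically, which happens to give the right answer here but is fragile (latest() against _time is safer if the rows are events):

... base search producing the table above ...
| stats max(Time) as "Max Time", sum("Count of string") as Sum by Category

Checking the exact field names with | fieldsummary first can confirm whether a name mismatch is what's producing the blanks.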
Hi All, I have installed the Splunk Universal Forwarder on a Linux instance. The version is 8.2.4. I have also created the admin account through an Ansible play. I am passing the username and password through the Ansible command line, creating the user-seed.conf file, and copying the username and password into it, this way:

- name: Copy contents to user-seed.conf file
  copy:
    dest: /opt/splunkforwarder/etc/system/local/user-seed.conf
    content: |
      [user_info]
      USERNAME = "{{ username }}"
      PASSWORD = "{{ password }}"

The user-seed.conf file is getting created successfully. I am starting the Splunk UF through an Ansible play later, so the user-seed.conf file gets deleted and the passwd file is created successfully. But when I try to run the command ./splunk list forward-server, it asks me for a username and password. I gave the same credentials I passed through the Ansible command line, but the login fails. I don't understand what is going wrong. Please help me. Regards, NVP
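One thing worth checking, as a hedged guess: values in user-seed.conf are taken literally, so templating them inside double quotes makes the quote characters part of the username and password. A minimal sketch of the file as Splunk expects it (credentials illustrative):

[user_info]
USERNAME = admin
PASSWORD = mySecretPassword

In the Ansible task that would mean writing the content without the surrounding quotes (USERNAME = {{ username }}). Also note that user-seed.conf is only honored on the very first start, before $SPLUNK_HOME/etc/passwd exists; to re-seed credentials, stop the forwarder and remove the passwd file first.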
Hi, I have created a graph for Success and Failed results, but I am not able to change the colors. How can I have red for Failed and green for Success?
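For a Simple XML dashboard chart, a hedged sketch of the usual approach, assuming the two series are literally named Success and Failed (adjust the names to match your split-by values):

<option name="charting.fieldColors">{"Success": 0x00B050, "Failed": 0xFF0000}</option>

This goes inside the panel's <chart> element; any series not listed keeps its default color.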
I want to compare the daily temperature measurements over the same period on different days with a stacked temperature time series for multiple days. Using timechart, I have the following query to organize the data; because the _time value contains the date information, the resulting visualization is not stacked but shows the days one after another.

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| timechart span=30m max(Temperature) AS Temperature BY Date

I tried to retain only the hour and minutes in _time, which set every _time value to the date 2022-07-06. When I executed that query, I got the time series chart stacked, but it shows with much of the horizontal space blank! Here is the alternative query:

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| eval hour_min=strftime(_time, "%H:%M")
| eval _time = strptime(hour_min, "%H:%M")
| timechart span=30m max(Temperature) AS Temperature BY Date

How can I improve the visualization so the time series stack with an x-axis free of the dates? Below are the charts needing improvement. Thanks!
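A hedged sketch of an alternative that sidesteps _time entirely: bucket first, then chart over a time-of-day string, so the x-axis carries only clock times and each day becomes its own series.

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| bin _time span=30m
| eval time_of_day=strftime(_time, "%H:%M")
| chart max(Temperature) over time_of_day by Date

Because time_of_day sorts lexicographically ("00:00" through "23:30"), the buckets stay in order, and the blank horizontal space produced by the epoch-based _time trick disappears.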
We have a 10-member (16 CPU, 64 GB RAM) search head cluster in the same data center. 3 members are preferred captains; F5 does not forward traffic to these 3 members, and the parameter captain_is_adhoc_searchhead is configured on all 10 members. Sometimes one search head's load average exceeds 1 because of CPU or memory overuse, and that search head is then unable to respond to the captain's heartbeat in time. That member launches a captain election and can become the new captain even though it is not a preferred captain. The election process is not over until a member with the preferred-captain parameter becomes captain. The search head cluster is unstable during the captain election; many scheduled searches and alerts are skipped, and some critical alerts are missed. How do we solve this problem, i.e. how do we prevent a non-preferred member from being elected captain?
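For reference, a hedged sketch of the relevant server.conf settings. The raft election can still pick any reachable member, so this biases rather than hard-blocks the choice; captaincy is transferred back to a preferred member once one is available:

# server.conf on the 7 members that should not hold captaincy
[shclustering]
preferred_captain = false

# server.conf on the 3 designated members
[shclustering]
preferred_captain = true

Raising the election timeouts (e.g. election_timeout_ms) is sometimes suggested for members that miss heartbeats under load, but that trades election sensitivity for failover speed, so treat it as something to test rather than a confirmed fix.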
I am new to Splunk and need help directing eStreamer logs to a particular directory in Splunk.
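A hedged sketch, assuming the Cisco eStreamer eNcore client is writing its output files to a local directory on a forwarder (the path, index, and sourcetype below are illustrative and should be adjusted to your deployment):

# inputs.conf on the forwarder that runs the eStreamer client
[monitor:///opt/estreamer/data]
sourcetype = cisco:estreamer:data
index = firepower
disabled = 0

The index named here must already exist on the indexers; Splunk routes events to an index via this inputs.conf setting rather than to a filesystem directory.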
Is persistentQueueSize supported with splunktcp-ssl inputs?  When reviewing the documentation around persisting data it says network-based inputs are supported, but the documentation for inputs.conf seems to indicate they are NOT supported. I have some Intermediate Forwarders (Using Splunk Universal Forwarder 8.2.5) configured with persistentQueueSize  and it looks like it's working.  I see active reading/writing of cache files under SPLUNK_HOME/var/run/splunk/splunktcpin/pq__9997_0 and pq__9997_1. Thoughts?  Thanks!
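For context, a sketch of the kind of stanza being described (port and size illustrative; the SSL certificate settings a splunktcp-ssl input also needs are omitted here):

[splunktcp-ssl:9997]
persistentQueueSize = 10GB

The pq__9997_* files under $SPLUNK_HOME/var/run/splunk/splunktcpin/ are exactly where a persistent queue for that input would live, which supports the reading that splunktcp-ssl honors the setting in practice and that the inputs.conf spec wording lags behind.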
Good day, I need help calculating the time difference for the field "@timestamp", which contains the time format 2022-07-14T09:05:08.21-04:00. Example:

MYSearch
| stats range(@timestamp) as Delay by "log_processed.logId"
| stats max(Delay)

If I do the same with the Splunk _time field, it works perfectly.
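range() needs numeric input, and @timestamp here is a string, which would explain why _time (an epoch number) works. A hedged sketch of the conversion, assuming the two-digit subseconds and ±hh:mm offset shown above (note the single quotes, required in eval around field names with special characters):

MYSearch
| eval ts=strptime('@timestamp', "%Y-%m-%dT%H:%M:%S.%2Q%:z")
| stats range(ts) as Delay by "log_processed.logId"
| stats max(Delay)

Delay comes back in seconds; wrap it with tostring(Delay, "duration") if a readable format is wanted.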
Running the specific scenario:

1 Splunk heavy forwarder with no direct internet access (on-prem Splunk 8.1.7.2)
Splunk Add-on for Microsoft Cloud Services on the HF (4.3.3)
1 internet proxy
ExpressRoute for private network traffic to Azure

When trying to ingest data from an event hub input on the above setup with the proxy configuration enabled on the add-on:

1 - Service principal authentication succeeds (over the internet proxy, as expected).
2 - The connection to the event hub fails, because the event hub only accepts connections over the ExpressRoute link and the heavy forwarder tries to connect through a public IP using the configured proxy. I've confirmed the HF resolves the event hub FQDN to a private IP, but it still sends the connection request to the proxy; I've also confirmed this in the add-on code.

When trying to ingest data from an event hub input on the above setup with the proxy configuration disabled on the add-on:

1 - Service principal authentication fails (no internet access).

In the above scenario, the add-on needs internet access to get an authentication token from the Microsoft API, but the connection to the event hub to ingest data needs to happen through the ExpressRoute private link. The add-on seems to route everything one way or the other depending on whether the proxy configuration is enabled. Is there a solution for this? Having the proxy configuration enabled also breaks all storage account inputs: they use a SAS key (no internet required for authentication) but are not routed through the ExpressRoute link despite the storage account FQDN resolving to a private IP. Regards, Marco
I have two dashboards. The first, lower-level dashboard has a dropdown to select between multiple hosts of the same type to view diagnostic information, among other things. The second, higher-level dashboard has status indicators which represent the overall "health" of hosts. These status indicators are pre-built panels present in both the higher-level and lower-level dashboards. The status indicators do not have drilldown functionality. As such, my workaround is to use a statistics table under each status indicator which only displays "Click_Here". This works just fine to send the user to the desired dashboard. However, if the user selects the drilldown related to a particular host, I would like that host to be selected in the lower-level dashboard's dropdown, as this will change which panels and status indicators are displayed dynamically. The lower-level dashboard's dropdown token is used in multiple base searches as well as a number of visualizations. The selection of the dropdown is also used to hide and show panels through a series of <change><condition> tags and tokens. This is an example of my lower-level dropdown XML (see the drilldown sketch after it):

<input type="dropdown" token="HOST_SELECTION">
  <label>Host</label>
  <choice value="host1">host1</choice>
  <choice value="host2">host2</choice>
  <choice value="host3">host3</choice>
  <default>host1</default>
  <initialValue>host1</initialValue>
  <change>
    <condition label="host1">
      <set token="HostType1_Indicator_Tok">true</set>
      <unset token="HostType2_Indicator_Tok">false</unset>
      <set token="Host1_Tok">true</set>
      <unset token="Host2_Tok">false</unset>
      <unset token="Host3_Tok">false</unset>
    </condition>
    Etc...

Any input is appreciated. Thank you.
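A hedged sketch of the drilldown for the "Click_Here" table on the higher-level dashboard, assuming the app ID, the lower-level dashboard's ID, and a host field in the table row (all placeholders here):

<drilldown>
  <link target="_blank">/app/my_app/lower_level_dashboard?form.HOST_SELECTION=$row.host$</link>
</drilldown>

Passing the value as form.HOST_SELECTION pre-selects that choice in the dropdown on load, and the dropdown's <change><condition> logic should then fire as if the user had picked it, so the token-driven hide/show behavior follows along.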
I would like to run my playbooks after the changes have been introduced, without making commit messages.
Hi Splunkers, I am trying to get a new internal field "_application" added to certain events, so I added a new field via _meta in inputs.conf on the forwarder:

[script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/df_metric.sh]
sourcetype = df_metric
source = df
interval = 300
disabled = 0
index = server_nixeventlog
_meta = _application::<application_name>

I also added a new stanza to fields.conf:

[_application]
INDEXED = false
#* Set to "true" if the field is created at index time.
#* Set to "false" for fields extracted at search time. This accounts for the
#  majority of fields.
INDEXED_VALUE = false
#* Set to "true" if the value is in the raw text of the event.
#* Set to "false" if the value is not in the raw text of the event.

The fields.conf is deployed to the indexer and SH, but I still do not see the field. I tried searching for:

"_application::<application_name>"
"_application=<application_name>"
_application::*
_application=*

Nothing... Can somebody explain to me where the problem is?
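Two hedged observations. First, a field added through _meta is created at index time, so the fields.conf stanza would need INDEXED = true, not false; a minimal sketch:

[_application]
INDEXED = true

Second, field names beginning with an underscore are treated as hidden/internal by Splunk and don't surface in the fields sidebar, so even a correctly indexed _application is easy to miss. Exposing it explicitly, e.g. | eval application='_application', or simply naming the meta field without the leading underscore, tends to be the less painful route.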
Hello All, We are running Splunk 8.2 and we would like to set tsidxWritingLevel to 4. We have a multisite cluster and want to deploy this to all the indexers. Should I make the change on the cluster master (master-apps) and push the bundle, or do I need to log in to each indexer, change the parameter, and restart it?
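The cluster master route is the one that keeps the peers consistent. A hedged sketch, assuming the setting should apply to all indexes (scope the stanza differently if not):

# on the cluster master:
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[default]
tsidxWritingLevel = 4

# then, from the cluster master CLI
splunk validate cluster-bundle
splunk apply cluster-bundle

The bundle push distributes the change to every peer and handles restarts as needed; editing each indexer by hand risks the configurations drifting apart.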
Here is a reduced version of my JSON:

{
   records: [
     {
       errors: 4
       name: name1
       plugin: p1
       type: type1
     }
     {
       errors: 7
       name: name2
       plugin: p1
       type: type2
     }
     {
       errors: 0
       name: name3
       plugin: p2
       type: type3
     }
   ]
   session: {
     document: my_doc
     user: me
     version: 7.1
   }
}

There are 3 records in records{}, so I expect to get 3 events using mvexpand, but I get 6. I'm using a query similar to one I found in an answer in this community:

| spath
| rename records{}.name AS name, records{}.type AS type, records{}.plugin as plugin, records{}.errors as errors
| eval x=mvzip(mvzip(mvzip(name,type),plugin),errors)
| mvexpand x
| eval x=split(x,",")
| eval name=mvindex(x,0)
| eval type=mvindex(x,1)
| eval plugin=mvindex(x,2)
| eval errors=mvindex(x,3)
| table name, type, plugin, errors

I get 6 rows instead of 3:

name | type | plugin | errors
name1 | type1 | p1 | 4
name2 | type2 | p1 | 7
name3 | type3 | p2 | 0
name1 | type1 | p1 | 4
name2 | type2 | p1 | 7
name3 | type3 | p2 | 0

Any suggestion how to fix the query to avoid the duplication? Thanks!
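The doubling typically happens when the JSON fields are extracted twice, once by automatic KV extraction (KV_MODE=json) and once by the explicit | spath, so each multivalue field holds every value twice before mvzip ever runs. A hedged sketch of a cleaner pattern that avoids the issue by expanding the records first:

| spath path=records{} output=record
| mvexpand record
| spath input=record
| table name, type, plugin, errors

Each array element becomes its own result row before its fields are parsed, so no zipping or index bookkeeping is needed.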
I'm running Splunk Enterprise 8.2.5 on Windows 2019: 2 indexers in a cluster and a single search head, plus a separate cluster master/license master/deployment server, all on Windows 2019. I have installed IT Essentials Work version 4.31.1, created the clustered indexes, and enabled the apps I wish to use. After a few minutes, the web interface on my single search head grinds to a halt and everything starts running very slowly. Compute on the search head and indexers seems fine, and I have 32 cores and 64 GB RAM on each. If I disable all the apps that come with the IT Essentials Work package, performance returns to normal. Any ideas on where to look to troubleshoot this?
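A hedged starting point for the triage: IT Essentials Work ships a large number of scheduled searches, so checking scheduler pressure on the search head often narrows it down quickly. A standard search against the internal scheduler logs (the thresholds that count as "bad" depend on the environment):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count

The Monitoring Console's Search Activity and Scheduler Activity views surface the same data without hand-written SPL.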