All Topics

Hi All,

I have installed the Splunk universal forwarder (version 8.2.4) on a Linux instance and created an admin account through an Ansible play. I pass the username and password on the Ansible command line and write them into a user-seed.conf file, like this:

- name: Copy Contents to user-seed.conf file
  copy:
    dest: /opt/splunkforwarder/etc/system/local/user-seed.conf
    content: |
      [user_info]
      USERNAME = "{{ username }}"
      PASSWORD = "{{ password }}"

The user-seed.conf file is created successfully. I start the Splunk UF through a later Ansible play, after which user-seed.conf is deleted and the passwd file is created successfully. But when I run ./splunk list forward-server, it asks me for a username and password, and the same credentials I passed on the Ansible command line fail to log in. I do not understand what is going wrong. Please help me.

Regards, NVP
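One thing worth checking, offered as a guess rather than a confirmed fix: values in .conf files are taken literally, so the double quotes in the template above may be stored as part of the username and password. A minimal sketch of the same task without the quotes, assuming the same variable names:

# Hypothetical variant of the task above: drops the quotes, since
# user-seed.conf would otherwise treat them as part of the value.
- name: Copy Contents to user-seed.conf file
  copy:
    dest: /opt/splunkforwarder/etc/system/local/user-seed.conf
    content: |
      [user_info]
      USERNAME = {{ username }}
      PASSWORD = {{ password }}

If the seed file was already consumed with quoted values, trying to log in with the quotes included in the password is a quick way to test this theory.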
Hi, I have created a graph for Success and Failure results, but I am not able to change the colors. How can I make Failed red and Success green?
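If this is a Simple XML dashboard panel, series colors can usually be pinned per value with the charting.fieldColors option. A minimal sketch, assuming the series are named exactly "Success" and "Failed" (adjust to the actual field values):

<chart>
  ...
  <!-- assumption: series names match the field values Failed/Success -->
  <option name="charting.fieldColors">{"Failed": 0xFF0000, "Success": 0x008000}</option>
</chart>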
I want to compare daily temperature measurements over the same period on different days, as a stacked temperature time series for multiple days. Using timechart I have the following query to organize the data, but because the _time value contains the date information, the resulting visualization is not stacked: the days appear one after another.

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| timechart span=30m max(Temperature) AS Temperature BY Date

I tried retaining only the hour and minutes in _time, which set every _time value to the date 2022-07-06. When I executed that query I got a stacked time series chart, but much of the horizontal space is blank. Here is the alternative query:

index="weather" sourcetype=publicweatherdata (Location=C60*)
| fields _time, Location, Temperature
| eval Date=strftime(_time, "%D")
| eval hour_min=strftime(_time, "%H:%M")
| eval _time = strptime(hour_min, "%H:%M")
| timechart span=30m max(Temperature) AS Temperature BY Date

How can I improve the visualization so the time series are stacked, with the x-axis free of the dates? Below are the charts needing improvement. Thanks!
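One approach not tried above, sketched as a suggestion: the timewrap command overlays consecutive periods of a single-series timechart, which avoids rewriting _time by hand:

index="weather" sourcetype=publicweatherdata (Location=C60*)
| timechart span=30m max(Temperature) AS Temperature
| timewrap 1day

Each wrapped day becomes its own series, named by its offset from the most recent day, so the x-axis shows only the time of day. Note this wraps one series per day rather than per Date value, so it fits best when charting one location at a time.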
We have a 10-member (16 CPU, 64 GB RAM) search head cluster in the same data center. Three members are preferred captains, F5 does not forward traffic to these three members, and the captain_is_adhoc_searchhead parameter is configured on all 10 members. Sometimes one search head's load average exceeds 1 because of CPU or memory overuse, and that search head can no longer respond to the captain's call in time. The member then launches a captain election and becomes the new captain, even though it is not a preferred captain. The election process does not settle until a member with the preferred-captain parameter becomes captain. The search head cluster is unstable during the captain election: many scheduled searches and alerts are skipped, and some critical alerts are missed. How can we solve this problem and prevent a non-preferred member from being elected captain?
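For reference, the server.conf settings usually involved here, shown as a sketch rather than a verified fix (the timeout value is illustrative):

# On the three members that should win elections
[shclustering]
preferred_captain = true

# On the other seven members
[shclustering]
preferred_captain = false
# A longer election timeout gives a briefly overloaded captain more slack
# before members call an election; 120000 ms is an assumption, not advice.
election_timeout_ms = 120000

Keep in mind that preferred_captain only biases elections; it does not hard-block a member from becoming captain if no preferred member can win.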
I am new to Splunk and need help directing eStreamer logs to a particular directory in Splunk.
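If the goal is to pick up the eStreamer client's output files from a directory on disk, a minimal inputs.conf sketch; the path, index, and sourcetype here are hypothetical placeholders, not from the post:

[monitor:///opt/estreamer/logs]
# hypothetical values; adjust to the actual eStreamer output location
index = cisco_estreamer
sourcetype = cisco:estreamer:data
disabled = 0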
Is persistentQueueSize supported with splunktcp-ssl inputs? The documentation on persisting data says network-based inputs are supported, but the documentation for inputs.conf seems to indicate they are NOT supported. I have some intermediate forwarders (Splunk Universal Forwarder 8.2.5) configured with persistentQueueSize, and it looks like it's working: I see active reading and writing of cache files under SPLUNK_HOME/var/run/splunk/splunktcpin/pq__9997_0 and pq__9997_1. Thoughts? Thanks!
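For context, the configuration in question would look roughly like this (port and sizes are illustrative):

[splunktcp-ssl:9997]
# queueSize is the in-memory queue; persistentQueueSize spills to disk.
# Whether this combination is officially supported is exactly the question.
queueSize = 10MB
persistentQueueSize = 5GB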
Good day, I need help calculating the time difference for the field "@timestamp", which contains timestamps in the format 2022-07-14T09:05:08.21-04:00. Example:

MYSearch
| stats range(@timestamp) as Delay by "log_processed.logId"
| stats max(Delay)

If I do the same with the Splunk _time field, it works perfectly.
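Presumably range() needs numeric input: _time is already epoch seconds, while @timestamp is a string. A sketch that converts it first; the strptime format string is a best guess for this timestamp layout:

MYSearch
| eval ts=strptime('@timestamp', "%Y-%m-%dT%H:%M:%S.%Q%:z")
| stats range(ts) as Delay by "log_processed.logId"
| stats max(Delay)

Note that field names starting with @ must be wrapped in single quotes inside eval.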
Running the specific scenario:

1 Splunk heavy forwarder with no direct internet access (on-prem Splunk 8.1.7.2)
Splunk Add-on for Microsoft Cloud Services on the HF (4.3.3)
1 internet proxy
ExpressRoute for private network traffic to Azure

When trying to ingest data from an event hub input on the above setup with the proxy configuration enabled on the add-on:
1 - Service principal authentication succeeds (over the internet proxy, as expected).
2 - The connection to the event hub fails, because the event hub only accepts connections over the ExpressRoute link and the heavy forwarder tries to connect through a public IP using the configured proxy. I've confirmed the HF resolves the event hub FQDN to a private IP, but it still sends the connection request to the proxy. I've also confirmed this in the add-on code.

When trying to ingest data from an event hub input on the above setup with the proxy configuration disabled on the add-on:
1 - Service principal authentication fails (no internet access).

In this scenario the add-on needs internet access to get an authentication token from the Microsoft API, but the connection to the event hub to ingest data needs to happen through the ExpressRoute private link. The add-on seems to route everything one way or the other, depending on whether the proxy configuration is enabled. Is there a solution for this?

Having the proxy configuration enabled also breaks all storage account inputs: they use a SAS key (no internet required for authentication) but are not routed through the ExpressRoute link, despite the storage account FQDN resolving to a private IP.

Regards, Marco
I have two dashboards. The first, lower-level dashboard has a dropdown to select between multiple hosts of the same type, to view diagnostic information among other things. The second, higher-level dashboard has status indicators representing the overall "health" of hosts. These status indicators are pre-built panels present in both dashboards, and they do not have drilldown functionality. My workaround is to place a statistics table under each status indicator which only displays "Click_Here"; this works just fine to send the user to the desired dashboard. However, when the user selects the drilldown related to a particular host, I would like that host to be pre-selected in the lower-level dashboard's dropdown, as this changes which panels and status indicators are displayed dynamically. The lower-level dashboard's dropdown token is used in multiple base searches as well as a number of visualizations, and the selection also hides and shows panels through a series of <change><condition> tags and tokens. This is an example of my lower-level dropdown XML:

<input type="dropdown" token="HOST_SELECTION">
  <label>Host</label>
  <choice value="host1">host1</choice>
  <choice value="host2">host2</choice>
  <choice value="host3">host3</choice>
  <default>host1</default>
  <initialValue>host1</initialValue>
  <change>
    <condition label="host1">
      <set token="HostType1_Indicator_Tok">true</set>
      <unset token="HostType2_Indicator_Tok">false</unset>
      <set token="Host1_Tok">true</set>
      <unset token="Host2_Tok">false</unset>
      <unset token="Host3_Tok">false</unset>
    </condition>
    Etc...

Any input is appreciated. Thank you.
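A drilldown on the "Click_Here" table can pass the host into the lower-level dashboard's form token through the URL. A minimal sketch, where the app name my_app, the dashboard ID lower_dashboard, and the row field host are all assumptions:

<drilldown>
  <!-- my_app, lower_dashboard, and the host field are hypothetical names -->
  <link target="_blank">/app/my_app/lower_dashboard?form.HOST_SELECTION=$row.host$</link>
</drilldown>

Since setting a form token from the URL drives the dropdown the same way a manual selection does, the <change><condition> logic should fire and set the dependent show/hide tokens.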
I would like to run my playbooks after the changes have been introduced, without making commit messages.
Hi Splunkers,

I am trying to get a new internal field "_application" added to certain events, so I added the field via _meta in inputs.conf on the forwarder:

[script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/df_metric.sh]
sourcetype = df_metric
source = df
interval = 300
disabled = 0
index = server_nixeventlog
_meta = _application::<application_name>

I also added a new stanza to fields.conf:

[_application]
INDEXED = false
#* Set to "true" if the field is created at index time.
#* Set to "false" for fields extracted at search time. This accounts for the
#  majority of fields.
INDEXED_VALUE = false
#* Set to "true" if the value is in the raw text of the event.
#* Set to "false" if the value is not in the raw text of the event.

The fields.conf is deployed to the indexers and the SH, but I still do not see the field. I tried searching for:

"_application::<application_name>"
"_application=<application_name>"
_application::*
_application=*

Nothing. Can somebody explain to me where the problem is?
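One thing that stands out, offered as a guess rather than a confirmed diagnosis: a field delivered via _meta is created at index time, so by the spec comments quoted above, INDEXED should be true, while INDEXED_VALUE stays false because the value is not in the raw text. A sketch of that variant:

[_application]
# _meta fields are written at index time
INDEXED = true
# the value does not appear in _raw
INDEXED_VALUE = false

Also note that fields with a leading underscore are treated as internal and hidden from normal field listings, which can make them hard to spot even when the extraction works.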
Hello all, we are running Splunk 8.2 and would like to set tsidxWritingLevel to 4. We have a multisite cluster and want to deploy this to all the indexers. Should I make the change on the cluster master (master-apps) and push the bundle, or do I need to log in to each individual indexer, change the parameter, and restart it?
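For reference, the bundle-push variant would look roughly like this on the cluster master; the file location within master-apps and the [default] scope are illustrative assumptions:

# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
# [default] applies the setting to all indexes; per-index stanzas also work
[default]
tsidxWritingLevel = 4

followed by applying the cluster bundle, after which the master coordinates the rolling restart of the peers.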
Here is a reduced version of my JSON:

{
   records: [
     {
       errors: 4
       name: name1
       plugin: p1
       type: type1
     }
     {
       errors: 7
       name: name2
       plugin: p1
       type: type2
     }
     {
       errors: 0
       name: name3
       plugin: p2
       type: type3
     }
   ]
   session: {
     document: my_doc
     user: me
     version: 7.1
   }
}

There are 3 records in records{}, so I expect to get 3 events using mvexpand, but I get 6. I'm using a query similar to one I found in an answer in this community:

| spath
| rename records{}.name AS name, records{}.type AS type, records{}.plugin as plugin, records{}.errors as errors
| eval x=mvzip(mvzip(mvzip(name,type),plugin),errors)
| mvexpand x
| eval x=split(x,",")
| eval name=mvindex(x,0)
| eval type=mvindex(x,1)
| eval plugin=mvindex(x,2)
| eval errors=mvindex(x,3)
| table name, type, plugin, errors

I get 6 rows instead of 3:

name   type   plugin  errors
name1  type1  p1      4
name2  type2  p1      7
name3  type3  p2      0
name1  type1  p1      4
name2  type2  p1      7
name3  type3  p2      0

Any suggestion on how to fix the query to avoid the duplication? Thanks!
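A way to sidestep the mvzip bookkeeping, sketched here rather than taken from the thread: expand the records array itself and re-parse each element:

| spath path=records{} output=record
| mvexpand record
| spath input=record
| table name, type, plugin, errors

If the duplication instead comes from the same event matching the search twice, a dedup on a stable key before the expand would be the thing to check.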
I'm running Splunk Enterprise 8.2.5 on Windows 2019: two indexers in a cluster, a single search head, and a separate cluster master/license master/deployment server, all on Windows 2019. I have installed IT Essentials Work version 4.31.1, created the clustered indexes, and enabled the apps I wish to use. After a few minutes the web interface on my single search head grinds to a halt and everything starts running very slowly. Compute on the search head and indexers seems fine, and I have 32 cores and 64 GB RAM on each. If I disable all the apps that come with the IT Essentials Work package, performance returns to normal. Any ideas on where to look to troubleshoot this?
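A starting point sometimes useful for this kind of slowdown, offered as a sketch rather than a diagnosis: check how much scheduled-search load the ITE Work apps add, since an overloaded scheduler often shows up as skipped searches before CPU does:

index=_internal sourcetype=scheduler
| stats count by app, status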
Hello, we have a dbinput that pulls in data from an Oracle database. Yesterday there were some problems with our indexer, so we lost a bit of data during that time. I know that I can change the rising checkpoint value to yesterday, but that will reindex all data from that point to now. Are there any other ways to reindex just the missing data from the past, without deleting it all?

Edit: I tried the delete-and-reindex-all approach by changing tail_rising_column_checkpoint_value to an epoch time in the past, but when the input started it only began indexing from the point at which I refreshed. Please help.
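One pattern sometimes used to backfill a bounded gap, sketched here with hypothetical names (the connection, table, and timestamp column are all assumptions): run a one-off dbxquery scoped to the missing window instead of rewinding the input's checkpoint:

| dbxquery connection="my_oracle_connection" query="SELECT * FROM my_events WHERE event_time BETWEEN TIMESTAMP '2022-07-13 00:00:00' AND TIMESTAMP '2022-07-14 00:00:00'"

dbxquery only returns rows to the search; to persist the backfilled events it would need to be paired with something like collect into the target index.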
Hi Splunkers, this might be a dumb question, but I am a bit confused about ITSI licensing. I understand that ITSI requires an ingest of at least 50 GB as well as a separate ITSI license in addition to the Splunk Core license. Does that mean a 50 GB Core license and a 0 GB ITSI license? Or could you get a 0 GB Core license (like you would use on a HF that doesn't ingest anything) and a 50 GB ITSI license? And can both be installed in the same place in the GUI of the license master, or is there a special process for the ITSI license?
Hello, I have a lookup with two columns, one with the computer name and the other with the OS version. When I search the windows index in Splunk (event logs), I want to use this lookup to add the OS version to the results. In other words, I want to display the information from my lookup as a field in my index search results. Greetings
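A minimal sketch of the lookup command for this, where the lookup name computer_lookup and its columns computer_name and os_version are hypothetical stand-ins for the actual names:

index=windows
| lookup computer_lookup computer_name AS host OUTPUT os_version

This matches each event's host field against the computer_name column and adds os_version to the results; defining an automatic lookup on the sourcetype achieves the same without the explicit pipe.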
Hi All, I have this simple search that shows logins from the same source IP to multiple destination hosts. Can someone please explain why dc(dest_ip) does not match the number of values reported by values(dest) in the results? You will notice in the results that where values(dest) shows 2 hostnames, dc(dest_ip) shows 4. Shouldn't it be that if dc(dest_ip) shows 4, then values(dest) also reports 4 unique host names? What am I missing? Thanks

index=xxx source="WinEventLog:Security" EventCode=5140
| stats dc(dest_ip) as dest_count values(dest) values(Account_Name) values(user_first) values(user_last) by Source_Address
| rename values(*) as *

Results:

Source_Address  dest_count  dest                                     Account_name  user_first  user_last
10.x.x.11       4           server01@domain.com server02@domain.com  xxxx          xxx         xxx
10.x.x.12       4           server01@domain.com server02@domain.com  xxxx          xx          xx
10.x.x.13       2           server03@domain.com                      xxx           xx          xx
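Worth noting when comparing the two columns: dc(dest_ip) counts distinct IP addresses while values(dest) lists distinct hostnames, and several IPs can map to one hostname (or dest can be missing on some events). A sketch that compares like with like:

index=xxx source="WinEventLog:Security" EventCode=5140
| stats dc(dest_ip) as ip_count dc(dest) as host_count values(dest) as dest by Source_Address

If ip_count exceeds host_count, the gap is accounted for by multiple IPs per hostname or by events with no dest value.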
Hi, below is one of my requirements. I have multiple lookup tables, for example:

number  name  lookuptable
1       abc   1stlookuptable

number  name  lookuptable
1       abc   2ndlookuptable

number  name  lookuptable
1       dxc   3rdlookuptable

number  name  lookuptable
1       xyz   4thlookuptable

number  name  lookuptable
1       abc   5thlookuptable

The requirement is to build a query that, for name=abc (from the example above), shows which lookup tables the name belongs to, with the fields name and lookuptable. Example output:

name  lookuptable
abc   1stlookuptable
      2ndlookuptable
      5thlookuptable
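A sketch of one way to do this, assuming the five lookup files exist under exactly those names and each has number and name columns:

| inputlookup 1stlookuptable | eval lookuptable="1stlookuptable"
| append [| inputlookup 2ndlookuptable | eval lookuptable="2ndlookuptable"]
| append [| inputlookup 3rdlookuptable | eval lookuptable="3rdlookuptable"]
| append [| inputlookup 4thlookuptable | eval lookuptable="4thlookuptable"]
| append [| inputlookup 5thlookuptable | eval lookuptable="5thlookuptable"]
| search name="abc"
| stats values(lookuptable) AS lookuptable by name

The final stats collapses the matches into one row per name with a multivalue lookuptable column, matching the example output above.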
Let's assume I have a Linux machine with a universal forwarder installed. Can I improve its performance by changing some parameters in the OS kernel?
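For what it's worth, the OS-level tuning usually discussed for Splunk processes on Linux concerns user limits (and transparent huge pages) more than kernel parameters proper, and the benefit on a lightweight UF is likely small. A sketch of the commonly raised limits, with illustrative values:

# /etc/security/limits.conf  (values are examples, not recommendations)
splunk  soft  nofile  64000
splunk  hard  nofile  64000
splunk  soft  nproc   16000
splunk  hard  nproc   16000

In practice, UF throughput is more often bounded by its own maxKBps setting in the [thruput] stanza of Splunk's limits.conf than by the kernel.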