All Topics

If I need to increase the number of UBA nodes, is it necessary to change the license?
Hello, I am using Splunk_TA_nix and a server class to push it out to all *nix boxes, but the server class is not granular enough to distinguish between RHEL 7 and RHEL 8 boxes. On RHEL 8 I want to monitor the path /var/log/audit, but NOT on RHEL 7. Is there an inputs.conf stanza that can accomplish directory monitoring by OS version? Or how else would one go about this?
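One possible approach, sketched below purely for illustration: package the audit input in its own deployment app and scope it to a server class that only matches the RHEL 8 hosts. serverclass.conf cannot filter on OS release directly (only hostname/IP patterns or machine type), so the whitelist here assumes RHEL 8 hosts can be identified by name; the app name, server-class name, and hostname pattern are all made up.

# deployment-apps/rhel8_audit_inputs/local/inputs.conf
[monitor:///var/log/audit]
disabled = false

# serverclass.conf on the deployment server
[serverClass:rhel8_hosts]
whitelist.0 = rhel8-*

[serverClass:rhel8_hosts:app:rhel8_audit_inputs]
restartSplunkd = true

If hostnames do not encode the OS version, an explicit per-host whitelist (whitelist.0, whitelist.1, ...) for the RHEL 8 machines works the same way.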
I am already using a query with the following to get the total number: | timechart span=1d count. What can I add so that it returns/shows a "0" if the query has no results?
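A sketch of one common workaround: timechart count already zero-fills empty buckets within the time range, so the remaining gap is the case where the search returns no results at all, which appendpipe can cover by emitting a single zero row only when the main pipeline is empty.

| timechart span=1d count
| appendpipe [ stats count | where count==0 | eval _time=now() | table _time count ]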
Do we have any Tarrask Malware detection queries for Splunk Enterprise? 
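I am not aware of a packaged query, but since Tarrask is publicly documented as creating hidden scheduled tasks (and tampering with the task's SD value under the TaskCache\Tree registry key), one hedged starting point is to watch scheduled-task creation and TaskCache registry changes. The index, sourcetype assumptions, and field names below are placeholders and depend on how your Windows Security and Sysmon data are onboarded.

index=wineventlog EventCode=4698
| stats count by host, TaskName, SubjectUserName

index=sysmon (EventCode=12 OR EventCode=13) TargetObject="*\\Schedule\\TaskCache\\Tree\\*"
| table _time host Image EventCode TargetObject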
Hi all, I would like to use the SNMP modular input to collect SNMP data from ~100 network switches. Should the SNMP modular input be installed on a forwarder server? The polling interval is 10 minutes. Is there any limit on the number of polls supported? I think I may need to poll ~8000 OIDs in each 10-minute polling interval. Is there any CPU/memory load concern?
Hi, I have a multisite indexer cluster (3 sites, 2 indexers each, 6 indexers total). We have set site_replication_factor = origin:2,total:5 and site_search_factor = origin:2,total:4. Our question: if we scale up the indexer cluster by adding more indexers and change the site_replication_factor and site_search_factor accordingly, what will the impact be on the whole architecture? Is there anything we need to take care of before scaling up? Let me know.
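For illustration only, the change itself is usually just an edit to the existing [clustering] stanza in server.conf on the cluster manager (the new totals below are placeholders, not a recommendation), typically followed by a restart of the manager. The cluster then runs fix-up activity to bring existing buckets up to the new factors, so a temporary increase in replication traffic and disk usage should be expected.

# server.conf on the cluster manager
[clustering]
site_replication_factor = origin:2,total:6
site_search_factor = origin:2,total:4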
Hi all, I am running a Windows heavy forwarder on Splunk Enterprise 8.1.7.2 and I listen on ports TCP 9514 and UDP 514. The data comes in to the main index, I apply a transforms/props rule to move it to another index, and the logs reach my indexers and search heads (both the search head and the indexers are Red Hat 7.9 running Splunk Enterprise 8.2.0).

However, my heavy forwarders also send a copy off to another set of Red Hat 7.9 Splunk heavy forwarders, and it seems that anything besides the default Splunk logs on TCP 9997 does not reach them. My config is as follows:

## inputs.conf
[tcp:9514]
disabled = false
connection_host = ip
index = main

[udp:9514]
disabled = false
connection_host = ip
index = main

[udp:514]
disabled = false
connection_host = ip
index = main

[tcp:514]
disabled = false
connection_host = ip
index = main

## transforms.conf
[index_redirect_to_pci]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = pci

## props.conf
[host::x.x.x.x]
TRANSFORMS-rt1 = host_rename_rt1,index_redirect_to_pci

How do I get the logs from ports 514 and 9514 forwarded to the second set of heavy forwarders? I have one Red Hat heavy forwarder where I installed syslog-ng, pointed Splunk at that folder as a monitor input, and removed the port 514 listener, and that is the only heavy forwarder that can send the syslog data over to the second set of Splunk instances that is not receiving the logs from the transformed index.
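For comparison, one common pattern for making every input reach both destinations is to list both output groups in defaultGroup in outputs.conf on the heavy forwarder, or to add _TCP_ROUTING to the individual tcp/udp input stanzas. The group names and hosts below are placeholders, not your actual addresses.

## outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers, second_hf_set

[tcpout:primary_indexers]
server = indexer1:9997, indexer2:9997

[tcpout:second_hf_set]
server = hf2a:9997, hf2b:9997

If only selected inputs should be cloned, the per-input equivalent is _TCP_ROUTING = primary_indexers, second_hf_set inside each tcp/udp input stanza. The receiving heavy forwarders also need a splunktcp listener on the port referenced here.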
Hi all, I'm trying to get my existing add-on updated with the latest Splunk Add-on Builder v4. We developed the existing add-on with a previous version of Add-on Builder, and we were informed by the Splunk team that we should rebuild it with the latest builder. It would be of great help if you could provide links on how this can be done, or any advice. Thanks.
Can someone please suggest how to calculate the percentage increase over a threshold value and build a dashboard panel from it that shows red if the value has increased by 20%?
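A rough sketch, with current and baseline as hypothetical field names to replace with your own: the percentage change is computed with eval, and the result can then drive the panel colour either through the colour ranges in the dashboard editor or with rangemap, which adds a range field that classic single-value panels can colour by.

<your base search>
| eval pct_increase = round((current - baseline) / baseline * 100, 2)
| rangemap field=pct_increase low=0-19 severe=20-1000000 default=low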
I need some assistance with the Splunk Cloud migration assessment tool. We plan to move to Splunk Cloud, but two of the preflight check searches return no results even though there is data in the index the searches pull from. The searches are:

| tstats dc(host) AS hosts where `scma_source_internal_index` source=*license_usage.log TERM(Usage) earliest=-24h@h by index

and

| tstats dc(host) AS hosts where `scma_source_internal_index` sourcetype=splunkd earliest=-4h@h by index

I was reading through the troubleshooting guide and it mentions a bug where tstats does not work well when reading the internal indexes, and it says to reach out to Splunk Support (which I have done, but no response yet). The current version of Splunk Enterprise is 8.2.2.1.
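As a cross-check (not a fix for the assessment tool itself), roughly equivalent raw-event searches can be run without tstats, assuming the scma_source_internal_index macro expands to the _internal index; if these return hosts while the tstats versions return nothing, that would support the known-bug explanation in the troubleshooting guide.

index=_internal source=*license_usage.log TERM(Usage) earliest=-24h@h | stats dc(host) AS hosts by index

index=_internal sourcetype=splunkd earliest=-4h@h | stats dc(host) AS hosts by index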
I have two Splunk queries, each of which uses the rex command to extract the join field. Example:

QUERY 1
index=index1 "Query1" | rex field=_raw "abc(?<MY_JOIN_FIELD>def)"

QUERY 2
index=index2 "Query2" | rex field=_raw "ghi(?<MY_JOIN_FIELD>jkl)"

I want to use the transaction command to correlate these two queries, but I can't figure out how to do it. Thanks! Jonathan
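One common pattern, assuming both event sets can be pulled in a single search, is to OR the base searches together, apply both rex extractions, and then group on the shared field:

(index=index1 "Query1") OR (index=index2 "Query2")
| rex field=_raw "abc(?<MY_JOIN_FIELD>def)"
| rex field=_raw "ghi(?<MY_JOIN_FIELD>jkl)"
| transaction MY_JOIN_FIELD

If event ordering and duration are not needed, | stats values(*) as * by MY_JOIN_FIELD on the same combined search is usually cheaper than transaction.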
I have come across a unique issue with the MS Cloud Services app when assigning an input for a single storage table. The API works as expected when I initially set the input with a Start Time. However, the data does not continue to be collected/ingested beyond the timestamp at which the input is configured, unless there is manual intervention/manipulation on my part to the input's Start Time.

Example: if I set a Start Time of 2 weeks prior and save the input, it will collect data from the storage table from 2 weeks prior up to the time the input is saved in Splunk. The input will generate 0 results after that time. I checked in Azure Storage Explorer and the table in question continues to write new entries in the same format. I have confirmed Splunk can see that data, because it will start collecting new data only after I manually update the Start Time in the input. I checked the mscs:storage:table:log and there are no errors with the API input functionality, and it shows attempts at the designated interval (5 minutes).

Historically with this input, I've had success by leaving the Start Time at the default (30 days) and setting the table list to * to collect everything. However, this table is part of a very large blob that cannot be pulled in the same fashion. I'm hoping to get some ideas about what could be causing this break in log collection and see if there is something I may be overlooking. Any input would be greatly appreciated.
I'm trying to do my own "poor man's certificate check". Ideally I'd like to pick up the paths to the certs from the config (btool output) so I can check them with the openssl CLI tool. I don't want to do any Python modular input stuff for this, since I want it to run as a simple script on any machine with a UF. The question, therefore, is where I should get my certs from: the serverCert, rootCA, clientCert, and sslRootCAPath entries in inputs.conf, outputs.conf, server.conf, and deploymentclient.conf (of course they don't have to be defined in each file). For now I assume the "new" configuration format with a single PEM. Any files that I forgot? Any more entries I missed?
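As a sketch of the kind of wrapper this could be: the SPLUNK_HOME default, the conf-file list, and the setting regex below are assumptions to adjust, the btool parsing is deliberately naive (it takes the last whitespace-separated token on each matching line as the path), and openssl only reports on the first certificate in each PEM.

SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunkforwarder}
for conf in server inputs outputs deploymentclient; do
    "$SPLUNK_HOME/bin/splunk" btool "$conf" list --debug \
        | grep -Ei 'serverCert|clientCert|sslRootCAPath|rootCA|caCertFile' \
        | awk '{print $NF}'
done | sort -u | while read -r pem; do
    [ -f "$pem" ] || continue
    echo "== $pem"
    openssl x509 -in "$pem" -noout -subject -enddate
done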
I have tried reassigning the orphaned search to the new owner, but that did not fix it. I am getting the error message "couldn't find the object ID". What does it mean? The search is shared globally. Will recreating the old owner, reassigning the search to the new owner, and then deactivating the old owner account clear the warnings? If not, please suggest a way to get rid of them.
Hello all, in our environment some universal forwarders are not reporting to Splunk Cloud. When I tried to view a forwarder's log file, i.e. splunkd.log, I found that nothing has been written to the file for the past week. What might be the reason? Is it related to the forwarder not sending logs to the Splunk index? Thank you.
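One way to narrow this down from the Splunk Cloud side (a sketch, assuming the receiving tier's internal metrics are searchable) is to check which forwarders have recently opened connections to the indexers; hosts missing from this list have not been phoning home at all.

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval last_seen = strftime(last_seen, "%F %T")

Separately, if splunkd.log on a forwarder has not been written to for a week, it is worth confirming that the splunkd process is actually running on that host, since a stopped forwarder writes neither local logs nor forwarded data.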
Hi to all, I have three machines: one deployment server, one SH/indexer, and one forwarder. Looking at the Monitoring Console overview on the deployment server, I don't see the correct configuration (only the deployment server is shown; the SH/indexer and the forwarder are not visible). The data arrives correctly in the index, and in Forwarder Management I can see the forwarder client correctly. Finally, the lookup "dmc_forwarder_assets" is empty. Can someone help me, please? Thanks.
Hi, I need to use the Event Timeline Viz to show a timeline of the different URLs being hit over time. This is the first time I have used this visualization and I am struggling. At the moment, this is all I have output: [screenshot not included] And here is the XML for the panel: [XML not included] Can you please help, as it has been 4 days now since I started with this diagram. Many thanks, Patrick
Hello all, how can I swap the x and y axes of a timechart so that _time is on the other axis? My current search is:

<base search>
| timechart span=1h sum(REQUESTNAME) as Sikayet count by ilce
| sort -count
| untable _time Xaxis Yaxis
| where Yaxis > 3

Regards
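One way to transpose the result so that _time becomes the column/series dimension instead of the row axis, sketched here with just the count aggregation (the same idea applies to the sum), is untable followed by xyseries; for small result sets the transpose command is another option.

<base search>
| timechart span=1h count by ilce
| untable _time ilce count
| xyseries ilce _time count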
Hello, I am trying to integrate Splunk (on-premises) with Duo (the Duo application is in the cloud), but I am receiving an error. The error appears after I enter the integration key, secret key, and API hostname in the Duo Security Add-on, which I have installed on my SH. I obtained the integration key, secret key, and API hostname after configuring an Admin API application in Duo. (I should mention that I gave my application the following permissions: Grant read information, Grant read log, and Grant read resource.) I have also tried the solution explained in the following article, creating the inputs.conf directly, but without success (Solved: DUO Splunk Connector: Error "Validation for scheme... - Splunk Community). The error: "Encountered the following error while trying to save: The provided admin API credentials cannot get the necessary logs. Please verify that the Admin API settings are correctly configured." Can anyone help me? Many thanks, Dragos
This is kind of open ended, but essentially I'm looking for things that you view as bad config, or at least configuration settings that should be flagged for review. Some ideas I've had so far:

- Indexes with a very short retention period (100 seconds or the like)
- Searches with `index=*` in them
- A deployment server targetUri that doesn't match the name of your actual DS

What other sorts of config would you flag as concerning? Do you have any automated checks for anything like this in house?
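For example, the short-retention check from the list above can be automated as a search against the REST API; a sketch, with the one-day cutoff chosen arbitrarily:

| rest /services/data/indexes
| eval retention_days = round(frozenTimePeriodInSecs / 86400, 1)
| where retention_days < 1
| table splunk_server title retention_days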