In distsearch.conf on our search head we can blacklist applications:

[replicationBlacklist]
splunk_app1_blacklist = apps/splunk_app1/...
splunk_app2_blacklist = apps/splunk_app2/...

We can also define groups of search peers (here we have peers in two groups):

[distributedSearch:dmc_group_main]
servers = 172.21.72.127:8089,172.21.72.128:8089
default = true

[distributedSearch:dmc_group_satelite]
servers = 10.1.1.1:8089,10.1.1.2:8089
default = true

If we don't have the indexes for app1 and app2 on the peers in dmc_group_satelite, can [replicationBlacklist] be written in a way that stops the knowledge bundles from being pushed to that group? We only want to push bundles to dmc_group_main (there is no need for a knowledge bundle on peers that don't have a searchable index). Many thanks, Jon
Hello, I am trying to build a search that shows how much each index is being used, but the search_index field doesn't work. Here is the search:

index=_audit action=search (id=* OR search_id=*)
| rex "user=(?<user>.*?),"
| search user!=splunk-system-user
| search user!=admin
| search search!=*_internal* search!=*_audit*
| rex max_match=0 field=search_index "((?:index(\")?=(?:\\|\\\"|\")?)|(?:s\w+\s\S))(?<my_indexes>[^\\\s\"]+)"
| eval search_index=mvdedup(search_index)
| convert num(total_run_time)
| eval time_of_search=strftime(_time, "%F %T")
| table user time_of_search total_run_time savedsearch_name search_index search
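Two details worth checking, inferred from the query alone: the rex runs against field=search_index, which does not exist yet (the raw query text lives in the search field), and the capture group is named my_indexes, so search_index is never actually created. A simplified sketch of the extraction (the regex here is deliberately simpler than the original's escaping variants, so treat it as an assumption to adapt):

```spl
index=_audit action=search (id=* OR search_id=*)
| rex "user=(?<user>.*?),"
| search user!=splunk-system-user user!=admin
| rex max_match=0 field=search "index\s*=\s*\"?(?<search_index>[^\s\",]+)"
| eval search_index=mvdedup(search_index)
| table user savedsearch_name search_index search
```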
Hello Team, I want a stanza that ingests logs only from a specific date onward, in both Linux and Windows environments. On Windows I am currently using ignoreOlderThan = 365d, but the same setting is not working for Linux.

Requirement: I want to ingest logs from Linux (via UF) and from Windows machines into Splunk, but only the last 365 or 180 days. Can anyone share an alternative to the stanza below?

Example:

[WinEventLog://Security]
disabled = 0
index = trendmicro
sourcetype = %trendmicro%
ignoreOlderThan = 365d
whitelist = 4625,4648,4723,4728,4732,4740,4777,5031,4624,4634
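For file monitoring on Linux, ignoreOlderThan is also valid in a [monitor://] stanza. A minimal sketch (the path, index, and sourcetype are placeholders for your environment); note that ignoreOlderThan compares the file's modification time against the cutoff, not the timestamps of individual events, so any file touched recently is still ingested in full:

```conf
# inputs.conf on the Linux UF -- path/index/sourcetype are examples
[monitor:///var/log/myapp/*.log]
disabled = 0
index = trendmicro
sourcetype = myapp_logs
# skip files whose modification time is older than 365 days
ignoreOlderThan = 365d
```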
I have a log file which polls an endpoint, and only when a new version has arrived does it perform the operation. All polling attempts (whether a new version is available or not) are logged to the file. Reading this log file works fine, but I want to skip the redundant polling events and forward only those events where a new version was found. Can this be done on a Splunk forwarder using inputs.conf?
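Content-based event filtering happens in the parsing pipeline, so it normally lives on a heavy forwarder or the indexers rather than in a universal forwarder's inputs.conf. A sketch of the usual nullQueue pattern (the sourcetype name and the regex marking a no-op poll are assumptions to replace with your own):

```conf
# props.conf (on the heavy forwarder or indexers)
[myapp_polling]
TRANSFORMS-drop_redundant = drop_no_new_version

# transforms.conf -- regex is a placeholder for whatever marks a no-op poll
[drop_no_new_version]
REGEX = no new version available
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are routed to nullQueue and discarded before indexing; everything else passes through unchanged.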
Hello Experts, a CSV file located on a file share has three columns: Hostname, Type, and IP. I would like to ingest only the Hostname and IP columns and ignore the Type column, in order to save disk space on the Splunk indexers. Please suggest. Thank you.
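One way to do this inside Splunk is a SEDCMD in props.conf, which rewrites each raw line at parse time (heavy forwarder or indexer), so the middle column never reaches disk. A sketch, where the sourcetype name is an assumption and the sed expression assumes Type is always the second comma-separated field with no embedded commas or quoting:

```conf
# props.conf -- strips the second CSV field (Type) from every line,
# including the header, before the event is indexed
[hostlist_csv]
SEDCMD-drop_type = s/^([^,]*),[^,]*,/\1,/
```

For example, "host1,server,10.0.0.1" would be indexed as "host1,10.0.0.1". If the CSV can contain quoted fields with commas, pre-processing the file outside Splunk would be safer.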
We are running a Splunk cluster (version 8.1.2) and trying to secure the forwarding from the Universal Forwarders (also version 8.1.2) to the Heavy Forwarders in our cluster. I've followed the documentation to accomplish this using custom certificates, and we have succeeded in securing the traffic between the Universal Forwarders running on Linux and our Heavy Forwarders (also running on Linux). However, the Universal Forwarders on Windows fail to send their data.

Our configuration is as follows:

- We have created a root CA that is shared by all Splunk nodes
- We have created a server certificate, signed by the root CA, that is shared by the Heavy Forwarders
- We have created a certificate, signed by the root CA, that is shared by the Universal Forwarders
- The Universal Forwarders contain an app with an outputs.conf with the following content:

[tcpout]
defaultGroup = ufw_group

[tcpout:ufw_group]
server = splunkhf1d:9997
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\apps\ufw_base\local\splunkUfd_chained.pem
sslPassword = $7$1x1tBdfWOZKofTNvhO1BD2/EJqF6yzM6fyiGVpqdDWEFQdm8Y1J+SGrN

Note that the sslPassword was pasted in plain text and was encrypted by Splunk upon restart.

The log of the Universal Forwarder shows:

ERROR AesGcm - Text decryption - error in finalizing: No errors in queue
ERROR AesGcm - AES-GCM Decryption failed!
ERROR Crypto - Decryption operation failed: AES-GCM Decryption failed!
WARN  ConfigEncryptor - Decryption operation failed: AES-GCM Decryption failed!

I have also tried to specify the path to the root CA in server.conf, but this did not help either. Finally, I tried installing the Universal Forwarder using the graphical user interface and specifying the certificates in the installation wizard. The strange thing is that the certificate options do not show up in any of the configuration files after the installation is complete, and forwarding also does not work.
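One thing worth ruling out, given those AES-GCM errors: a $7$ value is encrypted against the splunk.secret of the instance that produced it, so an encrypted sslPassword copied from another host (for example, from the Linux UF's app) cannot be decrypted on the Windows UF. A sketch of the relevant stanza, keeping the post's names and paths:

```conf
# outputs.conf on the Windows UF (server and paths as in the post)
[tcpout]
defaultGroup = ufw_group

[tcpout:ufw_group]
server = splunkhf1d:9997
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\apps\ufw_base\local\splunkUfd_chained.pem
# enter the PEM key password in PLAIN TEXT on this host and restart;
# this instance's splunkd encrypts it in place with its local splunk.secret
sslPassword = <plain-text key password>
```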
Has anyone successfully configured forwarding over SSL/TLS from a Windows host or is this only supported on Linux hosts?
How do I know if I am already logged in?
Hi, I want to write my results to a lookup from a saved search, but only append to the lookup when there are new results, which I am failing to do.

query | outputlookup append=true output.csv

This writes multiple copies of the same data into the lookup.

query | [| inputlookup output.csv | dedup S] | outputlookup output.csv append=true

This isn't working either. Any suggestions?
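One pattern that avoids duplicates, assuming S is the unique key column in output.csv: append the existing lookup rows to the fresh results, dedup on the key, then rewrite the whole file rather than appending to it. A sketch:

```spl
query
| inputlookup append=true output.csv
| dedup S
| outputlookup output.csv
```

Here inputlookup append=true adds the lookup's existing rows after the search results, and dedup keeps the first occurrence of each S, so a fresh result wins over an older stored row with the same key.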
Hello all,

I am facing an issue with appending in a query. My objective is to update the KV store with the list of servers, an alert_flag (whether the alert has been raised), and a count (the number of times the server has created an event). Below is the query I have used:

index=index
| lookup source_host_kvstore_001 source_host OUTPUT source_host as temp_source_host count alert_flag
| dedup source_host
| eval count=if(isnull(count),0,count)
| eval count = count+1
| eval alert_flag = if(isnull(alert_flag),0,if((alert_flag=1),1,0))
| eval _time=now()
| table _time source_host alert_flag count
| sort -_time
| outputlookup source_host_kvstore_001 append=true

When this is run, each host's row is updated but is also added again as a new row; I need a single row per host carrying the updated count and alert_flag. After an increase in the count, the data lands in the KV store like this:

_time                 alert_flag   count   source_host
2021-03-05 13:01:50   0            1       Server 1
2021-03-05 13:01:50   0            1       Server 2
2021-03-05 13:01:50   0            1       Server 3
2021-03-05 13:01:53   0            2       Server 1
2021-03-05 13:01:53   0            2       Server 2
2021-03-05 13:01:53   0            2       Server 3

However, I am looking for the KV store to be updated like this:

_time                 alert_flag   count   source_host
2021-03-05 13:01:53   0            2       Server 1
2021-03-05 13:01:53   0            2       Server 2
2021-03-05 13:01:53   0            2       Server 3

Please guide me through this.

Regards
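With outputlookup append=true, new rows are added alongside the old ones; one simple way to keep exactly one row per host is to drop append=true so each run replaces the collection with the freshly computed rows. A sketch under the assumption that every host of interest appears in each run (hosts absent from a run would be removed from the collection):

```spl
index=index
| lookup source_host_kvstore_001 source_host OUTPUT count alert_flag
| dedup source_host
| eval count = if(isnull(count), 0, count) + 1
| eval alert_flag = if(isnull(alert_flag), 0, alert_flag)
| eval _time = now()
| table _time source_host alert_flag count
| outputlookup source_host_kvstore_001
```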
Hi Team, I am looking to Configure HTTP Event collector to log client source-IP instead of the source host. Is there a way we can enable this on the Splunk cloud environment?
We use the following query to generate a few dashboards. However, we would like to set up an alert whenever sum(connection_count) goes above a threshold value, say 100. We tried a few options but the filter condition is not working. Can someone please help?

index=app sourcetype=DBConnectionUsage NOT(application_user="No User" OR application_user="SYS" OR application_user="C##GGS_OWNER")
| spath cdb | spath pdb | spath application_user
| search cdb=* pdb=* application_user = "*" cluster="E3"
| timechart span=1H sum(connection_count) by application_user
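One likely reason a filter after this timechart fails, judging from the query alone: timechart with a by clause turns the sum into one column per application_user, so no single sum(connection_count) field survives to compare against. For the alert search, a sketch that keeps one numeric column to threshold (100 as stated in the post):

```spl
index=app sourcetype=DBConnectionUsage NOT (application_user="No User" OR application_user="SYS" OR application_user="C##GGS_OWNER")
| spath cdb | spath pdb | spath application_user
| search cdb=* pdb=* application_user="*" cluster="E3"
| bin _time span=1h
| stats sum(connection_count) as total_connections by _time application_user
| where total_connections > 100
```

The alert can then trigger on "number of results > 0".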
Hi, the output of the query below has been attached. I need only the total value to be displayed in the dashboard; here the total is 578, and only that should be shown.

index=abc sourcetype=xyz
| rex field=_raw "INFO\s+(?<action>\w+\s\:?\s?\w+\s?\w+\s?\w+\s?\w+\s?\w+)"
| search action="getActiveRecords response" OR action="SUCCESS : get active records"
| stats count by action
| addtotals count col=t row=t labelfield=action label=output
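Since both action values are already filtered in by the search command, the total is simply the overall event count, so the per-action split and addtotals can be dropped. A sketch that returns a single row a single-value panel can display directly:

```spl
index=abc sourcetype=xyz
| rex field=_raw "INFO\s+(?<action>\w+\s\:?\s?\w+\s?\w+\s?\w+\s?\w+\s?\w+)"
| search action="getActiveRecords response" OR action="SUCCESS : get active records"
| stats count as Total
```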
I set up a flexible Splunk dashboard that responds to 4 custom input fields: 1. clientType (text) 2. region (text) 3. report window (dropdown) 4. refresh interval (dropdown)   My base query is:   <search id="baseSearch"> <query>index=myIndex $clientType$ region=$region$</query> <earliest>$reportWindow.earliest$</earliest> <latest>$reportWindow.latest$</latest> <refresh>$refreshInterval$</refresh> <refreshType>delay</refreshType> </search>     Now, I want to create a new panel in the form of a statistics table based on the `baseSearch` query:   <search base="baseSearch"> <query>search | stats first(timestamp) as timestamp, first(applicationName) as applicationName, values(message) as message by entryKey | table timestamp, entryKey, applicationName, message </query> <refresh>$refreshInterval$</refresh> <refreshType>delay</refreshType> </search>     However, the Splunk panel shows "No Results found". Yet, when I press "Open in Search", the Splunk query looks correct, and I can see results in a table in Splunk search. Why doesn't the statistics table in the Splunk panel get populated in the same way?
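A common cause of exactly this symptom: a non-transforming base search only hands its post-process searches the fields Splunk happens to retain, while "Open in Search" runs the combined query end to end, so it looks correct there even though the panel stays empty. One fix, sketched here with the post's own field names, is to name the required fields explicitly in the base search:

```xml
<search id="baseSearch">
  <query>index=myIndex $clientType$ region=$region$
    | fields timestamp, applicationName, message, entryKey</query>
  <earliest>$reportWindow.earliest$</earliest>
  <latest>$reportWindow.latest$</latest>
  <refresh>$refreshInterval$</refresh>
  <refreshType>delay</refreshType>
</search>
```

The leading `search` in the post-process query can then also be dropped, since the post-process string is appended to the base search as a pipeline continuation.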
Hi, according to the Splunk Docs page "How urgency is assigned to notable events in Splunk Enterprise Security", if I assign an asset Medium priority and the related Correlation Search (CS) High severity, the result should register as a High notable. However, it persists in registering as Medium, forcing me to raise the severity to Critical. Has there been a change in how ES operates such that the table as written no longer applies, or is there something wrong with the ES instance? Also, this is Splunk Cloud.
I want to create an object for a Glass Table in my Splunk, but I don't know how to create an object that shows my information on the Glass Table. For example, I want a Glass Table object that shows my firewall traffic. What is the command for that in a Glass Table? Can anybody help me?
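A metric widget on an ES glass table can be backed by an ad hoc search that returns a single value. A sketch of such a search (the index, sourcetype, and bytes field are assumptions; substitute whatever your firewall data actually uses):

```spl
index=firewall sourcetype=cisco:asa
| stats sum(bytes) as total_bytes
```

The glass table editor then renders total_bytes as the widget's value, refreshed on the schedule you configure for the widget.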
Hi, I need to uninstall the package application/splunkforwarder from a local whole-root zone running Solaris 11.3. The whole-root zone runs on a Solaris 11.3 global zone (which is the container/physical server). Please let me know if you have any suggestions. Thanks, Roberto
Hello Splunk Community,

I have two indexes, index=vc_xyz_idx and index=xp_zzz_summary_idx, and I am checking whether a value named docNum exists in index=xp_zzz_summary_idx. The docNum should be in both indexes, but I am only interested in the docNum values missing from index=xp_zzz_summary_idx. I created two evals and renamed the indexes, since both events carry the same field name, index. The issue is that I am getting false negatives: I added | search Missing_in_Blue="No" because I only want the docNum missing from index=xp_zzz_summary_idx, yet I get docNum values that are actually present in index=xp_zzz_summary_idx. Can someone please help?

(index="vc_xyz_idx") OR (index="xp_zzz_summary_idx")
| eval Blue=case(index=index="xp_zzz_summary_idx", docNum), Missing_in_Blue=if(docNum==xp_zzz_summary_idx, "Yes", "No")
| search Missing_in_Blue="No"
| stats values(Missing_in_Blue) as Missing_in_Blue by docNum
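For this kind of "present in A but absent from B" check, one robust pattern is to collect, per docNum, the set of indexes it appears in and keep only those seen solely in vc_xyz_idx. A sketch using the post's index names:

```spl
(index="vc_xyz_idx") OR (index="xp_zzz_summary_idx")
| stats values(index) as indexes by docNum
| where mvcount(indexes) = 1 AND indexes = "vc_xyz_idx"
| table docNum
```

This sidesteps the per-event eval comparisons (docNum==xp_zzz_summary_idx compares docNum against a nonexistent field), which is one plausible source of the false negatives.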
Hi All, we recently upgraded our Splunk Enterprise from v7.x to 8.x. After the upgrade, the security team observed that some searches are delayed, mostly due to the data model acceleration from Splunk ES. A sample log is below. We do not want to disable acceleration on the default data models, so how can we fix this issue? Note that there was no such issue before the upgrade.

03-05-2021 09:54:37.135 +0800 INFO SavedSplunker - savedsearch_id="nobody;Splunk_SA_CIM;_ACCELERATE_DM_Splunk_SA_CIM_Performance_ACCELERATE_", search_type="datamodel_acceleration", user="nobody", app="Splunk_SA_CIM", savedsearch_name="_ACCELERATE_DM_Splunk_SA_CIM_Performance_ACCELERATE_", priority=highest, status=success, digest_mode=1, scheduled_time=1614908940, window_time=0, dispatch_time=1614909165, run_time=110.969, result_count=431, alert_actions="", sid="scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5534aac642f80d961_at_1614908940_35488"
Hi, I need help finding the average memory usage of 100+ Linux servers. We don't have perfmon data in Splunk, so I can't use that to get the memory data. We have thousands of servers. For CPU I found the queries below, but I couldn't find an equivalent for memory usage.

Average CPU:

index=os host=hostname sourcetype=cpu
| multikv
| search CPU="all"
| eval pctCPU=100-pctIdle
| stats avg(pctCPU) by host

Max CPU:

index=os sourcetype=top host=hostname
| stats max(pctCPU) AS maxCPU by _time, PID, COMMAND
| sort -maxCPU
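Assuming the sourcetype=cpu data above comes from the Splunk Add-on for Unix and Linux, that same add-on also ships a vmstat input whose output includes a memory-used percentage field (memUsedPct; verify the field name against your own events). A sketch along the same lines as the CPU query:

```spl
index=os sourcetype=vmstat
| multikv
| stats avg(memUsedPct) as avgMemUsedPct by host
```

Dropping the host=hostname filter, as here, returns the average per host across all servers reporting into index=os.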
I know that a Universal Forwarder doesn't have a graphical user interface. But does a Heavy Forwarder have a GUI?