All Topics


Our data flow: a syslog server sends a large volume of data to one heavy forwarder (HF1), which routes it to an indexer cluster as well as to a second heavy forwarder (HF2). HF2 in turn routes the data to syslog-ng and to another indexer cluster located in a different environment.

Because of the high data volume on our syslog server, we faced backpressure on the syslog server and both HFs, so the vendor recommended increasing the pipeline queue size to 2500MB in server.conf on both HFs and the syslog server.

The issue now is that HF2 consumes its full memory (92GB) shortly after a server reboot; once memory reaches 100%, HF2 hangs. If we decrease the parallel pipelines from 2 to 1 on HF2, that creates backpressure on the syslog server and HF1, and the pipelines burst. Before the HF2 reboot, memory consumption was under 10GB with the same 2500MB pipeline size, and the splunkd process was normal.

Note: so far HF1, which sits between the syslog server and HF2, has not faced the memory (92GB) issue.

In this situation, would increasing the memory on HF2 be helpful, or what would be the best solution to avoid this scenario in the future?
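For reference, a minimal sketch of the server.conf settings described above, on the assumption that "pipeline queue size" maps to the standard [queue] maxSize setting; the values are the ones quoted in this post, not a recommendation:

[general]
parallelIngestionPipelines = 2

# Applies the quoted 2500MB size to all queues; note that with 2 pipelines,
# each pipeline set allocates its own queues, which multiplies resident memory.
[queue]
maxSize = 2500MB
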
I have recently created an add-on via the UCC Framework (version 5.56.0). However, I am facing an issue while editing or cloning inputs and accounts on the configuration page. #UCC framework
Hello. I have a search head configured with assets and identities from the current AD domain. I have 5 more AD domains without trust and on different networks. In each domain/network I have a HF sending data to the indexers. How can I set those domains up to send assets and identity information to my search head? Thank you. Splunk Enterprise Security
Hello. This search returns zero results, but a manual "OR" search shows results. I cannot find the reason (neither can ChatGPT). The end goal is a query where I can input a MAC address in any format in one place and automatically search for all of the formats shown. Any guidance would be appreciated. BTW, this is a local Splunk installation. (Please ignore the "xxxx".)

| makeresults
| eval input_mac="48a4.93b9.xxxx"
| eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
| eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
| eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
| eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
| fields mac_clean mac_colon mac_hyphen mac_dot
| eval search_string="\"" . mac_clean . "\" OR \"" . mac_colon . "\" OR \"" . mac_hyphen . "\" OR \"" . mac_dot . "\""
| table search_string
| map search="search index=main sourcetype=syslog ($search_string$) | table _time host _raw"
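One commonly suggested alternative is to drop map entirely and inline the OR expression through a subsearch, using the special "search" field with return, which sidesteps map's token-quoting quirks. A sketch under the same index/sourcetype assumptions as the post, not verified against this data:

index=main sourcetype=syslog
    [ | makeresults
      | eval input_mac="48a4.93b9.xxxx"
      | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
      | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
      | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
      | eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
      | eval search="\"" . mac_clean . "\" OR \"" . mac_colon . "\" OR \"" . mac_hyphen . "\" OR \"" . mac_dot . "\""
      | return $search ]
| table _time host _raw
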
Hi, this is my first interaction with the Splunk Community, so please be patient. I'm trying to output some fields from an alert to a KV Store lookup. I'm using the Lookup Editor app and a KV Store app, but I'm probably missing some theory. Thanks!
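For context, the usual wiring for this is a KV Store collection (collections.conf), a lookup definition pointing at it (transforms.conf), and an outputlookup at the end of the alert's search. A sketch with hypothetical collection and field names; adjust fields_list to whatever the alert actually produces:

# collections.conf
[alert_results]

# transforms.conf
[alert_results_lookup]
external_type = kvstore
collection = alert_results
fields_list = _key, _time, host, user, action

# and at the end of the alert's search:
... | table _time host user action | outputlookup append=true alert_results_lookup
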
Hello. I'm trying to monitor SMTP failures in my Splunk Cloud environment. I know for sure that on a certain date we had a problem and did not receive any emails, but when I run this query:

index=_internal sendemail source="/opt/splunk/var/log/splunk/python.log"

I don't see any errors. How can I achieve my goal? Thanks
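On Splunk Cloud, the on-prem /opt/splunk path in source= may not match what the managed instance writes, which would silently return nothing. A safer starting sketch is a wildcarded source plus a severity filter (the terms here are assumptions to adjust):

index=_internal source=*python.log* sendemail (ERROR OR WARN*)
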
Hi everyone, I encountered an error while ingesting sourcetype=aws:cloudtrails with the AWS apps. I attempted to ingest data from the following sources: aws:waflogs, aws:network-firewall-log, aws:cloudtrails, aws:securityhub-log-group. However, upon checking, only aws:waflogs and aws:network-firewall-log were ingested. Attached below are the errors from the logs. Also, here is a screenshot of the inputs config from the app side: Lastly, here is proof that I only received those 2 sourcetypes: If you have any experience with this issue, please share the answer. Danke, Zake
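When the Splunk Add-on for AWS skips an input, its own logs in _internal usually say why. A starting sketch; the splunk_ta_aws* source naming is an assumption based on the add-on's usual log-file pattern, so adjust it to what your instance actually writes:

index=_internal source=*splunk_ta_aws* (ERROR OR WARN*) *cloudtrail*
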
Hi Splunkers, a colleague's team is facing some issues with .csv file collection. Let me share the required context. We have a .csv file that is sent to an SFTP server. It is sent once per day: every day the file is written once and never modified. In addition, even though the file is a CSV, it has a .log extension. The Splunk UF is installed on this server and configured to read this daily file. What currently happens is the following:

1. The file is read many times: the internal logs show multiple occurrences of an error message like
INFO WatchedFile [23227 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file=<file name here>

2. The CSV header is treated as an event. For example, if the file contains 1000 events, a search in the assigned index returns 1000 + x events; each of those x events contains not a real event but the CSV header line.

For the first problem, I suggested the team use the initCrcLength parameter, properly set. For the second, I told them to ensure the following parameters are set:
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
CHECK_FOR_HEADER = true
In addition, I suggested they avoid the default line breaker; currently the following is set:
LINE_BREAKER = ([\r\n]+)
That could be the root cause, or one of the causes, of the header being extracted as events. I don't know yet whether these changes have fixed the problem (they are still performing the required restarts), but I'd like to ask whether any other fix should be applied. Thanks!
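A consolidated sketch of those suggestions; the monitor path, sourcetype name, and 1024-byte CRC length are placeholders. Note that INDEXED_EXTRACTIONS must be deployed on the UF itself, since structured parsing happens at the forwarder:

# inputs.conf on the UF
[monitor:///data/sftp/daily_export.log]
sourcetype = daily_csv
# hash more of the file head for the initial CRC, so files that share the
# same CSV header are not mistaken for already-seen files
initCrcLength = 1024

# props.conf on the UF
[daily_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
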
Hi Splunk Community, we're currently trying to drop specific logs using props.conf and transforms.conf, but our configuration doesn't seem to be working as expected. Below is a summary of what we've done:

transforms.conf
[eliminate-accesslog_coll_health]
REGEX = ^.*(?:H|h)ealth.*
DEST_KEY = queue
FORMAT = nullQueue

[eliminate-accesslog_coll_actuator]
REGEX = ^.*actuator.*
DEST_KEY = queue
FORMAT = nullQueue

props.conf
[access_combined]
TRANSFORMS-set = eliminate-accesslog_coll_actuator, eliminate-accesslog_coll_health

[iis]
TRANSFORMS-set = eliminate-accesslog_coll_health

[(?::){0}kube:*]
TRANSFORMS-set = eliminate-accesslog_coll_actuator

The main issue is that events are not being dropped, even when a specific sourcetype is defined (like access_combined or iis). Additionally, for logs coming from Kubernetes there is no single consistent sourcetype, so we attempted to match using [source::] logic via a regex ([(?::){0}kube:*]), but this doesn't seem to be supported in this context. From what we've read in the documentation, it looks like regex patterns for [source::] are not allowed in props.conf and must instead be written explicitly. Is that correct? And if so, what's the best way to drop events from dynamic sources, or where the sourcetype is inconsistent? Any help or suggestions would be greatly appreciated. Thanks in advance!
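On the last point: [source::...] stanzas in props.conf are not full PCRE, but they do accept the * and ... wildcards, which often covers dynamic container paths. A sketch with a hypothetical Kubernetes log path:

# props.conf
[source::/var/log/containers/*kube*.log]
TRANSFORMS-set = eliminate-accesslog_coll_actuator
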
Hello guys, we have SCOM on a physical box and want to onboard it to AppDynamics for monitoring. The customer wants to onboard it without installing an agent on SCOM. Could you please let me know the best approach to SCOM monitoring in AppDynamics? Thanks
Hi Team, we plan to upgrade Splunk Enterprise from version 9.2.1 to the latest, 9.4.2. Currently my Splunk UF version is 8.0.5. Will 8.0.5 still be supported, or do I need to upgrade the UF version too? "Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation" says UF 8.0.x is compatible with 9.4.x (E,M), i.e. events and metrics. I need further clarification on whether I should upgrade the UF or whether it's OK to stay on 8.0.x. Thanks
We have an index where the data is currently being stored and indexed on the indexer. Now I am making the search head standalone, and I want to send the data from the indexer to the SH. How do I do it?
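If the underlying goal is for the standalone search head to search the data that already lives on the indexer (the usual pattern, rather than physically moving data), the indexer is added as a search peer on the search head. A sketch with placeholder host and credentials:

# run on the search head
splunk add search-server https://indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
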
We are now using the Python for Scientific Computing app (v2.0.2) on an on-premises Linux instance, and we are planning to upgrade the app to the latest version, 4.2.3. When upgrading, should we just upload the app package through Splunk Web and check the upgrade checkbox? Python for Scientific Computing (for Linux 64-bit) | Splunkbase
I have a query that detects missing systems. The lookup table has the fields System, Location, Responsible. I am trying to get the Location and Responsible to show in the end result. It appears the join is losing those values. Is there a way to get those values into the final result?

| inputlookup system_info.csv
| eval System_Name=System
| table System_Name
| join type=left Sensor_Name
    [| search index=servers sourcetype=logs
    | stats latest(_time) as Time by System_Name
    | eval mytime=strftime(Time,"%Y-%m-%dT%H:%M:%S")
    | sort Time asc
    | eval now_time = now()
    | eval last_seen_ago_in_seconds = now_time - Time
    | sort -last_seen_ago_in_seconds ]
| stats values(*) as * by System_Name
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200,"MISSING","GOOD")
| where MISSING=="MISSING"
| table System_Name Location Responsible MISSING
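Two things stand out in the query as posted: "| table System_Name" drops Location and Responsible before the join ever runs, and the join key is Sensor_Name while both sides carry System_Name. A sketch that keeps the lookup fields and joins on the shared key, assuming the lookup columns named in the post:

| inputlookup system_info.csv
| eval System_Name=System
| table System_Name Location Responsible
| join type=left System_Name
    [ search index=servers sourcetype=logs
    | stats latest(_time) as Time by System_Name
    | eval last_seen_ago_in_seconds = now() - Time ]
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200, "MISSING", "GOOD")
| where MISSING=="MISSING"
| table System_Name Location Responsible MISSING
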
I need a query that will tell me the count of a substring within a string like this: "This is my [string]", where I need to find the word and count of [string]. "This is my" is always the same, but [string] is dynamic and can be many things, such as apple, banana, etc. I need tabular data returned that looks like:

Word      Count
apple     3

I tried this but it doesn't seem to be working:

rex field=_raw ".*This is my (?<string>\d+).*"
| stats count by string
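As posted, the rex captures \d+ (digits only), which can never match words like "apple". A word-character capture plus a rename produces the table shown above; a sketch assuming the literal prefix from the post:

| rex field=_raw "This is my (?<Word>\w+)"
| stats count as Count by Word
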
Hello, after I installed Splunk 9.4.3 on Linux (Ubuntu) I am unable to run it. When I try to start Splunk, it says the directory does not exist. When I did find it in the directory and ran it, I was prompted with a KVStore error message. Any help is greatly appreciated and needed.
I've developed a custom Splunk app that fetches log data from external sources. I need to create dashboards dynamically whenever new data types/sources are ingested, without manual intervention.
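Since dashboards are just entries under data/ui/views, one way to automate this is to create them programmatically through the REST API. A minimal sketch; the app name, credentials, dashboard name, and XML body are all placeholders:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/my_custom_app/data/ui/views \
    -d name=auto_dashboard_example \
    --data-urlencode 'eai:data=<dashboard version="1.1"><label>Auto Dashboard</label></dashboard>'
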
I recently had an AD machine with a UF on it decommissioned. I also have alerts set up for missing forwarders. I cannot seem to find how to remove the UF from the HF and the main Splunk instance. Is there documentation that I am missing?
How do I configure the AppDynamics Java agent for the CCM, TRAVIC-Port, and push applications? To monitor the applications mentioned above, can someone help me with how to onboard them? I have already onboarded the Apache web services by adding the argument in catalina.sh.
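For another Tomcat-hosted Java application, the wiring is the same idea as the catalina.sh change mentioned above. A sketch with placeholder paths and names (setenv.sh is the usual place for this rather than editing catalina.sh directly, and the -Dappdynamics.* properties are the agent's standard naming knobs):

# $CATALINA_BASE/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/appdynamics/javaagent/javaagent.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dappdynamics.agent.applicationName=TRAVIC-Port"
CATALINA_OPTS="$CATALINA_OPTS -Dappdynamics.agent.tierName=web"
CATALINA_OPTS="$CATALINA_OPTS -Dappdynamics.agent.nodeName=node01"
export CATALINA_OPTS
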
Good afternoon, I have a monitoring architecture with three Splunk Enterprise nodes. One node acts as the search head, one as the indexer, and one holds all the other roles. I have a HEC input on the indexer node to receive data from third parties. The sourcetype configured to store the data is the following:

[integration]
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = test
disabled = false
pulldown_type = 1
INDEXED_EXTRACTIONS = none
KV_MODE = json

My problem is that when I search the data, there are events where the fields are extracted in duplicate and others where they are extracted only once. Please, can you help me? Best regards, thank you very much.
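One common cause of this exact mix (an assumption worth checking, not a diagnosis): events sent to HEC's /services/collector/event endpoint with a "fields" block arrive with indexed fields already attached, and KV_MODE = json then extracts the same keys again at search time, so only those events show duplicates. If that turns out to be the case, a sketch of the usual fix is to disable search-time JSON extraction for this sourcetype:

# props.conf on the search head
[integration]
KV_MODE = none
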