All Posts

Hi @zksvc

It looks like the inputs are polling AWS CloudWatch too frequently, which is causing your Rate Limit exception. If you have just set this up then it will be trying to pull logs back from whatever only_after date you set (see https://splunk.github.io/splunk-add-on-for-amazon-web-services/CloudWatchLogs/ for input config descriptions). If you left this field blank then I believe it tries to load all the events in the CloudWatch log group in AWS.

Ultimately it looks like it's repeatedly querying CW Logs to get more logs, which is why it is hitting the rate limit. The number of polls to CW Logs will reduce once it has caught up to the current date. It might be worth enabling one input at a time to allow them to catch up gradually.

If you do not need the historic data then I would suggest cloning the inputs, setting the only_after date to a recent date, and then deleting the old inputs. I don't think it is possible to change only_after once created, because of how the checkpoint of the current date/time is recorded, but I may be wrong here.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
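To illustrate the clone-and-repoint suggestion above, here is a minimal inputs.conf sketch. Only only_after comes from the post and linked docs; the stanza type, the other parameter names, and all values are assumptions, so verify them against the CloudWatch Logs docs page before use:

[aws_cloudwatch_logs://securityhub_recent]
# Assumed parameter names and illustrative values only
aws_account = my_aws_account
groups = /aws/securityhub/findings
# Start from a recent date instead of the full history (date format per the add-on docs)
only_after = 2025-06-01
interval = 600
index = aws
sourcetype = aws:securityhub-log-group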
Hi Everyone,

I encountered an error while ingesting sourcetype=aws:cloudtrails in the AWS apps. I attempted to ingest data from the following sources: aws:waflogs, aws:network-firewall-log, aws:cloudtrails, aws:securityhub-log-group. However, upon checking, only aws:waflogs and aws:network-firewall-log were ingested.

Attached below are the errors from the logs. I have also attached a screenshot of the inputs config from the app side, and finally a screenshot showing that I only received those 2 sourcetypes.

If you have any experience with this issue, please share your answer.

Thanks,

Zake
Hi @SplunkExplorer

I think the message about re-reading the file shouldn't be an issue in your case. You mentioned setting LINE_BREAKER in inputs.conf, however this should be in props.conf. Having said that, I think the default should be sufficient for your CSV file. If you set HEADER_FIELD_LINE_NUMBER=0 (the default), do you get the same results?

What does the first line with the headers look like? Is it a typical comma (,) separated list of headers, with no quotes, spaces, tabs, etc.? If so the default FIELD_DELIMITER should suffice, but I want to check.

I'm not 100% sure I follow what you mean about the headers: do you mean that for each event you also see the header printed?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
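For reference, a minimal props.conf sketch for a plain comma-separated file with the header on line 1 (the sourcetype name is a placeholder; FIELD_DELIMITER and HEADER_FIELD_LINE_NUMBER are spelled out even though the defaults under discussion may suffice):

[my_csv_sourcetype]
# Structured-data settings like these belong in props.conf on the
# forwarder that monitors the file, not in inputs.conf
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1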
Hi @Cybers1

So does this data go through the HF? If so, these props/transforms will need putting on there. Certain props/transforms happen at search time (e.g. search-time field extractions); the other config, which happens at parsing/index time, needs to be on the first full instance of Splunk that the data hits (e.g. a HF), apart from a very small set of exceptions (such as RULESET).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
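As a quick illustration of that split (the sourcetype and the EXTRACT field are made up for the example), both settings below live in props.conf but run in different places:

[my_sourcetype]
# Index-time: runs in the parsing pipeline, so it must sit on the first
# full Splunk instance the data hits (the HF in this case)
TRANSFORMS-drop_noise = eliminate-accesslog_coll_health
# Search-time: evaluated when the search runs, so the search head needs it
EXTRACT-user = user=(?<user>\w+)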
Hi @Cybers1,
if you're sure that the unfiltered data doesn't pass through any HF, that isn't the issue.

Then, please try these regexes:

[eliminate-accesslog_coll_health]
REGEX = Health|health
DEST_KEY = queue
FORMAT = nullQueue

[eliminate-accesslog_coll_actuator]
REGEX = actuator
DEST_KEY = queue
FORMAT = nullQueue

Ciao.
Giuseppe
Thank you for the kind reply. We have an Elasticsearch setup without authentication and without certificates, so I tried to edit the stanza as you suggested:

use_ssl = 0
# opt_ca_certs_path =

but no success.

Then I decided to go into the Python scripts and comment out the cert strip() line in /opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py:

#opt_ca_certs_path = opt_ca_certs_path.strip()

Now I get another SSL error in the log:

2025-06-11 12:00:03,503 ERROR pid=2813813 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 153, in collect_events
    results = search_index(opt_elasticsearch_instance_url, opt_port, opt_user, opt_secret, opt_elasticsearch_indice, opt_date_field_name, opt_time_preset, size, from_number, opt_ca_certs_path)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 102, in search_index
    response = client.search(**search_params, scroll="1m")
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/utils.py", line 414, in wrapped
    return api(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/__init__.py", line 3859, in search
    return self.perform_request(  # type: ignore[return-value]
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/_base.py", line 285, in perform_request
    meta, resp_body = self.transport.perform_request(
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elastic_transport/_transport.py", line 329, in perform_request
    meta, raw_data = node.perform_request(
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elastic_transport/_node/_http_urllib3.py", line 199, in perform_request
    raise err from None
elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1161)))

If anyone has managed to onboard Elasticsearch data without authentication or certificate validation, please advise.
OK. People use their own spare time to help others. Not specifying the problem properly is simply wasting their time. Present your problem as clearly and completely as possible. Sure, there might sometimes be some unclear things, but guessing not only the solution but also the problem itself is not gonna cut it.

So if you want to get some serious help, first invest something into specifying what you're trying to achieve: what the data is, what the relation between the data and the desired output is, and what the logic behind the output is. Just saying "three versions per day" doesn't tell us anything about how the input data corresponds to the output. Maybe some counts should be aggregated, maybe not. How are the versions ordered? Do the counts "spill down"? What is going on with that data?
Hi Splunkers,
a colleague team is facing some issues related to .csv file collection. Let me share the required context.

We have a .csv file that is sent to an SFTP server. The file is sent once per day: every day it is written once and never modified. In addition, even though the file is a CSV, it has a .log extension. On this server, the Splunk UF is installed and configured to read this daily file.

What currently happens is the following:

1. The file is read many times: the internal logs contain multiple occurrences of messages like:
INFO  WatchedFile [23227 tailreader0] - File too small to check seekcrc, probably truncated.  Will re-read entire file=<file name here>

2. The CSV header is treated as an event. This means that, for example, if the file contains 1000 events, a search in the assigned index returns 1000 + x events; each of these x events contains not a real event but the CSV header line. So we see the header as an event.

For the first problem, I suggested that my team use the initCrcLength parameter, properly set (see the sketch after this post). For the second one, I asked them to ensure that the following parameters are set:

INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
CHECK_FOR_HEADER = true

In addition, I suggested they avoid the default line breaker; the following is currently set in the inputs.conf file:

LINE_BREAKER = ([\r\n]+)

That could be the root cause, or one of the causes, of the header being extracted as an event. I don't know yet whether those changes have fixed the issue (they are still performing the required restarts), but I would like to ask whether any other fixes should be applied. Thanks!
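For what it's worth, a minimal inputs.conf sketch of the initCrcLength suggestion above (the monitor path, index, sourcetype, and value are all illustrative placeholders, not the actual config):

[monitor:///data/sftp/daily_export.log]
index = my_index
sourcetype = my_csv_sourcetype
# Hash more than the default 256 bytes of the file head, so daily files
# that all begin with the same CSV header still get distinct CRCs
initCrcLength = 1024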
| rest splunk_server_group=dmc_group_license_master /services/licenser/groups
| search is_active=1
| eval stack_id=stack_ids
| mvexpand stack_id
| join type=outer stack_id splunk_server
    [ rest splunk_server_group=dmc_group_license_master /services/licenser/pools]
| fields splunk_server, stack_id, effective_quota, used_bytes
| stats sum(used_bytes) as used_bytes max(effective_quota) as stack_quota by stack_id
| eval usedGB=round(used_bytes/1024/1024/1024, 3), totalGB=round(stack_quota/1024/1024/1024, 3), percentage=round((used_bytes / stack_quota) * 100, 2)
| eval alert_level=case(percentage >= 90, "Critical: >90%", percentage >= 80, "Warning: >80%", percentage >= 70, "Info: >70%", true(), null())
| where isnotnull(alert_level)
| rename stack_id AS Instance, percentage AS "License quota used (%)", usedGB AS "License quota used (GB)", totalGB AS "Total license quota (GB)", alert_level AS "Alert Level"
Hi, @gcusello

Thanks for your reply! The configurations were applied on the main Splunk instance, not on the Heavy Forwarder. However, within the same configuration files on the main instance (props.conf and transforms.conf), there are already similar settings that are currently working as expected.

Also, we have tested the regex patterns used in transforms.conf, and they are working correctly: they match the intended log lines when tested separately.

That's why we're a bit puzzled: some TRANSFORMS stanzas are effective, while the ones mentioned in the original post aren't having any impact.

Any further insights would be greatly appreciated!

Best regards,
Hi @Cybers1,
I suppose that you have already checked the regexes you're using and that they correctly filter the logs.

The question is: where are these conf files located? They must be on the first full Splunk instance that the data passes through; in other words, on the first Heavy Forwarder (if present) or on the Indexers.

Ciao.
Giuseppe
Hi Splunk Community,

We're currently trying to drop specific logs using props.conf and transforms.conf, but our configuration doesn't seem to be working as expected. Below is a summary of what we've done:

transforms.conf

[eliminate-accesslog_coll_health]
REGEX = ^.*(?:H|h)ealth.*
DEST_KEY = queue
FORMAT = nullQueue

[eliminate-accesslog_coll_actuator]
REGEX = ^.*actuator.*
DEST_KEY = queue
FORMAT = nullQueue

props.conf

[access_combined]
TRANSFORMS-set = eliminate-accesslog_coll_actuator, eliminate-accesslog_coll_health

[iis]
TRANSFORMS-set = eliminate-accesslog_coll_health

[(?::){0}kube:*]
TRANSFORMS-set = eliminate-accesslog_coll_actuator

The main issue is that events are not being dropped, even when a specific sourcetype is defined (like access_combined or iis). Additionally, for logs coming from Kubernetes there is no single consistent sourcetype, so we attempted to match using [source::] logic via a regex ([(?::){0}kube:*]), but this doesn't seem to be supported in this context.

From what we've read in the documentation, it looks like regex patterns for [source::] are not allowed in props.conf and must instead be written explicitly. Is that correct? And if so, what's the best way to drop events from dynamic sources, or where the sourcetype is inconsistent?

Any help or suggestions would be greatly appreciated. Thanks in advance!
Hello Guys,

We have SCOM on a physical box and want to onboard it into AppDynamics for monitoring. The customer wants to onboard it without installing an agent on SCOM. Could you please let me know the best approach to SCOM monitoring in AppDynamics?

Thanks
Hi @Naoki

If you are using a standalone Search Head then yes, you can upload it via the UI, ticking the upgrade option. However, you might hit issues due to the size of the tarball being larger than the configured upload limit. To overcome this, see the following snippet from https://splunk.my.site.com/customer/s/article/Python-for-Scientific-Computing

1) To upgrade via the UI, change max_upload_size at path $SPLUNK_HOME/etc/system/local/web.conf:

[settings]
max_upload_size = 2048

2) For a clustered environment, increase the bundle size to the same setting as max_content_length on the SHs as follows:

1. Navigate to /opt/splunk/etc/system/local/distsearch.conf on the server.
2. Append/update the following parameters:

[replicationSettings]
maxBundleSize = 2048 or 3072

3. Save the file and restart all SHs.

Alternatively you can copy it to the Splunk instance's app folder ($SPLUNK_HOME/etc/apps), overwriting the previous content. If in any doubt, please make a backup copy of the PSC app on your Splunk instance first.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @RAVISHANKAR

Whilst you are right that an 8.0.x UF can send events/metrics to 9.4.x, it is important to note that 8.0.x UFs are no longer supported by Splunk. So technically, yes, it will work, but from a support standpoint you need to upgrade your UFs to 9.1.x to remain supported by Splunk, and even that is only until 28th June (17 days!), so I would recommend a minimum of 9.2.x.

For more info on supported Splunk versions check out https://www.splunk.com/en_us/legal/splunk-software-support-policy.html?locale=en_us

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @cdevoe57

If you want to use the lookup as a source of truth for the list of hosts, I would use the following. Just a note that I'm suggesting tstats here, which is *much* more performant than a regular index= search.

| tstats latest(_time) as _time WHERE index=servers sourcetype=logs by host
| eval last_seen_ago_in_seconds = now() - _time
| eval System_Name = host
| append [| inputlookup system_info.csv | eval last_seen_ago_in_seconds=9999]
| stats min(last_seen_ago_in_seconds) as last_seen_ago_in_seconds, values(Location) AS Location, values(Responsible) AS Responsible by System_Name
| eval MISSING = if(isnull(last_seen_ago_in_seconds) OR last_seen_ago_in_seconds>7200, "MISSING", "GOOD")
| where MISSING=="MISSING"
| sort -last_seen_ago_in_seconds

This works by appending the system_info.csv rows with a large last_seen_ago_in_seconds, which is updated to a lower last_seen_ago_in_seconds value if the host has been found.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi Team,

We plan to upgrade Splunk Enterprise from version 9.2.1 to 9.4.2 (latest). Currently my Splunk UF version is 8.0.5. Will 8.0.5 be supported, or do I need to upgrade the UF version too?

Compatibility between forwarders and Splunk Enterprise indexers - Splunk Documentation

It says UF 8.0.x will be compatible with 9.4.x for (E,M) events and metrics. I need further clarification on whether I should upgrade the UF or whether it is OK to stay on the 8.0.x version.

Thanks
Also, in future we will be decommissioning the indexer after I have sent the data to the SH, and then I will be sending data directly to the SH.
@SN1

If you must move indexed data from the indexer to the Search Head, you can copy the data files:

1. Stop Splunk on both the indexer and the Search Head.
2. Copy the index data directories from the indexer to the Search Head. Example: copy $SPLUNK_HOME/var/lib/splunk/<index_name> from the indexer to the same path on the Search Head.
3. Ensure file ownership, permissions, storage size, OS, and Splunk versions are correct on the Search Head. Also make sure you have the indexes.conf configuration for the indexes you are moving (see the sketch below).
4. Start Splunk on the Search Head.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
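For the indexes.conf part of step 3, a minimal sketch for the Search Head (the index name is a placeholder; mirror whatever definition the indexer used):

[my_index]
# Paths must line up with the directories copied in step 2
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb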