All Posts


I had tried your method before. Apparently I screwed up the syntax: the | lookup system_info.csv System as System_Name line was failing.
Thanks for the input. In fact, your first solution is what I ended up doing; that one works. The second solution does not work: the query doesn't have the list of all systems when it calculates the missing ones.
Hi @sarit_s6
SMTP logs aren't directly logged into your Splunk Cloud environment; however, if you log a support ticket, Support can check the PostMark mail server logs for bounced emails. This could help confirm:
a) whether the alert actually fired correctly from Splunk
b) whether the email was accepted by the mail relay
c) whether the relay had any issue sending on to the final destination
At a previous customer we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently bouncing them. You can contact Support via https://www.splunk.com/support
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
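To check point a) yourself before raising a ticket, a minimal sketch of a search against the scheduler logs (replace "Your Alert Name" with the actual saved search name; the field names assume the standard scheduler log format):

index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| table _time status result_count alert_actions

If status is success and alert_actions includes email, the alert fired and the email action was invoked; anything lost after that point happened at the mail relay or beyond.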
We know for sure that Splunk had an issue sending emails during this time, so it's definitely on Splunk's end.
If there are no errors on Splunk's end then your email provider should be contacted to find out why the messages were not delivered. It's possible the messages were treated as spam or there was another problem that prevented delivery.
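One way to widen the check on Splunk's end, a minimal sketch based on the query in the original post (the extra search terms are illustrative, not an exhaustive list of possible errors):

index=_internal sendemail source="/opt/splunk/var/log/splunk/python.log" (ERROR OR "Connection refused" OR "timed out")

Bear in mind that if the relay accepted the messages and a downstream filter silently dropped them, python.log will contain no errors at all.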
Hello
I'm trying to monitor SMTP failures in my Splunk Cloud environment. I know for sure that on a certain date we had a problem and did not receive any emails, but when I run this query:

index=_internal sendemail source="/opt/splunk/var/log/splunk/python.log"

I don't see any errors. How can I achieve my goal?
Thanks
Thanks for your reply. I will try changing the interval to 600 seconds first.
Hi @zksvc
It looks like the inputs are polling AWS CloudWatch too frequently, which is giving you the rate limit exception. If you have just set this up then it will be trying to pull logs back from whatever only_after date you set (see https://splunk.github.io/splunk-add-on-for-amazon-web-services/CloudWatchLogs/ for input config descriptions). If you left this field blank then I believe it tries to load all the events in the CloudWatch Logs group in AWS.
Ultimately it looks like it is repeatedly querying CloudWatch Logs to get more logs, which is why it is hitting the rate limit. The number of polls to CloudWatch Logs will reduce once it has caught up to the current date. It might be worth enabling one input at a time to allow them to catch up gradually.
If you do not need the historic data then I would suggest cloning the inputs, setting the only_after date to a recent date, and then deleting the old input. I don't think it is possible to change only_after once an input is created, because of how the checkpoint of the current date/time is recorded, but I may be wrong here.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
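If you go the cloning route, a rough inputs.conf sketch (the stanza type and name here are assumptions for illustration; copy the real stanza and its other settings from your existing input, as the exact keys depend on the add-on version, and only change the only_after value):

[aws_cloudwatch_logs://securityhub_recent]
only_after = 2025-06-01T00:00:00
# remaining settings (AWS account, region, log group, interval, index,
# sourcetype) copied verbatim from the original input

Once the clone is confirmed to be ingesting, delete the original input so the two don't poll the same log group in parallel.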
Hi Everyone,
I encountered an error while ingesting sourcetype=aws:cloudtrails in the AWS apps. I attempted to ingest data from the following sources: aws:waflogs, aws:network-firewall-log, aws:cloudtrails, aws:securityhub-log-group. However, upon checking, only aws:waflogs and aws:network-firewall-log were ingested. Attached below are the errors from the logs.
I have also attached a screenshot of the inputs config from the app side here:
Lastly, here is proof that I only received those 2 sourcetypes:
If you have any experience with this issue, please share the answer.
Thanks,
Zake
Hi @SplunkExplorer
I think the message about re-reading the file shouldn't be an issue in your case. You mentioned setting LINE_BREAKER in inputs.conf; however, this should be in props.conf. Having said that, I think the default should be sufficient for your CSV file.
If you set HEADER_FIELD_LINE_NUMBER=0 (the default), do you get the same results? What does the first line with the headers look like; is it a typical comma (,) separated list of headers, with no quotes, spaces, tabs, etc.? If so, the default FIELD_DELIMITER should suffice, but I want to check.
I'm not 100% sure I follow what you mean about the headers. Do you mean that for each event you also see the header printed?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
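For reference, a minimal props.conf sketch of the defaults-first approach described above (the sourcetype name is a placeholder; LINE_BREAKER and FIELD_DELIMITER are deliberately left at their defaults):

[your_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 0

Since INDEXED_EXTRACTIONS is applied by the forwarder, this props.conf needs to be on the UF reading the file, not only on the indexers.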
Hi @Cybers1
So does this data go through the HF? If so, these props/transforms will need putting on there. Certain props/transforms happen at search time (e.g. search-time field extractions); the other config, which happens at parsing/index time, needs to be on the first full instance of Splunk that the data hits (e.g. a HF), apart from a very small set of exceptions (such as RULESET).
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
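A quick way to verify where the config is actually being picked up is btool on the HF (the stanza names below are taken from the transforms.conf in the original post):

$SPLUNK_HOME/bin/splunk btool transforms list eliminate-accesslog_coll_health --debug
$SPLUNK_HOME/bin/splunk btool props list access_combined --debug

The --debug flag prints the app and file each resolved line comes from, which quickly shows whether the HF is seeing the stanzas at all.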
Hi @Cybers1,
if you're sure that the unfiltered data doesn't pass through any HF, that isn't the issue.
Then, please try these regexes:

[eliminate-accesslog_coll_health]
REGEX = Health|health
DEST_KEY = queue
FORMAT = nullQueue

[eliminate-accesslog_coll_actuator]
REGEX = actuator
DEST_KEY = queue
FORMAT = nullQueue

Ciao.
Giuseppe
Thank you for the kind reply.
We have an Elasticsearch setup without authentication and without certificates, so I tried to comment out the stanza as you suggest:

use_ssl = 0
# opt_ca_certs_path =

but no success.
Then I decided to go into the Python scripts and try to comment out the cert strip() line in /opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py:

#opt_ca_certs_path = opt_ca_certs_path.strip()

Now I get another SSL error in the log:

2025-06-11 12:00:03,503 ERROR pid=2813813 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 153, in collect_events
    results = search_index(opt_elasticsearch_instance_url, opt_port, opt_user, opt_secret, opt_elasticsearch_indice, opt_date_field_name, opt_time_preset, size, from_number, opt_ca_certs_path)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 102, in search_index
    response = client.search(**search_params, scroll="1m")
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/utils.py", line 414, in wrapped
    return api(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/__init__.py", line 3859, in search
    return self.perform_request(  # type: ignore[return-value]
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/_sync/client/_base.py", line 285, in perform_request
    meta, resp_body = self.transport.perform_request(
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elastic_transport/_transport.py", line 329, in perform_request
    meta, raw_data = node.perform_request(
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elastic_transport/_node/_http_urllib3.py", line 199, in perform_request
    raise err from None
elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1161)))

If anyone has managed to onboard Elasticsearch data without authentication or certificate validation, please advise.
OK. People use their own spare time to help others; not specifying the problem properly is simply wasting their time. Present your problem as clearly and completely as possible. Sure, there might sometimes be some unclear things, but guessing not only the solution but also the problem itself is not going to cut it.
So if you want to get serious help, first invest some effort into specifying what you're trying to achieve: what the data is, what the relation is between the data and the desired output, and what the logic behind the output is. Just saying "three versions per day" doesn't tell us anything about how the input data corresponds to the output. Maybe some counts should be aggregated, maybe not. How are the versions ordered? Do the counts "spill down"? What is going on with that data?
Hi Splunkers,
a colleague's team is facing some issues related to .csv file collection. Let me share the required context.
We have a .csv file that is sent to an SFTP server once per day: every day the file is written once and never modified. In addition, even though the file is a CSV, it has a .log extension. On this server, the Splunk UF is installed and configured to read this daily file. What currently happens is the following:
1. The file is read many times: the internal logs contain multiple occurrences of messages like:
INFO  WatchedFile [23227 tailreader0] - File too small to check seekcrc, probably truncated.  Will re-read entire file=<file name here>
2. The CSV header is treated as an event. This means that if, for example, the file contains 1000 events, a search in the assigned index returns 1000 + x events; each of these x extra events contains not a real event but the CSV file header.
For the first problem, I suggested that my team use the initCrcLength parameter, properly set. For the second one, I told them to ensure that the following parameters are set:
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
CHECK_FOR_HEADER = true
In addition, I suggested that they avoid the default line breaker; in the inputs.conf file the following is set:
LINE_BREAKER = ([\r\n]+)
That could be the root cause, or one of the causes, of the header being extracted as events. I don't know yet whether those changes have fixed the issue (they are still performing the required restarts), but I would like to ask whether any other fix should be applied. Thanks!
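To make the first suggestion concrete, a minimal inputs.conf sketch for the UF (the monitor path, index, and sourcetype are placeholders; 1024 is just an example value that should exceed the length of any shared prefix between daily files):

[monitor:///data/sftp/daily_report.log]
index = your_index
sourcetype = daily_csv
initCrcLength = 1024

Note that, as far as I know, changing initCrcLength for an input that has already indexed data can cause files to be re-read once, since the stored CRCs were computed with the old length.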
| rest splunk_server_group=dmc_group_license_master /services/licenser/groups
| search is_active=1
| eval stack_id=stack_ids
| mvexpand stack_id
| join type=outer stack_id splunk_server
    [ rest splunk_server_group=dmc_group_license_master /services/licenser/pools ]
| fields splunk_server, stack_id, effective_quota, used_bytes
| stats sum(used_bytes) as used_bytes max(effective_quota) as stack_quota by stack_id
| eval usedGB=round(used_bytes/1024/1024/1024, 3), totalGB=round(stack_quota/1024/1024/1024, 3), percentage=round((used_bytes / stack_quota) * 100, 2)
| eval alert_level=case(percentage >= 90, "Critical: >90%", percentage >= 80, "Warning: >80%", percentage >= 70, "Info: >70%", true(), null())
| where isnotnull(alert_level)
| rename stack_id AS Instance, percentage AS "License quota used (%)", usedGB AS "License quota used (GB)", totalGB AS "Total license quota (GB)", alert_level AS "Alert Level"
Hi @gcusello,
Thanks for your reply! The configurations were applied on the main Splunk instance, not on the Heavy Forwarder. However, within the same configuration files on the main instance (props.conf and transforms.conf), there are already similar settings that are currently working as expected.
Also, we have tested the regex patterns used in transforms.conf, and they are working correctly: they match the intended log lines when tested separately.
That's why we're a bit puzzled: some TRANSFORMS stanzas are effective, while the ones mentioned in the original post aren't having any impact.
Any further insights would be greatly appreciated!
Best regards,
Hi @Cybers1,
I suppose that you already checked the regexes you're using and that they correctly filter the logs.
The question is: where did you put these conf files? They must be located on the first full Splunk instance that the data passes through, in other words on the first Heavy Forwarder (if present) or on the Indexers.
Ciao.
Giuseppe
Hi Splunk Community,
We're currently trying to drop specific logs using props.conf and transforms.conf, but our configuration doesn't seem to be working as expected. Below is a summary of what we've done:

transforms.conf

[eliminate-accesslog_coll_health]
REGEX = ^.*(?:H|h)ealth.*
DEST_KEY = queue
FORMAT = nullQueue

[eliminate-accesslog_coll_actuator]
REGEX = ^.*actuator.*
DEST_KEY = queue
FORMAT = nullQueue

props.conf

[access_combined]
TRANSFORMS-set = eliminate-accesslog_coll_actuator, eliminate-accesslog_coll_health

[iis]
TRANSFORMS-set = eliminate-accesslog_coll_health

[(?::){0}kube:*]
TRANSFORMS-set = eliminate-accesslog_coll_actuator

The main issue is that events are not being dropped, even when a specific sourcetype is defined (like access_combined or iis). Additionally, for logs coming from Kubernetes there is no single consistent sourcetype, so we attempted to match using [source::] logic via a regex ([(?::){0}kube:*]), but this doesn't seem to be supported in this context.
From what we've read in the documentation, it looks like regex patterns for [source::] are not allowed in props.conf and must instead be written explicitly. Is that correct? And if so, what's the best way to drop events from dynamic sources, or where the sourcetype is inconsistent?
Any help or suggestions would be greatly appreciated. Thanks in advance!
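For the dynamic-source case, a sketch of the wildcard form we have in mind (assuming the Kubernetes events share a source path containing "kube"; props.conf source:: stanzas accept the * and ... wildcards rather than regexes, and ... also matches across directory separators):

[source::...kube...]
TRANSFORMS-kube = eliminate-accesslog_coll_actuator

As with the other stanzas, this only drops events if it is deployed on the first full Splunk instance the data passes through.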
Hello Guys,
We have SCOM on a physical box and want to onboard it into AppDynamics for monitoring. The customer wants to onboard it without installing an agent on the SCOM box. Could you please let me know the best approach to SCOM monitoring in AppDynamics?
Thanks