All Posts


Let me give this a try.  Thank you, Giuseppe
Hi! Have you ever seen this message after clicking the "View results in Splunk" link included in the email? I was trying to edit dispatch.ttl to extend the search artifact's lifetime, but did not succeed. I was wondering whether action.email.ttl is the setting that applies here. Regards
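For reference, a minimal sketch of the two settings in savedsearches.conf (the stanza name is hypothetical; check the savedsearches.conf and alert_actions.conf spec files for your version):

```
# savedsearches.conf -- stanza name is hypothetical
[My Email Alert]
# Lifetime of the search artifact itself (default is 2p, i.e. twice the schedule period)
dispatch.ttl = 86400
# When the email action fires, the action's TTL takes precedence over dispatch.ttl,
# which is often why editing dispatch.ttl alone appears to have no effect
action.email.ttl = 86400
```

If the link in the email expires too quickly, raising action.email.ttl (or the global ttl under [email] in alert_actions.conf) is usually the relevant knob for triggered alerts.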
Hello guys, we are getting this message in splunkd.log on one heavy forwarder; we are using TCP-SSL in inputs.conf: "11-14-2024 16:59:44.129 +0100 WARN  SSLCommon [53742 FwdDataReceiverThread] - Received fatal SSL3 alert. ssl_state='SSLv3 read client certificate A', alert_description='unknown CA'."   How do you identify the source host? Is it blocking incoming data, or is it just a warning?   Maybe this can help? index=_* host=myhf1 source="/OPT/splunk/var/log/splunk/metrics.log" tcp_Kprocessed="0.000"   Thanks for your help.
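One way to hunt for the offending client is to correlate the SSL warning with connection messages in the forwarder's own internal logs. A sketch (component names and message formats vary by Splunk version, so treat the rex pattern as an assumption to adapt):

```
index=_internal host=myhf1 sourcetype=splunkd (component=SSLCommon OR component=TcpInputProc)
| rex "from (?:src=)?(?<src_ip>\d+\.\d+\.\d+\.\d+):(?<src_port>\d+)"
| stats count by component src_ip
| sort - count
```

Note that "unknown CA" means the receiving side rejects that client's certificate during the handshake, so data from that particular sender is not ingested, while traffic from clients with trusted certificates continues normally.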
Thanks PeteAve! I'll try that and see what happens.
I believe the solution is to disable the feature: create a health.conf entry in /opt/splunk/etc/system/local on the affected machines, being sure to restart Splunk after the entry is made, adding:

[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1
A cluster requires 3 or more members (odd counts only). Quorum is obtained by having a majority (50%+1) in sync. With only 2 nodes, quorum requires both members, so the loss of either one breaks the cluster.
If you want to reflect a detector status on a chart, you may want to try creating a chart using the signal that you want to monitor. Then, use the “link detector” option so that the status of that detector will show on that chart. To view alerts by application (service) or by severity, navigate to “Detectors & SLOs”. To filter by severity, click the box next to “Filter” and type “sf_severity:” and then choose “Critical” or whatever severity you want. To filter by application/service, click the “Any service/Endpoint” box next to “APM Filters” and select your application from the list.
Check the docs. https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Viz/GenerateMap  
Thank you once again! I will review it on my side and let you know once I successfully complete it.
Is this bug an ongoing issue? We have upgraded to version 9.3.1 and receive the Forwarder Ingestion Latency message, stating: "Root Cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1362418. Unhealthy instances: Indexer3." If this bug is still ongoing, can someone please post the workaround?  Thanks in advance!
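For what it's worth, the workaround commonly shared for this health alert is to disable the ingestion-latency indicator in health.conf under $SPLUNK_HOME/etc/system/local on the affected instances and restart Splunk:

```
[health_reporter]
aggregate_ingestion_latency_health = 0

[feature:ingestion_latency]
alert.disabled = 1
disabled = 1
```

This suppresses the health indicator rather than fixing any underlying latency, so it is worth confirming the forwarders are genuinely healthy before silencing it.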
@tscroggins Thanks for your answer and for spending your time. Even if we use makeresults, do we still need to specify that CSV data at every point?
@ITWhisperer thanks for your time. We can replace the L & R values with numbers, for example L as 9 and R as 10, and then we can visualise it.
Thanks for clarifying.  Try this query. | rex mode=sed "s:<EventID>4702<\/EventID>|<TimeCreated SystemTime='[^']+'\/>|<Computer>[^<]+<\/Computer>|<Data Name='[^']+'>[^<]+<\/Data>::g"
Hello   How did you manage to eliminate duplicate fields?   Thanks!
Try something like this | rex mode=sed "s/(?ms).*(?<ei>\<EventID\>\d+\<\/EventID>).*(?<TimeCreated>\<TimeCreated SystemTime='[^']+'\/>).*(?<Computer>\<Computer\>[^\<]+\<\/Computer\>).*(?<TaskName>\<Data Name='TaskName'\>[^\<]+\<\/Data\>).*/\1\2\3\4/g" Caveat: XML sometimes has namespace aliases, either embedded or used or both, which a proper XML parser would understand; these are not shown in your sample and are therefore not catered for in the regex.
Hi @narenpg , yes, it's possible, but you pay the Splunk license twice. You have to modify outputs.conf to create a fork. For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Routeandfilterdatad Ciao. Giuseppe
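As a sketch of the fork Giuseppe describes (group names, hosts, ports, and certificate paths are all hypothetical), outputs.conf on the forwarder would define two tcpout groups and list both as defaults. SSL settings live per tcpout group, so each destination can use its own client certificate:

```
[tcpout]
defaultGroup = splunkcloud, onprem_dr

# Existing Splunk Cloud group (normally delivered by the cloud UF credentials app)
[tcpout:splunkcloud]
server = inputs.mystack.splunkcloud.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycloudcert.pem
sslVerifyServerCert = true

# New on-prem DR group with its own certificate settings
[tcpout:onprem_dr]
server = dr-indexer01.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/onprem_cert.pem
sslVerifyServerCert = true
```

With both groups in defaultGroup, every event is cloned to both destinations, which is why the data counts against the license twice.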
We have currently configured the UF to send logs to Splunk Cloud, and we are also setting up a DR on-prem server. The question is how to configure the UF to send to both the cloud and the DR (on-prem). No issues with the cloud environment. Is it possible to send to both? On the UF the certificate is for Splunk Cloud, and I am not sure how to add our on-prem certificate.
Hi @Thomas2 , first of all, don't use the search command after the main search. Then, anyway, you can use the rex command to extract the first part of the field, or eval to take the first 20 chars. index=cloud_servers host="*server_name-h-nk01-*" | rex field=host "^(?<host>[^\.]+)" | stats dc(host) AS count Ciao. Giuseppe
Hi, our app is built upon Splunk Add-on Builder. Builder's code is responsible for most of our app's input and output. We modified the polling module to reach out to our server to pull data; Builder then sends the pulled data into the Splunk engine for processing.

The Splunk cloud store has updated its inspection criteria a few times in past years. Almost every time, Builder needed an update to comply with the new criteria. We were told to import our app into the new Builder and export it again to pick up Builder's updates. Until last month.

We have received another notice from the Splunk store saying our app no longer complies with the updated criteria and will be removed from the store by the 18th of this month. Only this time, Splunk Add-on Builder has not done its part to comply with the same rules in the same store. Here is the cause:

check_python_sdk_version: If your app relies on the Splunk SDK for Python, we require you to use an acceptably recent version in order to avoid compatibility issues between your app and the Splunk Platform or the Python language runtime used to execute your app's code. Please update your Splunk SDK for Python version to at least 2.0.2. More information is available on this project's GitHub page: https://github.com/splunk/splunk-sdk-python Versions affected by this check are: 1.6.1

We would like some information about:
1. Why Builder can violate the Splunk cloud criteria but stay on the Splunk store.
2. If Builder does follow the new rules like everyone else, when will it be updated to a version that passes the inspection test?
3. If Builder is NOT updated, are there any instructions for apps built upon Builder to fix Builder's issue and still be allowed on the Splunk store?

Thanks for any feedback and information.  Lixin
Hi @richgalloway, thank you for your reply. Apologies, I should have been a bit more descriptive. I am trying to implement a SEDCMD in transforms.conf to reduce a single raw event's size, specifically by removing elements that will never be used while keeping the event intact for compliance purposes. My intent is not to extract fields but to ensure that only the necessary elements remain in the raw event. A single regex that can clean up the event by removing unused parts while leaving the required fields would be ideal. Thanks in advance for your guidance! Best regards, D Alex
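As a side note, SEDCMD is read from props.conf (not transforms.conf), applied at index time on the first full Splunk instance that parses the data. A minimal sketch, with a hypothetical sourcetype and hypothetical XML elements standing in for the parts you want removed:

```
# props.conf -- sourcetype and element names are hypothetical
[my:xml:sourcetype]
# Strip elements that will never be used, leaving the rest of the raw event intact
SEDCMD-drop_unused = s/<Correlation[^>]*\/>|<Opcode>[^<]+<\/Opcode>//g
```

Multiple unwanted patterns can be joined with | in a single substitution, or split across several SEDCMD-<class> lines, which are applied in lexicographical order of the class name.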